In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. [1] Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering. [2]
Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory. [3]
The Fourier transform has been used for over a century as a way of decomposing and expanding signals. However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, Dennis Gabor was able to solve this using a technique that simultaneously reduced noise, provided resiliency, and created quantization while encapsulating important signal characteristics. [1] This discovery marked the first concerted effort towards frame theory.
The frame condition was first described by Richard Duffin and Albert Charles Schaeffer in a 1952 article on nonharmonic Fourier series as a way of computing the coefficients in a linear combination of the vectors of a linearly dependent spanning set (in their terminology, a "Hilbert space frame"). [4] In the 1980s, Stéphane Mallat, Ingrid Daubechies, and Yves Meyer used frames to analyze wavelets. Today frames are associated with wavelets, signal and image processing, and data compression.
Suppose we have a vector space V over a field F and we want to express an arbitrary element v ∈ V as a linear combination of the vectors {e_k}, that is, finding coefficients c_k such that

v = Σ_k c_k e_k.
If the set {e_k} does not span V, then such coefficients do not exist for every such v. If {e_k} spans V and also is linearly independent, this set forms a basis of V, and the coefficients c_k are uniquely determined by v. If, however, {e_k} spans V but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if V is of infinite dimension.
Given that {e_k} spans V and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:

- Removing arbitrary vectors from the set may cause it to be unable to span V before it becomes linearly independent.
- Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become infeasible in practice if the set is large or infinite.
- In some applications, it may be an advantage to use more vectors than necessary to represent v. This means that we want to find the coefficients c_k without removing elements in {e_k}. The coefficients will no longer be uniquely determined by v, so the vector v can be represented as a linear combination of the frame vectors in more than one way.
Let V be an inner product space and {e_k} be a set of vectors in V. The set {e_k} is a frame of V if it satisfies the so-called frame condition: there exist two constants 0 < A ≤ B < ∞ such that, for every v ∈ V, [5]

A ‖v‖² ≤ Σ_k |⟨v, e_k⟩|² ≤ B ‖v‖².
A frame is called overcomplete (or redundant) if it is not a Riesz basis for the vector space. The redundancy of the frame is measured by the lower and upper frame bounds (or redundancy factors) A and B, respectively. [6] That is, a frame of K normalized vectors {e_k} in an N-dimensional space V has frame bounds which satisfy

A ≤ K/N ≤ B.
If the frame is a Riesz basis and is therefore linearly independent, then K = N, and hence A ≤ 1 ≤ B.
The frame bounds are not unique: any number smaller than A is also a valid lower bound, and any number greater than B is a valid upper bound. The optimal lower bound is the supremum of all valid lower bounds, and the optimal upper bound is the infimum of all valid upper bounds.
If the frame condition is satisfied, then the linear operator T : V → ℓ², defined as [7]

T v = (⟨v, e_k⟩)_k,

mapping v to its sequence of frame coefficients, is called the analysis operator. Using this definition, the frame condition can be rewritten as

A ‖v‖² ≤ ‖T v‖² ≤ B ‖v‖².
The adjoint of the analysis operator is called the synthesis operator of the frame, defined as [8]

T* : ℓ² → V, T* (c_k)_k = Σ_k c_k e_k.
The composition of the analysis operator and the synthesis operator leads to the frame operator S : V → V, defined as

S v = T* T v = Σ_k ⟨v, e_k⟩ e_k.
From this definition and the linearity of the inner product in its first argument, the frame condition now yields

A ‖v‖² ≤ ⟨S v, v⟩ ≤ B ‖v‖².
If the analysis operator exists, then so does the frame operator S as well as its inverse S⁻¹. Both S and S⁻¹ are positive definite, bounded self-adjoint operators, and the optimal bounds A and B are the infimum and supremum of the spectrum of S. [9] In finite dimensions, the frame operator is automatically trace-class, with A and B corresponding to the smallest and largest eigenvalues of S or, equivalently, the squares of the smallest and largest singular values of T. [10]
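The relationship between the analysis operator, the frame operator, and the frame bounds can be checked numerically. The sketch below uses a small, hypothetical overcomplete frame of ℝ² (the particular vectors are chosen only for illustration) and confirms that the extreme eigenvalues of S = T*T are valid frame bounds:

```python
import numpy as np

# A small overcomplete frame for R^2 (three vectors, necessarily dependent).
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
e3 = np.array([1.0, 1.0]) / np.sqrt(2)
T = np.stack([e1, e2, e3])   # analysis operator: row k gives (Tv)_k = <v, e_k>

S = T.T @ T                  # frame operator S = T*T (T* is T transposed here)

# Optimal frame bounds: extreme eigenvalues of S,
# equivalently the squared extreme singular values of T.
eigs = np.linalg.eigvalsh(S)
A, B = eigs[0], eigs[-1]
print(A, B)                  # 1.0 and 2.0 for this particular frame

# Frame condition A||v||^2 <= ||Tv||^2 <= B||v||^2, spot-checked on one v:
v = np.array([0.3, -1.2])
nv2, nTv2 = v @ v, (T @ v) @ (T @ v)
assert A * nv2 - 1e-12 <= nTv2 <= B * nv2 + 1e-12
```

Note that K/N = 3/2 indeed lies between A = 1 and B = 2, consistent with the bound on the redundancy factor above.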
The frame condition is a generalization of Parseval's identity that maintains norm equivalence between a signal v ∈ V and its sequence of coefficients T v ∈ ℓ².
If the set {e_k} is a frame of V, it spans V. Otherwise there would exist at least one non-zero v ∈ V orthogonal to all e_k, so that

Σ_k |⟨v, e_k⟩|² = 0 < A ‖v‖²,

either violating the frame condition or the assumption that v ≠ 0.
However, a spanning set of V is not necessarily a frame. For example, consider V = ℝ² with the dot product, and the infinite set {e_k} given by

{ (1,0), (0,1), (0,1), (0,1), … }.

This set spans V, but for v = (0,1) the sum

Σ_k |⟨v, e_k⟩|² = 0 + 1 + 1 + ⋯

diverges, so we cannot choose a finite upper frame bound B. Consequently, the set {e_k} is not a frame.
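The failure of the upper frame bound can be made concrete by truncating such a set. Assuming the spanning set consists of (1,0) followed by infinitely many copies of (0,1), the truncated coefficient sums at v = (0,1) grow without bound:

```python
import numpy as np

# Truncate the set {(1,0), (0,1), (0,1), ...} to n vectors and evaluate
# sum_k |<v, e_k>|^2 at v = (0,1): the sum grows linearly with n,
# so no finite upper frame bound B can exist for the infinite set.
v = np.array([0.0, 1.0])
for n in (10, 100, 1000):
    vecs = [np.array([1.0, 0.0])] + [np.array([0.0, 1.0])] * (n - 1)
    total = sum(float(v @ e) ** 2 for e in vecs)
    print(n, total)   # total = n - 1 in each case
```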
Let {e_k} be a frame satisfying the frame condition above. Then the dual operator is defined as

T̃ v = (⟨v, ẽ_k⟩)_k,

with

ẽ_k = S⁻¹ e_k

called the dual frame (or conjugate frame). It is the canonical dual of {e_k} (similar to a dual basis of a basis), with the property that [11]

v = Σ_k ⟨v, e_k⟩ ẽ_k = Σ_k ⟨v, ẽ_k⟩ e_k,

and subsequent frame condition

(1/B) ‖v‖² ≤ Σ_k |⟨v, ẽ_k⟩|² ≤ (1/A) ‖v‖².
Canonical duality is a reciprocity relation: if the frame {ẽ_k} is the canonical dual of {e_k}, then the frame {e_k} is the canonical dual of {ẽ_k}. To see that this makes sense, let v be an element of V and let

u = Σ_k ⟨v, e_k⟩ ẽ_k.

Thus

u = Σ_k ⟨v, e_k⟩ S⁻¹ e_k = S⁻¹ Σ_k ⟨v, e_k⟩ e_k = S⁻¹ S v = v,

proving that

v = Σ_k ⟨v, e_k⟩ ẽ_k.

Alternatively, let

u = Σ_k ⟨v, ẽ_k⟩ e_k.

Applying the self-adjointness of S⁻¹ and the definition of S then shows that

u = Σ_k ⟨S⁻¹ v, e_k⟩ e_k = S (S⁻¹ v) = v,

and therefore

v = Σ_k ⟨v, ẽ_k⟩ e_k.
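As a numerical check of the canonical dual construction, the sketch below builds ẽ_k = S⁻¹ e_k for a small, hypothetical frame of ℝ² and verifies both reconstruction formulas:

```python
import numpy as np

# Canonical dual frame e~_k = S^{-1} e_k for an illustrative frame of R^2,
# checking v = sum_k <v, e_k> e~_k = sum_k <v, e~_k> e_k.
T = np.array([[1.0, 0.0],          # e_1
              [0.0, 1.0],          # e_2
              [0.5, 0.5]])         # e_3 (makes the frame overcomplete)
S = T.T @ T                        # frame operator
T_dual = T @ np.linalg.inv(S)      # row k holds the dual vector e~_k
                                   # (S is symmetric, so e_k^T S^{-1} = (S^{-1} e_k)^T)

v = np.array([2.0, -0.7])
v_rec1 = T_dual.T @ (T @ v)        # sum_k <v, e_k> e~_k
v_rec2 = T.T @ (T_dual @ v)        # sum_k <v, e~_k> e_k
assert np.allclose(v_rec1, v) and np.allclose(v_rec2, v)
```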
An overcomplete frame allows some freedom in the choice of coefficients c_k such that v = Σ_k c_k e_k. That is, there exist dual frames {g_k} ≠ {ẽ_k} of {e_k} for which

v = Σ_k ⟨v, g_k⟩ e_k.
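This non-uniqueness of coefficients can be seen directly: adding any element of the null space of the synthesis operator to the canonical coefficients leaves the synthesized vector unchanged. A minimal sketch, using an illustrative frame of ℝ²:

```python
import numpy as np

# Overcomplete frames admit many coefficient sequences for the same v:
# perturbing the canonical coefficients by a null vector of the synthesis
# operator T* leaves the synthesized vector unchanged.
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0 ** -0.5, 2.0 ** -0.5]])
S = T.T @ T
v = np.array([1.0, 3.0])

c = T @ np.linalg.inv(S) @ v                # canonical coefficients <v, e~_k>
null = np.array([1.0, 1.0, -np.sqrt(2)])    # satisfies T.T @ null = 0
assert np.allclose(T.T @ null, 0.0)

for t in (0.0, 0.5, -2.0):
    # same v from genuinely different coefficient sequences
    assert np.allclose(T.T @ (c + t * null), v)
```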
Suppose V is a subspace of a Hilbert space H and let {e_k} and {ẽ_k} be a frame and dual frame of V, respectively. Because the frame operator is invertible only on V, the dual frame is computed as

ẽ_k = (S|_V)⁻¹ e_k,

where S|_V denotes the restriction of S to V, such that S|_V is invertible on V. The best linear approximation of f ∈ H in V is then given by the orthogonal projection of f onto V, defined as

P f = Σ_k ⟨f, ẽ_k⟩ e_k.
The dual frame synthesis operator is defined as

T̃* (c_k)_k = Σ_k c_k ẽ_k,

and the orthogonal projection is computed from the frame coefficients (⟨f, e_k⟩)_k as

P f = T̃* T f = Σ_k ⟨f, e_k⟩ ẽ_k.

In dual analysis, the orthogonal projection is computed from f as

P f = T* T̃ f = Σ_k ⟨f, ẽ_k⟩ e_k,

with dual frame analysis operator T̃ f = (⟨f, ẽ_k⟩)_k. [12]
In signal processing, it is common to represent signals as vectors in a Hilbert space. In this interpretation, a vector expressed as a linear combination of the frame vectors is a redundant signal. Representing a signal strictly with a set of linearly independent vectors may not always be the most compact form. [13] Using a frame, it is possible to create a simpler, sparser representation of a signal compared with a family of elementary signals. Frames, therefore, provide "robustness". Because they provide a way of producing the same vector within a space, signals can be encoded in various ways. This facilitates fault tolerance and resilience to loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals.
From harmonic analysis it is known that the complex trigonometric system {e^(ikx)/√(2π)}_(k∈ℤ) forms an orthonormal basis for L²[−π, π]. As such, it is a (tight) frame for L²[−π, π] with bounds A = B = 1. [14]
The system remains stable under "sufficiently small" perturbations {λ_k} of the integers {k}, and the perturbed system {e^(iλ_k x)/√(2π)} will form a Riesz basis for L²[−π, π]. Accordingly, every function f in L²[−π, π] will have a unique non-harmonic Fourier series representation

f(x) = Σ_k c_k e^(iλ_k x)

with Σ_k |c_k|² < ∞. The system {e^(iλ_k x)} is called the Fourier frame (or frame of exponentials). What constitutes "sufficiently small" is described by the following theorem, named after Mikhail Kadets. [15]
Kadec's 1⁄4-theorem — Let {λ_k}_(k∈ℤ) be a sequence of real numbers such that

|λ_k − k| ≤ L < 1/4 for all k ∈ ℤ.

Then {e^(iλ_k x)} satisfies the Paley–Wiener criterion and thus forms a Riesz basis for L²[−π, π].
The theorem can easily be extended to frames, replacing the integers k by another sequence of real numbers {μ_k} such that [16] [17]

|λ_k − μ_k| ≤ L < 1/4 for all k.

If {e^(iμ_k x)} is a frame for L²[−π, π] with bounds A and B, then {e^(iλ_k x)} is also a frame for L²[−π, π], with bounds

A (cos(πL) − sin(πL))² and B (2 − cos(πL) + sin(πL))².
Redundancy of a frame is useful in mitigating noise added to the frame coefficients. Let ṽ denote a vector computed from noisy frame coefficients. The noise is then mitigated by projecting the noisy coefficients onto the image of the analysis operator T.
Theorem — Let {e_k} be a frame of a Hilbert space H or a subspace thereof. The orthogonal projection of ℓ² onto the range of the analysis operator T is given by

(P c)_j = Σ_k c_k ⟨ẽ_k, e_j⟩.

The coefficients (c_k) are frame coefficients, i.e. c = T v for some v, if and only if

c_j = Σ_k c_k ⟨ẽ_k, e_j⟩.
The range of the analysis operator, Ran T ⊂ ℓ² (and likewise Ran T̃), is a reproducing kernel Hilbert space with a kernel given by the matrix K_jk = ⟨ẽ_k, e_j⟩. [9] As such, the above equation is also referred to as the reproducing kernel equation and expresses the redundancy of frame coefficients. [18]
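The projection step is easy to demonstrate numerically. In the sketch below (a random, illustrative frame; the noise level is arbitrary), projecting noisy coefficients onto the range of T discards the noise component orthogonal to that range, so the coefficient error can only shrink:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noise mitigation via redundancy: project noisy frame coefficients onto
# Ran(T) with the orthogonal projector P = T S^{-1} T*.
d, k = 4, 12                            # 3x redundant frame of R^4
T = rng.standard_normal((k, d))         # random frame (full rank a.s.)
P = T @ np.linalg.inv(T.T @ T) @ T.T    # orthogonal projector onto Ran(T)

v = rng.standard_normal(d)
c_clean = T @ v
c_noisy = c_clean + 0.1 * rng.standard_normal(k)
c_denoised = P @ c_noisy                # only the in-range noise survives

err_before = np.linalg.norm(c_noisy - c_clean)
err_after = np.linalg.norm(c_denoised - c_clean)
print(err_before, err_after)
# Since P fixes c_clean, the error after projection is the norm of a
# projected vector and can never exceed the error before projection.
assert err_after <= err_before
```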
A frame is a tight frame if A = B. A tight frame with frame bound A has the property that

v = (1/A) Σ_k ⟨v, e_k⟩ e_k

for all v ∈ V. For example, the union of M disjoint orthonormal bases of a vector space is an overcomplete tight frame with A = B = M. A tight frame is a Parseval frame if A = B = 1. [19] Each orthonormal basis is a (complete) Parseval frame, but the converse is not necessarily true. [20]
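A standard concrete example of a tight frame is the "Mercedes-Benz" frame: three unit vectors in ℝ² at 120° angles. The sketch below checks tightness (S = A·I with A = K/N = 3/2) and the inversion-free reconstruction formula:

```python
import numpy as np

# The Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles.
th = 2 * np.pi * np.arange(3) / 3
T = np.stack([np.cos(th), np.sin(th)], axis=1)   # rows are the frame vectors

S = T.T @ T
assert np.allclose(S, 1.5 * np.eye(2))   # S = A*I  <=>  tight, with A = 3/2

# Tight frames reconstruct without a matrix inverse:
# v = (1/A) * sum_k <v, e_k> e_k.
v = np.array([0.4, -2.0])
v_rec = (T.T @ (T @ v)) / 1.5
assert np.allclose(v_rec, v)
```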
A frame is an equal norm frame if there is a constant c such that ‖e_k‖ = c for each k. An equal norm frame is a normalized frame (sometimes called a unit-norm frame) if c = 1. [21] A unit-norm Parseval frame is an orthonormal basis; such a frame satisfies Parseval's identity.
A frame is an equiangular frame if there is a constant c such that |⟨e_i, e_j⟩| = c for all i ≠ j. In particular, every orthonormal basis is equiangular (with c = 0). [22]
A frame is an exact frame if no proper subset of the frame spans the inner product space. Each basis for an inner product space is an exact frame for the space (so a basis is a special case of a frame).
Sometimes it may not be possible to satisfy both frame bounds simultaneously. An upper (respectively lower) semi-frame is a set that satisfies only the upper (respectively lower) frame inequality. [9] A Bessel sequence is an example of a set of vectors that satisfies only the upper frame inequality.
For any vector v ∈ V to be reconstructed from its coefficients (⟨v, e_k⟩)_k, it suffices that there exists a constant A > 0 such that

A ‖x − y‖² ≤ ‖T x − T y‖² for all x, y ∈ V.

By setting v = x − y and applying the linearity of the analysis operator, this condition is equivalent to

A ‖v‖² ≤ ‖T v‖² for all v ∈ V,

which is exactly the lower frame bound condition.
A fusion frame is best understood as an extension of the dual frame synthesis and analysis operators where, instead of a single subspace V ⊂ H, a set of closed subspaces {W_i}_(i∈I) with positive scalar weights {w_i} is considered. A fusion frame is a family {(W_i, w_i)}_(i∈I) that satisfies the frame condition

A ‖f‖² ≤ Σ_i w_i² ‖P_(W_i) f‖² ≤ B ‖f‖² for all f ∈ H,

where P_(W_i) denotes the orthogonal projection onto the subspace W_i. [23]
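A toy illustration of the fusion frame condition, assuming the three coordinate planes of ℝ³ as subspaces with unit weights: the projection norms then sum to exactly 2‖x‖², giving a tight fusion frame with A = B = 2.

```python
import numpy as np

# Fusion frame check: the three coordinate planes of R^3, weights w_i = 1.
# sum_i ||P_i x||^2 = (x^2+y^2) + (y^2+z^2) + (z^2+x^2) = 2 ||x||^2,
# so the fusion frame condition holds with A = B = 2 (a tight fusion frame).
P_xy = np.diag([1.0, 1.0, 0.0])
P_yz = np.diag([0.0, 1.0, 1.0])
P_zx = np.diag([1.0, 0.0, 1.0])

x = np.array([1.0, -2.0, 0.5])
total = sum(np.linalg.norm(P @ x) ** 2 for P in (P_xy, P_yz, P_zx))
assert np.isclose(total, 2 * np.linalg.norm(x) ** 2)
```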
Suppose H is a Hilbert space, X a locally compact space, and μ a locally finite Borel measure on X. Then a set of vectors {f_x}_(x∈X) in H, together with the measure μ, is said to be a continuous frame if there exist constants 0 < A ≤ B < ∞ such that

A ‖f‖² ≤ ∫_X |⟨f, f_x⟩|² dμ(x) ≤ B ‖f‖² for all f ∈ H.

To see that continuous frames are indeed the natural generalization of the frames mentioned above, consider a discrete set Λ ⊂ X and the measure μ = Σ_(λ∈Λ) δ_λ, where δ_λ is the Dirac measure. Then the continuous frame condition reduces to

A ‖f‖² ≤ Σ_(λ∈Λ) |⟨f, f_λ⟩|² ≤ B ‖f‖².
Just like in the discrete case, we can define the analysis, synthesis, and frame operators when dealing with continuous frames.
Given a continuous frame {f_x}_(x∈X), the continuous analysis operator is the operator mapping f to a function on X, defined as

T : H → L²(X, μ), by (T f)(x) = ⟨f, f_x⟩.
The adjoint operator of the continuous analysis operator is the continuous synthesis operator, which is the map

T* : L²(X, μ) → H, T* F = ∫_X F(x) f_x dμ(x).
The composition of the continuous analysis operator and the continuous synthesis operator is known as the continuous frame operator. For a continuous frame {f_x}_(x∈X), it is defined as follows:

S : H → H, S f = T* T f = ∫_X ⟨f, f_x⟩ f_x dμ(x).
In this case, the continuous frame projector P : L²(X, μ) → Ran T is the orthogonal projection defined by

(P F)(x) = ∫_X F(y) ⟨S⁻¹ f_y, f_x⟩ dμ(y).
The projector P is an integral operator with reproducing kernel K(x, y) = ⟨S⁻¹ f_y, f_x⟩; thus Ran T is a reproducing kernel Hilbert space. [9]
Given a continuous frame {f_x}_(x∈X) and another continuous frame {g_x}_(x∈X), then {g_x} is said to be a continuous dual frame of {f_x} if it satisfies the following condition for all f, h ∈ H:

⟨f, h⟩ = ∫_X ⟨f, g_x⟩ ⟨f_x, h⟩ dμ(x).
Just as a frame is a natural generalization of a basis to sets that may be linearly dependent, a positive operator-valued measure (POVM) is a natural generalization of a projection-valued measure (PVM) in that elements of a POVM are not necessarily orthogonal projections.
Suppose (X, 𝓜) is a measurable space, with 𝓜 a Borel σ-algebra on X, and let F be a POVM from 𝓜 to the space of positive operators on H, with the additional property that

A·I ≤ F(X) ≤ B·I,

where I is the identity operator. Then F is called a framed POVM. [23]
In the case of the fusion frame condition, this allows for the substitution

F(i) = w_i² P_(W_i).
For the continuous frame operator, the framed POVM would be [24]

F(M) = ∫_M ⟨ · , f_x⟩ f_x dμ(x), M ∈ 𝓜.