Parseval's identity

In mathematical analysis, Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the summability of the Fourier series of a function. The identity asserts the equality of the energy of a periodic signal (given as the integral of the squared amplitude of the signal) and the energy of its frequency domain representation (given as the sum of squares of the amplitudes). Geometrically, it is a generalized Pythagorean theorem for inner-product spaces (which can have an uncountable infinity of basis vectors).

The identity asserts that the sum of the squares of the Fourier coefficients of a function is equal to the integral of the square of the function:

$$\|f\|_{L^2[-\pi,\pi]}^2 = \int_{-\pi}^{\pi} |f(x)|^2 \, dx = 2\pi \sum_{n=-\infty}^{\infty} |c_n|^2,$$

where the Fourier coefficients $c_n$ of $f$ are given by

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} \, dx.$$
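As a worked illustration (a standard application, not part of the statement itself), take $f(x) = x$ on $[-\pi,\pi]$. Then

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} x \, e^{-inx} \, dx = \frac{i(-1)^n}{n} \quad (n \neq 0), \qquad c_0 = 0,$$

so Parseval's identity gives

$$2 \sum_{n=1}^{\infty} \frac{1}{n^2} = \sum_{n \neq 0} |c_n|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} x^2 \, dx = \frac{\pi^2}{3}, \qquad \text{hence} \quad \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$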

The result holds as stated provided $f$ is a square-integrable function or, more generally, lies in the $L^p$ space $L^2[-\pi,\pi]$. A similar result is the Plancherel theorem, which asserts that the integral of the square of the Fourier transform of a function is equal to the integral of the square of the function itself. In one dimension, for $f \in L^2(\mathbb{R})$ with the Fourier transform normalized as $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i x \xi} \, dx$,

$$\int_{-\infty}^{\infty} |\hat{f}(\xi)|^2 \, d\xi = \int_{-\infty}^{\infty} |f(x)|^2 \, dx.$$
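For intuition, the corresponding energy equality for the discrete Fourier transform can be checked numerically. The following is a minimal sketch using NumPy; the division by N reflects NumPy's unnormalized DFT convention and is an assumption of this illustration, not part of the theorem.

```python
import numpy as np

# Discrete analogue of Parseval/Plancherel: with NumPy's unnormalized DFT,
# sum |X_k|^2 = N * sum |x_n|^2, so the signal energy and the spectral
# energy agree once the spectral energy is divided by N.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)

X = np.fft.fft(x)                          # unnormalized DFT
energy_signal = np.sum(np.abs(x) ** 2)     # energy of the signal
energy_spectrum = np.sum(np.abs(X) ** 2) / len(x)

print(energy_signal, energy_spectrum)      # agree up to rounding error
```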

Generalization of the Pythagorean theorem

The identity is related to the Pythagorean theorem in the more general setting of a separable Hilbert space as follows. Suppose that $H$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle$. Let $(e_n)$ be an orthonormal basis of $H$; i.e., the linear span of the $e_n$ is dense in $H$, and the $e_n$ are mutually orthonormal:

$$\langle e_m, e_n \rangle = \begin{cases} 1 & \text{if } m = n, \\ 0 & \text{if } m \neq n. \end{cases}$$

Then Parseval's identity asserts that for every $x \in H$,

$$\sum_n |\langle x, e_n \rangle|^2 = \|x\|^2.$$

This is directly analogous to the Pythagorean theorem, which asserts that the sum of the squares of the components of a vector in an orthonormal basis is equal to the squared length of the vector. One can recover the Fourier series version of Parseval's identity by letting $H$ be the Hilbert space $L^2[-\pi,\pi]$ and setting $e_n = \frac{e^{-inx}}{\sqrt{2\pi}}$ for $n \in \mathbb{Z}$.
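Written out explicitly (a short check, included here for concreteness, using the inner product $\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx$ on $L^2[-\pi,\pi]$), one has

$$\langle f, e_n \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} f(x) e^{inx} \, dx = \sqrt{2\pi}\, c_{-n},$$

so that

$$\sum_{n \in \mathbb{Z}} |\langle f, e_n \rangle|^2 = 2\pi \sum_{n \in \mathbb{Z}} |c_n|^2 = \int_{-\pi}^{\pi} |f(x)|^2 \, dx = \|f\|^2,$$

which is exactly the Fourier series form of the identity stated above.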

More generally, Parseval's identity holds in an arbitrary Hilbert space, not necessarily a separable one. When the Hilbert space is not separable, any orthonormal basis is uncountable, and the concept of a series must be generalized to that of an unconditional sum, as follows. Let $B$ be an orthonormal basis of a Hilbert space $H$ (of arbitrary cardinality). We say that a sum $\sum_{v \in B} x_v$ converges unconditionally to $x$ if for every $\varepsilon > 0$ there exists a finite subset $B_0 \subseteq B$ such that $\left\| x - \sum_{v \in B_1} x_v \right\| < \varepsilon$ for every finite subset $B_1$ that contains $B_0$ (that is, such that $B_0 \subseteq B_1$). Note that in this case the unconditional sum is defined via a net indexed by the finite subsets of $B$, directed by inclusion.
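With this notion of convergence, the identity in the non-separable case reads (stated here for completeness; by Bessel's inequality at most countably many terms are nonzero for any fixed $x$)

$$\|x\|^2 = \sum_{v \in B} |\langle x, v \rangle|^2 \qquad \text{for every } x \in H,$$

where the sum on the right is an unconditional sum in the sense just described.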

Related Research Articles

<span class="mw-page-title-main">Dirac delta function</span> Generalized function whose value is zero everywhere except at zero

In mathematical analysis, the Dirac delta function, also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as

$$\delta(x) = \begin{cases} 0, & x \neq 0 \\ \infty, & x = 0 \end{cases}$$

such that

$$\int_{-\infty}^{\infty} \delta(x) \, dx = 1.$$

<span class="mw-page-title-main">Fourier transform</span> Mathematical transform that expresses a function of time as a function of frequency

In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.

<span class="mw-page-title-main">Fourier series</span> Decomposition of periodic functions into sums of simpler sinusoidal forms

A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Fourier series § Definition.

<span class="mw-page-title-main">Residue theorem</span> Concept of complex analysis

In complex analysis, the residue theorem, sometimes called Cauchy's residue theorem, is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals and infinite series as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. The residue theorem should not be confused with special cases of the generalized Stokes' theorem; however, the latter can be used as an ingredient of its proof.

<span class="mw-page-title-main">Spherical harmonics</span> Special mathematical functions defined on the surface of a sphere

In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. The table of spherical harmonics contains a list of common spherical harmonics.

A generalized Fourier series is the expansion of a square integrable function into a sum of square integrable orthogonal basis functions. The standard Fourier series uses an orthonormal basis of trigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to any square integrable function.

In mathematics, the Peter–Weyl theorem is a basic result in the theory of harmonic analysis, applying to topological groups that are compact, but are not necessarily abelian. It was initially proved by Hermann Weyl, with his student Fritz Peter, in the setting of a compact topological group G. The theorem is a collection of results generalizing the significant facts about the decomposition of the regular representation of any finite group, as discovered by Ferdinand Georg Frobenius and Issai Schur.

In mathematics, more specifically in harmonic analysis, Walsh functions form a complete orthogonal set of functions that can be used to represent any discrete function—just like trigonometric functions can be used to represent any continuous function in Fourier analysis. They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. But unlike the sine and cosine functions, which are continuous, Walsh functions are piecewise constant. They take the values −1 and +1 only, on sub-intervals defined by dyadic fractions.

In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions.

In mathematics, the Plancherel theorem is a result in harmonic analysis, proven by Michel Plancherel in 1910. It is a generalization of Parseval's theorem, is often used in science and engineering, and establishes the unitarity of the Fourier transform.

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

<span class="mw-page-title-main">Sinc function</span> Special mathematical function defined as sin(x)/x

In mathematics, physics and engineering, the sinc function (pronounced "sink"), denoted by sinc(x), has two forms, normalized and unnormalized.

<span class="mw-page-title-main">Reproducing kernel Hilbert space</span> In functional analysis, a Hilbert space

In functional analysis, a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Specifically, a Hilbert space $H$ of functions from a set $X$ is an RKHS if, for each $x \in X$, there exists a function $K_x \in H$ such that for all $f \in H$,

$$\langle f, K_x \rangle = f(x).$$

In mathematics, Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after John William Strutt, Lord Rayleigh.

In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.

In mathematics, Maass forms or Maass wave forms are studied in the theory of automorphic forms. Maass forms are complex-valued smooth functions on the upper half plane, which transform under the action of a discrete subgroup $\Gamma$ of $\mathrm{SL}_2(\mathbb{R})$ in a similar way as modular forms do. They are eigenforms of the hyperbolic Laplace operator $\Delta$ defined on the upper half plane and satisfy certain growth conditions at the cusps of a fundamental domain of $\Gamma$. In contrast to modular forms, Maass forms need not be holomorphic. They were first studied by Hans Maass in 1949.

<span class="mw-page-title-main">Hilbert space</span> Type of topological vector space


In mathematics, a Hilbert space generalizes the notion of Euclidean space. It extends the methods of linear algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is a vector space equipped with an inner product operation, which allows lengths and angles to be defined. Furthermore, Hilbert spaces are complete, which means that there are enough limits in the space to allow the techniques of calculus to be used. A Hilbert space is a special case of a Banach space.

In mathematics, the Weil–Brezin map, named after André Weil and Jonathan Brezin, is a unitary transformation that maps a Schwartz function on the real line to a smooth function on the Heisenberg manifold. The Weil–Brezin map gives a geometric interpretation of the Fourier transform, the Plancherel theorem and the Poisson summation formula. The images of Gaussian functions under the Weil–Brezin map are nil-theta functions, which are related to theta functions. The Weil–Brezin map is sometimes referred to as the Zak transform, which is widely applied in physics and signal processing; however, the Weil–Brezin map is defined geometrically via the Heisenberg group, whereas the Zak transform has no direct geometric or group-theoretic interpretation.

This is a glossary of terminology in the mathematical field of functional analysis.

In mathematics, Cauchy wavelets are a family of continuous wavelets, used in the continuous wavelet transform.
