In mathematics, the Fourier inversion theorem says that for many types of functions it is possible to recover a function from its Fourier transform. Intuitively it may be viewed as the statement that if we know all frequency and phase information about a wave then we may reconstruct the original wave precisely.
The theorem says that if we have a function $f : \mathbb{R}^n \to \mathbb{C}$ satisfying certain conditions, and we use the convention for the Fourier transform that
$$(\mathcal{F}f)(\xi) := \int_{\mathbb{R}^n} e^{-2\pi i y\cdot\xi}\, f(y)\,dy,$$
then
$$f(x) = \int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}\,(\mathcal{F}f)(\xi)\,d\xi.$$
In other words, the theorem says that
$$f(x) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} e^{2\pi i (x-y)\cdot\xi}\, f(y)\,dy\,d\xi.$$
This last equation is called the Fourier integral theorem.
Another way to state the theorem is that if $F$ is the flip operator, i.e. $(Ff)(x) := f(-x)$, then
$$\mathcal{F}^{-1} = \mathcal{F}F = F\mathcal{F}.$$
The theorem holds if both $f$ and its Fourier transform $\mathcal{F}f$ are absolutely integrable (in the Lebesgue sense) and $f$ is continuous at the point $x$. However, versions of the Fourier inversion theorem hold even under more general conditions. In these cases the integrals above may not converge in an ordinary sense.
In this section we assume that $f$ is an integrable continuous function. Use the convention for the Fourier transform that
$$(\mathcal{F}f)(\xi) := \int_{\mathbb{R}^n} e^{-2\pi i y\cdot\xi}\, f(y)\,dy.$$
Furthermore, we assume that the Fourier transform $\mathcal{F}f$ is also integrable.
The most common statement of the Fourier inversion theorem expresses the inverse transform as an integral. For any integrable function $g$ and all $x \in \mathbb{R}^n$, set
$$(\mathcal{F}^{-1}g)(x) := \int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}\, g(\xi)\,d\xi.$$
Then for all $x \in \mathbb{R}^n$ we have
$$\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x).$$
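As a concrete illustration (not part of the article's statement), the inversion can be checked numerically for a Gaussian by approximating both integrals with Riemann sums; the grid sizes below are arbitrary choices.

```python
import numpy as np

# Numerical sanity check of the inversion formula, using the convention
# (Ff)(xi) = \int e^{-2 pi i y xi} f(y) dy and its inverse.
f = lambda y: np.exp(-np.pi * y**2)  # Gaussian, equal to its own transform

y = np.linspace(-8, 8, 1001)         # quadrature grid (arbitrary choice)
dy = y[1] - y[0]
xi = y.copy()
dxi = dy

# Forward transform by Riemann sum: (Ff)(xi) for every grid point xi
Ff = (np.exp(-2j * np.pi * np.outer(xi, y)) * f(y)).sum(axis=1) * dy

# Inverse transform at a few points x; the result should recover f(x)
for x in [0.0, 0.5, 1.0]:
    fx = ((np.exp(2j * np.pi * x * xi) * Ff).sum() * dxi).real
    print(x, fx, f(x))               # fx matches f(x) to quadrature accuracy
```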
Proof
Given $f$ and $\mathcal{F}f$ both integrable, the proof uses the following facts:

1. If $x \in \mathbb{R}^n$ and $g(\xi) = e^{2\pi i x\cdot\xi}\psi(\xi)$, then $(\mathcal{F}g)(y) = (\mathcal{F}\psi)(y - x)$.
2. If $\varepsilon > 0$ and $\psi(\xi) = \varphi(\varepsilon\xi)$, then $(\mathcal{F}\psi)(y) = \varepsilon^{-n}(\mathcal{F}\varphi)(y/\varepsilon)$.
3. For $g, h \in L^1(\mathbb{R}^n)$, Fubini's theorem gives $\int (\mathcal{F}g)(y)\, h(y)\,dy = \int g(\xi)\,(\mathcal{F}h)(\xi)\,d\xi$.
4. If $\varphi(\xi) = e^{-\pi|\xi|^2}$, then $\mathcal{F}\varphi = \varphi$.
5. If $\varphi_\varepsilon(y) := \varepsilon^{-n}\varphi(y/\varepsilon)$, then the family $\varphi_\varepsilon$ is an approximate identity: for any continuous $f \in L^1(\mathbb{R}^n)$, $(f * \varphi_\varepsilon)(x) \to f(x)$ as $\varepsilon \to 0$, where $*$ denotes convolution.

Since, by assumption, $\mathcal{F}f \in L^1(\mathbb{R}^n)$, it follows by the dominated convergence theorem that
$$\int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)\,d\xi = \lim_{\varepsilon\to 0}\int_{\mathbb{R}^n} e^{-\pi\varepsilon^2|\xi|^2 + 2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)\,d\xi.$$
Define
$$g_x(\xi) := e^{-\pi\varepsilon^2|\xi|^2 + 2\pi i x\cdot\xi}.$$
Applying facts 1, 2 and 4, repeatedly for multiple integrals if necessary, we obtain
$$(\mathcal{F}g_x)(y) = \frac{1}{\varepsilon^n} e^{-\frac{\pi}{\varepsilon^2}|x - y|^2}.$$
Using fact 3 on $f$ and $g_x$, for each $x \in \mathbb{R}^n$, we have
$$\int_{\mathbb{R}^n} e^{-\pi\varepsilon^2|\xi|^2 + 2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)\,d\xi = \int_{\mathbb{R}^n} \frac{1}{\varepsilon^n} e^{-\frac{\pi}{\varepsilon^2}|x - y|^2} f(y)\,dy = (f * \varphi_\varepsilon)(x),$$
the convolution of $f$ with an approximate identity. But since $f \in L^1(\mathbb{R}^n)$, fact 5 says that
$$\lim_{\varepsilon\to 0}(f * \varphi_\varepsilon)(x) = f(x).$$
Putting together the above we have shown that
$$\int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)\,d\xi = f(x).$$
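Fact 5, the approximate-identity property, is the analytic heart of the proof. A minimal numerical sketch of that single step (hypothetical code, with an arbitrary choice of continuous integrable $f$):

```python
import numpy as np

# phi_eps(y) = (1/eps) exp(-pi (y/eps)^2) is an approximate identity on R:
# (f * phi_eps)(x) -> f(x) as eps -> 0 for continuous integrable f.
f = lambda y: 1.0 / (1.0 + y**4)     # continuous and in L^1 (arbitrary choice)

y = np.linspace(-20, 20, 80001)
dy = y[1] - y[0]
x = 0.7
for eps in [1.0, 0.1, 0.01]:
    phi = np.exp(-np.pi * ((x - y) / eps) ** 2) / eps
    conv = (f(y) * phi).sum() * dy   # Riemann sum for (f * phi_eps)(x)
    print(eps, conv, f(x))           # conv approaches f(0.7)
```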
The theorem can be restated as
$$f(x) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} e^{2\pi i (x-y)\cdot\xi}\, f(y)\,dy\,d\xi.$$
By taking the real part[1] of each side of the above we obtain
$$f(x) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \cos(2\pi (x-y)\cdot\xi)\, f(y)\,dy\,d\xi.$$
For any function $g$ define the flip operator[2] $F$ by
$$Fg(x) := g(-x).$$
Then we may instead define
$$\mathcal{F}^{-1}f := F\mathcal{F}f = \mathcal{F}Ff.$$
It is immediate from the definition of the Fourier transform and the flip operator that both $F\mathcal{F}f$ and $\mathcal{F}Ff$ match the integral definition of $\mathcal{F}^{-1}f$, and in particular are equal to each other and satisfy $\mathcal{F}^{-1}f = F\mathcal{F}f = \mathcal{F}Ff$.
Since $F^2 = \operatorname{Id}$ we have $F^{-1} = F$ and
$$\mathcal{F} = F\mathcal{F}^{-1} = \mathcal{F}^{-1}F.$$
The form of the Fourier inversion theorem stated above, as is common, is that
$$\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x).$$
In other words, $\mathcal{F}^{-1}$ is a left inverse for the Fourier transform. However, it is also a right inverse for the Fourier transform, i.e.
$$\mathcal{F}(\mathcal{F}^{-1}g)(\xi) = g(\xi).$$
Since $\mathcal{F}^{-1}$ is so similar to $\mathcal{F}$, this follows very easily from the Fourier inversion theorem (changing variables $x \mapsto -x$):
$$\mathcal{F}(\mathcal{F}^{-1}g)(\xi) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} e^{-2\pi i x\cdot\xi}\, e^{2\pi i x\cdot y}\, g(y)\,dy\,dx = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}\, e^{-2\pi i x\cdot y}\, g(y)\,dy\,dx = \mathcal{F}^{-1}(\mathcal{F}g)(\xi) = g(\xi).$$
Alternatively, this can be seen from the relation between $\mathcal{F}^{-1}$ and the flip operator and the associativity of function composition, since
$$f = \mathcal{F}^{-1}(\mathcal{F}f) = \mathcal{F}(F\mathcal{F}f) = \mathcal{F}(\mathcal{F}^{-1}f).$$
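These operator identities can also be observed numerically. The sketch below (illustrative only; grid and test function are arbitrary choices) discretizes the transform and checks that applying it twice flips the argument, $\mathcal{F}\mathcal{F}f \approx Ff$, which is equivalent to $\mathcal{F}^{-1} = F\mathcal{F} = \mathcal{F}F$:

```python
import numpy as np

# Check numerically that transforming twice flips the argument:
# (FF f)(x) ≈ f(-x), consistent with F^{-1} = flip∘F = F∘flip.
f = lambda y: np.exp(-np.pi * (y - 0.3) ** 2)  # off-centre, so f is not even

t = np.linspace(-8, 8, 1601)
dt = t[1] - t[0]
kernel = np.exp(-2j * np.pi * np.outer(t, t))  # e^{-2 pi i x y} on the grid

Ff = (kernel * f(t)).sum(axis=1) * dt          # first transform
FFf = (kernel * Ff).sum(axis=1) * dt           # second transform

print(np.max(np.abs(FFf - f(-t))))             # small residual: FFf(x) ≈ f(-x)
```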
When used in physics and engineering, the Fourier inversion theorem is often used under the assumption that everything "behaves nicely". In mathematics such heuristic arguments are not permitted, and the Fourier inversion theorem includes an explicit specification of what class of functions is being allowed. However, there is no "best" class of functions to consider so several variants of the Fourier inversion theorem exist, albeit with compatible conclusions.
The Fourier inversion theorem holds for all Schwartz functions (roughly speaking, smooth functions that decay quickly and whose derivatives all decay quickly). This condition has the benefit that it is an elementary direct statement about the function (as opposed to imposing a condition on its Fourier transform), and the integrals that define the Fourier transform and its inverse are absolutely convergent. This version of the theorem is used in the proof of the Fourier inversion theorem for tempered distributions (see below).
The Fourier inversion theorem holds for all continuous functions that are absolutely integrable (i.e. $f \in L^1(\mathbb{R}^n)$) with absolutely integrable Fourier transform. This includes all Schwartz functions, so is a strictly stronger form of the theorem than the previous one mentioned. This condition is the one used above in the statement section.
A slight variant is to drop the condition that the function $f$ be continuous but still require that it and its Fourier transform be absolutely integrable. Then $f = g$ almost everywhere, where $g$ is a continuous function, and $\mathcal{F}^{-1}(\mathcal{F}f)(x) = g(x)$ for every $x \in \mathbb{R}^n$.
If the function $f$ is absolutely integrable in one dimension (i.e. $f \in L^1(\mathbb{R})$) and is piecewise smooth then a version of the Fourier inversion theorem holds. In this case we define
$$\mathcal{F}^{-1}g(x) := \lim_{R\to\infty} \int_{-R}^{R} e^{2\pi i x\xi}\, g(\xi)\,d\xi.$$
Then for all $x \in \mathbb{R}$,
$$\mathcal{F}^{-1}(\mathcal{F}f)(x) = \frac{f(x^-) + f(x^+)}{2},$$
i.e. $\mathcal{F}^{-1}(\mathcal{F}f)(x)$ equals the average of the left and right limits of $f$ at $x$. At points where $f$ is continuous this simply equals $f(x)$.
A higher-dimensional analogue of this form of the theorem also holds, but according to Folland (1992) is "rather delicate and not terribly useful".
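Returning to one dimension, the jump behaviour can be checked numerically (an illustrative sketch, not from the article): take $f$ to be the indicator function of $[-1/2, 1/2]$, whose transform is the sinc function; the truncated inverse integral converges to $1/2$ at the jump and to $f(x)$ at points of continuity.

```python
import numpy as np

# f = indicator of [-1/2, 1/2]; its transform is sin(pi xi)/(pi xi).
# The truncated inverse integral converges to the average of the
# one-sided limits at the jump x = 1/2, i.e. to (1 + 0)/2.
def inv_truncated(x, R, n=200001):
    xi = np.linspace(-R, R, n)
    dxi = xi[1] - xi[0]
    Ff = np.sinc(xi)                 # numpy's sinc(x) = sin(pi x)/(pi x)
    return ((np.exp(2j * np.pi * x * xi) * Ff).sum() * dxi).real

for R in [10, 100, 1000]:
    print(R, inv_truncated(0.0, R), inv_truncated(0.5, R))  # -> 1.0, 0.5
```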
If the function $f$ is absolutely integrable in one dimension (i.e. $f \in L^1(\mathbb{R})$) but merely piecewise continuous then a version of the Fourier inversion theorem still holds. In this case the integral in the inverse Fourier transform is defined with the aid of a smooth rather than a sharp cut off function; specifically we define
$$\mathcal{F}^{-1}g(x) := \lim_{R\to\infty} \int_{\mathbb{R}} \varphi(\xi/R)\, e^{2\pi i x\xi}\, g(\xi)\,d\xi, \qquad \varphi(\xi) := e^{-\xi^2}.$$
The conclusion of the theorem is then the same as for the piecewise smooth case discussed above.
If $f$ is continuous and absolutely integrable on $\mathbb{R}^n$ then the Fourier inversion theorem still holds so long as we again define the inverse transform with a smooth cut off function, i.e.
$$\mathcal{F}^{-1}g(x) := \lim_{R\to\infty} \int_{\mathbb{R}^n} \varphi(\xi/R)\, e^{2\pi i x\cdot\xi}\, g(\xi)\,d\xi, \qquad \varphi(\xi) := e^{-|\xi|^2}.$$
The conclusion is now simply that for all $x \in \mathbb{R}^n$,
$$\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x).$$
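To see what the smooth cut off buys, the following sketch (hypothetical; the Gaussian cut off $e^{-(\xi/R)^2}$ is one admissible choice) inverts the transform of the indicator of $[-1/2, 1/2]$ near its jump with a sharp and with a smooth cut off; the sharp cut off ripples around the values 0 and 1, while the smooth one settles:

```python
import numpy as np

# Sharp vs smooth cut off when inverting Ff(xi) = sinc(xi), the transform
# of the indicator of [-1/2, 1/2], at points near the jump at x = 1/2.
xi = np.linspace(-200, 200, 200001)
dxi = xi[1] - xi[0]
Ff = np.sinc(xi)
R = 50.0

for x in np.linspace(0.3, 0.7, 9):
    phase = np.exp(2j * np.pi * x * xi)
    sharp = ((phase * (np.abs(xi) <= R) * Ff).sum() * dxi).real
    smooth = ((phase * np.exp(-(xi / R) ** 2) * Ff).sum() * dxi).real
    print(f"x={x:.2f}  sharp={sharp:+.4f}  smooth={smooth:+.4f}")
```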
If we drop all assumptions about the (piecewise) continuity of $f$ and assume merely that it is absolutely integrable, then a version of the theorem still holds. The inverse transform is again defined with the smooth cut off, but now with the conclusion that
$$\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x)$$
for almost every $x \in \mathbb{R}^n$.
If $f$ is square integrable (i.e. $f \in L^2(\mathbb{R}^n)$), the Fourier transform cannot be defined directly as an integral since it may not be absolutely convergent, so it is instead defined by a density argument (see the Fourier transform article). For example, putting
$$g_R(\xi) := \int_{\{y \,:\, |y| \le R\}} e^{-2\pi i y\cdot\xi}\, f(y)\,dy, \qquad R > 0,$$
we can set $\mathcal{F}f := \lim_{R\to\infty} g_R$, where the limit is taken in the $L^2$-norm. The inverse transform may be defined by density in the same way, or by defining it in terms of the Fourier transform and the flip operator. We then have
$$f(x) = \lim_{R\to\infty} \int_{\{\xi \,:\, |\xi| \le R\}} e^{2\pi i x\cdot\xi}\, (\mathcal{F}f)(\xi)\,d\xi$$
in the mean squared norm. In one dimension (and one dimension only), it can also be shown that it converges for almost every $x \in \mathbb{R}$; this is Carleson's theorem, but it is much harder to prove than convergence in the mean squared norm.
The Fourier transform may be defined on the space of tempered distributions $\mathcal{S}'(\mathbb{R}^n)$ by duality of the Fourier transform on the space of Schwartz functions. Specifically, for $f \in \mathcal{S}'(\mathbb{R}^n)$ and for all test functions $\varphi \in \mathcal{S}(\mathbb{R}^n)$ we set
$$\langle \mathcal{F}f, \varphi \rangle := \langle f, \mathcal{F}\varphi \rangle,$$
where $\mathcal{F}\varphi$ is defined using the integral formula.[3] If $f \in L^1(\mathbb{R}^n) + L^2(\mathbb{R}^n)$ then this agrees with the usual definition. We may define the inverse transform $\mathcal{F}^{-1}$, either by duality from the inverse transform on Schwartz functions in the same way, or by defining it in terms of the flip operator (where the flip operator is defined by duality). We then have
$$\mathcal{F}\mathcal{F}^{-1} = \mathcal{F}^{-1}\mathcal{F} = \operatorname{Id}_{\mathcal{S}'(\mathbb{R}^n)}.$$
The Fourier inversion theorem is analogous to the convergence of Fourier series. In the Fourier transform case we have
$$f : \mathbb{R}^n \to \mathbb{C}, \quad \hat f : \mathbb{R}^n \to \mathbb{C}, \quad \hat f(\xi) := \int_{\mathbb{R}^n} e^{-2\pi i y\cdot\xi}\, f(y)\,dy, \quad f(x) = \int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}\, \hat f(\xi)\,d\xi.$$
In the Fourier series case we instead have
$$f : [0,1]^n \to \mathbb{C}, \quad \hat f : \mathbb{Z}^n \to \mathbb{C}, \quad \hat f(k) := \int_{[0,1]^n} e^{-2\pi i y\cdot k}\, f(y)\,dy, \quad f(x) = \sum_{k \in \mathbb{Z}^n} e^{2\pi i x\cdot k}\, \hat f(k).$$
In particular, in one dimension $k \in \mathbb{Z}$ and the sum runs from $-\infty$ to $\infty$.
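The parallel can be made concrete with a small sketch (illustrative only; the smooth periodic test function is an arbitrary choice): compute the coefficients $\hat f(k)$ numerically and reconstruct $f$ by a partial sum.

```python
import numpy as np

# Fourier series analogue of inversion: c_k = \int_0^1 e^{-2 pi i k y} f(y) dy
# and f(x) = sum_k e^{2 pi i k x} c_k, truncated to |k| <= K.
f = lambda y: np.exp(np.cos(2 * np.pi * y))      # smooth and 1-periodic

y = np.linspace(0.0, 1.0, 2048, endpoint=False)  # integration grid
K = 20
k = np.arange(-K, K + 1)
c = (np.exp(-2j * np.pi * np.outer(k, y)) * f(y)).mean(axis=1)  # coefficients

x = 0.3
partial = (c * np.exp(2j * np.pi * k * x)).sum().real
print(partial, f(x))   # partial sum matches f(x) to high accuracy
```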
In applications of the Fourier transform the Fourier inversion theorem often plays a critical role. In many situations the basic strategy is to apply the Fourier transform, perform some operation or simplification, and then apply the inverse Fourier transform.
More abstractly, the Fourier inversion theorem is a statement about the Fourier transform as an operator (see Fourier transform on function spaces). For example, the Fourier inversion theorem on $L^2(\mathbb{R}^n)$ shows that the Fourier transform is a unitary operator on $L^2(\mathbb{R}^n)$.
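As a sketch of the transform-simplify-invert strategy in discrete form (illustrative only; the discrete FFT stands in for the continuum transform), one can solve the periodic heat equation by transforming, damping each mode, and inverting:

```python
import numpy as np

# Transform -> simplify -> inverse transform, with the discrete FFT as a
# stand-in: solve u_t = u_xx with periodic boundary conditions up to time t.
N, L, t = 256, 2 * np.pi, 0.1
x = np.linspace(0.0, L, N, endpoint=False)
u0 = np.exp(np.sin(x))                        # smooth periodic initial data

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
u_hat = np.fft.fft(u0)                        # transform
u_hat *= np.exp(-k**2 * t)                    # each Fourier mode decays
u = np.fft.ifft(u_hat).real                   # invert

print(u0.max(), u.max())                      # the solution has smoothed out
```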
In mathematical analysis, the Dirac delta function, also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
$$\delta(x) = \begin{cases} 0, & x \neq 0, \\ \infty, & x = 0, \end{cases}$$
such that
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions is the product of their Fourier transforms. More generally, convolution in one domain equals point-wise multiplication in the other domain. Other versions of the convolution theorem are applicable to various Fourier-related transforms.
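A discrete analogue of this statement is easy to verify (an illustrative sketch; the DFT version of the theorem applies to circular convolution):

```python
import numpy as np

# Discrete convolution theorem: the DFT of a circular convolution equals
# the pointwise product of the DFTs.
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
direct = np.array([(a * np.roll(b[::-1], n + 1)).sum() for n in range(64)])

print(np.max(np.abs(via_fft - direct)))   # ~1e-13: the two sides agree
```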
A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series, but not all trigonometric series are Fourier series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions.
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.
In mathematics, the Plancherel theorem is a result in harmonic analysis, proven by Michel Plancherel in 1910. It is a generalization of Parseval's theorem, and is often used in science and engineering to establish the unitarity of the Fourier transform.
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
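For a rapidly decaying function the formula can be checked directly; a small sketch with a Gaussian, whose transform under the convention $\int f(y)e^{-2\pi i y\xi}\,dy$ is again a Gaussian:

```python
import numpy as np

# Poisson summation: sum_n f(n) = sum_k (Ff)(k).  For f(x) = exp(-pi (x/s)^2)
# the transform is (Ff)(xi) = s * exp(-pi (s xi)^2).
s = 1.7
n = np.arange(-50, 51)
lhs = np.exp(-np.pi * (n / s) ** 2).sum()          # samples of f
rhs = (s * np.exp(-np.pi * (s * n) ** 2)).sum()    # samples of Ff
print(lhs, rhs)                                    # equal to machine precision
```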
In mathematics, the Radon transform is the integral transform which takes a function f defined on the plane to a function Rf defined on the (two-dimensional) space of lines in the plane, whose value at a particular line is equal to the line integral of the function over that line. The transform was introduced in 1917 by Johann Radon, who also provided a formula for the inverse transform. Radon further included formulas for the transform in three dimensions, in which the integral is taken over planes. It was later generalized to higher-dimensional Euclidean spaces and more broadly in the context of integral geometry. The complex analogue of the Radon transform is known as the Penrose transform. The Radon transform is widely applicable to tomography, the creation of an image from the projection data associated with cross-sectional scans of an object.
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function $1/(\pi t)$ (see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
In mathematics, physics and engineering, the sinc function, denoted by sinc(x), has two forms: the normalized form $\sin(\pi x)/(\pi x)$ and the unnormalized form $\sin(x)/x$.
In mathematics, a Paley–Wiener theorem is a theorem that relates decay properties of a function or distribution at infinity with analyticity of its Fourier transform. It is named after Raymond Paley (1907–1933) and Norbert Wiener (1894–1964) who, in 1934, introduced various versions of the theorem. The original theorems did not use the language of distributions, and instead applied to square-integrable functions. The first such theorem using distributions was due to Laurent Schwartz. These theorems heavily rely on the triangle inequality.
In Fourier analysis, a multiplier operator is a type of linear operator, or transformation of functions. These operators act on a function by altering its Fourier transform. Specifically they multiply the Fourier transform of a function by a specified function known as the multiplier or symbol. Occasionally, the term multiplier operator itself is shortened simply to multiplier. In simple terms, the multiplier reshapes the frequencies involved in any function. This class of operators turns out to be broad: general theory shows that a translation-invariant operator on a group which obeys some regularity conditions can be expressed as a multiplier operator, and conversely. Many familiar operators, such as translations and differentiation, are multiplier operators, although there are many more complicated examples such as the Hilbert transform.
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an L1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson.
In mathematics, Bochner's theorem characterizes the Fourier transform of a positive finite Borel measure on the real line. More generally in harmonic analysis, Bochner's theorem asserts that under Fourier transform a continuous positive-definite function on a locally compact abelian group corresponds to a finite positive measure on the Pontryagin dual group. The case of sequences was first established by Gustav Herglotz.
In mathematics, the Fourier sine and cosine transforms are integral equations that decompose arbitrary functions into a sum of sine waves representing the odd component of the function plus cosine waves representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and don't require complex numbers or negative frequency, they more closely correspond to Joseph Fourier's original transform equations, are still preferred in some signal processing and statistics applications, and may be better suited as an introduction to Fourier analysis.
In mathematical analysis an oscillatory integral is a type of distribution. Oscillatory integrals make rigorous many arguments that, on a naive level, appear to use divergent integrals. It is possible to represent approximate solution operators for many differential equations as oscillatory integrals.
In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1872–1943).
In mathematics, the Plancherel theorem for spherical functions is an important result in the representation theory of semisimple Lie groups, due in its final form to Harish-Chandra. It is a natural generalisation in non-commutative harmonic analysis of the Plancherel formula and Fourier inversion formula in the representation theory of the group of real numbers in classical harmonic analysis and has a similarly close interconnection with the theory of differential equations. It is the special case for zonal spherical functions of the general Plancherel theorem for semisimple Lie groups, also proved by Harish-Chandra. The Plancherel theorem gives the eigenfunction expansion of radial functions for the Laplacian operator on the associated symmetric space X; it also gives the direct integral decomposition into irreducible representations of the regular representation on L2(X). In the case of hyperbolic space, these expansions were known from prior results of Mehler, Weyl and Fock.
In mathematical analysis, the Dirichlet kernel, named after the German mathematician Peter Gustav Lejeune Dirichlet, is the collection of periodic functions defined as
$$D_n(x) = \sum_{k=-n}^{n} e^{ikx} = \frac{\sin\left(\left(n + \tfrac{1}{2}\right)x\right)}{\sin(x/2)}.$$
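A quick check of the closed form (an illustrative sketch, evaluated away from $x = 0$ where the quotient is singular):

```python
import numpy as np

# Verify D_n(x) = sum_{k=-n}^{n} e^{i k x} = sin((n + 1/2) x) / sin(x / 2)
# on a few sample points away from x = 0.
n = 7
x = np.linspace(0.1, 3.0, 5)
series = sum(np.exp(1j * k * x) for k in range(-n, n + 1)).real
closed = np.sin((n + 0.5) * x) / np.sin(x / 2)
print(np.max(np.abs(series - closed)))   # ~1e-12
```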