In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
Consider two functions $g(x)$ and $h(x)$ with Fourier transforms $G$ and $H$:

$$G(f) \triangleq \mathcal{F}\{g\}(f) = \int_{-\infty}^{\infty} g(x)\, e^{-i 2\pi f x}\, dx, \quad f \in \mathbb{R}$$

$$H(f) \triangleq \mathcal{F}\{h\}(f) = \int_{-\infty}^{\infty} h(x)\, e^{-i 2\pi f x}\, dx, \quad f \in \mathbb{R}$$

where $\mathcal{F}$ denotes the Fourier transform operator. The transform may be normalized in other ways, in which case constant scaling factors (typically $2\pi$ or $\sqrt{2\pi}$) will appear in the convolution theorem below. The convolution of $g$ and $h$ is defined by:

$$r(x) = \{g * h\}(x) \triangleq \int_{-\infty}^{\infty} g(\tau)\, h(x - \tau)\, d\tau = \int_{-\infty}^{\infty} g(x - \tau)\, h(\tau)\, d\tau.$$

In this context the asterisk denotes convolution, instead of standard multiplication. The tensor product symbol $\otimes$ is sometimes used instead.

The convolution theorem states that: [1][2]: eq. 8

$$R(f) \triangleq \mathcal{F}\{r\}(f) = G(f)\, H(f), \quad f \in \mathbb{R}. \tag{Eq.1a}$$

Applying the inverse Fourier transform $\mathcal{F}^{-1}$ produces the corollary: [2]: eqs. 7, 10

$$r(x) = \{g * h\}(x) = \mathcal{F}^{-1}\{G \cdot H\}(x). \tag{Eq.1b}$$
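As a worked illustration of Eq.1a and Eq.1b (an example added here for concreteness, using the unitary ordinary-frequency convention above), take two Gaussian pulses:

$$g(x) = e^{-\pi x^2 / a^2}, \qquad h(x) = e^{-\pi x^2 / b^2}, \qquad G(f) = a\, e^{-\pi a^2 f^2}, \qquad H(f) = b\, e^{-\pi b^2 f^2}.$$

By Eq.1a the transform of $g * h$ is $G(f)\,H(f) = ab\, e^{-\pi (a^2 + b^2) f^2}$, and Eq.1b then gives

$$\{g * h\}(x) = \mathcal{F}^{-1}\{G \cdot H\}(x) = \frac{ab}{\sqrt{a^2 + b^2}}\, e^{-\pi x^2 / (a^2 + b^2)},$$

recovering the familiar fact that the convolution of two Gaussians is another Gaussian whose squared widths add, without evaluating the convolution integral directly.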
The theorem also generally applies to multi-dimensional functions.
Multi-dimensional derivation of Eq.1:

Consider functions $g, h$ in the Lebesgue space $L^1(\mathbb{R}^n)$ with Fourier transforms $G, H$:

$$G(f) \triangleq \mathcal{F}\{g\}(f) = \int_{\mathbb{R}^n} g(x)\, e^{-i 2\pi f \cdot x}\, dx, \qquad H(f) \triangleq \mathcal{F}\{h\}(f) = \int_{\mathbb{R}^n} h(x)\, e^{-i 2\pi f \cdot x}\, dx,$$

where $f \cdot x$ indicates the inner product of $\mathbb{R}^n$: $f \cdot x = f_1 x_1 + \cdots + f_n x_n$, and $dx = dx_1 \cdots dx_n$. The convolution of $g$ and $h$ is defined by:

$$r(x) \triangleq \int_{\mathbb{R}^n} g(\tau)\, h(x - \tau)\, d\tau.$$

Also:

$$\iint |g(\tau)\, h(x - \tau)|\, dx\, d\tau = \int |g(\tau)| \left(\int |h(x - \tau)|\, dx\right) d\tau = \int |g(\tau)|\, \|h\|_1\, d\tau = \|g\|_1\, \|h\|_1 < \infty.$$

Hence by Fubini's theorem we have that $r \in L^1(\mathbb{R}^n)$, so its Fourier transform $R$ is defined by the integral formula:

$$R(f) \triangleq \int_{\mathbb{R}^n} r(x)\, e^{-i 2\pi f \cdot x}\, dx = \int_{\mathbb{R}^n} \left(\int_{\mathbb{R}^n} g(\tau)\, h(x - \tau)\, d\tau\right) e^{-i 2\pi f \cdot x}\, dx.$$

Note that $|g(\tau)\, h(x - \tau)\, e^{-i 2\pi f \cdot x}| = |g(\tau)\, h(x - \tau)|$. Hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration):

$$\begin{aligned}
R(f) &= \int_{\mathbb{R}^n} g(\tau) \left(\int_{\mathbb{R}^n} h(x - \tau)\, e^{-i 2\pi f \cdot x}\, dx\right) d\tau \\
&= \int_{\mathbb{R}^n} g(\tau)\, e^{-i 2\pi f \cdot \tau} \underbrace{\left(\int_{\mathbb{R}^n} h(x - \tau)\, e^{-i 2\pi f \cdot (x - \tau)}\, dx\right)}_{H(f)}\, d\tau = G(f)\, H(f).
\end{aligned}$$
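The multi-dimensional statement is also easy to check numerically in its discrete form (the two-dimensional analogue of Eq.4 developed below). The following sketch is added for illustration and assumes NumPy; the array size and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((8, 8))
h = rng.standard_normal((8, 8))

# 2-D circular convolution computed directly from the definition.
N, M = g.shape
r = np.zeros((N, M))
for n in range(N):
    for m in range(M):
        for p in range(N):
            for q in range(M):
                r[n, m] += g[p, q] * h[(n - p) % N, (m - q) % M]

# Convolution theorem with the 2-D DFT: transform, multiply point-wise, invert.
r_fft = np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(h)).real
assert np.allclose(r, r_fft)
```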
This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups.
Consider $P$-periodic functions $g_P$ and $h_P$, which can be expressed as periodic summations:

$$g_P(x) \triangleq \sum_{m=-\infty}^{\infty} g(x - mP) \quad \text{and} \quad h_P(x) \triangleq \sum_{m=-\infty}^{\infty} h(x - mP).$$

In practice the non-zero portions of components $g$ and $h$ are often limited to duration $P$, but nothing in the theorem requires that.

The Fourier series coefficients are:

$$G[k] \triangleq \mathcal{F}\{g_P\}[k] = \frac{1}{P} \int_P g_P(x)\, e^{-i 2\pi k x / P}\, dx, \quad k \in \mathbb{Z}$$

$$H[k] \triangleq \mathcal{F}\{h_P\}[k] = \frac{1}{P} \int_P h_P(x)\, e^{-i 2\pi k x / P}\, dx, \quad k \in \mathbb{Z}$$

where $\mathcal{F}$ denotes the Fourier series integral.

The function:

$$\{g_P * h\}(x) \triangleq \int_{-\infty}^{\infty} g_P(x - \tau)\, h(\tau)\, d\tau \equiv \int_P g_P(x - \tau)\, h_P(\tau)\, d\tau$$

is also $P$-periodic, and is called a periodic convolution.
Derivation of periodic convolution:

$$\begin{aligned}
\int_{-\infty}^{\infty} g_P(x - \tau)\, h(\tau)\, d\tau
&= \sum_{k=-\infty}^{\infty} \left[\int_{x_0 + kP}^{x_0 + (k+1)P} g_P(x - \tau)\, h(\tau)\, d\tau\right] \qquad (x_0 \text{ arbitrary})\\
&= \sum_{k=-\infty}^{\infty} \left[\int_{x_0}^{x_0 + P} \underbrace{g_P(x - \tau - kP)}_{=\, g_P(x - \tau)\ \text{by periodicity}}\, h(\tau + kP)\, d\tau\right] \qquad (\text{substituting } \tau \to \tau + kP)\\
&= \int_{x_0}^{x_0 + P} g_P(x - \tau) \underbrace{\left[\sum_{k=-\infty}^{\infty} h(\tau + kP)\right]}_{\triangleq\, h_P(\tau)} d\tau \;\equiv\; \int_P g_P(x - \tau)\, h_P(\tau)\, d\tau.
\end{aligned}$$
The corresponding convolution theorem is:
$$\mathcal{F}\{g_P * h\}[k] = P \cdot G[k]\, H[k]. \tag{Eq.2}$$
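As a quick sanity check (an illustration added here, not part of the cited references), take $g_P(x) = h_P(x) = \cos(2\pi x / P)$. The periodic convolution is

$$\{g_P * h\}(x) = \int_P \cos\!\big(\tfrac{2\pi}{P}(x - \tau)\big)\cos\!\big(\tfrac{2\pi}{P}\tau\big)\, d\tau = \frac{P}{2}\cos\!\big(\tfrac{2\pi}{P}x\big),$$

whose only non-zero Fourier series coefficients are $\mathcal{F}\{g_P * h\}[\pm 1] = P/4$. Since $G[\pm 1] = H[\pm 1] = 1/2$, this agrees with $P \cdot G[k]\, H[k]$.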
Derivation of Eq.2:

$$\begin{aligned}
\mathcal{F}\{g_P * h\}[k] &\triangleq \frac{1}{P} \int_P \left(\int_P g_P(\tau)\, h_P(x - \tau)\, d\tau\right) e^{-i 2\pi k x / P}\, dx\\
&= \int_P g_P(\tau) \left(\frac{1}{P} \int_P h_P(x - \tau)\, e^{-i 2\pi k x / P}\, dx\right) d\tau\\
&= \int_P g_P(\tau)\, e^{-i 2\pi k \tau / P} \underbrace{\left(\frac{1}{P} \int_P h_P(x - \tau)\, e^{-i 2\pi k (x - \tau) / P}\, dx\right)}_{=\, H[k]\ \text{by periodicity}} d\tau\\
&= \underbrace{\left(\int_P g_P(\tau)\, e^{-i 2\pi k \tau / P}\, d\tau\right)}_{=\, P\, \cdot\, G[k]} H[k] = P \cdot G[k]\, H[k].
\end{aligned}$$
By a derivation similar to Eq.1, there is an analogous theorem for sequences, such as samples of two continuous functions, where now $\mathcal{F}$ denotes the discrete-time Fourier transform (DTFT) operator. Consider two sequences $g[n]$ and $h[n]$ with transforms $G$ and $H$:

$$G(f) \triangleq \mathcal{F}\{g\}(f) = \sum_{n=-\infty}^{\infty} g[n]\, e^{-i 2\pi f n}, \quad f \in \mathbb{R}$$

$$H(f) \triangleq \mathcal{F}\{h\}(f) = \sum_{n=-\infty}^{\infty} h[n]\, e^{-i 2\pi f n}, \quad f \in \mathbb{R}$$

The § Discrete convolution of $g$ and $h$ is defined by:

$$r[n] \triangleq (g * h)[n] = \sum_{m=-\infty}^{\infty} g[m]\, h[n - m] = \sum_{m=-\infty}^{\infty} g[n - m]\, h[m].$$

The convolution theorem for discrete sequences is: [3][4]: p. 60 (2.169)

$$R(f) = \mathcal{F}\{g * h\}(f) = G(f)\, H(f). \tag{Eq.3}$$
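Eq.3 is straightforward to verify numerically. The following sketch (added for illustration; the sequence lengths, values, and test frequencies are arbitrary) evaluates the DTFT sums directly for two short sequences and compares $\mathcal{F}\{g * h\}(f)$ with $G(f)\,H(f)$:

```python
import numpy as np

def dtft(x, f):
    """DTFT of a finite sequence x (assumed zero outside n = 0..len(x)-1)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n))

rng = np.random.default_rng(0)
g = rng.standard_normal(5)
h = rng.standard_normal(8)
r = np.convolve(g, h)          # discrete convolution, non-zero for n = 0..11

for f in (0.0, 0.13, 0.5, 0.77):
    assert np.isclose(dtft(r, f), dtft(g, f) * dtft(h, f))   # Eq.3
```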
$G(f)$ and $H(f)$, as defined above, are periodic, with a period of 1. Consider $N$-periodic sequences $g_N$ and $h_N$:

$$g_N[n] \triangleq \sum_{m=-\infty}^{\infty} g[n - mN] \quad \text{and} \quad h_N[n] \triangleq \sum_{m=-\infty}^{\infty} h[n - mN], \quad n \in \mathbb{Z}.$$

These functions occur as the result of sampling $G$ and $H$ at intervals of $1/N$ and performing an inverse discrete Fourier transform (DFT) on $N$ samples (see § Sampling the DTFT). The discrete convolution:

$$\{g_N * h\}[n] \triangleq \sum_{m=-\infty}^{\infty} g_N[m]\, h[n - m] \equiv \sum_{m=0}^{N-1} g_N[m]\, h_N[n - m]$$

is also $N$-periodic, and is called a periodic convolution. Redefining the $\mathcal{F}$ operator as the $N$-length DFT, the corresponding theorem is: [5][4]: p. 548

$$\mathcal{F}\{g_N * h\}[k] = \mathcal{F}\{g_N\}[k] \cdot \mathcal{F}\{h_N\}[k], \quad k \in \mathbb{Z}. \tag{Eq.4a}$$
And therefore:
$$\{g_N * h\}[n] = \mathcal{F}^{-1}\{\mathcal{F}\{g_N\} \cdot \mathcal{F}\{h_N\}\}[n]. \tag{Eq.4b}$$
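A minimal numerical sketch of Eq.4a and Eq.4b (added for illustration; the sequence length and values are arbitrary) computes the $N$-periodic convolution directly from its definition and compares it with the DFT-based result:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
gN = rng.standard_normal(N)    # one period of an N-periodic sequence
hN = rng.standard_normal(N)

# Periodic (circular) convolution computed directly from the definition.
r = np.array([sum(gN[m] * hN[(n - m) % N] for m in range(N)) for n in range(N)])

# Eq.4a: the DFT of the periodic convolution is the product of the DFTs.
assert np.allclose(np.fft.fft(r), np.fft.fft(gN) * np.fft.fft(hN))

# Eq.4b: equivalently, recover the periodic convolution by an inverse DFT.
assert np.allclose(r, np.fft.ifft(np.fft.fft(gN) * np.fft.fft(hN)).real)
```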
Under the right conditions, it is possible for this $N$-length sequence to contain a distortion-free segment of a $g * h$ convolution. But when the non-zero portion of the $g$ or $h$ sequence is equal to or longer than $N$, some distortion is inevitable. Such is the case when the $\mathcal{F}\{h_N\}$ sequence is obtained by directly sampling the DTFT of the infinitely long § Discrete Hilbert transform impulse response.[a]

For $g$ and $h$ sequences whose non-zero duration is less than or equal to $N$, a final simplification is:

$$\{g_N * h\}[n] = \mathcal{F}^{-1}\{\mathcal{F}\{g\} \cdot \mathcal{F}\{h\}\}[n]. \tag{Eq.4c}$$
This form is often used to efficiently implement numerical convolution by computer (see § Fast convolution algorithms and § Example).
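A typical fast-convolution sketch along these lines (added for illustration and assuming NumPy; scipy.signal.fftconvolve offers a production implementation) zero-pads both sequences to a length $N \ge \operatorname{len}(g) + \operatorname{len}(h) - 1$ so that the circular convolution of Eq.4c reproduces the ordinary (linear) convolution:

```python
import numpy as np

def fft_convolve(g, h):
    """Linear convolution via the DFT convolution theorem (Eq.4c).

    Zero-padding both sequences to length >= len(g) + len(h) - 1 ensures that
    the periodic (circular) convolution contains no time-aliasing distortion.
    """
    N = len(g) + len(h) - 1
    G = np.fft.rfft(g, n=N)        # n=N zero-pads before the transform
    H = np.fft.rfft(h, n=N)
    return np.fft.irfft(G * H, n=N)

rng = np.random.default_rng(2)
g = rng.standard_normal(100)
h = rng.standard_normal(37)
assert np.allclose(fft_convolve(g, h), np.convolve(g, h))
```

Without the zero-padding (i.e. with $N$ shorter than the combined support), the result wraps around, which is the distortion described above.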
As a partial reciprocal, it has been shown [6] that any linear transform that turns convolution into a product is the DFT (up to a permutation of coefficients).
Derivations of Eq.4:

A time-domain derivation proceeds as follows:

$$\begin{aligned}
\mathcal{F}\{g_N * h\}[k] &\triangleq \sum_{n=0}^{N-1} \left(\sum_{m=0}^{N-1} g_N[m]\, h_N[n - m]\right) e^{-i 2\pi k n / N}\\
&= \sum_{m=0}^{N-1} g_N[m] \left(\sum_{n=0}^{N-1} h_N[n - m]\, e^{-i 2\pi k n / N}\right)\\
&= \sum_{m=0}^{N-1} g_N[m]\, e^{-i 2\pi k m / N} \underbrace{\left(\sum_{n=0}^{N-1} h_N[n - m]\, e^{-i 2\pi k (n - m) / N}\right)}_{=\, \mathcal{F}\{h_N\}[k],\ \text{by periodicity}}\\
&= \underbrace{\left(\sum_{m=0}^{N-1} g_N[m]\, e^{-i 2\pi k m / N}\right)}_{=\, \mathcal{F}\{g_N\}[k]} \mathcal{F}\{h_N\}[k].
\end{aligned}$$

A frequency-domain derivation follows from § Periodic data, which indicates that the DTFTs can be written as:

$$\mathcal{F}\{g_N * h\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathcal{F}\{g_N * h\}[k]\; \delta\!\left(f - \tfrac{k}{N}\right) \tag{5a}$$

$$\mathcal{F}\{g_N\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathcal{F}\{g_N\}[k]\; \delta\!\left(f - \tfrac{k}{N}\right).$$

The product with $H(f)$ is thereby reduced to a discrete-frequency function:

$$\mathcal{F}\{g_N * h\}(f) = \mathcal{F}\{g_N\}(f)\, H(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathcal{F}\{g_N\}[k]\; H\!\left(\tfrac{k}{N}\right) \delta\!\left(f - \tfrac{k}{N}\right) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathcal{F}\{g_N\}[k]\; \mathcal{F}\{h_N\}[k]\; \delta\!\left(f - \tfrac{k}{N}\right), \tag{5b}$$

where the equivalence of $H\!\left(\tfrac{k}{N}\right)$ and $\mathcal{F}\{h_N\}[k]$ follows from § Sampling the DTFT. Therefore, the equivalence of (5a) and (5b) requires:

$$\mathcal{F}\{g_N * h\}[k] = \mathcal{F}\{g_N\}[k] \cdot \mathcal{F}\{h_N\}[k].$$
There is also a convolution theorem for the inverse Fourier transform:

$$\mathcal{F}\{g * h\} = \mathcal{F}\{g\} \cdot \mathcal{F}\{h\}$$

$$\mathcal{F}\{g \cdot h\} = \mathcal{F}\{g\} * \mathcal{F}\{h\}$$

Here, "$\cdot$" represents the Hadamard product, and "$*$" represents a convolution between the two matrices, so that

$$g * h = \mathcal{F}^{-1}\{\mathcal{F}\{g\} \cdot \mathcal{F}\{h\}\}$$

$$g \cdot h = \mathcal{F}^{-1}\{\mathcal{F}\{g\} * \mathcal{F}\{h\}\}.$$
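The same pair of identities can be checked in the finite (DFT) setting. The sketch below is added for illustration and uses NumPy's unnormalized DFT convention, which is why a factor of $1/N$ appears in the multiplication-to-convolution direction:

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution of two equal-length sequences, from the definition."""
    N = len(a)
    return np.array([sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)])

rng = np.random.default_rng(3)
N = 12
g = rng.standard_normal(N)
h = rng.standard_normal(N)
G, H = np.fft.fft(g), np.fft.fft(h)

# Convolution in one domain corresponds to the Hadamard product in the other.
assert np.allclose(circ_conv(g, h), np.fft.ifft(G * H).real)        # g * h = F^{-1}{G . H}
assert np.allclose(g * h, np.fft.ifft(circ_conv(G, H)).real / N)    # g . h = F^{-1}{G * H} / N
```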
The convolution theorem extends to tempered distributions. Here, $g$ is an arbitrary tempered distribution (e.g. the Dirac comb):

$$\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}$$

$$\mathcal{F}\{\alpha \cdot g\} = \mathcal{F}\{\alpha\} * \mathcal{F}\{g\}$$

But $f = \mathcal{F}\{\alpha\}$ must be "rapidly decreasing" towards $-\infty$ and $+\infty$ in order to guarantee the existence of both the convolution and the multiplication product. Equivalently, if $\alpha = \mathcal{F}^{-1}\{f\}$ is a smooth "slowly growing" ordinary function, it guarantees the existence of both the multiplication and the convolution product. [7][8][9]
In particular, every compactly supported tempered distribution, such as the Dirac delta, is "rapidly decreasing". Equivalently, bandlimited functions, such as the function that is constantly $1$, are smooth "slowly growing" ordinary functions. If, for example, $g \equiv \operatorname{Ш}$ is the Dirac comb, both equations yield the Poisson summation formula, and if, furthermore, $f \equiv \delta$ is the Dirac delta, then $\alpha \equiv 1$ is constantly one and these equations yield the Dirac comb identity.
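As a small numerical illustration of the Poisson summation case (added here as an example; the Gaussian width and the truncation limit are arbitrary choices), take $f(x) = e^{-\pi a x^2}$, whose Fourier transform under the convention above is $\hat{f}(\xi) = a^{-1/2} e^{-\pi \xi^2 / a}$. Both sides of $\sum_n f(n) = \sum_k \hat{f}(k)$ can then be summed to machine precision:

```python
import math

a = 0.37     # arbitrary width parameter (a > 0)
M = 50       # truncation limit; both series decay like exp(-c * n**2)

f_side    = sum(math.exp(-math.pi * a * n * n) for n in range(-M, M + 1))
fhat_side = sum(math.exp(-math.pi * k * k / a) for k in range(-M, M + 1)) / math.sqrt(a)

assert math.isclose(f_side, fhat_side, rel_tol=1e-12)   # Poisson summation formula
```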
For a visual representation of the use of the convolution theorem in signal processing, see: