# Convolution theorem


In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two signals is the pointwise product of their Fourier transforms. In other words, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Versions of the convolution theorem are true for various Fourier-related transforms. Let ${\displaystyle f}$ and ${\displaystyle g}$ be two functions with convolution ${\displaystyle f*g}$. (Note that the asterisk denotes convolution in this context, not standard multiplication. The tensor product symbol ${\displaystyle \otimes }$ is sometimes used instead.)

Let ${\displaystyle {\mathcal {F}}}$ denote the Fourier transform operator, so that ${\displaystyle {\mathcal {F}}\{f\}}$ and ${\displaystyle {\mathcal {F}}\{g\}}$ are the Fourier transforms of ${\displaystyle f}$ and ${\displaystyle g}$, respectively. Then

${\displaystyle {\mathcal {F}}\{f*g\}={\mathcal {F}}\{f\}\cdot {\mathcal {F}}\{g\}}$

where ${\displaystyle \cdot }$ denotes point-wise multiplication. It also works the other way around:

${\displaystyle {\mathcal {F}}\{f\cdot g\}={\mathcal {F}}\{f\}*{\mathcal {F}}\{g\}}$

By applying the inverse Fourier transform ${\displaystyle {\mathcal {F}}^{-1}}$, we can write:

${\displaystyle f*g={\mathcal {F}}^{-1}{\big \{}{\mathcal {F}}\{f\}\cdot {\mathcal {F}}\{g\}{\big \}}}$

and:

${\displaystyle f\cdot g={\mathcal {F}}^{-1}{\big \{}{\mathcal {F}}\{f\}*{\mathcal {F}}\{g\}{\big \}}}$

The relationships above are only valid for the form of the Fourier transform shown in the Proof section below. The transform may be normalized in other ways, in which case constant scaling factors (typically ${\displaystyle 2\pi }$ or ${\displaystyle {\sqrt {2\pi }}}$) will appear in the relationships above.
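As a sanity check, the first identity can be verified symbolically for two Gaussians, whose convolution and Fourier transforms are available in closed form. The sketch below uses SymPy, whose `fourier_transform` uses the same ${\displaystyle e^{-2\pi ix\cdot \nu }}$ kernel as the normalization assumed here; the choice of Gaussians is purely illustrative:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)
f = sp.exp(-x**2)   # illustrative choice: a Gaussian
g = sp.exp(-x**2)

# Convolution (f*g)(x) = ∫ f(t) g(x - t) dt, evaluated in closed form
h = sp.integrate(f.subs(x, t) * g.subs(x, x - t), (t, -sp.oo, sp.oo))

# F{f*g} should equal F{f}·F{g} under the e^{-2πixν} convention
lhs = sp.fourier_transform(h, x, k)
rhs = sp.fourier_transform(f, x, k) * sp.fourier_transform(g, x, k)
assert sp.simplify(lhs - rhs) == 0
```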

This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups.
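For instance, the one-sided Laplace version, ${\displaystyle {\mathcal {L}}\{f*g\}={\mathcal {L}}\{f\}\cdot {\mathcal {L}}\{g\}}$ with the convolution taken over ${\displaystyle [0,t]}$, can be checked symbolically; the functions in this sketch are arbitrary illustrative choices:

```python
import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)
f = sp.exp(-t)   # illustrative choices
g = t

# One-sided convolution (f*g)(t) = ∫₀ᵗ f(u) g(t - u) du
h = sp.integrate(f.subs(t, u) * g.subs(t, t - u), (u, 0, t))

L = lambda expr: sp.laplace_transform(expr, t, s, noconds=True)
assert sp.simplify(L(h) - L(f) * L(g)) == 0
```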

This formulation is especially useful for implementing a numerical convolution on a computer: The standard convolution algorithm has quadratic computational complexity. With the help of the convolution theorem and the fast Fourier transform, the complexity of the convolution can be reduced from ${\displaystyle O\left(n^{2}\right)}$ to ${\displaystyle O\left(n\log n\right)}$, using big O notation. This can be exploited to construct fast multiplication algorithms, as in Multiplication algorithm § Fourier transform methods.
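A minimal NumPy sketch of this speed-up (the signal lengths and contents are arbitrary): both inputs are zero-padded to the full output length ${\displaystyle n}$ so that the FFT's circular convolution coincides with linear convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(1000)
g = rng.standard_normal(500)

# Direct linear convolution: O(n^2)
direct = np.convolve(f, g)

# Zero-pad both signals to the full output length so that circular
# convolution via the FFT agrees with linear convolution: O(n log n)
n = len(f) + len(g) - 1
fast = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

assert np.allclose(direct, fast)
```

Libraries such as SciPy package this idea as `scipy.signal.fftconvolve`.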

## Proof

The proof here is shown for a particular normalization of the Fourier transform. As mentioned above, if the transform is normalized differently, then constant scaling factors will appear in the derivation.

Let ${\displaystyle f,g}$ belong to the Lp-space ${\displaystyle L^{1}(\mathbb {R} ^{n})}$. Let ${\displaystyle F}$ be the Fourier transform of ${\displaystyle f}$ and ${\displaystyle G}$ be the Fourier transform of ${\displaystyle g}$:

{\displaystyle {\begin{aligned}F(\nu )&={\mathcal {F}}\{f\}(\nu )=\int _{\mathbb {R} ^{n}}f(x)e^{-2\pi ix\cdot \nu }\,dx,\\G(\nu )&={\mathcal {F}}\{g\}(\nu )=\int _{\mathbb {R} ^{n}}g(x)e^{-2\pi ix\cdot \nu }\,dx,\end{aligned}}}

where the dot between ${\displaystyle x}$ and ${\displaystyle \nu }$ indicates the inner product of ${\displaystyle \mathbb {R} ^{n}}$. Let ${\displaystyle h}$ be the convolution of ${\displaystyle f}$ and ${\displaystyle g}$:

${\displaystyle h(z)=\int _{\mathbb {R} ^{n}}f(x)g(z-x)\,dx.}$

Also

${\displaystyle \iint |f(x)g(z-x)|\,dz\,dx=\int \left(|f(x)|\int |g(z-x)|\,dz\right)\,dx=\int |f(x)|\,\|g\|_{1}\,dx=\|f\|_{1}\|g\|_{1}.}$

Hence by Fubini's theorem we have that ${\displaystyle h\in L^{1}(\mathbb {R} ^{n})}$ so its Fourier transform ${\displaystyle H}$ is defined by the integral formula

{\displaystyle {\begin{aligned}H(\nu )={\mathcal {F}}\{h\}&=\int _{\mathbb {R} ^{n}}h(z)e^{-2\pi iz\cdot \nu }\,dz\\&=\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}f(x)g(z-x)\,dx\,e^{-2\pi iz\cdot \nu }\,dz.\end{aligned}}}

Note that ${\displaystyle |f(x)g(z-x)e^{-2\pi iz\cdot \nu }|=|f(x)g(z-x)|}$ and hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration):

${\displaystyle H(\nu )=\int _{\mathbb {R} ^{n}}f(x)\left(\int _{\mathbb {R} ^{n}}g(z-x)e^{-2\pi iz\cdot \nu }\,dz\right)\,dx.}$

Substituting ${\displaystyle y=z-x}$ yields ${\displaystyle dy=dz}$. Therefore

${\displaystyle H(\nu )=\int _{\mathbb {R} ^{n}}f(x)\left(\int _{\mathbb {R} ^{n}}g(y)e^{-2\pi i(y+x)\cdot \nu }\,dy\right)\,dx}$
${\displaystyle =\int _{\mathbb {R} ^{n}}f(x)e^{-2\pi ix\cdot \nu }\left(\int _{\mathbb {R} ^{n}}g(y)e^{-2\pi iy\cdot \nu }\,dy\right)\,dx}$
${\displaystyle =\int _{\mathbb {R} ^{n}}f(x)e^{-2\pi ix\cdot \nu }\,dx\int _{\mathbb {R} ^{n}}g(y)e^{-2\pi iy\cdot \nu }\,dy.}$

These two integrals are the definitions of ${\displaystyle F(\nu )}$ and ${\displaystyle G(\nu )}$, so:

${\displaystyle H(\nu )=F(\nu )\cdot G(\nu ),}$

QED.

## Convolution theorem for inverse Fourier transform

An argument similar to the one above can be applied to obtain the convolution theorem for the inverse Fourier transform:

${\displaystyle {\mathcal {F}}^{-1}\{f*g\}={\mathcal {F}}^{-1}\{f\}\cdot {\mathcal {F}}^{-1}\{g\}}$
${\displaystyle {\mathcal {F}}^{-1}\{f\cdot g\}={\mathcal {F}}^{-1}\{f\}*{\mathcal {F}}^{-1}\{g\}}$
${\displaystyle f*g={\mathcal {F}}{\big \{}{\mathcal {F}}^{-1}\{f\}\cdot {\mathcal {F}}^{-1}\{g\}{\big \}}}$

and:

${\displaystyle f\cdot g={\mathcal {F}}{\big \{}{\mathcal {F}}^{-1}\{f\}*{\mathcal {F}}^{-1}\{g\}{\big \}}}$

## Functions of a discrete variable (sequences)

By similar arguments, it can be shown that the discrete convolution of sequences ${\displaystyle x}$ and ${\displaystyle y}$ is given by:

${\displaystyle x*y=\operatorname {DTFT} ^{-1}{\big [}\operatorname {DTFT} \{x\}\cdot \operatorname {DTFT} \{y\}{\big ]},}$

where DTFT represents the discrete-time Fourier transform.

An important special case is the circular convolution of ${\displaystyle x}$ and ${\displaystyle y}$ defined by ${\displaystyle x_{N}*y,}$ where ${\displaystyle x_{N}}$ is a periodic summation:

${\displaystyle x_{N}[n]\ {\stackrel {\text{def}}{=}}\sum _{m=-\infty }^{\infty }x[n-mN].}$

It can then be shown that:

{\displaystyle {\begin{aligned}x_{N}*y&=\operatorname {DTFT} ^{-1}{\big [}\operatorname {DTFT} \{x_{N}\}\cdot \operatorname {DTFT} \{y\}{\big ]}\\&=\operatorname {DFT} ^{-1}{\big [}\operatorname {DFT} \{x_{N}\}\cdot \operatorname {DFT} \{y_{N}\}{\big ]},\end{aligned}}}

where DFT represents the discrete Fourier transform.
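A small numerical check of the DFT form of the theorem (the sequence values are arbitrary, and ${\displaystyle x}$ is taken as one period of ${\displaystyle x_{N}}$):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # one period of x_N (arbitrary values)
y = np.array([5.0, 6.0, 7.0, 8.0])
N = len(x)

# Circular (N-periodic) convolution, straight from the definition
direct = np.array([sum(x[m] * y[(n - m) % N] for m in range(N))
                   for n in range(N)])

# The same result via the DFT convolution theorem
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

assert np.allclose(direct, via_dft)
```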

The proof follows from DTFT § Periodic data, which indicates that ${\displaystyle \operatorname {DTFT} \{x_{N}\}}$ can be written as:

${\displaystyle \operatorname {DTFT} \{x_{N}\}(f)={\frac {1}{N}}\sum _{k=-\infty }^{\infty }\left(\operatorname {DFT} \{x_{N}\}[k]\right)\cdot \delta \left(f-k/N\right).}$

The product with ${\displaystyle \operatorname {DTFT} \{y\}(f)}$ is thereby reduced to a discrete-frequency function:

${\displaystyle \operatorname {DTFT} \{x_{N}\}\cdot \operatorname {DTFT} \{y\}={\frac {1}{N}}\sum _{k=-\infty }^{\infty }\operatorname {DFT} \{x_{N}\}[k]\cdot \underbrace {\operatorname {DTFT} \{y\}(k/N)} _{\operatorname {DFT} \{y_{N}\}[k]}\cdot \delta \left(f-k/N\right)}$ (also using DTFT § Sampling the DTFT).

The inverse DTFT is:

{\displaystyle {\begin{aligned}(x_{N}*y)[n]&=\int _{0}^{1}{\frac {1}{N}}\sum _{k=-\infty }^{\infty }\operatorname {DFT} \{x_{N}\}[k]\cdot \operatorname {DFT} \{y_{N}\}[k]\cdot \delta \left(f-k/N\right)\cdot e^{i2\pi fn}\,df\\&={\frac {1}{N}}\sum _{k=-\infty }^{\infty }\operatorname {DFT} \{x_{N}\}[k]\cdot \operatorname {DFT} \{y_{N}\}[k]\cdot \int _{0}^{1}\delta \left(f-k/N\right)\cdot e^{i2\pi fn}\,df\\&={\frac {1}{N}}\sum _{k=0}^{N-1}\operatorname {DFT} \{x_{N}\}[k]\cdot \operatorname {DFT} \{y_{N}\}[k]\cdot e^{i2\pi {\frac {n}{N}}k}\\&=\operatorname {DFT} ^{-1}{\big [}\operatorname {DFT} \{x_{N}\}\cdot \operatorname {DFT} \{y_{N}\}{\big ]},\end{aligned}}}

QED.

## Convolution theorem for Fourier series coefficients

Two convolution theorems exist for the Fourier series coefficients of a periodic function:

• The first convolution theorem states that if ${\displaystyle f}$ and ${\displaystyle g}$ are in ${\displaystyle L^{1}([-\pi ,\pi ])}$, the Fourier series coefficients of the 2π-periodic convolution of ${\displaystyle f}$ and ${\displaystyle g}$ are given by:
${\displaystyle [{\widehat {f*_{2\pi }g}}](n)=2\pi \cdot {\widehat {f}}(n)\cdot {\widehat {g}}(n),}$ [nb 1]
where:
{\displaystyle {\begin{aligned}\left[f*_{2\pi }g\right](x)\ &{\stackrel {\mathrm {def} }{=}}\int _{-\pi }^{\pi }f(u)\cdot g[{\text{pv}}(x-u)]\,du,&&{\big (}{\text{and }}\underbrace {{\text{pv}}(x)\ {\stackrel {\mathrm {def} }{=}}\arg \left(e^{ix}\right)} _{\text{principal value}}{\big )}\\&=\int _{-\pi }^{\pi }f(u)\cdot g(x-u)\,du,&&\scriptstyle {\text{when }}g(x){\text{ is 2}}\pi {\text{-periodic.}}\\&=\int _{2\pi }f(u)\cdot g(x-u)\,du,&&\scriptstyle {\text{when both functions are 2}}\pi {\text{-periodic, and the integral is over any 2}}\pi {\text{ interval.}}\end{aligned}}}
• The second convolution theorem states that the Fourier series coefficients of the product of ${\displaystyle f}$ and ${\displaystyle g}$ are given by the discrete convolution of the ${\displaystyle {\hat {f}}}$ and ${\displaystyle {\hat {g}}}$ sequences:
${\displaystyle \left[{\widehat {f\cdot g}}\right](n)=\left[{\widehat {f}}*{\widehat {g}}\right](n).}$
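The second theorem can be checked numerically for trigonometric polynomials, whose finitely supported coefficient sequences make the discrete convolution a finite sum. The coefficient values below are arbitrary illustrative choices; since ${\displaystyle f\cdot g}$ is bandlimited, its Fourier coefficients can be recovered exactly by sampling:

```python
import numpy as np

# Finitely supported Fourier coefficient sequences (arbitrary values)
fhat = {-2: 1 + 1j, -1: 2, 0: 0.5, 1: -1j, 2: 3}
ghat = {-1: 1, 0: 2, 1: 0.5 - 0.5j}

M = 16                            # more than twice the bandwidth (3) of f·g
t = 2 * np.pi * np.arange(M) / M
f = sum(c * np.exp(1j * n * t) for n, c in fhat.items())
g = sum(c * np.exp(1j * n * t) for n, c in ghat.items())

# Coefficients of f·g, recovered by sampling (exact, since f·g is bandlimited)
prod_hat = np.fft.fft(f * g) / M  # index k holds the coefficient of e^{ikt}, k mod M

# Compare with the discrete convolution of the coefficient sequences
for n in range(-3, 4):
    conv = sum(fhat.get(m, 0) * ghat.get(n - m, 0) for m in range(-2, 3))
    assert np.isclose(prod_hat[n % M], conv)
```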

## Notes

1. The scale factor is always equal to the period, 2π in this case.
