
In mathematics, the **convolution theorem** states that under suitable conditions the Fourier transform of a convolution of two signals is the pointwise product of their Fourier transforms. In other words, convolution in one domain (e.g., the time domain) equals point-wise multiplication in the other domain (e.g., the frequency domain). Versions of the convolution theorem are true for various Fourier-related transforms. Let $f$ and $g$ be two functions with convolution $f * g$. (Note that the asterisk denotes convolution in this context, not standard multiplication. The tensor product symbol $\otimes$ is sometimes used instead.)


The **Fourier transform** (**FT**) decomposes a function of time into its constituent frequencies. This is similar to the way a musical chord can be expressed in terms of the volumes and frequencies of its constituent notes. The term *Fourier transform* refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform of a function of time is itself a complex-valued function of frequency, whose magnitude (modulus) represents the amount of that frequency present in the original function, and whose argument is the phase offset of the basic sinusoid in that frequency. The Fourier transform is not limited to functions of time, but the domain of the original function is commonly referred to as the *time domain*. There is also an *inverse Fourier transform* that mathematically synthesizes the original function from its frequency domain representation.
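As a minimal numerical illustration of the magnitude/phase description above, the following pure-Python sketch computes a naive DFT of a sampled sinusoid (an O(N²) toy, not a production FFT; the sample count, bin index, and helper names are illustrative choices, not from the original text):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2); sufficient for a demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Sample one period of cos(2*pi*3*t + pi/4): frequency bin 3, phase pi/4.
N = 32
x = [math.cos(2 * math.pi * 3 * n / N + math.pi / 4) for n in range(N)]
X = dft(x)

# Bin 3 holds (N/2)*e^{i*pi/4}: its magnitude gives the amplitude of that
# frequency component, and its argument gives the phase offset.
amplitude = 2 * abs(X[3]) / N
phase = cmath.phase(X[3])
print(amplitude, phase)  # ≈ 1.0 and ≈ pi/4 ≈ 0.785398
```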

In mathematics, **convolution** is a mathematical operation on two functions that produces a third function expressing how the shape of one is modified by the other. The term *convolution* refers both to the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reversed and shifted.
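For discrete sequences, the reverse-and-shift definition can be sketched directly (the function and example values below are illustrative, not from the original text):

```python
def convolve(a, b):
    """Full linear convolution: (a*b)[n] = sum over k of a[k] * b[n-k]."""
    n_out = len(a) + len(b) - 1
    out = [0.0] * n_out
    for n in range(n_out):
        for k in range(len(a)):
            if 0 <= n - k < len(b):  # b reversed and shifted by n
                out[n] += a[k] * b[n - k]
    return out

print(convolve([1, 2, 3], [0, 1, 0.5]))  # [0.0, 1.0, 2.5, 4.0, 1.5]
```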

- Proof
- Convolution theorem for inverse Fourier transform
- Functions of discrete variable sequences
- Convolution theorem for Fourier series coefficients
- See also
- Notes
- References
- Additional resources

If $\mathcal{F}$ denotes the Fourier transform operator, then $\mathcal{F}\{f\}$ and $\mathcal{F}\{g\}$ are the Fourier transforms of $f$ and $g$, respectively. Then

$$\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}$$


where $\cdot$ denotes point-wise multiplication. It also works the other way around:

$$\mathcal{F}\{f \cdot g\} = \mathcal{F}\{f\} * \mathcal{F}\{g\}$$

By applying the inverse Fourier transform $\mathcal{F}^{-1}$, we can write:

$$f * g = \mathcal{F}^{-1}\big\{\mathcal{F}\{f\} \cdot \mathcal{F}\{g\}\big\}$$

and:

$$f \cdot g = \mathcal{F}^{-1}\big\{\mathcal{F}\{f\} * \mathcal{F}\{g\}\big\}$$

The relationships above are only valid for the form of the Fourier transform shown in the Proof section below. The transform may be normalized in other ways, in which case constant scaling factors (typically $2\pi$ or $\sqrt{2\pi}$) will appear in the relationships above.
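The identity can be sanity-checked numerically in its discrete form. The sketch below uses a naive DFT and zero-pads both signals to the length of the full convolution, so that the circular convolution implied by the DFT coincides with linear convolution; the helper names and test signals are illustrative assumptions:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def convolve(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

f = [1.0, 2.0, 0.5, -1.0]
g = [0.5, -0.25, 3.0]

# Zero-pad to the full convolution length so circular == linear convolution.
L = len(f) + len(g) - 1
F = dft(f + [0.0] * (L - len(f)))
G = dft(g + [0.0] * (L - len(g)))

lhs = dft(convolve(f, g))                 # transform of the convolution
rhs = [Fk * Gk for Fk, Gk in zip(F, G)]   # pointwise product of transforms
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```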

This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups.

In mathematics, the **Laplace transform** is an integral transform named after its inventor Pierre-Simon Laplace. It transforms a function of a real variable *t* to a function of a complex variable s. The transform has many applications in science and engineering.

In mathematics, the **two-sided Laplace transform** or **bilateral Laplace transform** is an integral transform equivalent to probability's moment generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, and the ordinary or one-sided Laplace transform. If *f*(*t*) is a real or complex valued function of the real variable *t* defined for all real numbers, then the two-sided Laplace transform is defined by the integral

$$\mathcal{B}\{f\}(s) = \int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$$

In mathematics, the **Mellin transform** is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions.

This formulation is especially useful for implementing a numerical convolution on a computer: The standard convolution algorithm has quadratic computational complexity. With the help of the convolution theorem and the fast Fourier transform, the complexity of the convolution can be reduced from $O(n^2)$ to $O(n \log n)$, using big O notation. This can be exploited to construct fast multiplication algorithms, as in Multiplication algorithm § Fourier transform methods.
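As a sketch of the fast-multiplication idea, the digit sequences of two integers can be convolved via a radix-2 FFT and the carries propagated afterwards. This is a toy implementation under simplifying assumptions (non-negative inputs, decimal digits small enough that floating-point rounding is safe), not a production big-integer algorithm; all function names are illustrative:

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    sign = 1 if inverse else -1
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

def multiply(a, b):
    """Multiply non-negative integers via FFT convolution of their digits."""
    da = [int(d) for d in str(a)][::-1]   # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):          # pad to a power of two
        n *= 2
    A = fft([complex(d) for d in da] + [0j] * (n - len(da)))
    B = fft([complex(d) for d in db] + [0j] * (n - len(db)))
    coeffs = fft([x * y for x, y in zip(A, B)], inverse=True)
    digits = [round(c.real / n) for c in coeffs]  # unscaled inverse needs 1/n
    result, carry = 0, 0
    for i, d in enumerate(digits):        # propagate decimal carries
        carry += d
        result += (carry % 10) * 10 ** i
        carry //= 10
    result += carry * 10 ** len(digits)
    return result

print(multiply(12345, 6789))  # 83810205
```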


*The proof here is shown for a particular normalization of the Fourier transform. As mentioned above, if the transform is normalized differently, then constant scaling factors will appear in the derivation.*


Let $f, g$ belong to the $L^p$-space $L^1(\mathbb{R}^n)$. Let $F$ be the Fourier transform of $f$ and $G$ be the Fourier transform of $g$:

$$F(\nu) = \mathcal{F}\{f\} = \int_{\mathbb{R}^n} f(x)\, e^{-2\pi i x \cdot \nu}\,dx,$$
$$G(\nu) = \mathcal{F}\{g\} = \int_{\mathbb{R}^n} g(x)\, e^{-2\pi i x \cdot \nu}\,dx,$$

where the *dot* between $x$ and $\nu$ indicates the inner product of $\mathbb{R}^n$. Let $h$ be the convolution of $f$ and $g$:

$$h(z) = \int_{\mathbb{R}^n} f(x)\, g(z - x)\,dx.$$

Also

$$\iint |f(x)\, g(z - x)|\,dz\,dx = \int |f(x)| \left( \int |g(z - x)|\,dz \right) dx = \int |f(x)|\, \|g\|_1\,dx = \|f\|_1 \|g\|_1 < \infty.$$

Hence by Fubini's theorem we have that $h \in L^1(\mathbb{R}^n)$, so its Fourier transform $H$ is defined by the integral formula

$$H(\nu) = \mathcal{F}\{h\} = \int h(z)\, e^{-2\pi i z \cdot \nu}\,dz = \iint f(x)\, g(z - x)\, e^{-2\pi i z \cdot \nu}\,dz\,dx.$$

Note that $|f(x)\, g(z - x)\, e^{-2\pi i z \cdot \nu}| = |f(x)\, g(z - x)|$ and hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration):

$$H(\nu) = \int f(x) \left( \int g(z - x)\, e^{-2\pi i z \cdot \nu}\,dz \right) dx.$$

Substituting $y = z - x$ yields $dy = dz$. Therefore

$$H(\nu) = \int f(x) \left( \int g(y)\, e^{-2\pi i (y + x) \cdot \nu}\,dy \right) dx = \int f(x)\, e^{-2\pi i x \cdot \nu} \left( \int g(y)\, e^{-2\pi i y \cdot \nu}\,dy \right) dx.$$

These two integrals are the definitions of $F(\nu)$ and $G(\nu)$, so:

$$H(\nu) = F(\nu)\, G(\nu).$$

QED.

A similar argument to the proof above can be applied to the convolution theorem for the inverse Fourier transform:

$$\mathcal{F}^{-1}\{f * g\} = \mathcal{F}^{-1}\{f\} \cdot \mathcal{F}^{-1}\{g\}$$

and:

$$\mathcal{F}^{-1}\{f \cdot g\} = \mathcal{F}^{-1}\{f\} * \mathcal{F}^{-1}\{g\}.$$

By similar arguments, it can be shown that the discrete convolution of sequences $x$ and $y$ is given by:

$$x * y = \mathrm{DTFT}^{-1}\big[\mathrm{DTFT}\{x\} \cdot \mathrm{DTFT}\{y\}\big],$$

where **DTFT** represents the discrete-time Fourier transform.

An important special case is the **circular convolution** of $x$ and $y$, defined by $x_N * y$, where $x_N$ is a periodic summation:

$$x_N[n] \triangleq \sum_{m=-\infty}^{\infty} x[n - mN].$$

It can then be shown that:

$$x_N * y = \mathrm{DFT}^{-1}\big[\mathrm{DFT}\{x_N\} \cdot \mathrm{DFT}\{y_N\}\big],$$

where **DFT** represents the discrete Fourier transform.
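The circular case admits a direct numerical check (illustrative helper names and test vectors; a naive O(N²) DFT stands in for an FFT):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive DFT and inverse DFT (inverse includes the 1/N scaling)."""
    N = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def circular_convolve(x, y):
    """Direct N-point circular convolution: sum over k of x[k]*y[(n-k) mod N]."""
    N = len(x)
    return [sum(x[k] * y[(n - k) % N] for k in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
y = [0.5, -1.0, 2.0, 0.0]

direct = circular_convolve(x, y)
via_dft = dft([a * b for a, b in zip(dft(x), dft(y))], inverse=True)
assert all(abs(d - v) < 1e-9 for d, v in zip(direct, via_dft))
```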

The proof follows from DTFT § Periodic data, which indicates that $\mathrm{DTFT}\{x_N\}$ can be written as:

$$\mathrm{DTFT}\{x_N\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathrm{DFT}\{x_N\}[k] \cdot \delta\!\left(f - \tfrac{k}{N}\right).$$

The product with $Y(f) \triangleq \mathrm{DTFT}\{y\}(f)$ is thereby reduced to a discrete-frequency function:

$$\mathrm{DTFT}\{x_N * y\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathrm{DFT}\{x_N\}[k] \cdot Y\!\left(\tfrac{k}{N}\right) \delta\!\left(f - \tfrac{k}{N}\right) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \mathrm{DFT}\{x_N\}[k] \cdot \mathrm{DFT}\{y_N\}[k] \cdot \delta\!\left(f - \tfrac{k}{N}\right)$$

- (also using Sampling the DTFT, which gives $Y(k/N) = \mathrm{DFT}\{y_N\}[k]$).

The inverse DTFT is:

$$(x_N * y)[n] = \frac{1}{N} \sum_{k=0}^{N-1} \mathrm{DFT}\{x_N\}[k] \cdot \mathrm{DFT}\{y_N\}[k] \cdot e^{i 2\pi \frac{kn}{N}} = \mathrm{DFT}^{-1}\big[\mathrm{DFT}\{x_N\} \cdot \mathrm{DFT}\{y_N\}\big][n].$$

QED.

Two convolution theorems exist for the Fourier series coefficients of a periodic function:

- The first convolution theorem states that if $f$ and $g$ are in $L^1([-\pi, \pi])$, the Fourier series coefficients of the 2π-periodic convolution of $f$ and $g$ are given by:

$$\widehat{[f * g]}(n) = 2\pi \cdot \hat{f}(n) \cdot \hat{g}(n),$$ ^{ [nb 1] }

- where:

$$[f * g](x) \triangleq \int_{-\pi}^{\pi} f(u)\, g(x - u)\,du.$$

- The second convolution theorem states that the Fourier series coefficients of the product of $f$ and $g$ are given by the discrete convolution of the $\hat{f}$ and $\hat{g}$ sequences:

$$\widehat{f \cdot g}(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k)\, \hat{g}(n - k).$$

- ↑ The scale factor is always equal to the period, 2π in this case.
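The second convolution theorem can be checked numerically for small trigonometric polynomials. The sketch below stores Fourier coefficients as dictionaries, forms the discrete convolution of the two coefficient sequences, and compares each result against the defining integral evaluated by a Riemann sum (exact here because the integrands are band-limited); all names and example coefficients are illustrative:

```python
import cmath
import math

# Fourier coefficients of two trigonometric polynomials, stored as {n: c_n}.
cf = {-1: 0.5, 1: 0.5}            # f(x) = cos(x)
cg = {0: 1.0, -2: 0.25, 2: 0.25}  # g(x) = 1 + 0.5*cos(2x)

def evaluate(coeffs, x):
    """Evaluate sum of c_n * e^{inx}."""
    return sum(c * cmath.exp(1j * n * x) for n, c in coeffs.items())

# Discrete convolution of the coefficient sequences.
product_coeffs = {}
for n1, c1 in cf.items():
    for n2, c2 in cg.items():
        product_coeffs[n1 + n2] = product_coeffs.get(n1 + n2, 0) + c1 * c2

# Cross-check against (1/2pi) * integral of f(x)g(x)e^{-inx} dx via a
# Riemann sum, which is exact once the sample count exceeds the bandwidth.
N = 64
for n, expected in product_coeffs.items():
    integral = sum(evaluate(cf, 2 * math.pi * k / N)
                   * evaluate(cg, 2 * math.pi * k / N)
                   * cmath.exp(-1j * n * 2 * math.pi * k / N)
                   for k in range(N)) / N
    assert abs(integral - expected) < 1e-9
```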


- Katznelson, Yitzhak (1976), *An Introduction to Harmonic Analysis*, Dover, ISBN 0-486-63331-4.
- Weisstein, Eric W. "Convolution Theorem". *MathWorld*.
- Crutchfield, Steve (October 9, 2010), "The Joy of Convolution", *Johns Hopkins University*, retrieved November 19, 2010.

For a visual representation of the use of the convolution theorem in signal processing, see:

- Johns Hopkins University's Java-aided simulation: http://www.jhu.edu/signals/convolve/index.html

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
