Multiplier (Fourier analysis)

In Fourier analysis, a multiplier operator is a type of linear operator, or transformation of functions. These operators act on a function by altering its Fourier transform. Specifically, they multiply the Fourier transform of a function by a specified function known as the multiplier or symbol. Occasionally, the term multiplier operator itself is shortened simply to multiplier.[1] In simple terms, the multiplier reshapes the frequencies involved in any function. This class of operators turns out to be broad: general theory shows that a translation-invariant operator on a group which obeys some (very mild) regularity conditions can be expressed as a multiplier operator, and conversely.[2] Many familiar operators, such as translations and differentiation, are multiplier operators, although there are many more complicated examples such as the Hilbert transform.

In signal processing, a multiplier operator is called a "filter", and the multiplier is the filter's frequency response (or transfer function).
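
To make the filtering viewpoint concrete, the following sketch (in Python with NumPy; the function name ideal_lowpass and the chosen cutoff are illustrative, not from the source) realizes an ideal low-pass filter as a multiplier operator: the signal's discrete Fourier transform is multiplied by the indicator of a frequency band and then transformed back.

```python
import numpy as np

def ideal_lowpass(signal, cutoff, sample_rate):
    """Apply an ideal low-pass filter by multiplying the signal's
    Fourier transform by the indicator of the band [-cutoff, cutoff]."""
    spectrum = np.fft.fft(signal)                       # forward transform
    freqs = np.fft.fftfreq(len(signal), d=1.0 / sample_rate)
    multiplier = (np.abs(freqs) <= cutoff)              # the filter's frequency response
    return np.real(np.fft.ifft(multiplier * spectrum))  # back to the time domain

# Example: keep the 2 Hz component of a 2 Hz + 50 Hz mixture.
t = np.arange(0, 1, 1 / 1000)
x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
y = ideal_lowpass(x, cutoff=10, sample_rate=1000)       # approximately sin(2*pi*2*t)
```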

In the wider context, multiplier operators are special cases of spectral multiplier operators, which arise from the functional calculus of an operator (or family of commuting operators). They are also special cases of pseudo-differential operators, and more generally Fourier integral operators. There are natural questions in this field that are still open, such as characterizing the Lp bounded multiplier operators (see below).

Multiplier operators are unrelated to Lagrange multipliers, except that they both involve the multiplication operation.

For the necessary background on the Fourier transform, see that page. Additional important background may be found on the pages operator norm and Lp space.

Examples

In the setting of periodic functions defined on the unit circle, the Fourier transform of a function is simply the sequence of its Fourier coefficients. To see that differentiation can be realized as a multiplier, consider the Fourier series for the derivative of a periodic function $f(t)$. After using integration by parts in the definition of the Fourier coefficient we have that

$\widehat{f'}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f'(t) e^{-int}\, dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} (in) f(t) e^{-int}\, dt = in \cdot \hat{f}(n).$

So, formally, it follows that the Fourier series for the derivative is simply the Fourier series for $f$ multiplied by a factor $in$. This is the same as saying that differentiation is a multiplier operator with multiplier $in$.
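
A quick numerical check of this identity, sketched here in Python with NumPy under the usual discrete approximation of Fourier coefficients by the FFT, multiplies the coefficients of a smooth periodic function by $in$ and compares the result with the derivative computed by hand.

```python
import numpy as np

# Sample a smooth 2*pi-periodic function and its exact derivative.
N = 64
t = 2 * np.pi * np.arange(N) / N
f = np.exp(np.cos(t))          # test function
df_exact = -np.sin(t) * f      # its derivative, computed by hand

# Differentiate by multiplying the Fourier coefficients by i*n.
n = np.fft.fftfreq(N, d=1.0 / N)                     # the integer frequencies n
df_spectral = np.real(np.fft.ifft(1j * n * np.fft.fft(f)))

print(np.max(np.abs(df_spectral - df_exact)))        # tiny (near machine precision)
```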

An example of a multiplier operator acting on functions on the real line is the Hilbert transform. It can be shown that the Hilbert transform is a multiplier operator whose multiplier is given by $m(\xi) = -i\,\operatorname{sgn}(\xi)$, where sgn is the signum function.
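
The same discrete approximation gives a sketch of the Hilbert transform as a multiplier operator; here the elementary identity $H(\cos 3t) = \sin 3t$ is used to check the symbol $-i\,\operatorname{sgn}$ (the code is illustrative, not from the source).

```python
import numpy as np

# Discrete sketch of the Hilbert transform as the multiplier -i * sgn(n).
N = 256
t = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * t)

n = np.fft.fftfreq(N, d=1.0 / N)               # integer frequencies
multiplier = -1j * np.sign(n)                  # the symbol -i * sgn
Hf = np.real(np.fft.ifft(multiplier * np.fft.fft(f)))

print(np.max(np.abs(Hf - np.sin(3 * t))))      # H(cos 3t) = sin 3t, error near machine precision
```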

Finally, another important example of a multiplier is the characteristic function of the unit cube in $\mathbb{R}^n$, which arises in the study of "partial sums" for the Fourier transform (see Convergence of Fourier series).

Definition

Multiplier operators can be defined on any group G for which the Fourier transform is also defined (in particular, on any locally compact abelian group). The general definition is as follows. If $f : G \to \mathbb{C}$ is a sufficiently regular function, let $\hat{f} : \hat{G} \to \mathbb{C}$ denote its Fourier transform (where $\hat{G}$ is the Pontryagin dual of G). Let $m : \hat{G} \to \mathbb{C}$ denote another function, which we shall call the multiplier. Then the multiplier operator $T = T_m$ associated to this symbol m is defined via the formula

$\widehat{Tf}(\xi) := m(\xi)\, \hat{f}(\xi).$

In other words, the Fourier transform of Tf at a frequency ξ is given by the Fourier transform of f at that frequency, multiplied by the value of the multiplier at that frequency. This explains the terminology "multiplier".

Note that the above definition only defines Tf implicitly; in order to recover Tf explicitly one needs to invert the Fourier transform. This can be easily done if both f and m are sufficiently smooth and integrable. One of the major problems in the subject is to determine, for any specified multiplier m, whether the corresponding Fourier multiplier operator continues to be well-defined when f has very low regularity, for instance if it is only assumed to lie in an Lp space. See the discussion on the "boundedness problem" below. As a bare minimum, one usually requires the multiplier m to be bounded and measurable; this is sufficient to establish boundedness on $L^2$ but is in general not strong enough to give boundedness on other spaces.

One can view the multiplier operator T as the composition of three operators, namely the Fourier transform, the operation of pointwise multiplication by m, and then the inverse Fourier transform. Equivalently, T is the conjugation of the pointwise multiplication operator by the Fourier transform. Thus one can think of multiplier operators as operators which are diagonalized by the Fourier transform.
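
The finite-dimensional analogue of this observation can be checked directly: conjugating a diagonal (pointwise multiplication) matrix by the discrete Fourier transform produces a circulant, hence translation-invariant, matrix. The sketch below, with an arbitrary randomly chosen symbol, is purely illustrative.

```python
import numpy as np

# A discrete multiplier operator is the conjugation of a diagonal matrix by the
# DFT; its matrix in the "time" domain is circulant (translation invariant).
N = 8
F = np.fft.fft(np.eye(N))            # DFT matrix (F @ x == fft(x))
Finv = np.fft.ifft(np.eye(N))        # inverse DFT matrix
rng = np.random.default_rng(0)
m = rng.standard_normal(N)           # an arbitrary bounded symbol

T = Finv @ np.diag(m) @ F            # T = F^{-1} M_m F

# Every row of T is a cyclic shift of the first row, so T commutes with translations.
first_row = T[0]
print(all(np.allclose(T[k], np.roll(first_row, k)) for k in range(N)))   # True
```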

Multiplier operators on common groups

We now specialize the above general definition to specific groups G. First consider the unit circle $G = \mathbb{R}/2\pi\mathbb{Z}$; functions on G can thus be thought of as 2π-periodic functions on the real line. In this group, the Pontryagin dual is the group of integers, $\hat{G} = \mathbb{Z}$. The Fourier transform (for sufficiently regular functions f) is given by

$\hat{f}(n) := \frac{1}{2\pi} \int_0^{2\pi} f(t)\, e^{-int}\, dt$

and the inverse Fourier transform is given by

$f(t) = \sum_{n=-\infty}^{\infty} \hat{f}(n)\, e^{int}.$

A multiplier in this setting is simply a sequence $(m_n)_{n=-\infty}^{\infty}$ of numbers, and the operator $T = T_m$ associated to this multiplier is then given by the formula

$(Tf)(t) := \sum_{n=-\infty}^{\infty} m_n\, \hat{f}(n)\, e^{int},$

at least for sufficiently well-behaved choices of the multiplier $(m_n)_{n=-\infty}^{\infty}$ and the function f.
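
A direct (if inefficient) implementation of this formula, sketched below with Riemann-sum Fourier coefficients and the Fejér means $m_n = \max(1 - |n|/N,\, 0)$ as the symbol, shows how a multiplier sequence acts on a periodic function; the helper names are hypothetical.

```python
import numpy as np

def fourier_coefficient(f, n, num_points=2048):
    """Approximate fhat(n) = (1/2pi) * integral of f(t) e^{-int} dt by a Riemann sum."""
    t = 2 * np.pi * np.arange(num_points) / num_points
    return np.mean(f(t) * np.exp(-1j * n * t))

def apply_multiplier(f, m, N, t):
    """Evaluate sum_{|n| <= N} m(n) * fhat(n) * e^{int} at the points t."""
    return sum(m(n) * fourier_coefficient(f, n) * np.exp(1j * n * t)
               for n in range(-N, N + 1))

square_wave = lambda t: np.sign(np.sin(t))
N = 32
fejer = lambda n: max(1.0 - abs(n) / N, 0.0)        # the multiplier sequence m_n

t = np.linspace(0, 2 * np.pi, 200)
smoothed = np.real(apply_multiplier(square_wave, fejer, N, t))   # a smoothed square wave
```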

Now let G be a Euclidean space $G = \mathbb{R}^n$. Here the dual group is also Euclidean, $\hat{G} = \mathbb{R}^n$, and the Fourier and inverse Fourier transforms are given by the formulae

$\hat{f}(\xi) := \int_{\mathbb{R}^n} f(x)\, e^{-2\pi i x \cdot \xi}\, dx, \qquad f(x) = \int_{\mathbb{R}^n} \hat{f}(\xi)\, e^{2\pi i x \cdot \xi}\, d\xi.$

A multiplier in this setting is a function $m : \mathbb{R}^n \to \mathbb{C}$, and the associated multiplier operator $T = T_m$ is defined by

$Tf(x) := \int_{\mathbb{R}^n} m(\xi)\, \hat{f}(\xi)\, e^{2\pi i x \cdot \xi}\, d\xi,$

again assuming sufficiently strong regularity and boundedness assumptions on the multiplier and function.

In the sense of distributions, there is no difference between multiplier operators and convolution operators; every multiplier T can also be expressed in the form $Tf = f * K$ for some distribution K, known as the convolution kernel of T. In this view, translation by an amount $x_0$ is convolution with a Dirac delta function $\delta(\cdot - x_0)$, and differentiation is convolution with $\delta'$. Further examples are given in the tables below.
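
The agreement between the multiplier picture and the convolution picture can already be seen in the discrete setting, where multiplication of DFTs corresponds exactly to circular convolution; the following NumPy sketch (illustrative only) checks this identity on random data.

```python
import numpy as np

# Applying the symbol fft(K) in the frequency domain equals circular convolution with K.
N = 128
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
K = rng.standard_normal(N)

via_multiplier = np.real(np.fft.ifft(np.fft.fft(K) * np.fft.fft(f)))
via_convolution = np.array([np.sum(f * K[(k - np.arange(N)) % N]) for k in range(N)])

print(np.max(np.abs(via_multiplier - via_convolution)))   # near machine precision
```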

Diagrams

[Figure: Fourier multiplier diagram]

Further examples

On the unit circle

The following table shows some common examples of multiplier operators on the unit circle $G = \mathbb{R}/2\pi\mathbb{Z}$.

| Name | Multiplier, $m_n$ | Operator, $Tf(t)$ | Kernel, $K(t)$ |
|---|---|---|---|
| Identity operator | $1$ | $f(t)$ | Dirac delta function $\delta(t)$ |
| Multiplication by a constant $c$ | $c$ | $cf(t)$ | $c\,\delta(t)$ |
| Translation by $s$ | $e^{-ins}$ | $f(t-s)$ | $\delta(t-s)$ |
| Differentiation | $in$ | $f'(t)$ | $\delta'(t)$ |
| $k$-fold differentiation | $(in)^k$ | $f^{(k)}(t)$ | $\delta^{(k)}(t)$ |
| Constant coefficient differential operator $P\!\left(\frac{d}{dt}\right)$ | $P(in)$ | $P\!\left(\frac{d}{dt}\right) f(t)$ | $P\!\left(\frac{d}{dt}\right) \delta(t)$ |
| Fractional derivative of order $\alpha$ | $|n|^\alpha$ | $\left|\frac{d}{dt}\right|^\alpha f(t)$ | |
| Mean value | $1_{n=0}$ | $\frac{1}{2\pi}\int_0^{2\pi} f(s)\, ds$ | $1$ |
| Mean-free component | $1_{n\neq 0}$ | $f(t) - \frac{1}{2\pi}\int_0^{2\pi} f(s)\, ds$ | $\delta(t) - 1$ |
| Integration (of mean-free component) | $\frac{1}{in}\, 1_{n\neq 0}$ | | Sawtooth function |
| Periodic Hilbert transform $H$ | $-i\,\operatorname{sgn}(n)$ | $Hf(t)$ | $\mathrm{p.v.}\, \cot\!\left(\frac{t}{2}\right)$ |
| Dirichlet summation | $1_{|n|\le N}$ | $\sum_{n=-N}^{N} \hat{f}(n)\, e^{int}$ | Dirichlet kernel $D_N(t)$ |
| Fejér summation | $\max\!\left(1 - \frac{|n|}{N},\, 0\right)$ | $\sum_{|n| < N} \left(1 - \frac{|n|}{N}\right) \hat{f}(n)\, e^{int}$ | Fejér kernel $F_N(t)$ |
| General multiplier | $m_n$ | $\sum_{n=-\infty}^{\infty} m_n\, \hat{f}(n)\, e^{int}$ | $K(t) = \sum_{n=-\infty}^{\infty} m_n\, e^{int}$ |
| General convolution operator | $\hat{K}(n)$ | $f * K(t) = \frac{1}{2\pi}\int_0^{2\pi} f(s)\, K(t-s)\, ds$ | $K(t)$ |

On the Euclidean space

The following table shows some common examples of multiplier operators on the Euclidean space $G = \mathbb{R}^n$.

| Name | Multiplier, $m(\xi)$ | Operator, $Tf(x)$ | Kernel, $K(x)$ |
|---|---|---|---|
| Identity operator | $1$ | $f(x)$ | $\delta(x)$ |
| Multiplication by a constant $c$ | $c$ | $cf(x)$ | $c\,\delta(x)$ |
| Translation by $y$ | $e^{-2\pi i y \cdot \xi}$ | $f(x-y)$ | $\delta(x-y)$ |
| Derivative $d/dx$ (one dimension only) | $2\pi i \xi$ | $f'(x)$ | $\delta'(x)$ |
| Partial derivative $\partial/\partial x_j$ | $2\pi i \xi_j$ | $\frac{\partial f}{\partial x_j}(x)$ | $\frac{\partial \delta}{\partial x_j}(x)$ |
| Laplacian $\Delta$ | $-4\pi^2 |\xi|^2$ | $\Delta f(x)$ | $\Delta \delta(x)$ |
| Constant coefficient differential operator $P(\nabla)$ | $P(2\pi i \xi)$ | $P(\nabla) f(x)$ | $P(\nabla)\delta(x)$ |
| Fractional derivative of order $\alpha$ | $(2\pi|\xi|)^\alpha$ | $(-\Delta)^{\alpha/2} f(x)$ | |
| Riesz potential of order $\alpha$ | $(2\pi|\xi|)^{-\alpha}$ | $(-\Delta)^{-\alpha/2} f(x)$ | $c_{n,\alpha}\, |x|^{\alpha - n}$ |
| Bessel potential of order $\alpha$ | $\left(1 + 4\pi^2|\xi|^2\right)^{-\alpha/2}$ | $(1-\Delta)^{-\alpha/2} f(x)$ | |
| Heat flow operator $e^{t\Delta}$ | $e^{-4\pi^2 t |\xi|^2}$ | $e^{t\Delta} f(x)$ | Heat kernel $(4\pi t)^{-n/2}\, e^{-|x|^2/(4t)}$ |
| Schrödinger equation evolution operator $e^{it\Delta}$ | $e^{-4\pi^2 i t |\xi|^2}$ | $e^{it\Delta} f(x)$ | Schrödinger kernel |
| Hilbert transform $H$ (one dimension only) | $-i\,\operatorname{sgn}(\xi)$ | $Hf(x)$ | $\mathrm{p.v.}\ \frac{1}{\pi x}$ |
| Riesz transforms $R_j$ | $-i\,\frac{\xi_j}{|\xi|}$ | $R_j f(x)$ | $\mathrm{p.v.}\ \frac{c_n\, x_j}{|x|^{n+1}}$ |
| Partial Fourier integral (one dimension only) | $1_{|\xi| \le R}$ | $\int_{-R}^{R} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi$ | $\frac{\sin(2\pi R x)}{\pi x}$ |
| Disk multiplier | $1_{|\xi| \le R}$ | | $R^{n/2}\, |x|^{-n/2}\, J_{n/2}(2\pi R|x|)$ (J is a Bessel function) |
| Bochner–Riesz operators | $\left(1 - |\xi|^2\right)_+^\delta$ | | |
| General multiplier | $m(\xi)$ | $\int_{\mathbb{R}^n} m(\xi)\, \hat{f}(\xi)\, e^{2\pi i x \cdot \xi}\, d\xi$ | Inverse Fourier transform of $m$ (as a distribution) |
| General convolution operator | $\hat{K}(\xi)$ | $f * K(x)$ | $K(x)$ |

General considerations

The map $m \mapsto T_m$ is a homomorphism of C*-algebras. This follows because the sum of two multiplier operators $T_{m_1}$ and $T_{m_2}$ is a multiplier operator with multiplier $m_1 + m_2$, the composition of these two multiplier operators is a multiplier operator with multiplier $m_1 m_2$, and the adjoint of a multiplier operator $T_m$ is another multiplier operator with multiplier $\overline{m}$.

In particular, we see that any two multiplier operators commute with each other. It is known that multiplier operators are translation-invariant. Conversely, one can show that any translation-invariant linear operator which is bounded on L2(G) is a multiplier operator.
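
A finite-dimensional sketch of these algebra relations (illustrative only): composing two discrete multiplier operators gives the operator whose symbol is the product of the symbols, so any two such operators commute.

```python
import numpy as np

# Check that T_{m1} T_{m2} = T_{m1 m2} = T_{m2} T_{m1} in the discrete setting.
N = 64
rng = np.random.default_rng(1)
m1, m2 = rng.standard_normal(N), rng.standard_normal(N)
f = rng.standard_normal(N)

T = lambda m, g: np.fft.ifft(m * np.fft.fft(g))   # the multiplier operator with symbol m

lhs = T(m1, T(m2, f))                             # T_{m1} T_{m2} f
rhs = T(m1 * m2, f)                               # T_{m1 m2} f
print(np.allclose(lhs, rhs), np.allclose(T(m2, T(m1, f)), lhs))   # True True
```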

The Lp boundedness problem

The Lp boundedness problem (for any particular p) for a given group G is, stated simply, to identify the multipliers m such that the corresponding multiplier operator is bounded from Lp(G) to Lp(G). Such multipliers are usually simply referred to as "Lp multipliers". Note that as multiplier operators are always linear, such operators are bounded if and only if they are continuous. This problem is considered to be extremely difficult in general, but many special cases can be treated. The problem depends greatly on p, although there is a duality relationship: if $\frac{1}{p} + \frac{1}{q} = 1$ and 1 ≤ p, q ≤ ∞, then a multiplier operator is bounded on Lp if and only if it is bounded on Lq.

The Riesz–Thorin theorem shows that if a multiplier operator is bounded on two different Lp spaces, then it is also bounded on all intermediate spaces. Hence we get that the space of multipliers is smallest for L1 and L∞ and grows as one approaches L2, which has the largest multiplier space.

Boundedness on L2

This is the easiest case. Parseval's theorem allows one to solve this problem completely and shows that a function m is an L2(G) multiplier if and only if it is bounded and measurable.
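
For the sufficiency, a one-line computation with the Plancherel theorem (written here in the Euclidean normalization used above) gives

$\|T_m f\|_{L^2}^2 = \|\widehat{T_m f}\|_{L^2}^2 = \int |m(\xi)|^2\, |\hat{f}(\xi)|^2\, d\xi \le \|m\|_{L^\infty}^2\, \|\hat{f}\|_{L^2}^2 = \|m\|_{L^\infty}^2\, \|f\|_{L^2}^2,$

so $\|T_m\|_{L^2 \to L^2} \le \|m\|_{L^\infty}$; conversely, testing $T_m$ against functions whose Fourier transforms concentrate on sets where $|m|$ is large shows that boundedness of $T_m$ on $L^2$ forces $m$ to be essentially bounded.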

Boundedness on L1 or L∞

This case is more complicated than the Hilbertian (L2) case, but is fully resolved. The following is true:

Theorem: In the Euclidean space $\mathbb{R}^n$, a function $m(\xi)$ is an L1 multiplier (equivalently an L∞ multiplier) if and only if there exists a finite Borel measure μ such that m is the Fourier transform of μ.

(The "if" part is a simple calculation. The "only if" part here is more complicated.)

Boundedness on Lp for 1 < p < ∞

In this general case, necessary and sufficient conditions for boundedness have not been established, even for Euclidean space or the unit circle. However, several necessary conditions and several sufficient conditions are known. For instance it is known that in order for a multiplier operator to be bounded on even a single Lp space, the multiplier must be bounded and measurable (this follows from the characterisation of L2 multipliers above and the inclusion property). However, this is not sufficient except when p = 2.

Results that give sufficient conditions for boundedness are known as multiplier theorems. Three such results are given below.

Marcinkiewicz multiplier theorem

Let $m : \mathbb{R} \to \mathbb{R}$ be a bounded function that is continuously differentiable on every set of the form $(-2^{j+1}, -2^j) \cup (2^j, 2^{j+1})$ for $j \in \mathbb{Z}$, and whose derivative satisfies

$\sup_{j \in \mathbb{Z}} \left( \int_{-2^{j+1}}^{-2^j} |m'(\xi)|\, d\xi + \int_{2^j}^{2^{j+1}} |m'(\xi)|\, d\xi \right) < \infty.$

Then m is an Lp multiplier for all 1 < p < ∞.

Mikhlin multiplier theorem

Let m be a bounded function on $\mathbb{R}^n$ which is smooth except possibly at the origin, and such that the function $|x|^k\, \left|\nabla^k m(x)\right|$ is bounded for all integers $0 \le k \le \frac{n}{2} + 1$: then m is an Lp multiplier for all 1 < p < ∞.

This is a special case of the Hörmander-Mikhlin multiplier theorem.
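
For example, the symbol $m(\xi) = -i\,\xi_j/|\xi|$ of the Riesz transform $R_j$ is smooth away from the origin and homogeneous of degree $0$; each derivative $\nabla^k m$ is then homogeneous of degree $-k$, so $|\xi|^k\, |\nabla^k m(\xi)|$ is bounded, and the theorem shows that the Riesz transforms are bounded on $L^p(\mathbb{R}^n)$ for all 1 < p < ∞.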

The proofs of these two theorems are fairly tricky, involving techniques from Calderón–Zygmund theory and the Marcinkiewicz interpolation theorem: for the original proof, see Mikhlin (1956) or Mikhlin (1965, pp. 225–240).

Radial multipliers

For radial multipliers, a necessary and sufficient condition for $L^p(\mathbb{R}^d)$ boundedness is known for some partial range of $p$. Let $d \ge 4$ and $1 < p < \frac{2(d-1)}{d+1}$. Suppose that $m$ is a radial multiplier compactly supported away from the origin. Then $m$ is an $L^p(\mathbb{R}^d)$ multiplier if and only if the Fourier transform of $m$ belongs to $L^p(\mathbb{R}^d)$.

This is a theorem of Heo, Nazarov, and Seeger. [3] They also provided a necessary and sufficient condition which is valid without the compact support assumption on .

Examples

Translations are bounded operators on any Lp. Differentiation is not bounded on any Lp. The Hilbert transform is bounded only for p strictly between 1 and ∞. The fact that it is unbounded on L∞ is easy, since it is well known that the Hilbert transform of a step function is unbounded. Duality gives the same for p = 1. However, both the Marcinkiewicz and Mikhlin multiplier theorems show that the Hilbert transform is bounded in Lp for all 1 < p < ∞.

Another interesting case on the unit circle is when the sequence $(m_n)$ that is being proposed as a multiplier is constant for n in each of the dyadic blocks $\{2^k, \ldots, 2^{k+1}-1\}$ and $\{-2^{k+1}+1, \ldots, -2^k\}$, $k = 0, 1, 2, \ldots$. From the Marcinkiewicz multiplier theorem (adapted to the context of the unit circle) we see that any such sequence (also assumed to be bounded, of course) is a multiplier for every 1 < p < ∞: such a sequence has no variation inside any dyadic block, so the block-by-block total-variation condition of the theorem is satisfied automatically.

In one dimension, the disk multiplier operator (see table above) is bounded on Lp for every 1 < p < ∞. However, in 1972, Charles Fefferman showed the surprising result that in two and higher dimensions the disk multiplier operator is unbounded on Lp for every p ≠ 2. The corresponding problem for Bochner–Riesz multipliers is only partially solved; see also Bochner–Riesz conjecture.

Notes

  1. Duoandikoetxea 2001, Section 3.5.
  2. Stein 1970, Chapter II.
  3. Heo, Yaryong; Nazarov, Fëdor; Seeger, Andreas. "Radial Fourier multipliers in high dimensions". Acta Math. 206 (2011), no. 1, 55–92. doi:10.1007/s11511-011-0059-x. https://projecteuclid.org/euclid.acta/1485892528

Works cited

Duoandikoetxea, Javier (2001). Fourier Analysis. Graduate Studies in Mathematics. American Mathematical Society.
Mikhlin, Solomon G. (1956). "On the multipliers of Fourier integrals". Doklady Akademii Nauk SSSR. 109: 701–703.
Mikhlin, Solomon G. (1965). Multidimensional Singular Integrals and Integral Equations. Pergamon Press.
Stein, Elias M. (1970). Singular Integrals and Differentiability Properties of Functions. Princeton University Press.
