Fourier integral operator


In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations. The class of Fourier integral operators contains differential operators as well as classical integral operators as special cases.


A Fourier integral operator $T$ is given by:

$$(Tf)(x) = \int_{\mathbb{R}^n} e^{2\pi i \Phi(x,\xi)}\, a(x,\xi)\, \hat{f}(\xi)\, d\xi,$$

where $\hat{f}$ denotes the Fourier transform of $f$, $a(x,\xi)$ is a standard symbol which is compactly supported in $x$, and $\Phi$ is real valued and homogeneous of degree $1$ in $\xi$. It is also necessary to require that $\det\left(\frac{\partial^2 \Phi}{\partial x_i\, \partial \xi_j}\right) \neq 0$ on the support of $a$. Under these conditions, if $a$ is of order zero, it is possible to show that $T$ defines a bounded operator from $L^2$ to $L^2$. [1]
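Since the article gives no code, the following is a minimal numerical sketch (not from the source) of how such an operator can be discretized in one dimension: the Fourier transform of $f$ is sampled on a truncated frequency grid and the $\xi$-integral is approximated by a Riemann sum. The phase, the symbol, the grid sizes, and the helper `fio_apply` are all illustrative choices; normalization constants are omitted because Fourier-transform conventions vary.

```python
import numpy as np

def fio_apply(f_hat, x_grid, xi_grid, phase, symbol):
    """Apply a 1-D Fourier integral operator to a sampled Fourier transform
    f_hat(xi) by Riemann-sum quadrature over a truncated frequency window."""
    d_xi = xi_grid[1] - xi_grid[0]
    out = np.empty(x_grid.shape, dtype=complex)
    for j, x in enumerate(x_grid):
        integrand = np.exp(1j * phase(x, xi_grid)) * symbol(x, xi_grid) * f_hat
        out[j] = integrand.sum() * d_xi
    return out

# Illustrative choices: a phase homogeneous of degree 1 in xi, and an order-zero
# symbol cut off near xi = 0 and outside |x| <= 5 (a crude stand-in for a smooth,
# compactly supported cutoff).
phase = lambda x, xi: x * xi + np.abs(xi)
symbol = lambda x, xi: (np.abs(x) <= 5.0) * (np.abs(xi) >= 1.0) * 1.0

x_grid = np.linspace(-10.0, 10.0, 401)
xi_grid = np.linspace(-40.0, 40.0, 4001)
f_hat = np.exp(-xi_grid ** 2)   # stands in for the Fourier transform of some f
Tf = fio_apply(f_hat, x_grid, xi_grid, phase, symbol)
```

This brute-force quadrature is only meant to make the definition concrete; in practice one would exploit the oscillatory structure of the integral rather than sum it directly.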

Examples

One motivation for the study of Fourier integral operators is the solution operator for the initial value problem for the wave operator. Indeed, consider the following problem:

$$\frac{1}{c^2} \frac{\partial^2 u}{\partial t^2}(t,x) = \Delta u(t,x) \quad \text{for } (t,x) \in \mathbb{R}^+ \times \mathbb{R}^n,$$

and

$$u(0,x) = 0, \qquad \frac{\partial u}{\partial t}(0,x) = f(x).$$

The solution to this problem is given by

$$u(t,x) = \frac{1}{(2\pi)^n} \int \frac{e^{i(\langle x,\xi\rangle + ct|\xi|)}}{2ic|\xi|}\, \hat{f}(\xi)\, d\xi \;-\; \frac{1}{(2\pi)^n} \int \frac{e^{i(\langle x,\xi\rangle - ct|\xi|)}}{2ic|\xi|}\, \hat{f}(\xi)\, d\xi.$$

These integrals need to be interpreted as oscillatory integrals, since they do not in general converge. The formula formally looks like a sum of two Fourier integral operators; however, the coefficient in each of the integrals is not smooth at the origin, and so is not a standard symbol. If we cut out this singularity with a cutoff function, the operators so obtained still provide solutions to the initial value problem modulo smooth functions. Thus, if we are only interested in the propagation of singularities of the initial data, it is sufficient to consider such operators. In fact, if we allow the sound speed c in the wave equation to vary with position, we can still find a Fourier integral operator that provides a solution modulo smooth functions, and Fourier integral operators thus provide a useful tool for studying the propagation of singularities of solutions to variable-speed wave equations, and more generally for other hyperbolic equations.
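As an illustration only (not part of the original article), the sketch below evaluates the constant-speed solution operator numerically in one space dimension on a periodic grid. It uses the equivalent Fourier-multiplier form $\sin(ct|\xi|)/(c|\xi|)$, obtained from the two oscillatory integrals above via $\sin\theta = (e^{i\theta} - e^{-i\theta})/(2i)$; the grid, the sound speed, and the Gaussian initial velocity are arbitrary choices.

```python
import numpy as np

def wave_propagator(f, dx, t, c=1.0):
    """Return u(t, .) for the wave equation with u(0,.) = 0 and u_t(0,.) = f,
    computed on a periodic 1-D grid with the FFT."""
    n = f.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)       # discrete frequencies
    abs_xi = np.abs(xi)
    safe = np.where(abs_xi == 0.0, 1.0, abs_xi)      # avoid division by zero
    # sin(c t |xi|) / (c |xi|), with its limit value t at xi = 0
    multiplier = np.where(abs_xi == 0.0, t, np.sin(c * t * abs_xi) / (c * safe))
    return np.fft.ifft(multiplier * np.fft.fft(f)).real

# Example: a narrow Gaussian initial velocity splits into two counter-propagating waves.
x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
u = wave_propagator(np.exp(-20.0 * x ** 2), dx=x[1] - x[0], t=3.0)
```

In the variable-speed case no exact multiplier of this form exists, which is where the Fourier integral operator construction described above, valid modulo smooth errors, becomes relevant.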

Notes

  1. Hörmander, Lars (1970), "Fourier integral operators. I", Acta Mathematica, 127: 79–183, doi:10.1007/BF02392052

