Stationary phase approximation

In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly varying complex exponential.

This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.[1] It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.

Formula

Letting $\Sigma$ denote the set of critical points of the function $f$ (i.e. points where $\nabla f = 0$), under the assumption that $g$ is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. $\det(\operatorname{Hess}(f(x_0))) \neq 0$ for $x_0 \in \Sigma$) we have the following asymptotic formula, as $k \to \infty$:

$$\int_{\mathbb{R}^n} g(x) e^{ikf(x)} \, dx = \sum_{x_0 \in \Sigma} e^{ikf(x_0)} \left|\det(\operatorname{Hess}(f(x_0)))\right|^{-1/2} e^{\frac{i\pi}{4} \operatorname{sgn}(\operatorname{Hess}(f(x_0)))} \left(\frac{2\pi}{k}\right)^{n/2} g(x_0) + o\!\left(k^{-n/2}\right)$$

Here $\operatorname{Hess}(f)$ denotes the Hessian of $f$, and $\operatorname{sgn}(\operatorname{Hess}(f))$ denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.

For $n = 1$, this reduces to:

$$\int_{\mathbb{R}} g(x) e^{ikf(x)} \, dx = \sum_{x_0 \in \Sigma} g(x_0) e^{ikf(x_0)} e^{\pm \frac{i\pi}{4}} \sqrt{\frac{2\pi}{k \left|f''(x_0)\right|}} + o\!\left(k^{-1/2}\right)$$

In this case the assumptions on $f$ reduce to all the critical points being non-degenerate, i.e. $f''(x_0) \neq 0$, and the sign in $e^{\pm i\pi/4}$ is that of $f''(x_0)$.
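The one-dimensional reduction can be checked term by term: at a critical point $x_0$ of a function of one variable, the Hessian is the $1 \times 1$ matrix $(f''(x_0))$, so

```latex
\left|\det(\operatorname{Hess}(f(x_0)))\right|^{-1/2} = \left|f''(x_0)\right|^{-1/2},
\qquad
\operatorname{sgn}(\operatorname{Hess}(f(x_0))) = \operatorname{sgn} f''(x_0) = \pm 1,
```

and each summand in the general asymptotic formula becomes $g(x_0)\, e^{ikf(x_0)}\, e^{\pm i\pi/4} \sqrt{2\pi/(k\,|f''(x_0)|)}$, with the sign of the phase factor matching the sign of $f''(x_0)$.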

This is just the Wick-rotated version of the formula for the method of steepest descent.
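As an illustrative numerical sketch (not part of the classical development), the $n = 1$ formula can be tested on a case with a closed form: take $f(x) = x^2$ and $g(x) = e^{-x^2}$, so the only critical point is $x_0 = 0$ with $f''(0) = 2$ and $g(0) = 1$, the integral equals $\sqrt{\pi/(1 - ik)}$ exactly, and the leading-order prediction is $e^{i\pi/4}\sqrt{\pi/k}$:

```python
import cmath

# Model problem: I(k) = ∫ exp(-x^2) exp(i k x^2) dx over the real line.
# f(x) = x^2 has a single critical point x0 = 0 with f''(0) = 2 > 0, g(0) = 1.

def stationary_phase_estimate(k):
    # Leading term g(x0) e^{ikf(x0)} e^{i pi/4} sqrt(2 pi / (k |f''(x0)|))
    # = e^{i pi/4} sqrt(pi / k)
    return cmath.exp(1j * cmath.pi / 4) * cmath.sqrt(cmath.pi / k)

def exact_integral(k):
    # Gaussian integral: ∫ exp(-(1 - i k) x^2) dx = sqrt(pi / (1 - i k))
    return cmath.sqrt(cmath.pi / (1 - 1j * k))

for k in (10, 100, 1000):
    rel_err = abs(exact_integral(k) - stationary_phase_estimate(k)) / abs(exact_integral(k))
    print(f"k = {k:5d}   relative error = {rel_err:.2e}")
```

The printed relative errors shrink roughly like $1/(2k)$, consistent with the remainder estimate.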

An example

Consider a function

$$f(x,t) = \frac{1}{2\pi} \int_{\mathbb{R}} F(\omega)\, e^{i[k(\omega)x - \omega t]} \, d\omega.$$

The phase term in this function, $\phi = k(\omega)x - \omega t$, is stationary when

$$\frac{d}{d\omega}\left(k(\omega)x - \omega t\right) = 0,$$

or equivalently,

$$\frac{dk(\omega)}{d\omega}\bigg|_{\omega = \omega_0} = \frac{t}{x}.$$

Solutions to this equation yield dominant frequencies $\omega_0$ for some $x$ and $t$. If we expand $\phi$ as a Taylor series about $\omega_0$ and neglect terms of order higher than $(\omega - \omega_0)^2$, we have

$$\phi = \left[k(\omega_0)x - \omega_0 t\right] + \frac{1}{2} x k''(\omega_0)(\omega - \omega_0)^2 + \cdots,$$

where $k''$ denotes the second derivative of $k$. When $x$ is relatively large, even a small difference $(\omega - \omega_0)$ will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the limit for a Taylor expansion. If we use the formula,

$$\int_{\mathbb{R}} e^{\frac{1}{2} i c x^2} \, dx = \sqrt{\frac{2\pi}{|c|}}\, e^{\pm i\pi/4},$$

with the sign matching the sign of $c$, then

$$f(x,t) \approx \frac{1}{2\pi} e^{i\left[k(\omega_0)x - \omega_0 t\right]} \left|F(\omega_0)\right| \int_{\mathbb{R}} e^{\frac{1}{2} i x k''(\omega_0)(\omega - \omega_0)^2} \, d\omega.$$

This integrates to

$$f(x,t) \approx \frac{\left|F(\omega_0)\right|}{2\pi} \sqrt{\frac{2\pi}{x \left|k''(\omega_0)\right|}} \cos\left[k(\omega_0)x - \omega_0 t \pm \frac{\pi}{4}\right].$$
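This estimate can be tested numerically. The sketch below uses an assumed dispersion relation $k(\omega) = \omega^2$ and an assumed Gaussian envelope $F$ (both made up for illustration), and compares direct quadrature of the integral against the corresponding complex stationary phase estimate:

```python
import numpy as np

# Assumed ingredients (for illustration only): dispersion k(w) = w^2 and a
# Gaussian envelope F(w) of width sigma centred on the stationary frequency.
x, t = 400.0, 400.0
sigma = 0.25
w0 = t / (2.0 * x)            # stationary phase: dk/dw = 2w = t/x  =>  w0 = 0.5

def F(w):
    return np.exp(-((w - w0) ** 2) / (2.0 * sigma ** 2))

# Direct quadrature of f(x, t) = (1/2pi) ∫ F(w) exp(i[k(w)x - w t]) dw on a
# grid fine enough to resolve the rapid oscillations.
w = np.linspace(-1.5, 2.5, 400_001)
dw = w[1] - w[0]
numeric = np.sum(F(w) * np.exp(1j * (w ** 2 * x - w * t))) * dw / (2.0 * np.pi)

# Stationary phase estimate: k''(w0) = 2 > 0, so the phase correction is +pi/4.
phase0 = w0 ** 2 * x - w0 * t
approx = (F(w0) / (2.0 * np.pi)) * np.sqrt(2.0 * np.pi / (x * 2.0)) \
         * np.exp(1j * (phase0 + np.pi / 4.0))

print(abs(numeric - approx), abs(approx))
```

With these parameters the discrepancy is on the order of $10^{-4}$, about one percent of the amplitude $|f(x,t)| \approx 0.014$, as expected for $x = 400$.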

Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of $I(k) = \int g(x) e^{ikf(x)} \, dx$ depends only on the critical points of $f$. If by choice of $g$ the integral is localised to a region of space where $f$ has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example the Riemann–Lebesgue lemma.

The second statement is that when $f$ is a Morse function, so that the singular points of $f$ are non-degenerate and isolated, then the question can be reduced to the case $n = 1$. In fact, then, a choice of $g$ can be made to split the integral into cases with just one critical point $P$ in each. At that point, because the Hessian determinant at $P$ is by assumption not 0, the Morse lemma applies. By a change of co-ordinates $f$ may be replaced by

$$\left(x_1^2 + x_2^2 + \cdots + x_j^2\right) - \left(x_{j+1}^2 + \cdots + x_n^2\right).$$

The value of $j$ is given by the signature of the Hessian matrix of $f$ at $P$. As for $g$, the essential case is that $g$ is a product of bump functions of the $x_i$. Assuming now without loss of generality that $P$ is the origin, take a smooth bump function $h$ with value 1 on the interval $[-1, 1]$ and quickly tending to 0 outside it. Take

$$g(x) = h(x_1) h(x_2) \cdots h(x_n),$$

then Fubini's theorem reduces $I(k)$ to a product of integrals over the real line like

$$J(k) = \int h(x) e^{ikf(x)} \, dx$$

with $f(x) = \pm x^2$. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.
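The conjugate symmetry is easy to confirm numerically. The sketch below uses $h(x) = e^{-x^4}$ as a stand-in for a bump function (an assumption for illustration: it equals 1 at the origin and decays rapidly, rather than having compact support):

```python
import numpy as np

k = 50.0
x = np.linspace(-3.0, 3.0, 600_001)
dx = x[1] - x[0]
h = np.exp(-x ** 4)   # smooth, even, equals 1 at the origin (stand-in bump)

plus = np.sum(h * np.exp(1j * k * x ** 2)) * dx     # f(x) = +x^2 case
minus = np.sum(h * np.exp(-1j * k * x ** 2)) * dx   # f(x) = -x^2 case

# For real h the two cases are complex conjugates of each other.
print(abs(minus - np.conjugate(plus)))
```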

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).

One-dimensional case

The essential statement is this one:

$$\int_{-1}^{1} e^{ikx^2} \, dx = \sqrt{\frac{\pi}{k}}\, e^{i\pi/4} + O\!\left(\frac{1}{k}\right).$$

In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range $(-\infty, \infty)$ (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, $[1, \infty)$.[2]

This is the model for all one-dimensional integrals $I(k)$ with $f$ having a single non-degenerate critical point at which $f$ has second derivative $> 0$. In fact the model case has second derivative 2 at 0. In order to scale using $f''(0)$, observe that replacing $k$ by $ck$, where $c$ is constant, is the same as scaling $x$ by $\sqrt{c}$. It follows that for general values of $f''(0) > 0$, the factor $\sqrt{\pi/k}$ becomes

$$\sqrt{\frac{2\pi}{k f''(0)}}.$$
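The scaling step can be checked numerically: with $f(x) = (a/2)x^2$, so that $f''(0) = a$, the leading term should carry the factor $\sqrt{2\pi/(ka)}$. The sketch below (again with $h(x) = e^{-x^4}$ as an assumed smooth cutoff equal to 1 at the critical point) compares quadrature against this prediction for several values of $a$:

```python
import numpy as np

def oscillatory_integral(a, k):
    # Quadrature of ∫ h(x) exp(i k (a/2) x^2) dx with h(x) = exp(-x^4),
    # an assumed smooth cutoff equal to 1 at the critical point x = 0.
    x = np.linspace(-3.0, 3.0, 2_000_001)
    dx = x[1] - x[0]
    return np.sum(np.exp(-x ** 4) * np.exp(1j * k * (a / 2.0) * x ** 2)) * dx

k = 500.0
for a in (1.0, 2.0, 6.0):
    # f(x) = (a/2) x^2 has f''(0) = a, so the predicted leading term is
    # sqrt(2 pi / (k a)) e^{i pi/4}.
    predicted = np.sqrt(2.0 * np.pi / (k * a)) * np.exp(1j * np.pi / 4.0)
    numeric = oscillatory_integral(a, k)
    print(a, abs(numeric - predicted) / abs(predicted))
```

For $a = 2$ this recovers the model factor $\sqrt{\pi/k}$, and the relative errors stay well below one percent at $k = 500$.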

For $f''(0) < 0$ one uses the complex conjugate formula, as mentioned before.

Lower-order terms

As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved $f$.

Notes

  1. Courant, Richard; Hilbert, David (1953), Methods of mathematical physics, vol. 1 (2nd revised ed.), New York: Interscience Publishers, p. 474, OCLC 505700
  2. See for example Jean Dieudonné, Infinitesimal Calculus, p. 119, or Jean Dieudonné, Calcul Infinitésimal, p. 135.
