# Watson's lemma

In mathematics, Watson's lemma, proved by G. N. Watson (1918, p. 133), has significant application within the theory on the asymptotic behavior of integrals.

## Statement of the lemma

Let ${\displaystyle 0<T\leq \infty }$ be fixed. Assume ${\displaystyle \varphi (t)=t^{\lambda }\,g(t)}$, where ${\displaystyle g(t)}$ has an infinite number of derivatives in a neighborhood of ${\displaystyle t=0}$, with ${\displaystyle g(0)\neq 0}$, and ${\displaystyle \lambda >-1}$.

Suppose, in addition, that either

${\displaystyle |\varphi (t)|<Ke^{bt}\ \ \forall t>0,}$

where ${\displaystyle K,b}$ are independent of ${\displaystyle t}$, or that

${\displaystyle \int _{0}^{T}|\varphi (t)|\,\mathrm {d} t<\infty .}$

Then, for all positive ${\displaystyle x}$,

${\displaystyle \left|\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\right|<\infty }$

and that the following asymptotic equivalence holds:

${\displaystyle \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\sim \ \sum _{n=0}^{\infty }{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}},\ \ (x>0,\ x\rightarrow \infty ).}$

See, for instance, Watson (1918) for the original proof or Miller (2006) for a more recent development.
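As an illustrative sanity check (the specific integrand below is a chosen example, not from Watson's paper), take ${\displaystyle \varphi (t)=t^{-1/2}e^{-t}}$, i.e. ${\displaystyle \lambda =-1/2}$ and ${\displaystyle g(t)=e^{-t}}$ with ${\displaystyle g^{(n)}(0)=(-1)^{n}}$. The integral over ${\displaystyle [0,\infty )}$ has the closed form ${\displaystyle {\sqrt {\pi /(x+1)}}}$, so the partial sums of the expansion can be compared against it directly:

```python
import math
from math import gamma, pi, sqrt

# Illustrative case: phi(t) = t^{-1/2} e^{-t}, so lambda = -1/2, g(t) = e^{-t}.
# Exact value: int_0^inf e^{-xt} t^{-1/2} e^{-t} dt = Gamma(1/2)/sqrt(x+1).
# Watson's lemma predicts the partial sums
#   S_N(x) = sum_{n=0}^N (-1)^n Gamma(n + 1/2) / (n! x^{n + 1/2}).

def exact(x):
    return sqrt(pi / (x + 1.0))

def partial_sum(x, N):
    return sum((-1) ** n * gamma(n + 0.5) / (math.factorial(n) * x ** (n + 0.5))
               for n in range(N + 1))

x = 50.0
errors = [abs(partial_sum(x, N) - exact(x)) for N in range(3)]
print(errors)
```

At ${\displaystyle x=50}$ each additional term shrinks the error by roughly a factor of ${\displaystyle 1/x}$, as the expansion predicts.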

## Proof

We will prove the version of Watson's lemma which assumes that ${\displaystyle |\varphi (t)|}$ has at most exponential growth as ${\displaystyle t\to \infty }$. The basic idea behind the proof is that we will approximate ${\displaystyle g(t)}$ by finitely many terms of its Taylor series. Since the derivatives of ${\displaystyle g}$ are only assumed to exist in a neighborhood of the origin, we will essentially proceed by removing the tail of the integral, applying Taylor's theorem with remainder in the remaining small interval, then adding the tail back on in the end. At each step we will carefully estimate how much we are throwing away or adding on. This proof is a modification of the one found in Miller (2006).

Let ${\displaystyle 0<T\leq \infty }$ and suppose that ${\displaystyle \varphi }$ is a measurable function of the form ${\displaystyle \varphi (t)=t^{\lambda }g(t)}$, where ${\displaystyle \lambda >-1}$ and ${\displaystyle g}$ has an infinite number of continuous derivatives in the interval ${\displaystyle [0,\delta ]}$ for some ${\displaystyle 0<\delta <T}$, and that ${\displaystyle |\varphi (t)|\leq Ke^{bt}}$ for all ${\displaystyle \delta \leq t\leq T}$, where the constants ${\displaystyle K}$ and ${\displaystyle b}$ are independent of ${\displaystyle t}$.

We can show that the integral is finite for ${\displaystyle x}$ large enough by writing

${\displaystyle (1)\quad \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t=\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t+\int _{\delta }^{T}e^{-xt}\varphi (t)\,\mathrm {d} t}$

and estimating each term.

For the first term we have

${\displaystyle \left|\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t\right|\leq \int _{0}^{\delta }e^{-xt}|\varphi (t)|\,\mathrm {d} t\leq \int _{0}^{\delta }|\varphi (t)|\,\mathrm {d} t}$

for ${\displaystyle x\geq 0}$, where the last integral is finite by the assumptions that ${\displaystyle g}$ is continuous on the interval ${\displaystyle [0,\delta ]}$ and that ${\displaystyle \lambda >-1}$. For the second term we use the assumption that ${\displaystyle \varphi }$ is exponentially bounded to see that, for ${\displaystyle x>b}$,

${\displaystyle {\begin{aligned}\left|\int _{\delta }^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\right|&\leq \int _{\delta }^{T}e^{-xt}|\varphi (t)|\,\mathrm {d} t\\&\leq K\int _{\delta }^{T}e^{(b-x)t}\,\mathrm {d} t\\&\leq K\int _{\delta }^{\infty }e^{(b-x)t}\,\mathrm {d} t\\&=K\,{\frac {e^{(b-x)\delta }}{x-b}}.\end{aligned}}}$

The finiteness of the original integral then follows from applying the triangle inequality to ${\displaystyle (1)}$.

We can deduce from the above calculation that

${\displaystyle (2)\quad \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t=\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t+O\left(x^{-1}e^{-\delta x}\right)}$

as ${\displaystyle x\to \infty }$.

By appealing to Taylor's theorem with remainder we know that, for each integer ${\displaystyle N\geq 0}$,

${\displaystyle g(t)=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\,t^{n}+{\frac {g^{(N+1)}(t^{*})}{(N+1)!}}\,t^{N+1}}$

for ${\displaystyle 0\leq t\leq \delta }$, where ${\displaystyle 0\leq t^{*}\leq t}$. Plugging this into the first term in ${\displaystyle (2)}$ we get

${\displaystyle {\begin{aligned}(3)\quad \int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t&=\int _{0}^{\delta }e^{-xt}t^{\lambda }g(t)\,\mathrm {d} t\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t+{\frac {1}{(N+1)!}}\int _{0}^{\delta }g^{(N+1)}(t^{*})\,t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t.\end{aligned}}}$

To bound the term involving the remainder we use the assumption that ${\displaystyle g^{(N+1)}}$ is continuous on the interval ${\displaystyle [0,\delta ]}$, and in particular it is bounded there. As such we see that

${\displaystyle {\begin{aligned}\left|\int _{0}^{\delta }g^{(N+1)}(t^{*})\,t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\right|&\leq \sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\int _{0}^{\delta }t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\\&<\sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\int _{0}^{\infty }t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\\&=\sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\,{\frac {\Gamma (\lambda +N+2)}{x^{\lambda +N+2}}}.\end{aligned}}}$

Here we have used the fact that

${\displaystyle \int _{0}^{\infty }t^{a}e^{-xt}\,\mathrm {d} t={\frac {\Gamma (a+1)}{x^{a+1}}}}$

if ${\displaystyle x>0}$ and ${\displaystyle a>-1}$, where ${\displaystyle \Gamma }$ is the gamma function.
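This identity is just the substitution ${\displaystyle u=xt}$ in the Euler integral for ${\displaystyle \Gamma }$, and it can be confirmed numerically; the sample values of ${\displaystyle a}$ and ${\displaystyle x}$ and the quadrature settings below are illustrative choices, not from the text:

```python
import math

# Numerical check of the identity int_0^infty t^a e^{-xt} dt = Gamma(a+1)/x^{a+1}
# for x > 0, a > -1. The truncation point and step count are accuracy knobs
# for a simple composite midpoint rule.

def laplace_moment(a, x, upper=80.0, steps=400_000):
    """Midpoint-rule approximation of int_0^upper t^a e^{-xt} dt."""
    h = upper / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += t ** a * math.exp(-x * t)
    return total * h

a, x = 0.5, 3.0
numeric = laplace_moment(a, x)
closed_form = math.gamma(a + 1) / x ** (a + 1)
print(numeric, closed_form)  # should agree to several decimal places
```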

From the above calculation we see from ${\displaystyle (3)}$ that

${\displaystyle (4)\quad \int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t+O\left(x^{-\lambda -N-2}\right)}$

as ${\displaystyle x\to \infty }$.

We will now add the tails on to each integral in ${\displaystyle (4)}$. For each ${\displaystyle n}$ we have

${\displaystyle {\begin{aligned}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t&=\int _{0}^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t-\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t\\[5pt]&={\frac {\Gamma (\lambda +n+1)}{x^{\lambda +n+1}}}-\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t,\end{aligned}}}$

and we will show that the remaining integrals are exponentially small. Indeed, if we make the change of variables ${\displaystyle t=s+\delta }$ we get

${\displaystyle {\begin{aligned}\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t&=\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-x(s+\delta )}\,ds\\[5pt]&=e^{-\delta x}\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-xs}\,ds\\[5pt]&\leq e^{-\delta x}\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-s}\,ds\end{aligned}}}$

for ${\displaystyle x\geq 1}$, so that

${\displaystyle \int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t={\frac {\Gamma (\lambda +n+1)}{x^{\lambda +n+1}}}+O\left(e^{-\delta x}\right){\text{ as }}x\to \infty .}$
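The exponential smallness of these tail integrals can also be observed numerically. A minimal sketch, with illustrative values of the exponent and ${\displaystyle \delta }$: the derivation above shows that ${\displaystyle e^{\delta x}\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t=\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-xs}\,ds}$, which stays bounded (in fact decreases) as ${\displaystyle x}$ grows:

```python
import math

# Check that int_delta^infty t^power e^{-xt} dt is O(e^{-delta x}):
# rescaling by e^{delta x} should give a bounded, decreasing quantity.
# power, delta, and the quadrature settings are illustrative.

def tail(power, delta, x, upper=60.0, steps=200_000):
    """Midpoint-rule approximation of int_delta^upper t^power e^{-xt} dt."""
    h = (upper - delta) / steps
    total = 0.0
    for k in range(steps):
        t = delta + (k + 0.5) * h
        total += t ** power * math.exp(-x * t)
    return total * h

power, delta = 1.5, 0.5
scaled = [math.exp(delta * x) * tail(power, delta, x) for x in (5.0, 10.0, 20.0)]
print(scaled)  # bounded and decreasing in x
```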

If we substitute this last result into ${\displaystyle (4)}$ we find that

${\displaystyle {\begin{aligned}\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(e^{-\delta x}\right)+O\left(x^{-\lambda -N-2}\right)\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)\end{aligned}}}$

as ${\displaystyle x\to \infty }$. Finally, substituting this into ${\displaystyle (2)}$ we conclude that

${\displaystyle {\begin{aligned}\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)+O\left(x^{-1}e^{-\delta x}\right)\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)\end{aligned}}}$

as ${\displaystyle x\to \infty }$.

Since this last expression is true for each integer ${\displaystyle N\geq 0}$ we have thus shown that

${\displaystyle \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\sim \sum _{n=0}^{\infty }{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}}$

as ${\displaystyle x\to \infty }$, where the infinite series is interpreted as an asymptotic expansion of the integral in question.

## Example

When ${\displaystyle 0<a<b}$, the confluent hypergeometric function of the first kind has the integral representation

${\displaystyle {}_{1}F_{1}(a,b,x)={\frac {\Gamma (b)}{\Gamma (a)\Gamma (b-a)}}\int _{0}^{1}e^{xt}t^{a-1}(1-t)^{b-a-1}\,\mathrm {d} t,}$

where ${\displaystyle \Gamma }$ is the gamma function. The change of variables ${\displaystyle t=1-s}$ puts this into the form

${\displaystyle {}_{1}F_{1}(a,b,x)={\frac {\Gamma (b)}{\Gamma (a)\Gamma (b-a)}}\,e^{x}\int _{0}^{1}e^{-xs}(1-s)^{a-1}s^{b-a-1}\,ds,}$

which is now amenable to the use of Watson's lemma. Taking ${\displaystyle \lambda =b-a-1}$ and ${\displaystyle g(s)=(1-s)^{a-1}}$, Watson's lemma tells us that

${\displaystyle \int _{0}^{1}e^{-xs}(1-s)^{a-1}s^{b-a-1}\,ds\sim \Gamma (b-a)x^{a-b}\quad {\text{as }}x\to \infty {\text{ with }}x>0,}$

which allows us to conclude that

${\displaystyle {}_{1}F_{1}(a,b,x)\sim {\frac {\Gamma (b)}{\Gamma (a)}}\,x^{a-b}e^{x}\quad {\text{as }}x\to \infty {\text{ with }}x>0.}$
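This conclusion can be tested numerically by summing the defining series of ${\displaystyle {}_{1}F_{1}}$ directly; the parameter values below are illustrative:

```python
import math

# Check the asymptotic 1F1(a,b,x) ~ Gamma(b)/Gamma(a) x^{a-b} e^x by comparing
# against the defining series sum_n (a)_n/(b)_n x^n/n! (all terms positive here,
# so there is no cancellation). Sample values of a, b, x are illustrative.

def hyp1f1(a, b, x, tol=1e-16):
    """Sum the series 1F1(a,b,x) = sum_n (a)_n/(b)_n x^n/n! until it converges."""
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        term *= (a + n) / (b + n) * x / (n + 1)
        total += term
        n += 1
    return total

def watson_leading(a, b, x):
    """Leading term predicted by Watson's lemma."""
    return math.gamma(b) / math.gamma(a) * x ** (a - b) * math.exp(x)

a, b = 1.5, 3.0
rel_err = [abs(hyp1f1(a, b, x) / watson_leading(a, b, x) - 1.0) for x in (30.0, 60.0)]
print(rel_err)  # relative error shrinks roughly like 1/x
```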

## References

• Miller, P.D. (2006), Applied Asymptotic Analysis, Providence, RI: American Mathematical Society, p. 467, ISBN 978-0-8218-4078-8.
• Watson, G. N. (1918), "The harmonic functions associated with the parabolic cylinder", Proceedings of the London Mathematical Society, 2 (17), pp. 116–148, doi:10.1112/plms/s2-17.1.116.