Borel summation


Borel, then an unknown young man, discovered that his summation method gave the 'right' answer for many classical divergent series. He decided to make a pilgrimage to Stockholm to see Mittag-Leffler, who was the recognized lord of complex analysis. Mittag-Leffler listened politely to what Borel had to say and then, placing his hand upon the complete works by Weierstrass, his teacher, he said in Latin, 'The Master forbids it'.


Mark Kac, quoted by Reed & Simon (1978, p. 38)

In mathematics, Borel summation is a summation method for divergent series, introduced by Émile Borel (1899). It is particularly useful for summing divergent asymptotic series, and in some sense gives the best possible sum for such series. There are several variations of this method that are also called Borel summation, and a generalization of it called Mittag-Leffler summation.

Definition

There are (at least) three slightly different methods called Borel summation. They differ in which series they can sum, but are consistent, meaning that if two of the methods sum the same series they give the same answer.

Throughout let A(z) denote a formal power series

A(z) = \sum_{k=0}^{\infty} a_k z^k,

and define the Borel transform of A to be its equivalent exponential series

\mathcal{B}A(t) := \sum_{k=0}^{\infty} \frac{a_k}{k!} t^k.

Borel's exponential summation method

Let A_n(z) denote the partial sum

A_n(z) = \sum_{k=0}^{n} a_k z^k.

A weak form of Borel's summation method defines the Borel sum of A to be

\lim_{t \to \infty} e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!} A_n(z).

If this converges at z ∈ C to some function a(z), we say that the weak Borel sum of A converges at z, and write \sum a_k z^k = a(z) \ (wB).
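For instance (a worked illustration, not part of the original text), take z = −1 in the geometric series \sum_k z^k, so that the series is 1 − 1 + 1 − 1 + ⋯. The partial sums A_n(−1) alternate between 1 and 0, and

e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!} \cdot \frac{1 + (-1)^n}{2} = \frac{1 + e^{-2t}}{2} \longrightarrow \frac{1}{2} \qquad (t \to \infty),

so the weak Borel sum assigns the value 1/2, matching the value 1/(1 − z) = 1/2 of the geometric series analytically continued to z = −1.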

Borel's integral summation method

Suppose that the Borel transform converges for all positive real numbers to a function growing sufficiently slowly that the following integral is well defined (as an improper integral). The Borel sum of A is then given by

\int_0^{\infty} e^{-t} \, \mathcal{B}A(tz) \, dt.

If the integral converges at z ∈ C to some a(z), we say that the Borel sum of A converges at z, and write \sum a_k z^k = a(z) \ (B).

Borel's integral summation method with analytic continuation

This is similar to Borel's integral summation method, except that the Borel transform need not converge for all t, but converges to an analytic function of t near 0 that can be analytically continued along the positive real axis.

Basic properties

Regularity

The methods (B) and (wB) are both regular summation methods, meaning that whenever A(z) converges (in the standard sense), the Borel sum and the weak Borel sum also converge, and do so to the same value; that is,

\sum_{k=0}^{\infty} a_k z^k = A(z) < \infty \quad \Rightarrow \quad \sum a_k z^k = A(z) \ (B), \ (wB).

Regularity of (B) is easily seen by a change in the order of summation and integration, which is valid due to absolute convergence: if A(z) is convergent at z, then

A(z) = \sum_{k=0}^{\infty} a_k z^k = \sum_{k=0}^{\infty} a_k \left( \int_0^{\infty} e^{-t} t^k \, dt \right) \frac{z^k}{k!} = \int_0^{\infty} e^{-t} \sum_{k=0}^{\infty} a_k \frac{(tz)^k}{k!} \, dt,

where the rightmost expression is exactly the Borel sum at z.

Regularity of (B) and (wB) implies that these methods provide analytic extensions of A(z).

Nonequivalence of Borel and weak Borel summation

Any series A(z) that is weak Borel summable at z ∈ C is also Borel summable at z. However, one can construct examples of series which are divergent under weak Borel summation, but which are Borel summable. The following theorem characterises the equivalence of the two methods.

Theorem (Hardy 1992, 8.5).
Let A(z) be a formal power series, and fix z ∈ C; then:
  1. If \sum a_k z^k = a(z) \ (wB), then \sum a_k z^k = a(z) \ (B).
  2. If \sum a_k z^k = a(z) \ (B), and \lim_{t \to \infty} e^{-t} \mathcal{B}A(zt) = 0, then \sum a_k z^k = a(z) \ (wB).

Relationship to other summation methods

Uniqueness theorems

There are always many different functions with any given asymptotic expansion. However, there is sometimes a best possible function, in the sense that the errors in the finite-order approximations are as small as possible in some region. Watson's theorem and Carleman's theorem show that Borel summation produces such a best possible sum of the series.

Watson's theorem

Watson's theorem gives conditions for a function to be the Borel sum of its asymptotic series. Suppose that f is a function satisfying the following conditions: it is holomorphic in some region |z| < R, |arg(z)| < π/2 + ε for some positive R and ε, and in this region it has an asymptotic series a_0 + a_1 z + ⋯ with the property that the error

|f(z) - a_0 - a_1 z - \cdots - a_{n-1} z^{n-1}|

is bounded by

C^{n+1} n! \, |z|^n

for all z in the region (for some positive constant C).

Then Watson's theorem says that in this region f is given by the Borel sum of its asymptotic series. More precisely, the series for the Borel transform converges in a neighborhood of the origin, and can be analytically continued to the positive real axis, and the integral defining the Borel sum converges to f(z) for z in the region above.

Carleman's theorem

Carleman's theorem shows that a function is uniquely determined by an asymptotic series in a sector provided the errors in the finite-order approximations do not grow too fast. More precisely it states that if f is analytic in the interior of the sector |z| < C, Re(z) > 0 and |f(z)| < |b_n z|^n in this region for all n, then f is zero provided that the series 1/b_0 + 1/b_1 + ⋯ diverges.

Carleman's theorem gives a summation method for any asymptotic series whose terms do not grow too fast, as the sum can be defined to be the unique function with this asymptotic series in a suitable sector, if it exists. Borel summation is slightly weaker than the special case of this in which b_n = cn for some constant c. More generally one can define summation methods slightly stronger than Borel's by taking the numbers b_n to be slightly larger, for example b_n = cn log n or b_n = cn log n log log n. In practice this generalization is of little use, as there are almost no natural examples of series summable by this method that cannot also be summed by Borel's method.
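To see why the choice b_n = cn corresponds to the Borel case (a rough sketch; the constants here are illustrative rather than taken from the sources above): by Stirling's formula,

|b_n z|^n = (cn)^n |z|^n \approx n! \, (ce)^n |z|^n,

which is an error bound of the same shape as the bound C^{n+1} n! |z|^n appearing in Watson's theorem, while \sum 1/b_n = \sum 1/(cn) indeed diverges.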

Example

The function f(z) = exp(–1/z) has the asymptotic series 0 + 0z + ... with an error bound of the form above in the region |arg(z)| < θ for any θ < π/2, but is not given by the Borel sum of its asymptotic series. This shows that the number π/2 in Watson's theorem cannot be replaced by any smaller number (unless the bound on the error is made smaller).
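As a brief check of this example (a sketch; the constants are illustrative assumptions): for |arg(z)| ≤ θ < π/2 one has

|e^{-1/z}| = e^{-\cos(\arg z)/|z|} \le e^{-\cos(\theta)/|z|},

while minimizing n! C^{n+1} |z|^n over n (again by Stirling, with n of order 1/(C|z|)) gives a bound of order e^{-1/(C|z|)}. Taking C ≥ 1/cos θ, the zero series therefore satisfies Watson's error bound for e^{-1/z} throughout the sector, even though the Borel sum of the zero series is 0 ≠ e^{-1/z}.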

Examples

The geometric series

Consider the geometric series

A(z) = \sum_{k=0}^{\infty} z^k,

which converges (in the standard sense) to 1/(1 − z) for |z| < 1. The Borel transform is

\mathcal{B}A(t) = \sum_{k=0}^{\infty} \frac{t^k}{k!} = e^t,

from which we obtain the Borel sum

\int_0^{\infty} e^{-t} e^{tz} \, dt = \frac{1}{1 - z},

which converges in the larger region Re(z) < 1, giving an analytic continuation of the original series.

Considering instead the weak Borel sum, the partial sums are given by A_N(z) = (1 − z^{N+1})/(1 − z), and so the weak Borel sum is

\lim_{t \to \infty} e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!} \cdot \frac{1 - z^{n+1}}{1 - z} = \lim_{t \to \infty} e^{-t} \left( \frac{e^t}{1 - z} - \frac{z \, e^{tz}}{1 - z} \right) = \frac{1}{1 - z},

where, again, convergence is on Re(z) < 1. Alternatively this can be seen by appealing to part 2 of the equivalence theorem, since for Re(z) < 1,

\lim_{t \to \infty} e^{-t} \mathcal{B}A(zt) = \lim_{t \to \infty} e^{t(z - 1)} = 0.
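A minimal numerical sketch of the integral method (the code and the chosen point z = −2 are illustrative assumptions, not part of the original text): since the Borel transform of the geometric series is e^t, the Borel sum at z = −2, which lies outside the disk of convergence but inside Re(z) < 1, can be evaluated by quadrature and compared with 1/(1 − z) = 1/3.

    # Borel sum of the geometric series at z = -2 (outside |z| < 1, inside Re(z) < 1).
    # The Borel transform of sum_k z^k is exp(t), so the Borel sum is the
    # integral of exp(-t) * exp(t*z) over t in [0, infinity).
    import numpy as np
    from scipy.integrate import quad

    z = -2.0
    borel_sum, _ = quad(lambda t: np.exp(-t) * np.exp(t * z), 0, np.inf)
    print(borel_sum, 1 / (1 - z))  # both approximately 0.3333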

An alternating factorial series

Consider the series

A(z) = \sum_{k=0}^{\infty} k! \, (-z)^k;

then A(z) does not converge for any nonzero z ∈ C. The Borel transform is

\mathcal{B}A(t) = \sum_{k=0}^{\infty} (-t)^k = \frac{1}{1 + t}

for |t| < 1, which can be analytically continued to all t ≥ 0. So the Borel sum is

\int_0^{\infty} \frac{e^{-t}}{1 + tz} \, dt = \frac{1}{z} \, e^{1/z} \, \Gamma\!\left(0, \frac{1}{z}\right)

(where Γ is the incomplete gamma function).

This integral converges for all z ≥ 0, so the original divergent series is Borel summable for all such z. This function has an asymptotic expansion as z tends to 0 that is given by the original divergent series. This is a typical example of the fact that Borel summation will sometimes "correctly" sum divergent asymptotic expansions.

Again, since

\lim_{t \to \infty} e^{-t} \mathcal{B}A(zt) = \lim_{t \to \infty} \frac{e^{-t}}{1 + zt} = 0

for all z, the equivalence theorem ensures that weak Borel summation has the same domain of convergence, z ≥ 0.
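A rough numerical sketch of this behaviour (the code and the choice z = 0.1 are illustrative assumptions, not from the original text): the Borel integral at a small positive z can be computed by quadrature and compared with the divergent series truncated near its smallest term, at about k ≈ 1/z.

    # Borel sum of sum_k k! (-z)^k at z = 0.1 versus the optimally truncated partial sum.
    from math import factorial
    import numpy as np
    from scipy.integrate import quad

    z = 0.1
    borel, _ = quad(lambda t: np.exp(-t) / (1 + z * t), 0, np.inf)
    truncated = sum(factorial(k) * (-z) ** k for k in range(10))  # stop near k ~ 1/z
    print(borel, truncated)  # close; the gap is on the order of the first omitted term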

An example in which equivalence fails

The following example extends on that given in (Hardy 1992, 8.5). Consider

A(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \frac{(-1)^k (2k+2)^n}{(2k+1)!} \right) z^n.

After changing the order of summation, the Borel transform is given by

\mathcal{B}A(t) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} \, e^{(2k+2)t} = e^t \sum_{k=0}^{\infty} \frac{(-1)^k \left(e^t\right)^{2k+1}}{(2k+1)!} = e^t \sin\left(e^t\right).

At z = 2 the Borel sum is given by

\int_0^{\infty} e^{-t} \, \mathcal{B}A(2t) \, dt = \int_0^{\infty} e^{t} \sin\left(e^{2t}\right) dt = \int_1^{\infty} \sin\left(u^2\right) du = \sqrt{\tfrac{\pi}{8}} - S(1),

where S(x) is the Fresnel integral. Via the convergence theorem along chords, the Borel integral converges for all z ≤ 2 (the integral diverges for z > 2).

For the weak Borel sum we note that

\lim_{t \to \infty} e^{-t} \mathcal{B}A(zt) = \lim_{t \to \infty} e^{(z-1)t} \sin\left(e^{zt}\right) = 0

holds only for z < 1, and so the weak Borel sum converges on this smaller domain.

Existence results and the domain of convergence

Summability on chords

If a formal series A(z) is Borel summable at z_0 ∈ C, then it is also Borel summable at all points on the chord Oz_0 connecting z_0 to the origin. Moreover, there exists a function a(z) analytic throughout the disk with radius |z_0| such that

\sum a_k z^k = a(z) \ (B)

for all z = θz_0, θ ∈ [0,1].

An immediate consequence is that the domain of convergence of the Borel sum is a star domain in C. More can be said about this domain of convergence than that it is a star domain: it is referred to as the Borel polygon, and is determined by the singularities of the series A(z).

The Borel polygon

Suppose that A(z) has strictly positive radius of convergence, so that it is analytic in a non-trivial region containing the origin, and let S_A denote the set of singularities of A. This means that P ∈ S_A if and only if A can be continued analytically along the open chord from 0 to P, but not to P itself. For P ∈ S_A, let L_P denote the line passing through P which is perpendicular to the chord OP. Define the sets

\Pi_P = \{ z \in \mathbb{C} : z \text{ lies on the same side of } L_P \text{ as the origin} \},

the set of points which lie on the same side of L_P as the origin. The Borel polygon of A is the set

\Pi_A = \bigcap_{P \in S_A} \Pi_P.

An alternative definition was used by Borel and Phragmén (Sansone & Gerretsen 1960, 8.3). Let S denote the largest star domain on which there is an analytic extension of A; then \Pi_A is the largest subset of S such that for all P ∈ \Pi_A the interior of the circle with diameter OP is contained in S. Referring to the set \Pi_A as a polygon is somewhat of a misnomer, since the set need not be polygonal at all; if, however, A(z) has only finitely many singularities then \Pi_A will in fact be a polygon.

The following theorem, due to Borel and Phragmén provides convergence criteria for Borel summation.

Theorem (Hardy 1992, 8.8).
The series A(z) is (B) summable at all z in the interior of \Pi_A, and is (B) divergent at all z outside the closure of \Pi_A.

Note that (B) summability for points on the boundary of \Pi_A depends on the nature of the point.

Example 1

Let ω_i ∈ C denote the m-th roots of unity, i = 1, ..., m, and consider

A(z) = \sum_{k=0}^{\infty} \left( \omega_1^k + \cdots + \omega_m^k \right) z^k = \frac{m}{1 - z^m},

which converges on B(0,1) ⊂ C. Seen as a function on C, A(z) has singularities at S_A = {ω_i : i = 1, ..., m}, and consequently the Borel polygon \Pi_A is given by the regular m-gon centred at the origin, such that 1 ∈ C is a midpoint of an edge.

Example 2

The formal series

A(z) = \sum_{n=0}^{\infty} z^{2^n}

converges for all |z| < 1 (for instance, by the comparison test with the geometric series). It can however be shown [2] that A does not converge at any point z ∈ C such that z^{2^n} = 1 for some n. Since the set of such z is dense in the unit circle, there can be no analytic extension of A outside of B(0,1). Subsequently, the largest star domain to which A can be analytically extended is S = B(0,1), from which (via the second definition) one obtains \Pi_A = B(0,1). In particular one sees that the Borel polygon is not polygonal.

A Tauberian theorem

A Tauberian theorem provides conditions under which convergence of one summation method implies convergence under another method. The principal Tauberian theorem [1] for Borel summation provides conditions under which the weak Borel method implies convergence of the series.

Theorem (Hardy 1992, 9.13). If A is (wB) summable at z_0 ∈ C, \sum a_k z_0^k = a(z_0) \ (wB), and

a_k z_0^k = O\!\left(k^{-1/2}\right), \qquad k \to \infty,

then \sum_{k=0}^{\infty} a_k z_0^k = a(z_0), and the series converges for all |z| < |z_0|.

Applications

Borel summation finds application in perturbation expansions in quantum field theory. In particular in 2-dimensional Euclidean field theory the Schwinger functions can often be recovered from their perturbation series using Borel summation (Glimm & Jaffe 1987, p. 461). Some of the singularities of the Borel transform are related to instantons and renormalons in quantum field theory (Weinberg 2005, 20.7).

Generalizations

Borel summation requires that the coefficients do not grow too fast: more precisely, a_n has to be bounded by n! C^{n+1} for some C. There is a variation of Borel summation that replaces factorials n! with (kn)! for some positive integer k, which allows the summation of some series with a_n bounded by (kn)! C^{n+1} for some C. This generalization is given by Mittag-Leffler summation.
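One common way to write this variant (a sketch with notation of my choosing, not taken verbatim from the sources above) replaces the Borel transform and the Borel integral by

\mathcal{B}_k A(t) = \sum_{n=0}^{\infty} \frac{a_n}{(kn)!} t^n, \qquad \int_0^{\infty} e^{-t} \, \mathcal{B}_k A\!\left(z t^k\right) dt,

which reduces to ordinary Borel summation for k = 1; the construction works formally because \int_0^\infty e^{-t} t^{kn} \, dt = (kn)!.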

In the most general case, Borel summation is generalized by Nachbin resummation, which can be used when the bounding function is of some general type (psi-type), instead of being of exponential type.


Notes

  1. Hardy, G. H. (1992). Divergent Series. AMS Chelsea, Rhode Island.
  2. "Natural Boundary". MathWorld. Retrieved 19 October 2016.


References