Series acceleration

In mathematics, series acceleration is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.

Definition

Given a sequence

$$S = \{ s_n \}_{n \in \mathbb{N}}$$

having a limit

$$\lim_{n \to \infty} s_n = \ell,$$

an accelerated series is a second sequence

$$S' = \{ s'_n \}_{n \in \mathbb{N}}$$

which converges faster to $\ell$ than the original sequence, in the sense that

$$\lim_{n \to \infty} \frac{s'_n - \ell}{s_n - \ell} = 0.$$

If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell$.
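
For example, if $s_n = \ell + 2^{-n}$ while $s'_n = \ell + 4^{-n}$, then $(s'_n - \ell)/(s_n - \ell) = 2^{-n} \to 0$, so the second sequence converges to $\ell$ faster in this sense; this toy pair is given here only to illustrate the definition.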

The mappings from the original to the transformed series may be linear (as defined in the article sequence transformations), or non-linear. In general, the non-linear sequence transformations tend to be more powerful.

Overview

Two classical techniques for series acceleration are Euler's transformation of series [1] and Kummer's transformation of series. [2] A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.

For alternating series, several powerful techniques, offering convergence rates from $5.828^{-n}$ all the way to $17.93^{-n}$ for a summation of $n$ terms, are described by Cohen et al. [3]
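
The simplest of these schemes can be sketched in a few lines of code. The following Python sketch follows the Chebyshev-polynomial-based algorithm described in [3] for series of the form $\sum_{k \ge 0} (-1)^k a_k$ with $a_k \ge 0$; the function name and the test series for $\ln 2$ are illustrative choices only.

    import math

    def alternating_sum(a, n):
        """Estimate sum_{k>=0} (-1)^k a(k) from its first n terms, using the
        Chebyshev-polynomial scheme of Cohen et al. [3]; the error shrinks
        roughly like (3 + sqrt(8))^(-n), i.e. about 5.828^(-n)."""
        d = (3.0 + math.sqrt(8.0)) ** n
        d = (d + 1.0 / d) / 2.0
        b, c, s = -1.0, -d, 0.0
        for k in range(n):
            c = b - c
            s += c * a(k)
            b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1.0))
        return s / d

    # ln 2 = 1 - 1/2 + 1/3 - ...: ten terms give seven or eight correct digits,
    # versus roughly one correct digit for the plain ten-term partial sum.
    print(abs(alternating_sum(lambda k: 1.0 / (k + 1), 10) - math.log(2)))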

Euler's transform

A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by

$$\sum_{n=0}^\infty (-1)^n a_n = \sum_{n=0}^\infty (-1)^n \frac{(\Delta^n a)_0}{2^{n+1}}$$

where $\Delta$ is the forward difference operator, for which one has the formula

$$(\Delta^n a)_0 = \sum_{k=0}^n (-1)^k \binom{n}{k} a_{n-k}.$$
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
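
As a concrete illustration, the following Python sketch applies the transform to the alternating harmonic series $\sum_{n=0}^\infty (-1)^n/(n+1) = \ln 2$; the helper name and the choice of ten terms are arbitrary.

    import math

    def euler_transform_partial_sums(a, N):
        """Partial sums of the Euler-transformed series
        sum_n (-1)^n (Delta^n a)_0 / 2^(n+1), built from the first N terms
        a(0), ..., a(N-1); Delta is the forward difference operator."""
        diff = [a(k) for k in range(N)]   # current row of forward differences
        sums, total = [], 0.0
        for n in range(N):
            total += (-1) ** n * diff[0] / 2 ** (n + 1)   # diff[0] = (Delta^n a)_0
            sums.append(total)
            diff = [diff[k + 1] - diff[k] for k in range(len(diff) - 1)]
        return sums

    a = lambda k: 1.0 / (k + 1)           # alternating harmonic series -> ln 2
    direct = sum((-1) ** k * a(k) for k in range(10))
    accelerated = euler_transform_partial_sums(a, 10)[-1]
    print(abs(direct - math.log(2)))        # ~5e-2: plain ten-term partial sum
    print(abs(accelerated - math.log(2)))   # ~8e-5: Euler-transformed sum

In this example the transformed terms work out to $1/((n+1)2^{n+1})$, so the right hand side converges geometrically, consistent with the remark above about the extra power of two.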

A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation. [4]

Conformal mappings

A series

$$S = \sum_{n=0}^\infty a_n$$

can be written as $f(1)$, where the function $f$ is defined as

$$f(z) = \sum_{n=0}^\infty a_n z^n.$$
The function f(z) can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point z = 1 is close to or on the boundary of the disk of convergence, the series for S will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to z = 1 ends up deeper in the new disk of convergence.

The conformal transform $z = \Phi(w)$ needs to be chosen such that $\Phi(0) = 0$, and one usually chooses a function that has a finite derivative at $w = 0$. One can assume that $\Phi(1) = 1$ without loss of generality, as one can always rescale $w$ to redefine $\Phi$. We then consider the function

$$g(w) = f(\Phi(w)).$$

Since $\Phi(1) = 1$, we have $f(1) = g(1)$. We can obtain the series expansion of $g(w)$ by putting $z = \Phi(w)$ in the series expansion of $f(z)$ because $\Phi(0) = 0$; the first $n$ terms of the series expansion of $f(z)$ will yield the first $n$ terms of the series expansion of $g(w)$ if $\Phi'(0) \neq 0$. Putting $w = 1$ in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series.
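
The following Python sketch illustrates the idea on $f(z) = \ln(1+z)$, whose series converges slowly at $z = 1$ because of the singularity at $z = -1$. The map $\Phi(w) = w/(2-w)$, which satisfies $\Phi(0) = 0$ and $\Phi(1) = 1$ and sends the singularity to $w = \infty$, is an illustrative choice rather than one prescribed above; the coefficients of $g(w) = f(\Phi(w))$ are computed numerically from those of $f$ and $\Phi$.

    import math

    def compose(a, phi, N):
        """Coefficients of g(w) = f(Phi(w)) up to order N, given the Taylor
        coefficients a[0..N] of f and phi[0..N] of Phi with phi[0] = 0.
        Because phi[0] = 0, the coefficient g[n] depends only on a[0..n]."""
        g = [0.0] * (N + 1)
        power = [0.0] * (N + 1)
        power[0] = 1.0                       # Phi(w)^0 = 1
        for m in range(N + 1):
            for n in range(N + 1):
                g[n] += a[m] * power[n]      # add the contribution a_m * Phi(w)^m
            new = [0.0] * (N + 1)            # multiply 'power' by Phi, truncated
            for i in range(N + 1):
                if power[i] != 0.0:
                    for j in range(N + 1 - i):
                        new[i + j] += power[i] * phi[j]
            power = new
        return g

    N = 20
    a = [0.0] + [(-1) ** (n + 1) / n for n in range(1, N + 1)]   # f(z) = ln(1+z)
    phi = [0.0] + [2.0 ** (-k) for k in range(1, N + 1)]         # Phi(w) = w/(2-w)

    g = compose(a, phi, N)
    print(abs(sum(a) - math.log(2)))   # ~2e-2: original series at z = 1
    print(abs(sum(g) - math.log(2)))   # ~4e-8: transformed series at w = 1

In closed form $g(w) = -\ln(1 - w/2)$ here, so the transformed series converges like $2^{-n}$ at $w = 1$, while the original series at $z = 1$ converges only like $1/n$.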

Non-linear sequence transformations

Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.

Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those arising in perturbation theory, and may be used as highly effective extrapolation methods.
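
For instance, Wynn's epsilon method mentioned above builds a table of values whose even-numbered columns reproduce the iterated Shanks transformation. The following Python sketch is an illustrative implementation, applied here to partial sums of the slowly convergent Leibniz series for $\pi$; the helper name and the choice of eleven terms are arbitrary, and no guard against vanishing differences is included.

    import math

    def wynn_epsilon(s):
        """Wynn's epsilon algorithm applied to a list of partial sums.  The
        even-numbered columns of the epsilon table coincide with iterated
        Shanks transforms; the first entry of the deepest even column is returned."""
        cols = [[0.0] * (len(s) + 1), list(s)]   # columns k = -1 and k = 0
        while len(cols[-1]) > 1:
            prev, curr = cols[-2], cols[-1]
            cols.append([prev[j + 1] + 1.0 / (curr[j + 1] - curr[j])
                         for j in range(len(curr) - 1)])
        return cols[1::2][-1][0]                 # deepest even column, entry j = 0

    # Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...) for pi.
    s, total = [], 0.0
    for k in range(11):
        total += 4 * (-1) ** k / (2 * k + 1)
        s.append(total)

    print(abs(s[-1] - math.pi))            # ~1e-1: plain partial sum
    print(abs(wynn_epsilon(s) - math.pi))  # several more correct digits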

Aitken method

A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,

$$\mathbb{A} : S \to S' = \mathbb{A}(S) = \{ s'_n \}_{n \in \mathbb{N}},$$

defined by

$$s'_n = s_{n+2} - \frac{(s_{n+2} - s_{n+1})^2}{s_{n+2} - 2 s_{n+1} + s_n}.$$
This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
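
A minimal Python sketch of one Aitken pass, applied here to partial sums of the Leibniz series $\pi = 4 \sum_{k=0}^\infty (-1)^k/(2k+1)$; the helper name and the zero-denominator guard are illustrative choices.

    import math

    def aitken(s):
        """One pass of Aitken's delta-squared process over a list of partial
        sums; the returned list is shorter by two entries."""
        out = []
        for n in range(len(s) - 2):
            denom = s[n + 2] - 2 * s[n + 1] + s[n]
            if denom == 0:                     # guard against division by zero
                out.append(s[n + 2])
            else:
                out.append(s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / denom)
        return out

    # Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...) for pi.
    s, total = [], 0.0
    for k in range(10):
        total += 4 * (-1) ** k / (2 * k + 1)
        s.append(total)

    print(abs(s[-1] - math.pi))          # ~1e-1: plain partial sums
    print(abs(aitken(s)[-1] - math.pi))  # ~3e-4: after one Aitken pass

In this example, repeating the pass on its own output improves the estimate further.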

See also

References

  1. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.27". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  2. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  3. Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, "Convergence Acceleration of Alternating Series", Experimental Mathematics, 9:1 (2000), p. 3.
  4. William H. Press, et al., Numerical Recipes in C, Cambridge University Press (1987), ISBN 0-521-43108-5 (see section 5.1).