Aitken's delta-squared process

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926.[1] It is most useful for accelerating the convergence of a sequence that is converging linearly. A precursor form was known to Seki Kōwa (1642–1708) and applied to the rectification of the circle, i.e., to the calculation of π.

Definition

Given a sequence $X = (x_n)_{n\in\mathbb{N}}$, Aitken's delta-squared process associates to this sequence the new sequence $A[X]$ with terms

$$(A[X])_n = \frac{x_n\,x_{n+2} - x_{n+1}^2}{x_{n+2} - 2x_{n+1} + x_n},$$

which can also be written as

$$(A[X])_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n},$$

with $\Delta x_n = x_{n+1} - x_n$ and $\Delta^2 x_n = x_{n+2} - 2x_{n+1} + x_n.$ Both are the same sequence algebraically, but the latter has improved numerical stability in computational implementation.

$A[X]$ is ill-defined if the sequence $(\Delta^2 x_n)$ contains a zero element, which occurs if the sequence of forward differences, $(\Delta x_n)$, has any repeated term. From a theoretical point of view, if that occurs only for a finite number of indices, one could apply the Aitken process to only the part of the sequence with indices beyond the last index for which the forward-difference sequence repeats. In practice, the first few terms of the sequence usually provide the desired precision; also, when numerically computing the sequence, one has to take care to stop the computation before rounding errors in the denominator become too large, as the sequence transformation may cancel significant digits.
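As a minimal illustration in Python (the helper name aitken and the guard threshold eps are illustrative choices, not taken from the article), the second formula above translates directly into code:

def aitken(x, eps=1e-16):
    # Apply Aitken's delta-squared process to a list of numbers x_0, ..., x_N.
    # Returns the accelerated terms (A[X])_n = x_n - (dx_n)^2 / d2x_n for n = 0, ..., N-2,
    # stopping early if the second difference becomes too small to divide by safely.
    a = []
    for n in range(len(x) - 2):
        dx = x[n + 1] - x[n]                   # forward difference, delta x_n
        d2x = x[n + 2] - 2 * x[n + 1] + x[n]   # second difference, delta^2 x_n (denominator)
        if abs(d2x) < eps:                     # avoid dividing by a near-zero denominator
            break
        a.append(x[n] - dx * dx / d2x)
    return a

For example, aitken([1, 1.5, 1.4166667, 1.4142157, 1.4142136]) reproduces, to the precision shown, the A[X] column of the first table in the example calculations below.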

Properties

Aitken's delta-squared process is an acceleration of convergence method and a particular case of a nonlinear sequence transformation.

A sequence $X = (x_n)$ that converges to a limiting value $\ell$ is said to converge linearly, or more technically Q-linearly, if there is some number $\mu \in (0,1)$ for which

$$\lim_{n\to\infty} \frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \mu.$$

This means that asymptotically the distance between the sequence and its limit shrinks by nearly the same proportion $\mu$ on every step, and the ratio of reduction becomes closer and closer to that proportion. This is also sometimes called "geometric convergence," since it is a characteristic property of geometric series, or "exponential convergence," since it is convergence like $\mu^n$.
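For example, a sequence whose error is exactly geometric, $x_n = \ell + c\,\mu^n$ with $0 < \mu < 1$ and $c \neq 0$, converges Q-linearly with rate $\mu$, since

$$\frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \frac{|c|\,\mu^{n+1}}{|c|\,\mu^{n}} = \mu \quad \text{for every } n.$$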

Aitken's method will accelerate the convergence of a sequence $X$ if the accelerated sequence $A[X]$, with terms defined above, satisfies

$$\lim_{n\to\infty} \frac{(A[X])_n - \ell}{x_n - \ell} = 0.$$

$A$ is not a linear operator on sequences, but it is linear with respect to addition of constant sequences: $A[X + \lambda] = A[X] + \lambda$ if $\lambda$ denotes a constant sequence, with $\lambda_n = \lambda$ constant for all $n$. This is clear from the expression of $A[X]$ in terms of the finite difference operator $\Delta$.
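Indeed, adding a constant $\lambda$ to every term leaves both difference sequences unchanged, $\Delta(x_n + \lambda) = \Delta x_n$ and $\Delta^2(x_n + \lambda) = \Delta^2 x_n$, so

$$\big(A[X + \lambda]\big)_n = (x_n + \lambda) - \frac{(\Delta x_n)^2}{\Delta^2 x_n} = (A[X])_n + \lambda.$$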

The new process does not in general converge quadratically, but for an iterated function sequence satisfying $x_{n+1} = f(x_n)$ for some function $f$, converging to a fixed point, the accelerated sequence's convergence is quadratic. In this case, the technique is known as Steffensen's method.

Empirically, the $A$-operation eliminates the "most important error term". One can check this by considering a sequence of the form $x_n = \ell + a^n + b^n$, where $0 < b < a < 1$: the sequence $A[X]$ will then go to the limit $\ell$ like $b^n$ goes to zero.
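A short calculation makes this explicit: with $x_n = \ell + a^n + b^n$ one has $\Delta x_n = a^n(a-1) + b^n(b-1)$ and $\Delta^2 x_n = a^n(a-1)^2 + b^n(b-1)^2$, so

$$(A[X])_n - \ell = \frac{(a^n + b^n)\,\Delta^2 x_n - (\Delta x_n)^2}{\Delta^2 x_n} = \frac{a^n b^n (a-b)^2}{a^n(a-1)^2 + b^n(b-1)^2} \sim \left(\frac{a-b}{a-1}\right)^{2} b^n \quad (n \to \infty),$$

so the dominant $a^n$ error term has been removed and only the smaller $b^n$ term remains.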

Geometrically, the graph of an exponential function $f(t)$ that satisfies $f(n) = x_n$, $f(n+1) = x_{n+1}$ and $f(n+2) = x_{n+2}$ has a horizontal asymptote at $\dfrac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2x_{n+1} + x_n}$ (if $x_{n+2} - 2x_{n+1} + x_n \neq 0$).
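Writing the interpolating exponential in the standard form $f(t) = \ell + c\,\mu^{t}$ (an assumed parametrization), the three conditions give $x_{n+1} - \ell = \mu (x_n - \ell)$ and $x_{n+2} - \ell = \mu (x_{n+1} - \ell)$. Eliminating $\mu$ yields $(x_{n+1} - \ell)^2 = (x_n - \ell)(x_{n+2} - \ell)$, and solving for the asymptote $\ell$ gives

$$\ell = \frac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2x_{n+1} + x_n} = (A[X])_n,$$

so the Aitken value is exactly the asymptote of the exponential through the three consecutive points.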

One can also show that if a sequence $X$ converges to its limit at a rate (order of convergence) strictly greater than 1, $A[X]$ does not have a better rate of convergence. (In practice, one rarely has, e.g., quadratic convergence, which would mean over 30 (respectively 100) correct decimal places after 5 (respectively 7) iterations, starting with 1 correct digit; usually no acceleration is needed in that case.)

In practice, $A[X]$ often converges much faster to the limit than $X$ does, as demonstrated by the example calculations below. Usually, it is much cheaper to calculate $A[X]$ (involving only calculation of differences, one multiplication and one division per term) than to calculate many more terms of the sequence $X$. Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the differences in the numerator and denominator of the expression.

Example calculations

Example 1: The value of $\sqrt{2} \approx 1.4142136$ can be approximated by assuming an initial value for $x_0$ and iterating the following sequence, called Heron's method:

$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right).$$

Starting with $x_0 = 1$:

n    X            A[X]
0    1            1.4285714
1    1.5          1.4141414
2    1.4166667    1.4142136
3    1.4142157    --
4    1.4142136    --

It is worth noting that Aitken's process does not save the cost of calculating two iterations here; computation of the first three $A[X]$ values required the first five $X$ values. Also, the second $A[X]$ value is less accurate than the fourth $X$ value, which is not surprising since Aitken's process is best suited for sequences that converge linearly rather than quadratically, and Heron's method for calculating square roots converges quadratically.
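The table can be reproduced with a short script; the following Python sketch (an illustrative check, not part of the original article) regenerates both columns:

# Heron's method for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2, starting from x_0 = 1
xs = [1.0]
for _ in range(4):
    xs.append((xs[-1] + 2 / xs[-1]) / 2)

# Aitken's delta-squared process applied to the five iterates
ax = [xs[n] - (xs[n + 1] - xs[n]) ** 2 / (xs[n + 2] - 2 * xs[n + 1] + xs[n])
      for n in range(len(xs) - 2)]

print([round(x, 7) for x in xs])  # [1.0, 1.5, 1.4166667, 1.4142157, 1.4142136]
print([round(a, 7) for a in ax])  # [1.4285714, 1.4141414, 1.4142136]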

Example 2: The value of $\frac{\pi}{4}$ may be calculated as an infinite sum via the Leibniz formula for $\pi$:

$$\frac{\pi}{4} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \approx 0.7853982.$$

n    Series Terms       X = Partial Sums    A[X]
0    1                  1                   0.79166667
1    -0.33333333        0.66666667          0.78333333
2    0.2                0.86666667          0.78630952
3    -0.14285714        0.72380952          0.78492063
4    0.11111111         0.83492063          0.78567821
5    -9.0909091×10⁻²    0.74401154          0.78522034
6    7.6923077×10⁻²     0.82093462          0.78551795
7    -6.6666667×10⁻²    0.75426795          --
8    5.8823529×10⁻²     0.81309148          --

In this example, Aitken's method is applied to a sublinearly converging series and accelerates convergence considerably. The convergence is still sublinear, but much faster than the original convergence: the first $A[X]$ value, whose computation required the first three partial sums, is closer to the limit $\pi/4$ than the eighth partial sum.
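As another illustrative check (not from the original article), the same delta-squared formula applied to the Leibniz partial sums reproduces the table:

# Partial sums of the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
terms = [(-1) ** n / (2 * n + 1) for n in range(9)]
sums = [sum(terms[:n + 1]) for n in range(9)]

# Aitken's delta-squared process applied to the partial sums
ax = [sums[n] - (sums[n + 1] - sums[n]) ** 2 / (sums[n + 2] - 2 * sums[n + 1] + sums[n])
      for n in range(len(sums) - 2)]

print([round(s, 8) for s in sums])  # the X = Partial Sums column
print([round(a, 8) for a in ax])    # the A[X] column, starting with 0.79166667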

Example pseudocode for Aitken extrapolation

The following is an example of using Aitken extrapolation to help find the limit of the sequence $x_{n+1} = f(x_n)$ when given some initial $x_0$, where the limit of this sequence is assumed to be a fixed point $p$ of $f$ (that is, $p = f(p)$). For instance, if the sequence is given by $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$ with starting point $x_0 = 1$, then the function will be $f(x) = \frac{1}{2}\left(x + \frac{2}{x}\right)$, which has $p = \sqrt{2}$ as a fixed point (see Methods of computing square roots); it is this fixed point whose value will be approximated.

This pseudo code also computes an approximation of $|f'(p)|$, denoted lambda. The Aitken extrapolates will be denoted by aitkenX. During the computation of the extrapolate, it is important to check whether the denominator becomes too small, which could happen if we already have a large amount of accuracy; without this check, a large amount of error could be introduced by the division. This small number will be denoted by epsilon. Because the binary representation of the fixed point could be infinite (or at least too large to fit in the available memory), the calculation will stop once the approximation is within tolerance of the true value.

%These choices depend on the problem being solved
x0 = 1                       %The initial value
f(x) = (1/2)*(x + 2/x)       %The function that finds the next element in the sequence
tolerance = 10^-10           %10 digit accuracy is desired
epsilon = 10^-16             %Do not divide by a number smaller than this
maxIterations = 20           %Do not allow the iterations to continue indefinitely
haveWeFoundSolution = false  %Were we able to find the solution to within the desired tolerance? not yet

for i = 1 : maxIterations
    x1 = f(x0)
    x2 = f(x1)

    if (x1 ~= x0)
        lambda = absoluteValue((x2 - x1)/(x1 - x0))  %OPTIONAL: Computes an approximation of |f'(fixedPoint)|, which is denoted by lambda
    end

    denominator = (x2 - x1) - (x1 - x0);

    if (absoluteValue(denominator) < epsilon)        %To avoid greatly increasing error, do not divide by too small of a number
        print('WARNING: denominator is too small')
        break                                        %Leave the loop
    end

    aitkenX = x2 - ((x2 - x1)^2)/denominator

    if (absoluteValue(aitkenX - x2) < tolerance)     %If the value is within tolerance
        print("The fixed point is ", aitkenX)        %Display the result of the Aitken extrapolation
        haveWeFoundSolution = true
        break                                        %Done, so leave the loop
    end

    x0 = aitkenX                                     %Update x0 to start again
end

if (haveWeFoundSolution == false)                    %If we were not able to find a solution to within the desired tolerance
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
    print("The last computed extrapolate was ", aitkenX)
end

Notes

  1. Aitken, Alexander (1926). "On Bernoulli's numerical solution of algebraic equations". Proceedings of the Royal Society of Edinburgh. 46: 289–305. doi:10.1017/S0370164600022070.

