In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article.
In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
For iterative methods, a sequence $(x_k)$ that converges to $L$ is said to have asymptotic order of convergence $q \geq 1$ and asymptotic rate of convergence $\mu$ if

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu.$$
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. [1] The rate of convergence $\mu$ may also be called the asymptotic error constant, and some authors will use rate where this article uses order. [2] Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series, and possibly also its order of convergence.
Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions $(y_n)$ of some problem that converges to a true solution $S$ with a corresponding sequence of regular grid spacings $(h_n)$ that converge to 0 is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{n \to \infty} \frac{|y_n - S|}{h_n^q} = \mu,$$
where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order power of a discretization scale parameter below.
In general, comparatively, one sequence $(a_k)$ that converges to a limit $L_a$ is said to asymptotically converge more quickly than another sequence $(b_k)$ that converges to a limit $L_b$ if

$$\lim_{k \to \infty} \frac{|a_k - L_a|}{|b_k - L_b|} = 0,$$
and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis.
Suppose that the sequence $(x_k)$ of iterates of an iterative method converges to the limit number $L$ as $k \to \infty$. The sequence is said to converge with order $q \geq 1$ to $L$ and with a rate of convergence $\mu$ if the limit of quotients of absolute differences of sequential iterates $x_k, x_{k+1}$ from their limit $L$ satisfies

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu$$
for some positive constant $\mu > 0$ if $q > 1$, and $\mu \in (0, 1)$ if $q = 1$. [1] [3] [4] Other more technical rate definitions are needed if the sequence converges but $\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|} = 1$ [5] or the limit does not exist. [1] This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. § R-convergence, below, is an appropriate alternative when this limit does not exist.
Sequences with larger orders $q$ converge more quickly than those with smaller order, and those with smaller rates $\mu$ converge more quickly than those with larger rates for a given order. This "smaller rates converge more quickly" behavior among sequences of the same order is standard but it can be counterintuitive. Therefore it is also common to define $-\log_{10} \mu$ as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1. [1]
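For illustration, the order and rate can be estimated numerically from the tail of a sequence whose limit is known. The following minimal Python sketch (the function name and sample sequence are illustrative choices, not part of any standard library) solves for $q$ and $\mu$ from three consecutive error terms, assuming the defining limit has approximately been reached:

```python
import math

def estimate_order_and_rate(errors):
    """Estimate the order q and rate mu of Q-convergence from a list of
    absolute errors |x_k - L|, assuming |x_{k+1} - L| ~ mu * |x_k - L|**q
    holds approximately for the last few terms."""
    e0, e1, e2 = errors[-3:]
    # Taking logarithms of e2 ~ mu*e1**q and e1 ~ mu*e0**q and subtracting
    # eliminates mu and isolates q:
    q = math.log(e2 / e1) / math.log(e1 / e0)
    mu = e2 / e1 ** q
    return q, mu

# The geometric progression 2**-k converges Q-linearly with rate 1/2:
errors = [2.0 ** -k for k in range(1, 12)]
print(estimate_order_and_rate(errors))  # approximately (1.0, 0.5)
```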
Integer values of the order $q$ are common and are given common names. Convergence with order $q = 1$ and $\mu \in (0, 1)$ is called linear convergence and the sequence is said to converge linearly to $L$. Convergence with $q = 2$ and any $\mu$ is called quadratic convergence and the sequence is said to converge quadratically. Convergence with $q = 3$ and any $\mu$ is called cubic convergence. However, it is not necessary that $q$ be an integer. For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio $\varphi \approx 1.618$. [6]
The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies $|x_{k+1} - L| = O(|x_k - L|^q)$. These are linear, quadratic, and cubic polynomial expressions when $q$ is 1, 2, and 3, respectively. More precisely, the limits imply the leading order error is exactly $\mu |x_k - L|^q$, which can be expressed using asymptotic small o notation as

$$|x_{k+1} - L| = \mu |x_k - L|^q + o(|x_k - L|^q).$$
In general, when $q > 1$ for a sequence, or for any sequence that satisfies $\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|} = 0$, those sequences are said to converge superlinearly (i.e., faster than linearly). [1] A sequence is said to converge sublinearly (i.e., slower than linearly) if it converges and $\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|} = 1$. Importantly, it is incorrect to say that these sublinear-order sequences converge linearly with an asymptotic rate of convergence of 1. A sequence $(x_k)$ converges logarithmically to $L$ if the sequence converges sublinearly and also $\lim_{k \to \infty} \frac{|x_{k+1} - x_k|}{|x_k - x_{k-1}|} = 1$. [5]
The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example $b_k = \left(\tfrac{1}{4}\right)^{\lfloor k/2 \rfloor} = 1, 1, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{16}, \ldots$ detailed below (where $\lfloor k/2 \rfloor$ is the floor function applied to $k/2$). The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit.
In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root." [1] [7] : 620 A sequence $(x_k)$ that converges to $L$ is said to converge at least R-linearly if there exists an error-bounding sequence $(\varepsilon_k)$ such that $|x_k - L| \leq \varepsilon_k$ for all $k$ and $(\varepsilon_k)$ converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on. [1]
Any error-bounding sequence provides a lower bound on the rate and order of R-convergence, and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders $q$ converge more quickly and those with smaller rates $\mu$ converge more quickly for a given order, so these greatest-rate-lower-bound, error-upper-bound sequences are those that have the greatest possible $q$ and the smallest possible $\mu$ for that $q$, given that $|x_k - L| \leq \varepsilon_k$ for all $k$.
For the example given above, the tight bounding sequence $\varepsilon_k = 2\left(\tfrac{1}{2}\right)^k$ converges Q-linearly with rate 1/2, so $(b_k)$ converges R-linearly with rate 1/2. Generally, for any staggered geometric progression $\left(r^{\lfloor k/m \rfloor}\right)$, the sequence will not converge Q-linearly but will converge R-linearly with rate $\sqrt[m]{|r|}$. These examples demonstrate why the "R" in R-linear convergence is short for "root."
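A short Python sketch (an illustrative computation, not a standard routine) makes the "root" interpretation concrete: the step-to-step error quotients of the staggered progression oscillate and have no limit, while the $k$-th roots of the errors converge to the R-linear rate 1/2:

```python
def b(k):
    """Staggered geometric progression b_k = (1/4)**floor(k/2)."""
    return (1 / 4) ** (k // 2)

for k in range(1, 13):
    quotient = b(k) / b(k - 1)   # oscillates between 1 and 1/4: no Q-limit
    kth_root = b(k) ** (1 / k)   # converges to the R-linear rate 1/2
    print(f"k={k:2d}  quotient={quotient:.4f}  kth_root={kth_root:.4f}")
```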
The geometric progression $(a_k) = \left(\tfrac{1}{2^k}\right) = 1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots$ converges to $L = 0$. Plugging the sequence into the definition of Q-linear convergence (i.e., order of convergence 1) shows that

$$\lim_{k \to \infty} \frac{\left|1/2^{k+1} - 0\right|}{\left|1/2^k - 0\right|} = \frac{1}{2}.$$

Thus $(a_k)$ converges Q-linearly with a convergence rate of $\mu = 1/2$; see the first plot of the figure below.
More generally, for any initial value $a$ in the real numbers and a real number common ratio $r$ between $-1$ and 1, a geometric progression $(a r^k)$ converges linearly with rate $|r|$, and the sequence of partial sums of a geometric series $\left(\sum_{n=0}^{k} a r^n\right)$ also converges linearly with rate $|r|$. The same holds also for geometric progressions and geometric series parameterized by any complex numbers $a \in \mathbb{C}$, $r \in \mathbb{C}$ with $|r| < 1$.
The staggered geometric progression $(b_k) = \left(\left(\tfrac{1}{4}\right)^{\lfloor k/2 \rfloor}\right) = 1, 1, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{16}, \ldots$, using the floor function $\lfloor \cdot \rfloor$ that gives the largest integer that is less than or equal to its argument, converges R-linearly to 0 with rate 1/2, but it does not converge Q-linearly, for the reasons described in the R-convergence discussion above; see the second plot of the figure below.
The sequence $(c_k) = \left(\tfrac{1}{2^{2^k}}\right) = \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{256}, \ldots$ converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1. It is shown in the third plot of the figure below.
Finally, the sequence $(d_k) = \left(\tfrac{1}{k+1}\right) = 1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \ldots$ converges to zero Q-sublinearly and logarithmically, and its convergence is shown as the fourth plot of the figure below.
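The limiting quotients asserted for these four example sequences can be checked directly; the following Python sketch (an illustrative computation) prints the relevant error quotients for moderately large $k$:

```python
from fractions import Fraction  # exact arithmetic avoids underflow artifacts

a = [Fraction(1, 2 ** k) for k in range(20)]         # Q-linear, rate 1/2
b = [Fraction(1, 4 ** (k // 2)) for k in range(20)]  # R-linear only
c = [Fraction(1, 2 ** 2 ** k) for k in range(8)]     # Q-quadratic, rate 1
d = [Fraction(1, k + 1) for k in range(20)]          # sublinear, logarithmic

print(float(a[-1] / a[-2]))                        # 0.5
print(float(b[-1] / b[-2]), float(b[-2] / b[-3]))  # 1.0 and 0.25: no limit
print(float(c[-1] / c[-2] ** 2))                   # 1.0, the quadratic quotient
print(float(d[-1] / d[-2]))                        # 0.95, approaching 1
```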
Recurrent sequences $x_{k+1} = f(x_k)$, called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior. When $f$ is continuously differentiable, given a fixed point $p$, with $f(p) = p$, such that $|f'(p)| < 1$, the fixed point is an attractive fixed point and the recurrent sequence will converge at least linearly to $p$ for any starting value $x_0$ sufficiently close to $p$. If $f$ is also twice continuously differentiable and $f'(p) = 0$, then the recurrent sequence will converge at least quadratically, and so on. If $|f'(p)| > 1$, then the fixed point is a repulsive fixed point and sequences cannot converge to $p$ from its immediate neighborhoods, though they may still jump to $p$ directly from outside of its local neighborhoods.
A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to the order $q$: [8]

$$q \approx \frac{\log \left| \dfrac{x_{k+1} - x_k}{x_k - x_{k-1}} \right|}{\log \left| \dfrac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}} \right|}.$$
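A minimal Python sketch of this estimate applied to a fixed point iteration (here Newton's method for $x^2 - 2 = 0$, an illustrative choice, for which order $q = 2$ is expected near the simple root $\sqrt{2}$):

```python
import math

xs = [1.0]                                 # starting value
f = lambda x: x - (x * x - 2) / (2 * x)    # Newton step as a fixed point map
for _ in range(6):
    xs.append(f(xs[-1]))

for k in range(2, len(xs) - 1):
    d0, d1, d2 = (xs[k - 1] - xs[k - 2], xs[k] - xs[k - 1], xs[k + 1] - xs[k])
    if 0.0 in (d0, d1, d2):
        break  # differences vanish once machine precision is reached
    q = math.log(abs(d2 / d1)) / math.log(abs(d1 / d0))
    print(f"k={k}: estimated order q ~ {q:.2f}")  # approaches 2
```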
For numerical approximation of an exact value through a numerical method of order $q$, see [9].
Many methods exist to accelerate the convergence of a given sequence, i.e., to transform one sequence into a second sequence that converges more quickly to the same limit. Such techniques are in general known as "series acceleration" methods. These may reduce the computational costs of approximating the limits of the original sequences. One example of series acceleration by sequence transformation is Aitken's delta-squared process. These methods in general, and in particular Aitken's method, do not typically increase the order of convergence and thus they are useful only if initially the convergence is not faster than linear: if $(x_k)$ converges linearly, Aitken's method transforms it into a sequence $(y_k)$ that still converges linearly (except for pathologically designed special cases), but faster in the sense that $\lim_{k \to \infty} \frac{|y_k - L|}{|x_k - L|} = 0$. On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement.
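A minimal Python implementation of Aitken's delta-squared process (an illustrative sketch; production series-acceleration code needs guards against vanishing denominators), applied to the linearly convergent fixed point iteration $x_{k+1} = \cos x_k$:

```python
import math

def aitken(xs):
    """Aitken's delta-squared transform: y_k = x_k - (dx_k)**2 / (d2x_k),
    where dx_k and d2x_k are first and second forward differences."""
    return [
        xs[k] - (xs[k + 1] - xs[k]) ** 2
        / (xs[k + 2] - 2 * xs[k + 1] + xs[k])
        for k in range(len(xs) - 2)
    ]

# x_{k+1} = cos(x_k) converges linearly to p ~ 0.7390851 (the Dottie number);
# the transformed sequence converges to the same limit noticeably faster.
xs = [1.0]
for _ in range(10):
    xs.append(math.cos(xs[-1]))
ys = aitken(xs)
p = 0.7390851332151607
print(abs(xs[-1] - p), abs(ys[-1] - p))  # the Aitken error is far smaller
```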
A sequence of discretized approximations $(y_n)$ of some continuous-domain function $S$ that converges to this target, together with a corresponding sequence of discretization scale parameters $(h_n)$ that converge to 0, is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{n \to \infty} \frac{|y_n - S|}{h_n^q} = \mu$$
for some positive constants $q$ and $\mu$, and using $| \cdot |$ to stand for an appropriate distance metric on the space of solutions, most often either the uniform norm, the absolute difference, or the Euclidean distance. Discretization scale parameters may be spacings of a regular grid in space or in time, the inverse of the number of points of a grid in one dimension, an average or maximum distance between points in a polygon mesh, the single-dimension spacings of an irregular sparse grid, or a characteristic quantum of energy or momentum in a quantum mechanical basis set.
When all the discretizations are generated using a single common method, it is common to discuss the asymptotic rate and order of convergence for the method itself rather than any particular discrete sequences of discretized solutions. In these cases one considers a single abstract discretized solution $S_h$ generated using the method with a scale parameter $h$, and then the method is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{h \to 0} \frac{|S_h - S|}{h^q} = \mu,$$
again for some positive constants $q$ and $\mu$ and an appropriate metric $| \cdot |$. This implies that the error of a discretization asymptotically scales like the discretization's scale parameter to the $q$th power, or $|S_h - S| = O(h^q)$ using asymptotic big O notation. More precisely, it implies the leading order error is $\mu h^q$, which can be expressed using asymptotic small o notation as

$$|S_h - S| = \mu h^q + o(h^q).$$
In some cases multiple rates and orders for the same method but with different choices of scale parameter may be important, for instance for finite difference methods based on multidimensional grids where the different dimensions have different grid spacings or for finite element methods based on polygon meshes where choosing either average distance between mesh points or maximum distance between mesh points as scale parameters may imply different orders of convergence. In some especially technical contexts, discretization methods' asymptotic rates and orders of convergence will be characterized by several scale parameters at once with the value of each scale parameter possibly affecting the asymptotic rate and order of convergence of the method with respect to the other scale parameters.
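When a reference solution is available, an observed order of convergence for a method can be estimated from its errors at two discretization scales; the following short Python sketch (an illustrative calculation assuming the leading-order error model $|S_h - S| \approx \mu h^q$) recovers $q$:

```python
import math

def observed_order(e_coarse, h_coarse, e_fine, h_fine):
    """Observed order q from errors on two scales, assuming e ~ mu * h**q."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical second-order method with error e = 3.0 * h**2:
print(observed_order(3.0 * 0.1 ** 2, 0.1, 3.0 * 0.05 ** 2, 0.05))  # -> 2.0
```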
Consider the ordinary differential equation

$$\frac{dy}{dx} = -\kappa y$$

with initial condition $y(0) = y_0$. We can approximate a solution to this one-dimensional equation using a sequence $(y_n)$ applying the forward Euler method for numerical discretization using any regular grid spacing $h$ and grid points indexed by $n$ as follows:

$$\frac{y_{n+1} - y_n}{h} = -\kappa y_n,$$
which implies the first-order linear recurrence with constant coefficients

$$y_{n+1} = y_n (1 - h\kappa).$$
Given $y(0) = y_0$, the sequence satisfying that recurrence is the geometric progression

$$y_n = y_0 (1 - h\kappa)^n.$$
The exact analytical solution to the differential equation is $y(x) = y_0 e^{-\kappa x}$, corresponding to the following Taylor expansion in $h\kappa$ for $h\kappa \ll 1$:

$$y(x_n) = y_0 e^{-\kappa n h} = y_0 \left(1 - h\kappa + \tfrac{1}{2} h^2 \kappa^2 + O(h^3)\right)^n.$$
Therefore the error of the discrete approximation at each discrete point is

$$|y_n - y(x_n)| = \frac{n h^2 \kappa^2 y_0}{2} e^{-\kappa x_n} + O(n h^3).$$
For any specific $x > 0$, given a sequence of forward Euler approximations $\left(y_{x/h_n}\right)$, each using grid spacings $h_n$ that divide $x$ so that $x / h_n$ is an integer, one has

$$\lim_{n \to \infty} \frac{\left|y_{x/h_n} - y(x)\right|}{h_n} = \frac{\kappa^2 x y_0 e^{-\kappa x}}{2}$$
for any sequence of grids with successively smaller grid spacings $h_n$. Thus $\left(y_{x/h_n}\right)$ converges to $y(x)$ pointwise with a convergence order $q = 1$ and asymptotic error constant $\frac{\kappa^2 x y_0 e^{-\kappa x}}{2}$ at each point $x > 0$. Similarly, the sequence converges uniformly with the same order and with rate at most $\frac{\kappa y_0}{2e}$ on any bounded interval of $x \geq 0$, but it does not converge uniformly on the unbounded set of all positive real values, $(0, \infty)$.
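This first-order behavior can be observed numerically; the Python sketch below (an illustrative experiment using the closed-form solution of the recurrence) halves $h$ repeatedly and shows the error at a fixed point $x$ shrinking in proportion, with error$/h$ approaching the asymptotic error constant $\kappa^2 x y_0 e^{-\kappa x}/2 \approx 0.1353$ for $\kappa = y_0 = 1$ and $x = 2$:

```python
import math

def euler_error(kappa, y0, x, h):
    """Absolute error of forward Euler for dy/dx = -kappa*y, y(0) = y0, at x
    (h is assumed to divide x), using the closed form y_n = y0*(1-h*kappa)**n."""
    n = round(x / h)
    return abs(y0 * (1.0 - h * kappa) ** n - y0 * math.exp(-kappa * x))

kappa, y0, x = 1.0, 1.0, 2.0
for h in (0.1, 0.05, 0.025, 0.0125):
    err = euler_error(kappa, y0, x, h)
    print(f"h={h:6.4f}  error={err:.2e}  error/h={err / h:.4f}")
```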
In asymptotic analysis in general, one sequence $(a_k)$ that converges to a limit $L_a$ is said to asymptotically converge to $L_a$ with a faster order of convergence than another sequence $(b_k)$ that converges to a limit $L_b$, in a shared metric space with distance metric $d$ such as the real numbers or complex numbers with the ordinary absolute difference metrics, if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = 0,$$
the two are said to asymptotically converge to their limits with the same order of convergence if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = \mu$$
for some positive finite constant $\mu$, and the two are said to asymptotically converge to their limits with the same rate and order of convergence if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = 1.$$
These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis. [10] [11] For the first two of these there are associated expressions in asymptotic O notation: the first is that $d(a_k, L_a) = o(d(b_k, L_b))$ in small o notation [12] and the second is that $d(a_k, L_a) = \Theta(d(b_k, L_b))$ in Knuth notation. [13] The third is also called asymptotic equivalence, expressed $d(a_k, L_a) \sim d(b_k, L_b)$. [14] [15]
For any two geometric progressions $(a r^k)$ and $(b s^k)$ with shared limit zero, the two sequences are asymptotically equivalent if and only if both $a = b$ and $r = s$. They converge with the same order if and only if $|r| = |s|$; $(a r^k)$ converges with a faster order than $(b s^k)$ if and only if $|r| < |s|$. The convergence of any geometric series to its limit has error terms that are equal to a geometric progression, so similar relationships hold among geometric series as well. Any sequence that is asymptotically equivalent to a convergent geometric sequence may either be said to "converge geometrically" or "converge exponentially" with respect to the absolute difference from its limit, or it may be said to "converge linearly" relative to a logarithm of the absolute difference, such as the "number of decimals of precision." The latter is standard in numerical analysis.
For any two sequences of elements proportional to an inverse power of $k$, $(a k^{-n})$ and $(b k^{-m})$, with shared limit zero, the two sequences are asymptotically equivalent if and only if both $a = b$ and $n = m$. They converge with the same order if and only if $n = m$; $(a k^{-n})$ converges with a faster order than $(b k^{-m})$ if and only if $n > m$.
For any sequence $(a_k)$ with a limit of zero, its convergence can be compared to the convergence of the shifted sequence $(a_{k-1})$, rescalings of the shifted sequence by a constant $\mu$, $(\mu a_{k-1})$, and scaled $q$-powers of the shifted sequence, $(\mu a_{k-1}^q)$. These comparisons are the basis for the Q-convergence classifications for iterative numerical methods as described above: when a sequence of iterate errors from a numerical method $(|x_k - L|)$ is asymptotically equivalent to the shifted, exponentiated, and rescaled sequence of iterate errors $(\mu |x_{k-1} - L|^q)$, it is said to converge with order $q$ and rate $\mu$.
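As a concrete check of this equivalence, consider Newton's method for $x^2 - 2 = 0$ (an illustrative choice), whose iterate errors converge with order $q = 2$ and rate $\mu = f''(p)/(2 f'(p)) = 1/(2\sqrt{2})$ at the root $p = \sqrt{2}$; in the Python sketch below, the ratio of each error to the shifted, squared, and rescaled previous error tends to 1:

```python
import math

p = math.sqrt(2.0)           # the limit of the Newton iterates
mu, q = 1 / (2 * p), 2       # expected rate and order at the simple root
x = 1.2
for _ in range(3):
    e_prev = abs(x - p)
    x = x - (x * x - 2) / (2 * x)           # one Newton step
    print(abs(x - p) / (mu * e_prev ** q))  # tends to 1: asymptotic equivalence
```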
Non-asymptotic rates of convergence do not have the common, standard definitions that asymptotic rates of convergence have. Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior.
For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit. The non-asymptotic rate is then an inverse of that number of iterates or computer time. In practical applications, an iterative method that required fewer steps or less computer time than another to reach target accuracy will be said to have converged faster than the other, even if its asymptotic convergence is slower. These rates will generally be different for different starting points and different error thresholds for defining the neighborhoods. It is most common to discuss summaries of statistical distributions of these single point rates corresponding to distributions of possible starting points, such as the "average non-asymptotic rate," the "median non-asymptotic rate," or the "worst-case non-asymptotic rate" for some method applied to some problem with some fixed error threshold. These ensembles of starting points can be chosen according to parameters like initial distance from the eventual limit in order to define quantities like "average non-asymptotic rate of convergence from a given distance."
For discretized approximation methods, similar approaches can be used with a discretization scale parameter such as an inverse of a number of grid or mesh points or a Fourier series cutoff frequency playing the role of inverse iterate number, though it is not especially common. For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error. In practical applications, when one discretization method gives a desired accuracy with a larger discretization scale parameter than another it will often be said to converge faster than the other, even if its eventual asymptotic convergence is slower.