In mathematics, a **sequence transformation** is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as convolution with another sequence, and resummation of a sequence and, more generally, are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods.

Classical examples for sequence transformations include the binomial transform, Möbius transform, Stirling transform and others.

For a given sequence

$$(s_n)_{n\in\mathbb{N}},$$

the **transformed sequence** is

$$(s'_n)_{n\in\mathbb{N}} = \mathbf{T}\bigl((s_n)_{n\in\mathbb{N}}\bigr),$$

where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e.

$$s'_n = T(s_n, s_{n+1}, \dots, s_{n+k})$$

for some $k$ which often depends on $n$ (cf. e.g. Binomial transform). In the simplest case, the $s_n$ and the $s'_n$ are real or complex numbers. More generally, they may be elements of some vector space or algebra.

In the context of acceleration of convergence, the transformed sequence is said to **converge faster** than the original sequence if

$$\lim_{n\to\infty} \frac{s'_n - \ell}{s_n - \ell} = 0,$$

where $\ell$ is the limit of $(s_n)$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell$.

If the mapping $T$ is linear in each of its arguments, i.e., if

$$s'_n = \sum_{m=0}^{k} c_m s_{n+m}$$

for some constants $c_0, \dots, c_k$ (which may depend on *n*), the sequence transformation is called a **linear sequence transformation**. Sequence transformations that are not linear are called nonlinear sequence transformations.

The simplest examples of (linear) sequence transformations include shifting all elements, $s'_n = s_{n+k}$ (resp. $s'_n = 0$ if $n + k < 0$) for a fixed $k$, and scalar multiplication of the sequence.

A less trivial example would be the discrete convolution with a fixed sequence. A particularly basic form is the difference operator, which is convolution with the sequence $(-1, 1, 0, 0, \ldots)$ and is a discrete analog of the derivative. The binomial transform is another linear transformation of a still more general type.
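
As a concrete sketch in Python (function name is illustrative), the difference operator maps each pair of neighbouring terms to their difference:

```python
def difference(s):
    """Forward difference operator: (delta s)_n = s_{n+1} - s_n.

    This is the convolution of s with the fixed kernel (-1, 1): each
    output term is the same linear combination of neighbouring terms.
    """
    return [s[n + 1] - s[n] for n in range(len(s) - 1)]

# Differencing the squares 0, 1, 4, 9, 16 yields the odd numbers,
# a discrete analog of d/dx x^2 = 2x.
print(difference([0, 1, 4, 9, 16]))  # [1, 3, 5, 7]
```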

An example of a nonlinear sequence transformation is Aitken's delta-squared process, used to improve the rate of convergence of a slowly convergent sequence. An extended form of this is the Shanks transformation. The Möbius transform is also a nonlinear transformation, only possible for integer sequences.

In mathematics, more specifically in functional analysis, a **Banach space** is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well defined limit that is within the space.

In mathematics, the **discrete Fourier transform** (**DFT**) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
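
The defining sum of the DFT, $X_k = \sum_{n} x_n e^{-2\pi i kn/N}$, can be sketched directly (a naive $O(N^2)$ loop, not an FFT; names are illustrative):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X_k = sum_n x_n * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A single complex sinusoid at frequency 1 puts all its energy in bin 1.
N = 8
x = [cmath.exp(2j * cmath.pi * n / N) for n in range(N)]
X = dft(x)
print([round(abs(v), 6) for v in X])  # bin 1 has magnitude N = 8; all others are ~0
```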

In computational mathematics, an **iterative method** is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the *n*-th approximation is derived from the previous ones. A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called **convergent** if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.
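
As a sketch of the idea (function name and tolerances are illustrative), Newton's iteration for square roots derives each approximation from the previous one and stops when a termination criterion is met:

```python
def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=50):
    """Newton's iteration for sqrt(a): x_{n+1} = (x_n + a / x_n) / 2.

    The n-th approximation is derived from the previous one; the loop stops
    when successive iterates agree to within tol (the termination criterion).
    """
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0))  # ≈ 1.41421356...
```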

In mathematics, a **sequence** is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members. The number of elements is called the *length* of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and unlike a set, the order does matter. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers, or the set of the first *n* natural numbers. Sequences are one type of indexed family, as an indexed family is defined as a function whose domain is called the index set, and the elements of the index set are the indices for the elements of the function's image.

In mathematics, a **Fourier series** is a periodic function composed of harmonically related sinusoids, combined by a weighted summation. With appropriate weights, one cycle of the summation can be made to approximate an arbitrary function in that interval. As such, the summation is a synthesis of another function. The discrete-time Fourier transform is an example of Fourier series. The process of deriving weights that describe a given function is a form of Fourier analysis. For functions on unbounded intervals, the analysis and synthesis analogies are Fourier transform and inverse transform.

In mathematics, a **generating function** is a way of encoding an infinite sequence of numbers (*a*_{n}) by treating them as the coefficients of a formal power series. This series is called the generating function of the sequence. Unlike an ordinary series, the *formal* power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers.
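
For example, the formal power series $x/(1-x-x^2)$ is the ordinary generating function of the Fibonacci numbers; clearing the denominator yields the recurrence $a_n = a_{n-1} + a_{n-2}$, from which the encoded coefficients can be read off (a small sketch, helper name illustrative):

```python
def fibonacci_coefficients(n_terms):
    """Coefficients of the formal power series x / (1 - x - x^2).

    Multiplying through by (1 - x - x^2) gives a_n = a_{n-1} + a_{n-2}
    with a_0 = 0 and a_1 = 1, so this generating function encodes
    the Fibonacci numbers.
    """
    a = [0, 1]
    while len(a) < n_terms:
        a.append(a[-1] + a[-2])
    return a[:n_terms]

print(fibonacci_coefficients(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```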

In mathematics and physical science, **spherical harmonics** are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields.

In geometry and complex analysis, a **Möbius transformation** of the complex plane is a rational function of the form

$$f(z) = \frac{az + b}{cz + d}$$

of one complex variable $z$, where the coefficients $a$, $b$, $c$, $d$ are complex numbers satisfying $ad - bc \neq 0$.

In mathematics, more specifically in mathematical analysis, the **Cauchy product** is the discrete convolution of two infinite series. It is named after the French mathematician Augustin Louis Cauchy.

In mathematics, a **divergent series** is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit.

In linear algebra, a **circulant matrix** is a square matrix in which each row vector is rotated one element to the right relative to the preceding row vector. It is a particular kind of Toeplitz matrix.
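
The defining row-rotation can be sketched in a few lines (function name is illustrative):

```python
def circulant(first_row):
    """Circulant matrix: row i is first_row rotated i elements to the right."""
    n = len(first_row)
    return [first_row[n - i:] + first_row[:n - i] for i in range(n)]

for row in circulant([1, 2, 3]):
    print(row)
# [1, 2, 3]
# [3, 1, 2]
# [2, 3, 1]
```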

In numerical analysis, the **order of convergence** and the **rate of convergence** of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence $(x_n)$ that converges to $L$ is said to have *order of convergence* $q \geq 1$ and *rate of convergence* $\mu$ if

$$\lim_{n\to\infty} \frac{|x_{n+1} - L|}{|x_n - L|^q} = \mu.$$
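
The order can be estimated numerically from consecutive errors: solving $|e_{n+1}| \approx \mu\,|e_n|^q$ gives $q \approx \log(e_{n+1}/e_n) / \log(e_n/e_{n-1})$. A small sketch applies this to Newton's iteration for $\sqrt{2}$, which converges quadratically:

```python
import math

# Errors of Newton's iteration x_{n+1} = (x_n + 2/x_n) / 2,
# which converges quadratically to sqrt(2).
L = math.sqrt(2)
x = 1.0
errors = []
for _ in range(4):
    x = 0.5 * (x + 2.0 / x)
    errors.append(abs(x - L))

# Estimate q from three consecutive errors:
# q ≈ log(e_{n+1} / e_n) / log(e_n / e_{n-1}).
q = math.log(errors[2] / errors[1]) / math.log(errors[1] / errors[0])
print(round(q, 2))  # close to 2, i.e. quadratic convergence
```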

In functional analysis and related areas of mathematics, a **sequence space** is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field *K* of real or complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in *K*, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space.

In combinatorics, the **binomial transform** is a sequence transformation that computes its forward differences. It is closely related to the **Euler transform**, which is the result of applying the binomial transform to the sequence associated with its ordinary generating function.
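
Under one common sign convention, the binomial transform of $(a_n)$ is $t_n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} a_k$; a short sketch (function name illustrative):

```python
from math import comb

def binomial_transform(a):
    """Binomial (forward-difference) transform:
    t_n = sum_{k=0}^{n} (-1)^k * C(n, k) * a_k."""
    return [sum((-1) ** k * comb(n, k) * a[k] for k in range(n + 1))
            for n in range(len(a))]

# For the geometric sequence 2^n, the binomial theorem gives
# t_n = (1 - 2)^n = (-1)^n.
print(binomial_transform([1, 2, 4, 8, 16]))  # [1, -1, 1, -1, 1]
```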

**Ewald summation**, named after Paul Peter Ewald, is a method for computing long-range interactions in periodic systems. It was first developed as the method for calculating electrostatic energies of ionic crystals, and is now commonly used for calculating long-range interactions in computational chemistry. Ewald summation is a special case of the Poisson summation formula, replacing the summation of interaction energies in real space with an equivalent summation in Fourier space. In this method, the long-range interaction is divided into two parts: a short-range contribution, and a long-range contribution which does not have a singularity. The short-range contribution is calculated in real space, whereas the long-range contribution is calculated using a Fourier transform. The advantage of this method is the rapid convergence of the energy compared with that of a direct summation. This means that the method has high accuracy and reasonable speed when computing long-range interactions, and it is thus the de facto standard method for calculating long-range interactions in periodic systems. The method requires charge neutrality of the molecular system in order to accurately calculate the total Coulombic interaction. A study of the truncation errors introduced in the energy and force calculations of disordered point-charge systems is provided by Kolafa and Perram.

In numerical analysis, **Aitken's delta-squared process** or **Aitken Extrapolation** is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926. Its early form was known to Seki Kōwa and was found for rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.
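
Aitken's formula, $s'_n = s_n - (s_{n+1}-s_n)^2 / (s_{n+2} - 2s_{n+1} + s_n)$, can be sketched and tried on the slowly (linearly) convergent Leibniz series for $\pi/4$ (names are illustrative):

```python
import math

def aitken(s):
    """Aitken's delta-squared process:
    s'_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2*s_{n+1} + s_n)."""
    return [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4.
partial, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (2 * k + 1)
    partial.append(total)

print(abs(partial[-1] - math.pi / 4))          # raw error, about 2.5e-2
print(abs(aitken(partial)[-1] - math.pi / 4))  # accelerated error, far smaller
```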

In mathematics, **series acceleration** is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.

In numerical analysis, the **Shanks transformation** is a non-linear series acceleration method to increase the rate of convergence of a sequence. This method is named after Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941.
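
The first-order Shanks transformation, $S(A)_n = (A_{n+1}A_{n-1} - A_n^2)/(A_{n+1} - 2A_n + A_{n-1})$, is algebraically equivalent to Aitken's delta-squared formula, but it can be iterated; a sketch on the alternating harmonic series for $\ln 2$ (names are illustrative):

```python
import math

def shanks(A):
    """First-order Shanks transformation:
    S(A)_n = (A_{n+1}*A_{n-1} - A_n^2) / (A_{n+1} - 2*A_n + A_{n-1})."""
    return [(A[n + 1] * A[n - 1] - A[n] ** 2)
            / (A[n + 1] - 2 * A[n] + A[n - 1])
            for n in range(1, len(A) - 1)]

# Partial sums of 1 - 1/2 + 1/3 - ... -> ln 2.
A, total = [], 0.0
for k in range(1, 12):
    total += (-1) ** (k + 1) / k
    A.append(total)

once = shanks(A)
twice = shanks(once)  # iterating the transformation accelerates further
print(abs(A[-1] - math.log(2)))      # raw error
print(abs(twice[-1] - math.log(2)))  # far smaller after two applications
```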

In numerical linear algebra, a **convergent matrix** is a matrix that converges to the zero matrix under matrix exponentiation.
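
Equivalently, a matrix is convergent exactly when its spectral radius is less than 1; a small sketch (plain lists of rows, helper names illustrative) shows the powers of such a matrix shrinking toward zero:

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(A, p):
    """A**p by repeated multiplication, for p >= 1."""
    result = A
    for _ in range(p - 1):
        result = mat_mul(result, A)
    return result

# Upper-triangular A has eigenvalues 0.5 and 0.25 (spectral radius 0.5 < 1),
# so its powers A**k tend to the zero matrix.
A = [[0.5, 0.1], [0.0, 0.25]]
P = mat_power(A, 40)
print(max(abs(entry) for row in P for entry in row))  # on the order of 1e-12
```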


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
