Convergent matrix

In linear algebra, a convergent matrix is a matrix that converges to the zero matrix under matrix exponentiation.

Background

When successive powers of a matrix T become small (that is, when all of the entries of $T^k$ approach zero as $k \to \infty$), the matrix T converges to the zero matrix. A regular splitting of a non-singular matrix A results in a convergent matrix T. A semi-convergent splitting of a matrix A results in a semi-convergent matrix T. A general iterative method converges for every initial vector if T is convergent, and under certain conditions if T is semi-convergent.

Definition

We call an n×n matrix T a convergent matrix if

$$\lim_{k \to \infty} \left( T^k \right)_{ij} = 0 \qquad (1)$$

for each i = 1, 2, ..., n and j = 1, 2, ..., n. [1] [2] [3]

Example

Let

$$T = \begin{pmatrix} \tfrac{1}{4} & \tfrac{1}{2} \\ 0 & \tfrac{1}{4} \end{pmatrix}.$$

Computing successive powers of T, we obtain

$$T^2 = \begin{pmatrix} \tfrac{1}{16} & \tfrac{1}{4} \\ 0 & \tfrac{1}{16} \end{pmatrix}, \qquad T^3 = \begin{pmatrix} \tfrac{1}{64} & \tfrac{3}{32} \\ 0 & \tfrac{1}{64} \end{pmatrix},$$

and, in general,

$$T^k = \begin{pmatrix} \left(\tfrac{1}{4}\right)^k & \dfrac{k}{2^{2k-1}} \\ 0 & \left(\tfrac{1}{4}\right)^k \end{pmatrix}.$$

Since

$$\lim_{k \to \infty} \left(\tfrac{1}{4}\right)^k = 0$$

and

$$\lim_{k \to \infty} \frac{k}{2^{2k-1}} = 0,$$

T is a convergent matrix. Note that ρ(T) = 1/4, where ρ(T) represents the spectral radius of T, since 1/4 is the only eigenvalue of T.
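
The computation can be verified numerically; the following is a minimal NumPy sketch (the power 50 used here is an illustrative choice, not from the article):

```python
import numpy as np

# The matrix T from the example above.
T = np.array([[0.25, 0.5],
              [0.0,  0.25]])

# Raise T to successive powers; every entry shrinks toward zero.
P = np.eye(2)
for k in range(1, 51):
    P = P @ T
print(P)  # entries are vanishingly small at k = 50

# The spectral radius is the largest absolute eigenvalue: here 1/4.
print(max(abs(np.linalg.eigvals(T))))  # 0.25
```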

Characterizations

Let T be an n×n matrix. The following properties are equivalent to T being a convergent matrix:

  1. $\lim_{k \to \infty} \| T^k \| = 0$ for some natural norm;
  2. $\lim_{k \to \infty} \| T^k \| = 0$ for all natural norms;
  3. $\rho(T) < 1$;
  4. $\lim_{k \to \infty} T^k \mathbf{x} = \mathbf{0}$ for every vector $\mathbf{x}$. [4] [5] [6] [7]
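
These equivalences can be illustrated numerically. Below is a minimal NumPy sketch, using the matrix from the example above and the induced 2-norm as a stand-in for a natural norm (both choices are illustrative, not the article's):

```python
import numpy as np

T = np.array([[0.25, 0.5],
              [0.0,  0.25]])

# Property 3: spectral radius strictly below 1.
rho = max(abs(np.linalg.eigvals(T)))
print(rho < 1)  # True

# Properties 1-2, illustrated for the induced 2-norm:
# ||T^k|| -> 0 as k grows.
print(np.linalg.norm(np.linalg.matrix_power(T, 50), 2))  # ~0

# Property 4: T^k x -> 0 for an arbitrary vector x.
x = np.array([1.0, -2.0])
print(np.linalg.matrix_power(T, 50) @ x)  # ~[0, 0]
```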

Iterative methods

A general iterative method involves a process that converts the system of linear equations

$$A\mathbf{x} = \mathbf{b} \qquad (2)$$

into an equivalent system of the form

$$\mathbf{x} = T\mathbf{x} + \mathbf{c} \qquad (3)$$

for some matrix T and vector $\mathbf{c}$. After the initial vector $\mathbf{x}^{(0)}$ is selected, the sequence of approximate solution vectors is generated by computing

$$\mathbf{x}^{(k+1)} = T\mathbf{x}^{(k)} + \mathbf{c} \qquad (4)$$

for each k ≥ 0. [8] [9] For any initial vector $\mathbf{x}^{(0)}$, the sequence defined by (4), for each k ≥ 0 and $\mathbf{c} \neq \mathbf{0}$, converges to the unique solution of (3) if and only if ρ(T) < 1, that is, if and only if T is a convergent matrix. [10] [11]
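
As an illustration of iteration (4), here is a minimal NumPy sketch; the particular T, c, starting vector, and iteration count are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Illustrative convergent T (rho(T) = 0.25 < 1) and nonzero c.
T = np.array([[0.25, 0.5],
              [0.0,  0.25]])
c = np.array([1.0, 1.0])

# Iteration (4): x_{k+1} = T x_k + c, from an arbitrary x_0.
x = np.zeros(2)
for _ in range(100):
    x = T @ x + c

# Since rho(T) < 1, the iterates approach the unique solution of
# x = T x + c, i.e. x = (I - T)^{-1} c.
print(x)
print(np.linalg.solve(np.eye(2) - T, c))  # same vector
```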

Regular splitting

A matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. In the system of linear equations (2) above, with A non-singular, the matrix A can be split, that is, written as a difference

$$A = B - C \qquad (5)$$

so that (2) can be re-written as (4) above, with $T = B^{-1}C$ and $\mathbf{c} = B^{-1}\mathbf{b}$. The expression (5) is a regular splitting of A if and only if $B^{-1} \geq 0$ and $C \geq 0$, that is, $B^{-1}$ and C have only nonnegative entries. If the splitting (5) is a regular splitting of the matrix A and $A^{-1} \geq 0$, then ρ(T) < 1 and T is a convergent matrix. Hence the method (4) converges. [12] [13]
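
As an illustration, the classical Jacobi splitting (taking B to be the diagonal part of A) produces a regular splitting for a suitable A; the matrix A below is an illustrative choice, not from the article:

```python
import numpy as np

# Illustrative non-singular A with A^{-1} >= 0 (an M-matrix).
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# Jacobi-style splitting A = B - C with B the diagonal part of A.
B = np.diag(np.diag(A))
C = B - A

# Regular splitting: B^{-1} >= 0 and C >= 0 entrywise.
B_inv = np.linalg.inv(B)
print((B_inv >= 0).all() and (C >= 0).all())  # True

# The iteration matrix T = B^{-1} C is then convergent: rho(T) < 1.
T = B_inv @ C
print(max(abs(np.linalg.eigvals(T))))  # 0.5
```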

Semi-convergent matrix

We call an n×n matrix T a semi-convergent matrix if the limit

$$\lim_{k \to \infty} T^k \qquad (6)$$

exists. [14] If A is possibly singular but (2) is consistent, that is, $\mathbf{b}$ is in the range of A, then the sequence defined by (4) converges to a solution of (2) for every $\mathbf{x}^{(0)}$ if and only if T is semi-convergent. In this case, the splitting (5) is called a semi-convergent splitting of A. [15]
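
To illustrate the distinction, a matrix with spectral radius exactly 1 can be semi-convergent without being convergent, since the limit (6) may exist without being the zero matrix. The matrix below is an illustrative choice, not from the article:

```python
import numpy as np

# Illustrative semi-convergent (but not convergent) matrix:
# eigenvalues 1 and 0.5, so T^k has a limit, but the limit is nonzero.
T = np.array([[1.0, 0.0],
              [0.0, 0.5]])

# lim_{k -> infinity} T^k exists...
print(np.linalg.matrix_power(T, 60))  # ~[[1, 0], [0, 0]]

# ...but T is not convergent, since rho(T) = 1.
print(max(abs(np.linalg.eigvals(T))))  # 1.0
```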

Notes

References