Toeplitz matrix

In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix:

$$\begin{bmatrix}
1 & 2 & 3 & 4 & 5 \\
6 & 1 & 2 & 3 & 4 \\
7 & 6 & 1 & 2 & 3 \\
8 & 7 & 6 & 1 & 2 \\
9 & 8 & 7 & 6 & 1
\end{bmatrix}$$

Any $n \times n$ matrix $A$ of the form

$$A = \begin{bmatrix}
a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\
a_1 & a_0 & a_{-1} & \ddots & \vdots \\
a_2 & a_1 & a_0 & \ddots & a_{-2} \\
\vdots & \ddots & \ddots & \ddots & a_{-1} \\
a_{n-1} & \cdots & a_2 & a_1 & a_0
\end{bmatrix}$$

is a Toeplitz matrix. If the $i,j$ element of $A$ is denoted $A_{i,j}$, then we have

$$A_{i,j} = A_{i+1,j+1} = a_{i-j}.$$

A Toeplitz matrix is not necessarily square.
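
As a concrete illustration, the following sketch builds a rectangular Toeplitz matrix from its first column and first row using SciPy's scipy.linalg.toeplitz and checks the diagonal-constant property; the particular entries are arbitrary example values.

```python
import numpy as np
from scipy.linalg import toeplitz

# Build a (possibly rectangular) Toeplitz matrix from its first column
# and first row; the entries here are arbitrary example values.
col = np.array([1.0, 6.0, 7.0, 8.0])        # a_0, a_1, a_2, a_3
row = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # a_0, a_{-1}, ..., a_{-4}
A = toeplitz(col, row)                      # 4 x 5: need not be square

# Every descending diagonal is constant: A[i, j] == A[i+1, j+1].
n, m = A.shape
assert all(A[i, j] == A[i + 1, j + 1]
           for i in range(n - 1) for j in range(m - 1))
```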

Solving a Toeplitz system

A matrix equation of the form

$$Ax = b$$

is called a Toeplitz system if $A$ is a Toeplitz matrix. If $A$ is an $n \times n$ Toeplitz matrix, then the system has at most $2n - 1$ unique values, rather than $n^2$. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case.

Toeplitz systems can be solved by algorithms such as the Schur algorithm or the Levinson algorithm in $O(n^2)$ time. [1] [2] Variants of the latter have been shown to be weakly stable (i.e. they exhibit numerical stability for well-conditioned linear systems). [3] The algorithms can also be used to find the determinant of a Toeplitz matrix in $O(n^2)$ time. [4]
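
As a minimal sketch (assuming SciPy is available), scipy.linalg.solve_toeplitz applies Levinson recursion to solve such a system from just the first column and first row of the matrix; the numbers below are arbitrary example data.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Solve T x = b with SciPy's Levinson-recursion solver, which needs
# only the first column and first row of T (arbitrary example data).
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column of T
r = np.array([4.0, 2.0, 1.0, 0.5])    # first row of T
b = np.array([1.0, 2.0, 3.0, 4.0])

x = solve_toeplitz((c, r), b)         # O(n^2) instead of O(n^3)

# Check against a dense solve on the explicitly formed matrix.
T = toeplitz(c, r)
assert np.allclose(T @ x, b)
```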

A Toeplitz matrix can also be decomposed (i.e. factored) in $O(n^2)$ time. [5] The Bareiss algorithm for an LU decomposition is stable. [6] An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant.

General properties

For a symmetric Toeplitz matrix $A$ there is the decomposition

$$\frac{1}{a_0} A = G G^{\mathrm{T}} - (G - I)(G - I)^{\mathrm{T}},$$

where $G$ is the lower triangular part of $\frac{1}{a_0} A$. The inverse of a nonsingular symmetric Toeplitz matrix has the representation

$$A^{-1} = \frac{1}{\alpha_0} \left( B B^{\mathrm{T}} - C C^{\mathrm{T}} \right),$$

where $B$ and $C$ are lower triangular Toeplitz matrices and $C$ is a strictly lower triangular matrix. [7] The first decomposition is verified numerically in the sketch below.
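
A quick numerical check of the first decomposition, with arbitrary example entries (any symmetric Toeplitz matrix would do):

```python
import numpy as np
from scipy.linalg import toeplitz

# Numerical check of (1/a0) A = G G^T - (G - I)(G - I)^T for a
# symmetric Toeplitz matrix A; entries are arbitrary example values.
a = np.array([4.0, 1.0, 0.5, 0.25])   # first column; a[0] is a_0
A = toeplitz(a)                       # symmetric Toeplitz

M = A / a[0]
G = np.tril(M)                        # lower triangular part of (1/a0) A
I = np.eye(len(a))

assert np.allclose(G @ G.T - (G - I) @ (G - I).T, M)
```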

Discrete convolution

The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution of $h$ and $x$ can be formulated as:

$$y = h \ast x = \begin{bmatrix}
h_1 & 0 & \cdots & 0 \\
h_2 & h_1 & \ddots & \vdots \\
\vdots & h_2 & \ddots & 0 \\
h_m & \vdots & \ddots & h_1 \\
0 & h_m & \ddots & h_2 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & h_m
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{bmatrix}$$

Here $h$ has length $m$, $x$ has length $n$, and the convolution matrix has dimensions $(m + n - 1) \times n$.

This approach can be extended to compute autocorrelation, cross-correlation, moving averages, etc.
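
As an illustrative sketch (arbitrary example data, again using scipy.linalg.toeplitz), the full convolution above can be reproduced as a Toeplitz matrix-vector product and checked against numpy.convolve:

```python
import numpy as np
from scipy.linalg import toeplitz

# Full convolution y = h * x as a Toeplitz matrix-vector product
# (arbitrary example data).
h = np.array([1.0, 2.0, 3.0])            # filter, length m
x = np.array([4.0, 5.0, 6.0, 7.0])       # signal, length n
m, n = len(h), len(x)

# (m + n - 1) x n convolution matrix: first column is h padded with
# zeros, first row is h[0] followed by zeros.
col = np.concatenate([h, np.zeros(n - 1)])
row = np.concatenate([h[:1], np.zeros(n - 1)])
T = toeplitz(col, row)

assert np.allclose(T @ x, np.convolve(h, x))
```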

Infinite Toeplitz matrix

A bi-infinite Toeplitz matrix (i.e. entries indexed by $\mathbb{Z} \times \mathbb{Z}$) $A$ induces a linear operator on $\ell^2$.

The induced operator is bounded if and only if the coefficients of the Toeplitz matrix $A$ are the Fourier coefficients of some essentially bounded function $f$.

In such cases, $f$ is called the symbol of the Toeplitz matrix $A$, and the spectral norm of the Toeplitz matrix $A$ coincides with the $L^\infty$ norm of its symbol. The proof is straightforward and can be found as Theorem 1.1 of [8].
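
A finite-dimensional shadow of this statement can be observed numerically: for a banded symbol $f(\theta) = \sum_k a_k e^{ik\theta}$, the spectral norms of the finite $n \times n$ Toeplitz sections increase toward $\|f\|_\infty$. The sketch below uses arbitrary example coefficients and is only an illustration, not the infinite-dimensional theorem itself.

```python
import numpy as np
from scipy.linalg import toeplitz

# For a banded symbol f(theta) = sum_k a_k e^{i k theta}, the spectral
# norms of the finite n x n Toeplitz sections approach ||f||_inf from
# below; the coefficients a_k here are arbitrary example values.
a = {0: 2.0, 1: 1.0, -1: 1.0, 2: 0.5, -2: 0.5}

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
f = sum(c * np.exp(1j * k * theta) for k, c in a.items())
sup_norm = np.abs(f).max()                 # here f = 2 + 2cos(t) + cos(2t)

for n in (8, 64, 512):
    col = np.array([a.get(k, 0.0) for k in range(n)])    # a_{i-j}, i >= j
    row = np.array([a.get(-k, 0.0) for k in range(n)])   # a_{i-j}, i < j
    T = toeplitz(col, row)
    print(n, np.linalg.norm(T, 2), sup_norm)  # spectral norm nears sup_norm
```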

See also

Notes

References

Further reading