In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix:

$$\begin{bmatrix}
1 & 2 & 3 & 4 & 5 \\
6 & 1 & 2 & 3 & 4 \\
7 & 6 & 1 & 2 & 3 \\
8 & 7 & 6 & 1 & 2 \\
9 & 8 & 7 & 6 & 1
\end{bmatrix}.$$
Any $n \times n$ matrix $A$ of the form

$$A = \begin{bmatrix}
a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\
a_1 & a_0 & a_{-1} & \ddots & \vdots \\
a_2 & a_1 & a_0 & \ddots & a_{-2} \\
\vdots & \ddots & \ddots & \ddots & a_{-1} \\
a_{n-1} & \cdots & a_2 & a_1 & a_0
\end{bmatrix}$$

is a Toeplitz matrix. If the $i,j$ element of $A$ is denoted $A_{i,j}$, then we have

$$A_{i,j} = A_{i+1,j+1} = a_{i-j}.$$
A Toeplitz matrix is not necessarily square.
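As a concrete sketch (values chosen only for illustration), SciPy's `scipy.linalg.toeplitz` builds such a matrix from its first column and first row, and the diagonal-constant property $A_{i,j} = A_{i+1,j+1}$ can be checked directly:

```python
import numpy as np
from scipy.linalg import toeplitz

# The first column (a_0, a_1, ..., a_{n-1}) and first row
# (a_0, a_{-1}, ..., a_{-(n-1)}) determine the whole matrix,
# since A[i, j] depends only on i - j.
c = np.array([1, 6, 7, 8, 9])   # first column
r = np.array([1, 2, 3, 4, 5])   # first row (r[0] must equal c[0])
A = toeplitz(c, r)
print(A)

# Verify the diagonal-constant property: A[i, j] == A[i+1, j+1].
assert all(A[i, j] == A[i + 1, j + 1]
           for i in range(4) for j in range(4))
```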
A matrix equation of the form

$$Ax = b$$

is called a Toeplitz system if $A$ is a Toeplitz matrix. If $A$ is an $n \times n$ Toeplitz matrix, then the system has at most $2n - 1$ unique values, rather than $n^2$. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case.
Toeplitz systems can be solved by algorithms such as the Schur algorithm or the Levinson algorithm in $O(n^2)$ time. [1] [2] Variants of the latter have been shown to be weakly stable (i.e. they exhibit numerical stability for well-conditioned linear systems). [3] The algorithms can also be used to find the determinant of a Toeplitz matrix in $O(n^2)$ time. [4]
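For instance, SciPy exposes a Levinson-Durbin-based solver as `scipy.linalg.solve_toeplitz`, which takes only the first column and first row of $A$; the following sketch (with arbitrary illustrative numbers) solves a small system and verifies the result against the explicit matrix:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# solve_toeplitz uses Levinson-Durbin recursion (O(n^2)) and takes the
# first column and first row instead of the full n x n matrix.
c = np.array([4.0, 1.0, 0.5, 0.25])  # first column
r = np.array([4.0, 2.0, 1.0, 0.5])   # first row
b = np.array([1.0, 2.0, 3.0, 4.0])

x = solve_toeplitz((c, r), b)

# Cross-check against the dense matrix-vector product.
assert np.allclose(toeplitz(c, r) @ x, b)
```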
A Toeplitz matrix can also be decomposed (i.e. factored) in $O(n^2)$ time. [5] The Bareiss algorithm for an LU decomposition is stable. [6] An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant.
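The following sketch illustrates the determinant-from-LU idea using SciPy's general-purpose `lu_factor`; note this generic routine costs $O(n^3)$, whereas the specialized Bareiss algorithm cited above reaches $O(n^2)$ but is not part of standard SciPy, so the example shows the principle only:

```python
import numpy as np
from scipy.linalg import toeplitz, lu_factor

# Determinant of a Toeplitz matrix via an LU factorization:
# det(A) = det(P) * prod(diag(U)), where det(P) = +-1 from pivot parity
# and L has unit diagonal.
A = toeplitz([4.0, 1.0, 0.5, 0.25], [4.0, 2.0, 1.0, 0.5])
lu, piv = lu_factor(A)

# Each entry with piv[i] != i records one row interchange.
parity = (-1) ** np.count_nonzero(piv != np.arange(len(piv)))
det = parity * np.prod(np.diag(lu))
assert np.isclose(det, np.linalg.det(A))
```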
The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution of $h = (h_1, \ldots, h_m)$ and $x = (x_1, \ldots, x_n)$ can be formulated as:

$$y = h \ast x =
\begin{bmatrix}
h_1 & 0 & \cdots & 0 \\
h_2 & h_1 & \ddots & \vdots \\
\vdots & h_2 & \ddots & 0 \\
h_m & \vdots & \ddots & h_1 \\
0 & h_m & \ddots & h_2 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & h_m
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{bmatrix}.$$
This approach can be extended to compute autocorrelation, cross-correlation, moving average, etc.
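A minimal sketch of this construction (illustrative values; the Toeplitz matrix here is the $(m + n - 1) \times n$ "convolution matrix" of $h$), checked against NumPy's own convolution:

```python
import numpy as np
from scipy.linalg import toeplitz

# Full convolution of h (length m) and x (length n) as a matrix-vector
# product: embed h in an (m + n - 1) x n Toeplitz matrix whose first
# column is h padded with zeros and whose first row is (h[0], 0, ..., 0).
h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])
m, n = len(h), len(x)

col = np.concatenate([h, np.zeros(n - 1)])
row = np.concatenate([[h[0]], np.zeros(n - 1)])
H = toeplitz(col, row)

y = H @ x
assert np.allclose(y, np.convolve(h, x))
```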
A bi-infinite Toeplitz matrix (i.e. entries indexed by $\mathbb{Z} \times \mathbb{Z}$) $A$ induces a linear operator on $\ell^2(\mathbb{Z})$.

The induced operator is bounded if and only if the coefficients of the Toeplitz matrix $A$ are the Fourier coefficients of some essentially bounded function $f$.

In such cases, $f$ is called the symbol of the Toeplitz matrix $A$, and the spectral norm of the Toeplitz matrix $A$ coincides with the $L^\infty$ norm of its symbol. The proof is easy to establish and can be found as Theorem 1.1 of [8].
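A numerical illustration, using finite truncations of a genuinely infinite statement: for the symbol $f(\theta) = 1 + 2\cos\theta$, whose Fourier coefficients are $a_{-1} = a_0 = a_1 = 1$, the spectral norms of the $n \times n$ truncations approach $\|f\|_\infty = 3$ as $n$ grows (example values ours):

```python
import numpy as np
from scipy.linalg import toeplitz

# Finite truncations of the bi-infinite operator with symbol
# f(theta) = 1 + 2*cos(theta), i.e. a_{-1} = a_0 = a_1 = 1, rest 0.
for n in [10, 100, 1000]:
    col = np.zeros(n)
    col[:2] = 1.0              # a_0 = a_1 = 1
    A = toeplitz(col)          # symmetric, so a_{-k} = a_k
    print(n, np.linalg.norm(A, 2))   # spectral norm -> 3.0 = ||f||_inf
```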
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together.
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
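A minimal NumPy illustration of the dimension rule just described (values ours):

```python
import numpy as np

# Dimension rule: (p x q) @ (q x r) -> (p x r); inner sizes must match.
A = np.arange(6).reshape(2, 3)    # 2 x 3
B = np.arange(12).reshape(3, 4)   # 3 x 4
print((A @ B).shape)              # (2, 4)

# Mismatched inner dimensions are undefined and raise an error:
try:
    A @ A                         # (2 x 3) @ (2 x 3)
except ValueError as e:
    print("invalid:", e)
```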
Levinson recursion or Levinson–Durbin recursion is a procedure in linear algebra to recursively calculate the solution to an equation involving a Toeplitz matrix. The algorithm runs in $\Theta(n^2)$ time, which is a strong improvement over Gauss–Jordan elimination, which runs in $\Theta(n^3)$.
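As a sketch of how such a recursion proceeds, the following implements Durbin's special case (the Yule–Walker system $Ty = -r$ for a symmetric positive-definite Toeplitz matrix with unit diagonal), following the classic textbook formulation; the function name and test values are ours:

```python
import numpy as np
from scipy.linalg import toeplitz

def durbin(r):
    """Solve T y = -r, where T = toeplitz([1, r[0], ..., r[n-2]]) is a
    symmetric positive-definite Toeplitz matrix, in O(n^2) time."""
    n = len(r)
    y = np.zeros(n)
    y[0] = -r[0]
    alpha, beta = -r[0], 1.0
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha
        alpha = -(r[k] + r[k - 1::-1] @ y[:k]) / beta
        y[:k] += alpha * y[k - 1::-1]   # update previous coefficients
        y[k] = alpha
    return y

# Check against a dense O(n^3) solve on an autocorrelation-like sequence.
r = 0.5 ** np.arange(1, 6)
T = toeplitz(np.concatenate([[1.0], r[:-1]]))
assert np.allclose(durbin(r), np.linalg.solve(T, -r))
```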
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
In linear algebra, a Hankel matrix, named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example,

$$\begin{bmatrix}
a & b & c & d & e \\
b & c & d & e & f \\
c & d & e & f & g \\
d & e & f & g & h \\
e & f & g & h & i
\end{bmatrix}.$$
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal, and the supradiagonal/upper diagonal. For example, the following matrix is tridiagonal:

$$\begin{bmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{bmatrix}.$$
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a specialization of the tensor product from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.
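A small NumPy example of the resulting block structure (values ours):

```python
import numpy as np

# Kronecker product of two 2x2 matrices gives a 4x4 block matrix:
# each entry a_ij of A is replaced by the block a_ij * B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(np.kron(A, B))
# [[0 1 0 2]
#  [1 0 2 0]
#  [0 3 0 4]
#  [3 0 4 0]]
```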
In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
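A short sketch with SciPy's `scipy.linalg.circulant`, illustrating both the rotation structure and the Toeplitz property (values ours):

```python
import numpy as np
from scipy.linalg import circulant

# A circulant matrix is fully determined by its first column; each row is
# the previous row rotated one position to the right.
c = np.array([1, 2, 3, 4])
C = circulant(c)
print(C)
# [[1 4 3 2]
#  [2 1 4 3]
#  [3 2 1 4]
#  [4 3 2 1]]

# It is Toeplitz: every descending diagonal is constant.
assert all(C[i, j] == C[i + 1, j + 1] for i in range(3) for j in range(3))
```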
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
In linear algebra, a nilpotent matrix is a square matrix N such that $N^k = 0$ for some positive integer $k$.
In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one logarithm. The study of logarithms of matrices leads to Lie theory since when a matrix has a logarithm then it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.
In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size.
In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
In mathematics, a Bunce–Deddens algebra, named after John W. Bunce and James A. Deddens, is a certain type of AT algebra, a direct limit of matrix algebras over the continuous functions on the circle, in which the connecting maps are given by embeddings between families of shift operators with periodic weights.
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.