In linear algebra, a **Toeplitz matrix** or **diagonal-constant matrix**, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix:

$$\begin{bmatrix}
1 & 0 & -1 & -2 & -3 \\
4 & 1 & 0 & -1 & -2 \\
5 & 4 & 1 & 0 & -1 \\
6 & 5 & 4 & 1 & 0 \\
7 & 6 & 5 & 4 & 1
\end{bmatrix}$$

- Solving a Toeplitz system
- General properties
- Discrete convolution
- Infinite Toeplitz matrix
- See also
- Notes
- References
- Further reading

Any matrix of the form

$$A = \begin{bmatrix}
a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\
a_1 & a_0 & a_{-1} & \ddots & \vdots \\
a_2 & a_1 & a_0 & \ddots & a_{-2} \\
\vdots & \ddots & \ddots & \ddots & a_{-1} \\
a_{n-1} & \cdots & a_2 & a_1 & a_0
\end{bmatrix}$$

is a **Toeplitz matrix**. If the $i,j$ element of $A$ is denoted $A_{i,j}$, then we have

$$A_{i,j} = A_{i+1,j+1} = a_{i-j}.$$

A Toeplitz matrix is not necessarily square.
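The constant-diagonal definition translates directly into code. Below is a minimal pure-NumPy sketch; the helper function `toeplitz` is defined here for illustration (it mirrors the interface of SciPy's `scipy.linalg.toeplitz`) and builds a rectangular example to show that Toeplitz matrices need not be square:

```python
import numpy as np

def toeplitz(first_col, first_row):
    """Build a (possibly rectangular) Toeplitz matrix from its first
    column and first row: A[i, j] depends only on i - j."""
    first_col = np.asarray(first_col)
    first_row = np.asarray(first_row)
    m, n = len(first_col), len(first_row)
    A = np.empty((m, n), dtype=np.result_type(first_col, first_row))
    for i in range(m):
        for j in range(n):
            A[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return A

# A 4 x 5 (non-square) Toeplitz matrix.
A = toeplitz([1, 4, 5, 6], [1, 0, -1, -2, -3])

# Every descending diagonal is constant: A[i, j] == A[i+1, j+1].
assert all(A[i, j] == A[i + 1, j + 1]
           for i in range(3) for j in range(4))
```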

## Solving a Toeplitz system

A matrix equation of the form

$$Ax = b$$

is called a **Toeplitz system** if $A$ is a Toeplitz matrix. If $A$ is an $n \times n$ Toeplitz matrix, then the system has at most $2n - 1$ unique values, rather than $n^2$. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case.

Toeplitz systems can be solved by algorithms such as the Schur algorithm or the Levinson algorithm in $O(n^2)$ time.^{[1]}^{[2]} Variants of the latter have been shown to be weakly stable (i.e., they exhibit numerical stability for well-conditioned linear systems).^{[3]} The algorithms can also be used to find the determinant of a Toeplitz matrix in $O(n^2)$ time.^{[4]}

A Toeplitz matrix can also be decomposed (i.e., factored) in $O(n^2)$ time.^{[5]} The Bareiss algorithm for an LU decomposition is stable.^{[6]} An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant.
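As a concrete illustration of $O(n^2)$ solving, here is a sketch of the Levinson recursion for the symmetric case (the function name `levinson_solve` is ours, not from the cited sources; in practice one would reach for a library routine such as SciPy's `scipy.linalg.solve_toeplitz`):

```python
import numpy as np

def levinson_solve(r, b):
    """Solve T x = b, where T is the symmetric Toeplitz matrix with
    first column r, in O(n^2) time via the Levinson recursion."""
    r = np.asarray(r, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    f = np.array([1.0 / r[0]])        # forward vector: T_k f = e_1
    x = np.array([b[0] / r[0]])       # solution of the k x k subsystem
    for k in range(1, n):
        # Errors incurred by zero-padding f and x to length k + 1.
        eps_f = r[1:k + 1][::-1] @ f  # sum_i r[k - i] * f[i]
        eps_x = r[1:k + 1][::-1] @ x
        alpha = 1.0 / (1.0 - eps_f ** 2)
        f = alpha * np.append(f, 0.0) - alpha * eps_f * np.append(0.0, f[::-1])
        back = f[::-1]                # backward vector (by symmetry of T)
        x = np.append(x, 0.0) + (b[k] - eps_x) * back
    return x

# Example: a diagonally dominant (hence positive definite) Toeplitz system.
r = np.array([4.0, 1.0, 0.5, 0.2])   # first column (= first row) of T
b = np.array([1.0, 2.0, 3.0, 4.0])
T = np.array([[r[abs(i - j)] for j in range(4)] for i in range(4)])
assert np.allclose(T @ levinson_solve(r, b), b)
```

The recursion extends the solution one row at a time, using the forward and backward vectors to cancel the error introduced by zero-padding, which is where the quadratic (rather than cubic) cost comes from.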

## General properties

- An $n \times n$ Toeplitz matrix may be defined as a matrix $A$ where $A_{i,j} = c_{i-j}$, for constants $c_{1-n}, \ldots, c_{n-1}$. The set of $n \times n$ Toeplitz matrices is a subspace of the vector space of $n \times n$ matrices (under matrix addition and scalar multiplication).
- Two Toeplitz matrices may be added in $O(n)$ time (by storing only one value of each diagonal) and multiplied in $O(n^2)$ time.
- Toeplitz matrices are persymmetric. Symmetric Toeplitz matrices are both centrosymmetric and bisymmetric.
- Toeplitz matrices are also closely connected with Fourier series, because the multiplication operator by a trigonometric polynomial, compressed to a finite-dimensional space, can be represented by such a matrix. Similarly, one can represent linear convolution as multiplication by a Toeplitz matrix.
- Toeplitz matrices commute asymptotically: they diagonalize in the same basis as the row and column dimension tends to infinity.
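The $O(n)$ addition claim follows from the compact representation: an $n \times n$ Toeplitz matrix is determined by its $2n - 1$ diagonal values, so adding two of them means adding two short vectors. A small sketch (the storage convention `d[k + n - 1]` for diagonal $k = i - j$ is our own choice):

```python
import numpy as np

def toeplitz_from_diagonals(d):
    """Expand a length-(2n-1) array of diagonal values into a full
    n x n Toeplitz matrix, using A[i, j] = d[(i - j) + n - 1]."""
    n = (len(d) + 1) // 2
    return np.array([[d[(i - j) + n - 1] for j in range(n)]
                     for i in range(n)])

n = 3
a = np.arange(1, 2 * n)              # diagonals of a Toeplitz matrix A
b = np.arange(10, 10 + 2 * n - 1)    # diagonals of a Toeplitz matrix B
c = a + b                            # O(n) addition in compact form

# Expanding first and then adding gives the same full matrix.
assert np.array_equal(toeplitz_from_diagonals(c),
                      toeplitz_from_diagonals(a) + toeplitz_from_diagonals(b))
```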

- For symmetric Toeplitz matrices, there is the decomposition

  $$\frac{1}{a_0} A = G G^{\mathrm{T}} - (G - I)(G - I)^{\mathrm{T}},$$

  where $G$ is the lower triangular part of $\frac{1}{a_0} A$.

- The inverse of a nonsingular symmetric Toeplitz matrix has the representation

  $$A^{-1} = \frac{1}{\alpha_0} \left(B B^{\mathrm{T}} - C C^{\mathrm{T}}\right),$$

  where $B$ and $C$ are lower triangular Toeplitz matrices, and $C$ is a strictly lower triangular matrix.^{[7]}

## Discrete convolution

The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution of $h$ and $x$ can be formulated as:

$$y = h \ast x =
\begin{bmatrix}
h_1 & 0 & \cdots & 0 \\
h_2 & h_1 & \ddots & \vdots \\
\vdots & h_2 & \ddots & 0 \\
h_m & \vdots & \ddots & h_1 \\
0 & h_m & \ddots & h_2 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & h_m
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{bmatrix}$$

This approach can be extended to compute autocorrelation, cross-correlation, moving average etc.
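A minimal sketch of the construction, checked against NumPy's direct convolution (the helper `convolution_matrix` is defined here for illustration; SciPy ships a similar `scipy.linalg.convolution_matrix`):

```python
import numpy as np

def convolution_matrix(h, n):
    """Return the (m + n - 1) x n Toeplitz matrix whose product with a
    length-n vector x equals the full convolution h * x."""
    m = len(h)
    H = np.zeros((m + n - 1, len(h) and n))
    for j in range(n):
        H[j:j + m, j] = h          # column j is h shifted down by j rows
    return H

h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])

# Matrix form of convolution agrees with the direct computation.
assert np.allclose(convolution_matrix(h, len(x)) @ x, np.convolve(h, x))
```

Each column of the matrix is a shifted copy of $h$, so the matrix is Toeplitz by construction; correlation variants follow by reversing or conjugating $h$.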

## Infinite Toeplitz matrix

A bi-infinite Toeplitz matrix (i.e., entries indexed by $\mathbb{Z} \times \mathbb{Z}$) $A$ induces a linear operator on $\ell^2(\mathbb{Z})$.

The induced operator is bounded if and only if the coefficients of the Toeplitz matrix $A$ are the Fourier coefficients of some essentially bounded function $f$.

In such cases, $f$ is called the **symbol** of the Toeplitz matrix $A$, and the spectral norm of the Toeplitz matrix coincides with the $L^\infty$ norm of its symbol. The proof is easy to establish and can be found as Theorem 1.1 of Böttcher & Grudsky (2012).^{[8]}
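A small numerical illustration of the norm statement, using the symbol $f(\theta) = \cos\theta$ (Fourier coefficients $a_{\pm 1} = \tfrac{1}{2}$, all others zero, so $\|f\|_\infty = 1$): the finite sections are tridiagonal Toeplitz matrices, and their spectral norms $\cos(\pi/(n+1))$ increase toward the symbol's norm as $n$ grows.

```python
import numpy as np

def truncated_toeplitz(n):
    """n x n finite section of the Toeplitz operator with symbol
    f(theta) = cos(theta), i.e. 1/2 on the two adjacent diagonals."""
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = 0.5
    A[idx + 1, idx] = 0.5
    return A

# Spectral norms of the finite sections approach ||f||_inf = 1 from below.
norms = [np.linalg.norm(truncated_toeplitz(n), 2) for n in (5, 20, 80)]
assert all(s < t for s, t in zip(norms, norms[1:]))  # monotone in n
assert norms[-1] < 1.0                               # never exceeds 1
```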

## See also

- Circulant matrix, a square Toeplitz matrix with the additional property that $a_i = a_{i+n}$
- Hankel matrix, an "upside down" (i.e., row-reversed) Toeplitz matrix
- Szegő limit theorems

## Notes

1. Press et al. 2007, §2.8.2 (Toeplitz matrices)
2. Hayes 1996, Chapter 5.2.6
3. Krishna & Wang 1993
4. Monahan 2011, §4.5 (Toeplitz systems)
5. Brent 1999
6. Bojanczyk et al. 1995
7. Mukherjee & Maiti 1988
8. Böttcher & Grudsky 2012

## References

- Bojanczyk, A. W.; Brent, R. P.; de Hoog, F. R.; Sweet, D. R. (1995), "On the stability of the Bareiss and related Toeplitz factorization algorithms", *SIAM Journal on Matrix Analysis and Applications*, **16**: 40–57, arXiv:1004.5510, doi:10.1137/S0895479891221563, S2CID 367586
- Böttcher, Albrecht; Grudsky, Sergei M. (2012), *Toeplitz Matrices, Asymptotic Linear Algebra, and Functional Analysis*, Birkhäuser, ISBN 978-3-0348-8395-5
- Brent, R. P. (1999), "Stability of fast algorithms for structured linear systems", in Kailath, T.; Sayed, A. H. (eds.), *Fast Reliable Algorithms for Matrices with Structure*, SIAM, pp. 103–116, doi:10.1137/1.9781611971354.ch4, hdl:1885/40746, S2CID 13905858
- Chan, R. H.-F.; Jin, X.-Q. (2007), *An Introduction to Iterative Toeplitz Solvers*, SIAM, doi:10.1137/1.9780898718850, ISBN 978-0-89871-636-8
- Chandrasekeran, S.; Gu, M.; Sun, X.; Xia, J.; Zhu, J. (2007), "A superfast algorithm for Toeplitz systems of linear equations", *SIAM Journal on Matrix Analysis and Applications*, **29**(4): 1247–1266, CiteSeerX 10.1.1.116.3297, doi:10.1137/040617200
- Chen, W. W.; Hurvich, C. M.; Lu, Y. (2006), "On the correlation matrix of the discrete Fourier transform and the fast solution of large Toeplitz systems for long-memory time series", *Journal of the American Statistical Association*, **101**(474): 812–822, CiteSeerX 10.1.1.574.4394, doi:10.1198/016214505000001069, S2CID 55893963
- Hayes, Monson H. (1996), *Statistical Digital Signal Processing and Modeling*, John Wiley & Sons, ISBN 0-471-59431-8
- Krishna, H.; Wang, Y. (1993), "The Split Levinson Algorithm is weakly stable", *SIAM Journal on Numerical Analysis*, **30**(5): 1498–1508, doi:10.1137/0730078
- Monahan, J. F. (2011), *Numerical Methods of Statistics*, Cambridge University Press
- Mukherjee, Bishwa Nath; Maiti, Sadhan Samar (1988), "On some properties of positive definite Toeplitz matrices and their possible applications" (PDF), *Linear Algebra and Its Applications*, **102**: 211–240, doi:10.1016/0024-3795(88)90326-6
- Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), *Numerical Recipes: The Art of Scientific Computing* (Third ed.), Cambridge University Press, ISBN 978-0-521-88068-8
- Stewart, M. (2003), "A superfast Toeplitz solver with improved numerical stability", *SIAM Journal on Matrix Analysis and Applications*, **25**(3): 669–693, doi:10.1137/S089547980241791X, S2CID 15717371
- Yang, Zai; Xie, Lihua; Stoica, Petre (2016), "Vandermonde decomposition of multilevel Toeplitz matrices with application to multidimensional super-resolution", *IEEE Transactions on Information Theory*, **62**(6): 3685–3701, arXiv:1505.02510, doi:10.1109/TIT.2016.2553041, S2CID 6291005

## Further reading

- Bareiss, E. H. (1969), "Numerical solution of linear equations with Toeplitz and vector Toeplitz matrices", *Numerische Mathematik*, **13**(5): 404–424, doi:10.1007/BF02163269, S2CID 121761517
- Goldreich, O.; Tal, A. (2018), "Matrix rigidity of random Toeplitz matrices", *Computational Complexity*, **27**(2): 305–350, doi:10.1007/s00037-016-0144-9, S2CID 253641700
- Golub, G. H.; van Loan, C. F. (1996), *Matrix Computations*, Johns Hopkins University Press, §4.7 (Toeplitz and Related Systems)
- Gray, R. M., *Toeplitz and Circulant Matrices: A Review*, Now Publishers, doi:10.1561/0100000006
- Noor, F.; Morgera, S. D. (1992), "Construction of a Hermitian Toeplitz matrix from an arbitrary set of eigenvalues", *IEEE Transactions on Signal Processing*, **40**(8): 2093–2094, Bibcode:1992ITSP...40.2093N, doi:10.1109/78.149978
- Pan, Victor Y. (2001), *Structured Matrices and Polynomials: Unified Superfast Algorithms*, Birkhäuser, ISBN 978-0817642402
- Ye, Ke; Lim, Lek-Heng (2016), "Every matrix is a product of Toeplitz matrices", *Foundations of Computational Mathematics*, **16**(3): 577–598, arXiv:1307.5132, doi:10.1007/s10208-015-9254-z, S2CID 254166943

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
