Tridiagonal matrix


In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:

$$\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}$$
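As a quick illustration (not part of the original text), such a matrix can be assembled in Python/NumPy from its three diagonals; the entries below reproduce the example matrix above:

```python
import numpy as np

# Build a 4x4 tridiagonal matrix from its three diagonals.
main = np.array([1.0, 4.0, 3.0, 3.0])   # main diagonal (length n)
upper = np.array([4.0, 1.0, 4.0])        # superdiagonal (length n-1)
lower = np.array([3.0, 2.0, 1.0])        # subdiagonal   (length n-1)

A = np.diag(main) + np.diag(upper, k=1) + np.diag(lower, k=-1)
print(A)  # every entry off the three central diagonals is zero
```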


The determinant of a tridiagonal matrix is given by the continuant of its elements. [1]

An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

Properties

A tridiagonal matrix is a matrix that is both upper and lower Hessenberg. [2] In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies a_{k,k+1} a_{k+1,k} > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change-of-basis matrix; hence its eigenvalues are real. If the strict inequality is replaced by a_{k,k+1} a_{k+1,k} ≥ 0, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix. [3]

The set of all n × n tridiagonal matrices forms a vector space of dimension 3n − 2.

Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

Determinant

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation. [4] Write f_1 = |a_1| = a_1 (i.e., f_1 is the determinant of the 1-by-1 matrix consisting only of a_1), and let

$$f_n = \det \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
& c_2 & \ddots & \ddots & \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}.$$

The sequence (f_i) is called the continuant and satisfies the recurrence relation

$$f_n = a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}$$

with initial values f_0 = 1 and f_{-1} = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
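To make the recurrence concrete, here is a minimal Python sketch (my own illustration, not from the original article); the array names a, b, c for the diagonal, superdiagonal, and subdiagonal follow the notation above:

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via the continuant recurrence.

    a : main diagonal (length n)
    b : superdiagonal (length n-1)
    c : subdiagonal   (length n-1)
    """
    n = len(a)
    f_prev2, f_prev1 = 1.0, a[0]   # f_0 = 1, f_1 = a_1
    for k in range(1, n):          # build f_2 ... f_n
        f_prev2, f_prev1 = f_prev1, a[k] * f_prev1 - c[k - 1] * b[k - 1] * f_prev2
    return f_prev1

# Check against NumPy on the 4x4 example from the introduction.
a = np.array([1.0, 4.0, 3.0, 3.0])
b = np.array([4.0, 1.0, 4.0])
c = np.array([3.0, 2.0, 1.0])
A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.isclose(tridiag_det(a, b, c), np.linalg.det(A))
```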

Inversion

The inverse of a non-singular tridiagonal matrix T

$$T = \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
& c_2 & \ddots & \ddots & \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}$$

is given by

$$(T^{-1})_{ij} = \begin{cases}
(-1)^{i+j} \, b_i \cdots b_{j-1} \, \theta_{i-1} \, \phi_{j+1} / \theta_n & \text{if } i < j \\
\theta_{i-1} \, \phi_{i+1} / \theta_n & \text{if } i = j \\
(-1)^{i+j} \, c_j \cdots c_{i-1} \, \theta_{j-1} \, \phi_{i+1} / \theta_n & \text{if } i > j
\end{cases}$$

where the θ_i satisfy the recurrence relation

$$\theta_i = a_i \theta_{i-1} - b_{i-1} c_{i-1} \theta_{i-2}, \qquad i = 2, 3, \ldots, n,$$

with initial conditions θ_0 = 1, θ_1 = a_1, and the ϕ_i satisfy

$$\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2}, \qquad i = n-1, \ldots, 1,$$

with initial conditions ϕ_{n+1} = 1 and ϕ_n = a_n. [5] [6]
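As an illustration, the following Python sketch (my own, using the naming convention of the formulas above) evaluates the θ and ϕ recurrences and assembles the inverse entry by entry:

```python
import numpy as np

def tridiag_inverse(a, b, c):
    """Inverse of a non-singular tridiagonal matrix via the theta/phi recurrences.

    a : main diagonal (length n), b : superdiagonal (length n-1),
    c : subdiagonal (length n-1).
    """
    n = len(a)
    # theta[i] = theta_i, with theta_0 = 1 and theta_1 = a_1
    theta = np.zeros(n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    # phi[i] = phi_i, with phi_{n+1} = 1 and phi_n = a_n
    phi = np.zeros(n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    Tinv = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i < j:
                Tinv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(b[i - 1:j - 1])
                                      * theta[i - 1] * phi[j + 1] / theta[n])
            elif i == j:
                Tinv[i - 1, j - 1] = theta[i - 1] * phi[i + 1] / theta[n]
            else:
                Tinv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(c[j - 1:i - 1])
                                      * theta[j - 1] * phi[i + 1] / theta[n])
    return Tinv

# Quick check against numpy.linalg.inv:
# a, b, c = np.array([4., 4., 4.]), np.array([1., 1.]), np.array([1., 1.])
# A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
# assert np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(A))
```

Note that θ_n is the determinant of T, so the construction fails exactly when T is singular.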

Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal [7] or Toeplitz matrices [8] and for the general case as well. [9] [10]

In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa. [11]

Solution of linear system

A system of equations Ax = b for b ∈ ℝⁿ can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm (Thomas algorithm), requiring O(n) operations. [12]
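A minimal Python sketch of the Thomas algorithm (my own illustration, with no pivoting, so it assumes the usual stability conditions such as diagonal dominance):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve T x = d for a tridiagonal T in O(n) operations (no pivoting).

    a : main diagonal (length n), b : superdiagonal (length n-1),
    c : subdiagonal (length n-1), d : right-hand side (length n).
    """
    n = len(a)
    cp = np.zeros(n - 1)   # modified superdiagonal
    dp = np.zeros(n)       # modified right-hand side
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):              # forward elimination
        denom = a[i] - c[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = b[i] / denom
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```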

Eigenvalues

When a tridiagonal matrix is also Toeplitz (constant diagonals a_i = a, b_i = b, c_i = c), there is a simple closed-form solution for its eigenvalues, namely: [13] [14]

$$\lambda_k = a - 2\sqrt{bc}\,\cos\!\left(\frac{k\pi}{n+1}\right), \qquad k = 1, \ldots, n.$$
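A small numerical check of this formula (an illustration of mine, using NumPy; the parameter values are arbitrary, chosen with bc > 0):

```python
import numpy as np

n, a, b, c = 8, 2.0, -1.0, -1.0   # Toeplitz tridiagonal with bc > 0
A = a * np.eye(n) + b * np.eye(n, k=1) + c * np.eye(n, k=-1)

k = np.arange(1, n + 1)
closed_form = a - 2.0 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))

# The closed form reproduces the numerically computed spectrum.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(closed_form))
```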

A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero. [15] Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n²) operations for a matrix of size n × n, although fast algorithms exist which (without parallel computation) require only O(n log n). [16]
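In practice such eigenvalues are computed with library routines that work directly on the two defining diagonals; a short SciPy illustration (my own, with arbitrary entries) is:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

d = np.array([2.0, 2.0, 2.0, 2.0])   # main diagonal
e = np.array([-1.0, -1.0, -1.0])     # off-diagonal of the symmetric matrix

w, v = eigh_tridiagonal(d, e)          # eigenvalues w, eigenvectors v
print(w)
```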

An unreduced symmetric tridiagonal matrix is one in which all off-diagonal elements are nonzero; for such a matrix the eigenvalues are distinct, and the eigenvectors are unique up to a scale factor and mutually orthogonal. [17]

Similarity to symmetric tridiagonal matrix

For a nonsymmetric tridiagonal matrix one can compute the eigendecomposition using a similarity transformation. Given a real, nonsymmetric tridiagonal matrix

$$A = \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
& c_2 & \ddots & \ddots & \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix},$$

where b_i ≠ c_i, assume that each product of off-diagonal entries is strictly positive, b_i c_i > 0, and define a diagonal transformation matrix D by

$$D = \operatorname{diag}(\delta_1, \ldots, \delta_n), \qquad
\delta_i =
\begin{cases}
1 & i = 1, \\[4pt]
\sqrt{\dfrac{c_1 \cdots c_{i-1}}{b_1 \cdots b_{i-1}}} & i = 2, \ldots, n.
\end{cases}$$

The similarity transformation D⁻¹AD then yields a symmetric tridiagonal matrix J: [18]

$$J = D^{-1} A D = \begin{pmatrix}
a_1 & \operatorname{sgn}(b_1)\sqrt{b_1 c_1} & & \\
\operatorname{sgn}(b_1)\sqrt{b_1 c_1} & a_2 & \ddots & \\
& \ddots & \ddots & \operatorname{sgn}(b_{n-1})\sqrt{b_{n-1} c_{n-1}} \\
& & \operatorname{sgn}(b_{n-1})\sqrt{b_{n-1} c_{n-1}} & a_n
\end{pmatrix}.$$

Note that A and J have the same eigenvalues.
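The following Python sketch (an illustration under the assumption b_i c_i > 0, with arbitrary entries) carries out this symmetrizing similarity transformation and confirms that the spectra agree:

```python
import numpy as np

def symmetrize_tridiagonal(a, b, c):
    """Return D and J = D^{-1} A D for a tridiagonal A with b_i * c_i > 0."""
    n = len(a)
    delta = np.ones(n)
    for i in range(1, n):
        delta[i] = delta[i - 1] * np.sqrt(c[i - 1] / b[i - 1])
    D = np.diag(delta)
    A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
    J = np.linalg.inv(D) @ A @ D
    return D, J

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 0.5, 1.0])    # superdiagonal
c = np.array([0.5, 2.0, 4.0])    # subdiagonal, with b_i * c_i > 0
D, J = symmetrize_tridiagonal(a, b, c)

A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(J, J.T)                                  # J is symmetric
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(J).real))      # same spectrum
```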

Computer programming

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step. [19]

A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements and two of length n − 1 containing the subdiagonal and superdiagonal elements.
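For example, SciPy's banded solver accepts essentially this compact storage: the three diagonals are packed into a (3, n) array rather than a full n × n matrix (a short sketch of mine; the values are arbitrary):

```python
import numpy as np
from scipy.linalg import solve_banded

n = 5
d = np.full(n, 2.0)        # main diagonal, length n
du = np.full(n - 1, -1.0)  # superdiagonal, length n - 1
dl = np.full(n - 1, -1.0)  # subdiagonal,   length n - 1
rhs = np.ones(n)

# Banded storage: row 0 = superdiagonal (padded on the left),
# row 1 = diagonal, row 2 = subdiagonal (padded on the right).
ab = np.zeros((3, n))
ab[0, 1:] = du
ab[1, :] = d
ab[2, :-1] = dl

x = solve_banded((1, 1), ab, rhs)   # solves the tridiagonal system in O(n)
```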

Applications

The discretization in space of the one-dimensional diffusion or heat equation

$$\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}$$

using second-order central finite differences results in

$$\frac{d}{dt}\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_N \end{pmatrix}
= \frac{\alpha}{\Delta x^2}
\begin{pmatrix}
-2 & 1 & & \\
1 & -2 & \ddots & \\
& \ddots & \ddots & 1 \\
& & 1 & -2
\end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_N \end{pmatrix}$$

with discretization constant Δx. The matrix is tridiagonal with a_i = −2 on the diagonal and b_i = c_i = 1 on the off-diagonals. Note: no boundary conditions are used here.
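A short Python sketch of this semi-discretization (my own illustration; the values of N, α, and Δx are arbitrary):

```python
import numpy as np

N, alpha, dx = 10, 1.0, 0.1

# Tridiagonal coefficient matrix of the semi-discretized heat equation:
# -2 on the diagonal, 1 on both off-diagonals, scaled by alpha / dx**2.
# No boundary conditions are applied.
L = (alpha / dx**2) * (np.diag(np.full(N, -2.0))
                       + np.diag(np.ones(N - 1), 1)
                       + np.diag(np.ones(N - 1), -1))

u = np.sin(np.pi * np.linspace(0.0, 1.0, N))   # some initial profile
du_dt = L @ u                                   # right-hand side du/dt = L u
```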

See also

Notes

  1. Thomas Muir (1960). A treatise on the theory of determinants . Dover Publications. pp.  516–525.
  2. Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 28. ISBN   0521386322.
  3. Horn & Johnson, page 174
  4. El-Mikkawy, M. E. A. (2004). "On the inverse of a general tridiagonal matrix". Applied Mathematics and Computation. 150 (3): 669–679. doi:10.1016/S0096-3003(03)00298-4.
  5. Da Fonseca, C. M. (2007). "On the eigenvalues of some tridiagonal matrices". Journal of Computational and Applied Mathematics. 200: 283–286. doi: 10.1016/j.cam.2005.08.047 .
  6. Usmani, R. A. (1994). "Inversion of a tridiagonal jacobi matrix". Linear Algebra and its Applications. 212–213: 413–414. doi: 10.1016/0024-3795(94)90414-6 .
  7. Hu, G. Y.; O'Connell, R. F. (1996). "Analytical inversion of symmetric tridiagonal matrices". Journal of Physics A: Mathematical and General. 29 (7): 1511. doi:10.1088/0305-4470/29/7/020.
  8. Huang, Y.; McColl, W. F. (1997). "Analytical inversion of general tridiagonal matrices". Journal of Physics A: Mathematical and General. 30 (22): 7919. doi:10.1088/0305-4470/30/22/026.
  9. Mallik, R. K. (2001). "The inverse of a tridiagonal matrix". Linear Algebra and its Applications. 325: 109–139. doi: 10.1016/S0024-3795(00)00262-7 .
  10. Kılıç, E. (2008). "Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions". Applied Mathematics and Computation. 197: 345–357. doi:10.1016/j.amc.2007.07.046.
  11. Raf Vandebril; Marc Van Barel; Nicola Mastronardi (2008). Matrix Computations and Semiseparable Matrices. Volume I: Linear Systems. JHU Press. Theorem 1.38, p. 41. ISBN   978-0-8018-8714-7.
  12. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). The Johns Hopkins University Press. ISBN   0-8018-5414-8.
  13. Noschese, S.; Pasquini, L.; Reichel, L. (2013). "Tridiagonal Toeplitz matrices: Properties and novel applications". Numerical Linear Algebra with Applications. 20 (2): 302. doi:10.1002/nla.1811.
  14. This can also be written as a + 2√(bc) cos(kπ/(n + 1)) because cos(x) = -cos(π - x), as is done in: Kulkarni, D.; Schmidt, D.; Tsui, S. K. (1999). "Eigenvalues of tridiagonal pseudo-Toeplitz matrices" (PDF). Linear Algebra and its Applications. 297: 63. doi:10.1016/S0024-3795(99)00114-7.
  15. Parlett, B.N. (1980). The Symmetric Eigenvalue Problem. Prentice Hall, Inc.
  16. Coakley, E.S.; Rokhlin, V. (2012). "A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices". Applied and Computational Harmonic Analysis . 34 (3): 379–414. doi: 10.1016/j.acha.2012.06.003 .
  17. Dhillon, Inderjit Singh. A New O(n²) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem (PDF). p. 8.
  18. "www.math.hkbu.edu.hk math lecture" (PDF).
  19. Eidelman, Yuli; Gohberg, Israel; Gemignani, Luca (2007-01-01). "On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms". Linear Algebra and its Applications. 420 (1): 86–101. doi: 10.1016/j.laa.2006.06.028 . ISSN   0024-3795.
