Matrix decomposition

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

Example

In numerical analysis, different decompositions are used to implement efficient matrix algorithms.

For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The triangular systems Ly = b and Ux = y require fewer additions and multiplications to solve than the original system Ax = b, though one might require significantly more digits in inexact arithmetic such as floating point.
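
As a minimal, illustrative sketch (the matrix A and right-hand side b below are made-up values, not taken from the article), the following Python code uses SciPy's LU routines to factorize A once and then solve the two triangular systems:

    # Minimal sketch: solving Ax = b through an LU factorization (illustrative data).
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    b = np.array([10.0, 12.0])

    # lu_factor returns the packed L and U factors plus pivot indices (PA = LU).
    lu, piv = lu_factor(A)

    # lu_solve performs the forward substitution Ly = Pb followed by the
    # back substitution Ux = y; each triangular solve is cheap.
    x = lu_solve((lu, piv), b)

    print(x)                      # solution of Ax = b
    print(np.allclose(A @ x, b))  # True: the factored solve reproduces b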

Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Qᵀb = c, and the system Rx = c is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
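
The same small system can be solved along the QR route described above; this sketch (again with made-up data) forms c = Qᵀb and then back-substitutes in Rx = c:

    # Minimal sketch: solving Ax = b through the QR decomposition (illustrative data).
    import numpy as np
    from scipy.linalg import solve_triangular

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    b = np.array([10.0, 12.0])

    Q, R = np.linalg.qr(A)        # A = QR, Q orthogonal, R upper triangular

    c = Q.T @ b                   # Rx = Q^T b = c, since Q^T Q = I
    x = solve_triangular(R, c)    # back substitution on the upper triangular system

    print(x)
    print(np.allclose(A @ x, b))  # True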

LU decomposition

LU reduction

Block LU decomposition

Rank factorization

Cholesky decomposition

QR decomposition

RRQR factorization

Interpolative decomposition

Eigendecomposition

Jordan decomposition

The Jordan normal form and the Jordan–Chevalley decomposition

Schur decomposition

Real Schur decomposition

QZ decomposition

Takagi's factorization

Singular value decomposition

Scale-invariant decompositions

This refers to variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling.

Analogous scale-invariant decompositions can be derived from other matrix decompositions, e.g., to obtain scale-invariant eigenvalues.[3][4]
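
To illustrate the property these variants address (a small assumed example, not taken from the cited papers), the ordinary SVD is not invariant under diagonal scaling: rescaling the rows or columns of a matrix changes its singular values.

    # Illustration: ordinary singular values change under diagonal scaling.
    import numpy as np

    A  = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
    D1 = np.diag([10.0, 0.1])     # arbitrary nonsingular diagonal scalings
    D2 = np.diag([0.5, 2.0])

    s_original = np.linalg.svd(A, compute_uv=False)
    s_scaled   = np.linalg.svd(D1 @ A @ D2, compute_uv=False)

    print(s_original)   # singular values of A
    print(s_scaled)     # generally different: the SVD of D1 A D2 is not that of A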

Other decompositions

Polar decomposition

Algebraic polar decomposition

Mostow's decomposition

Sinkhorn normal form

Sectoral decomposition

Williamson's normal form

Generalizations

There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices (continuous matrices).[13] A 'quasimatrix' is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a 'cmatrix' is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator.

These factorizations are based on early work by Fredholm (1903), Hilbert (1904) and Schmidt (1907). For an account, and a translation to English of the seminal papers, see Stewart (2011).

Notes

  1. Simon & Blume 1994, Chapter 7.
  2. Piziak, R.; Odell, P. L. (1 June 1999). "Full Rank Factorization of Matrices". Mathematics Magazine. 72 (3): 193. doi:10.2307/2690882. JSTOR 2690882.
  3. Uhlmann, J. K. (2018), "A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations", SIAM Journal on Matrix Analysis and Applications, 39 (2): 781–800, doi:10.1137/17M113890X
  4. Uhlmann, J. K. (2018), "A Rank-Preserving Generalized Matrix Inverse for Consistency with Respect to Similarity", IEEE Control Systems Letters, doi:10.1109/LCSYS.2018.2854240, ISSN 2475-1456
  5. Choudhury & Horn 1987, pp. 219–225
  6. Bhatia, Rajendra (2013-11-15). "The bipolar decomposition". Linear Algebra and Its Applications. 439 (10): 3031–3037. doi:10.1016/j.laa.2013.09.006.
  7. Horn & Merino 1995, pp. 43–92
  8. Mostow, G. D. (1955), Some new decomposition theorems for semi-simple groups, Mem. Amer. Math. Soc., 14, American Mathematical Society, pp. 31–54
  9. Nielsen, Frank; Bhatia, Rajendra (2012). Matrix Information Geometry. Springer. p. 224. arXiv:1007.4402. doi:10.1007/978-3-642-30232-9. ISBN 9783642302329.
  10. Zhang, Fuzhen (30 June 2014). "A matrix decomposition and its applications" (PDF). Linear and Multilinear Algebra. 63 (10): 2033–2042. doi:10.1080/03081087.2014.933219.
  11. Drury, S. W. (November 2013). "Fischer determinantal inequalities and Higham's Conjecture". Linear Algebra and Its Applications. 439 (10): 3129–3133. doi:10.1016/j.laa.2013.08.031.
  12. Idel, Martin; Soto Gaona, Sebastián; Wolf, Michael M. (2017-07-15). "Perturbation bounds for Williamson's symplectic normal form". Linear Algebra and Its Applications. 525: 45–58. arXiv:1609.01338. doi:10.1016/j.laa.2017.03.013.
  13. Townsend & Trefethen 2015
