Complete orthogonal decomposition

In linear algebra, the complete orthogonal decomposition is a matrix decomposition. [1] [2] It is similar to the singular value decomposition, but typically somewhat [3] cheaper to compute and in particular much cheaper and easier to update when the original matrix is slightly altered. [1]

Specifically, the complete orthogonal decomposition factorizes an arbitrary complex matrix $A$ into a product of three matrices, $A = U T V^*$, where $U$ and $V^*$ are unitary matrices and $T$ is a triangular matrix. For a matrix $A$ of rank $r$, the triangular matrix $T$ can be chosen such that only its top-left $r \times r$ block is nonzero, making the decomposition rank-revealing.
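
In block form this reads as follows (a sketch of the block shapes; the label $T_{11}$ for the leading $r \times r$ triangular block is illustrative notation):

$$ A = U \begin{bmatrix} T_{11} & 0 \\ 0 & 0 \end{bmatrix} V^*, \qquad T_{11} \in \mathbb{C}^{r \times r} \text{ triangular and nonsingular}. $$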

For a matrix of size $m \times n$, assuming $m \geq n$, the complete orthogonal decomposition requires $O(mn^2)$ floating point operations and $O(mn)$ auxiliary memory to compute, similar to other rank-revealing decompositions. [1] Crucially however, if a row or column is added or removed, its decomposition can be updated in $O(mn)$ operations. [1]

Because of its form, $A = U T V^*$, the decomposition is also known as the UTV decomposition. [4] Depending on whether a left-triangular or right-triangular matrix is used in place of $T$, it is also referred to as the ULV decomposition [3] or the URV decomposition, [5] respectively.

Construction

The UTV decomposition is usually [6] [7] computed by means of a pair of QR decompositions: one QR decomposition is applied to the matrix from the left, which yields $U$, another applied from the right, which yields $V$, which "sandwiches" the triangular matrix $T$ in the middle.

Let $A$ be an $m \times n$ matrix of rank $r$. One first performs a QR decomposition with column pivoting:

$$ A \Pi = U \begin{bmatrix} R_{11} & R_{12} \\ 0 & 0 \end{bmatrix}, $$

where $\Pi$ is an $n \times n$ permutation matrix, $U$ is an $m \times m$ unitary matrix, $R_{11}$ is an $r \times r$ upper triangular matrix and $R_{12}$ is an $r \times (n-r)$ matrix. One then performs another QR decomposition on the adjoint of $\begin{bmatrix} R_{11} & R_{12} \end{bmatrix}$:

$$ \begin{bmatrix} R_{11} & R_{12} \end{bmatrix}^* = V_1 \begin{bmatrix} L^* \\ 0 \end{bmatrix}, $$

where $V_1$ is an $n \times n$ unitary matrix and $L$ is an $r \times r$ lower (left) triangular matrix. Setting $V = \Pi V_1$ yields the complete orthogonal (UTV) decomposition: [1]

$$ A = U \begin{bmatrix} L & 0 \\ 0 & 0 \end{bmatrix} V^* . $$
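
The following is a minimal NumPy/SciPy sketch of this two-step construction (not a library routine for the complete orthogonal decomposition itself); it assumes the numerical rank is detected from the pivoted QR factor with a simple tolerance, and the variable names mirror the symbols above.

```python
# Sketch: complete orthogonal (UTV) decomposition via two QR decompositions.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m, n, r = 8, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r test matrix

# Step 1: QR with column pivoting, A @ Pi = U @ [[R11, R12], [0, 0]].
U, R, piv = qr(A, pivoting=True)
tol = max(m, n) * np.finfo(float).eps * abs(R[0, 0])
r_est = int(np.sum(np.abs(np.diag(R)) > tol))         # numerical rank

# Step 2: QR of the adjoint of the top block [R11 R12],
#   [R11 R12]^* = V1 @ [[L^*], [0]], so L is r x r lower triangular.
V1, S = qr(R[:r_est, :].conj().T)
L = S[:r_est, :].conj().T

# Assemble T = [[L, 0], [0, 0]] and V = Pi @ V1.
T = np.zeros((m, n))
T[:r_est, :r_est] = L
Pi = np.eye(n)[:, piv]                                 # permutation matrix from pivot indices
V = Pi @ V1

print(np.allclose(A, U @ T @ V.conj().T))              # True: A = U T V^*
```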

Since any diagonal matrix is by construction triangular, the singular value decomposition, $A = U \Sigma V^*$, where $\Sigma$ is a diagonal matrix containing the singular values, is a special case of the UTV decomposition. Computing the SVD is slightly more expensive than the UTV decomposition, [3] but it has a stronger rank-revealing property.
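
As a quick numerical illustration of this relationship (a sketch using NumPy; padding the singular values to the shape of $A$ is only done to match the $U T V^*$ form above):

```python
# Sketch: the SVD read as a UTV decomposition with a diagonal (hence triangular) T.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A)         # full SVD: U is 5x5, Vh is 3x3
T = np.zeros(A.shape)
np.fill_diagonal(T, s)              # Sigma padded to the shape of A

print(np.allclose(A, U @ T @ Vh))   # True: A = U Sigma V^*
```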


References

  1. Golub, Gene; Van Loan, Charles F. (15 October 1996). Matrix Computations (Third ed.). Johns Hopkins University Press. ISBN 0-8018-5414-8.
  2. Björck, Åke (December 1996). Numerical Methods for Least Squares Problems. SIAM. ISBN 0-89871-360-9.
  3. Chandrasekaran, S.; Gu, M.; Pals, T. (January 2006). "A Fast ULV Decomposition Solver for Hierarchically Semiseparable Representations". SIAM Journal on Matrix Analysis and Applications. 28 (3): 603–622. doi:10.1137/S0895479803436652.
  4. Fierro, Ricardo D.; Hansen, Per Christian; Hansen, Peter Søren Kirk (1999). "UTV Tools: Matlab templates for rank-revealing UTV decompositions" (PDF). Numerical Algorithms. 20 (2/3): 165–194. doi:10.1023/A:1019112103049. S2CID 19861950.
  5. Adams, G.; Griffin, M. F.; Stewart, G. W. (1991). "Direction-of-arrival estimation using the rank-revealing URV decomposition". Proceedings of ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing. pp. 1385–1388 vol. 2. doi:10.1109/icassp.1991.150681. hdl:1903/555. ISBN 0-7803-0003-3. S2CID 9201732.
  6. "LAPACK – Complete Orthogonal Factorization". netlib.org.
  7. "Eigen::CompleteOrthogonalDecomposition". Eigen 3.3 reference documentation. Retrieved 2023-08-07.