Bidiagonalization


Bidiagonalization is a unitary (orthogonal) matrix decomposition of the form U* A V = B, where U and V are unitary (orthogonal) matrices, * denotes the Hermitian transpose, and B is upper bidiagonal, i.e. its only nonzero entries lie on the main diagonal and the first superdiagonal. A is allowed to be rectangular.

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

In mathematics, a complex square matrix U is unitary if its conjugate transpose U* is also its inverse, that is, if U* U = U U* = I.

An orthogonal matrix is a square matrix Q whose columns and rows are orthogonal unit vectors, i.e. Q^T Q = Q Q^T = I.

For dense matrices, the left and right unitary matrices are obtained by a series of Householder reflections applied alternately from the left and the right. This is known as Golub-Kahan bidiagonalization. For large matrices, they are computed iteratively by the Lanczos method; this variant is referred to as the Golub-Kahan-Lanczos method.
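The following is a minimal Python/NumPy sketch of dense Golub-Kahan bidiagonalization, assuming a real m-by-n matrix with m >= n. The names house and golub_kahan_bidiag are ours rather than a library API, and the reflectors are formed explicitly for readability; production code applies them implicitly.

    import numpy as np

    def house(x):
        """Householder vector v and coefficient beta such that
        (I - beta * outer(v, v)) @ x has zeros below its first entry."""
        v = x.astype(float).copy()
        sigma = np.linalg.norm(x)
        if sigma == 0.0:
            return v, 0.0
        v[0] += sigma if x[0] >= 0 else -sigma  # sign choice avoids cancellation
        return v, 2.0 / np.dot(v, v)

    def golub_kahan_bidiag(A):
        """Reduce real m x n A (m >= n) to upper bidiagonal B with U^T A V = B,
        applying Householder reflections alternately from the left and right."""
        B = A.astype(float).copy()
        m, n = B.shape
        U, V = np.eye(m), np.eye(n)
        for k in range(n):
            # Left reflection: zero out B[k+1:, k], below the diagonal.
            v, beta = house(B[k:, k])
            H = np.eye(m - k) - beta * np.outer(v, v)
            B[k:, :] = H @ B[k:, :]
            U[:, k:] = U[:, k:] @ H  # H is symmetric, so this accumulates U
            if k < n - 2:
                # Right reflection: zero out B[k, k+2:], right of the superdiagonal.
                v, beta = house(B[k, k + 1:])
                H = np.eye(n - k - 1) - beta * np.outer(v, v)
                B[:, k + 1:] = B[:, k + 1:] @ H
                V[:, k + 1:] = V[:, k + 1:] @ H
        return U, B, V

    # Check U^T A V = B and that B is upper bidiagonal.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))
    U, B, V = golub_kahan_bidiag(A)
    assert np.allclose(U.T @ A @ V, B)
    assert np.allclose(B, np.triu(B) - np.triu(B, 2))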

The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the m most useful eigenvalues and eigenvectors of an n x n Hermitian matrix, where m is often, but not necessarily, much smaller than n. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading. This was achieved using a method for purifying the vectors to any degree of accuracy which, when not performed, produced a series of vectors highly contaminated by those associated with the lowest natural frequencies. In their original work, these authors also suggested how to select a starting vector and an empirically determined method for choosing m, the reduced number of vectors. Soon thereafter their work was followed by Paige, who also provided an error analysis. In 1988, Ojalvo produced a more detailed history of this algorithm and an efficient eigenvalue error test. Currently, the method is widely used in a variety of technical fields and has seen a number of variations.
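Applied to bidiagonalization, the Lanczos recurrence can be sketched as follows in Python/NumPy; gkl_bidiag is our name, the sketch assumes no breakdown (all alpha and beta stay positive), and full reorthogonalization stands in for the stabilization ("purification") discussed above.

    import numpy as np

    def gkl_bidiag(A, v1, k, reorth=True):
        """k steps of Golub-Kahan-Lanczos bidiagonalization from start vector v1.
        Returns U (m x k), V (n x k), alpha, beta with U^T A V upper bidiagonal:
        alpha on the diagonal, beta on the superdiagonal."""
        m, n = A.shape
        U, V = np.zeros((m, k)), np.zeros((n, k))
        alpha, beta = np.zeros(k), np.zeros(k - 1)
        V[:, 0] = v1 / np.linalg.norm(v1)
        for j in range(k):
            u = A @ V[:, j]
            if j > 0:
                u -= beta[j - 1] * U[:, j - 1]
            if reorth:  # full reorthogonalization against earlier vectors
                u -= U[:, :j] @ (U[:, :j].T @ u)
            alpha[j] = np.linalg.norm(u)
            U[:, j] = u / alpha[j]
            if j < k - 1:
                v = A.T @ U[:, j] - alpha[j] * V[:, j]
                if reorth:
                    v -= V[:, :j + 1] @ (V[:, :j + 1].T @ v)
                beta[j] = np.linalg.norm(v)
                V[:, j + 1] = v / beta[j]
        return U, V, alpha, beta

    # For k = n the recurrence reproduces a full upper bidiagonal factor.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((7, 5))
    U, V, alpha, beta = gkl_bidiag(A, rng.standard_normal(5), 5)
    assert np.allclose(U.T @ A @ V, np.diag(alpha) + np.diag(beta, 1))

In practice only k much smaller than min(m, n) steps are taken, and the singular values of the small bidiagonal matrix approximate the extreme singular values of A.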

Bidiagonalization has a very similar structure to the singular value decomposition (SVD). However, it can be computed in a finite number of operations, while the SVD requires an iterative scheme to find the singular values. This is because the squared singular values are the roots of the characteristic polynomial of A* A (where A is assumed to be tall), and by the Abel-Ruffini theorem the roots of a general polynomial of degree five or higher cannot be computed in finitely many arithmetic operations.
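Since U and V are unitary, B has exactly the same singular values as A, which is why SVD routines bidiagonalize first. A quick check, reusing golub_kahan_bidiag and the matrix A from the sketch above:

    # Orthogonal transforms preserve singular values (continues the example above).
    U, B, V = golub_kahan_bidiag(A)
    assert np.allclose(np.linalg.svd(A, compute_uv=False),
                       np.linalg.svd(B, compute_uv=False))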

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. Its coefficients include the determinant and the trace of the matrix (up to sign). The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of the endomorphism with respect to any basis; it does not depend on the choice of basis. The characteristic equation is the equation obtained by setting the characteristic polynomial equal to zero.

Related Research Articles

Principal component analysis: conversion of a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. If there are n observations with p variables, then the number of distinct principal components is min(n - 1, p). This transformation is defined in such a way that the first principal component has the largest possible variance, and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
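A minimal sketch of PCA via the SVD of the centered data matrix (the variable names here are illustrative, not a library API):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((100, 5))         # 100 observations, 5 variables
    Xc = X - X.mean(axis=0)                   # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt                           # principal axes, one per row
    scores = Xc @ Vt.T                        # observations in the new basis
    explained_variance = s**2 / (len(X) - 1)  # variance along each component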

Symmetric matrix: matrix equal to its transpose

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, A is symmetric if and only if A = A^T.

Singular value decomposition: matrix decomposition

In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix to any matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics.

In linear algebra, a QR decomposition of a matrix is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm.
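For instance, a least squares problem min ||Ax - b|| can be solved through the QR decomposition (a short sketch; names are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 3))
    b = rng.standard_normal(8)
    Q, R = np.linalg.qr(A)           # reduced QR: Q is 8 x 3, R is 3 x 3 upper triangular
    x = np.linalg.solve(R, Q.T @ b)  # solve the triangular system R x = Q^T b
    assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])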

In mathematics, and in particular linear algebra, a pseudoinverse A+ of a matrix A is a generalization of the inverse matrix. The most widely known type of matrix pseudoinverse is the Moore–Penrose inverse, which was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
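The Moore–Penrose inverse is characterized by four conditions, which can be checked numerically (a sketch using NumPy's pinv):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3))
    P = np.linalg.pinv(A)                 # Moore-Penrose inverse A+
    assert np.allclose(A @ P @ A, A)      # A A+ A = A
    assert np.allclose(P @ A @ P, P)      # A+ A A+ = A+
    assert np.allclose((A @ P).T, A @ P)  # A A+ is symmetric
    assert np.allclose((P @ A).T, P @ A)  # A+ A is symmetric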

In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
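The basic unshifted iteration can be sketched in a few lines; practical implementations add Hessenberg reduction and shifts, and the test eigenvalues below are chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(5)
    Qr, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    A0 = Qr @ np.diag([4.0, 3.0, 2.0, 1.0]) @ Qr.T  # symmetric, eigenvalues 4, 3, 2, 1
    M = A0.copy()
    for _ in range(200):
        Q, R = np.linalg.qr(M)
        M = R @ Q  # M_new = Q^T M Q, a similarity transform, so eigenvalues are kept
    assert np.allclose(np.sort(np.diag(M)), [1.0, 2.0, 3.0, 4.0])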

In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition. The two versions differ because one version decomposes two matrices and the other version uses a set of constraints imposed on the left and right singular vectors.

In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
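A textbook Arnoldi sketch (arnoldi is our name); the columns of Q span the Krylov subspace and H is the small upper Hessenberg matrix satisfying A Q_k = Q_{k+1} H:

    import numpy as np

    def arnoldi(A, b, k):
        """k Arnoldi steps: orthonormal basis Q of span{b, Ab, ..., A^k b}
        and (k+1) x k upper Hessenberg H with A @ Q[:, :k] equal to Q @ H."""
        n = len(b)
        Q = np.zeros((n, k + 1))
        H = np.zeros((k + 1, k))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(k):
            w = A @ Q[:, j]
            for i in range(j + 1):         # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]  # assumes no breakdown
        return Q, H

    rng = np.random.default_rng(6)
    A = rng.standard_normal((20, 20))
    Q, H = arnoldi(A, rng.standard_normal(20), 8)
    assert np.allclose(A @ Q[:, :8], Q @ H)  # the Arnoldi relation
    # Eigenvalues of H[:8, :8] (Ritz values) approximate extreme eigenvalues of A.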

In mathematics, particularly in linear algebra and functional analysis, the polar decomposition of a matrix or linear operator is a factorization analogous to the polar form of a nonzero complex number z as z = u r, where r is the absolute value of z and u is a complex number of unit modulus.
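SciPy provides this factorization directly; a short check of its defining properties (illustrative values):

    import numpy as np
    from scipy.linalg import polar

    rng = np.random.default_rng(7)
    A = rng.standard_normal((4, 4))
    U, P = polar(A)                         # right polar decomposition A = U P
    assert np.allclose(U @ P, A)
    assert np.allclose(U.T @ U, np.eye(4))  # U is orthogonal
    assert np.allclose(P, P.T)              # P is symmetric positive semidefinite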

In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. Matrix B is said to be a square root of A if the matrix product BB is equal to A.
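For example, a square root of a symmetric positive definite matrix can be computed with SciPy (one of possibly many square roots):

    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    B = sqrtm(A)  # the principal square root
    assert np.allclose(B @ B, A)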

Numerical linear algebra is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to mathematical questions. It is a subfield of numerical analysis, and a type of linear algebra. Because computers use floating-point arithmetic, they cannot exactly represent irrational data, and many algorithms increase that imprecision when implemented by a computer. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize computer error while retaining efficiency and precision.

In linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way.
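A short illustration with NumPy (values are arbitrary; a random real matrix may have complex eigenpairs, which NumPy returns automatically):

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((4, 4))
    w, V = np.linalg.eig(A)  # eigenvalues w, eigenvectors in the columns of V
    assert np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A)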

Numerical methods for linear least squares entail the numerical analysis of linear least squares problems.

SLEPc is a software library for the parallel computation of eigenvalues and eigenvectors of large, sparse matrices. It can be seen as a module of PETSc that provides solvers for different types of eigenproblems, including linear and nonlinear, as well as the SVD. Recent versions also include support for matrix functions. It uses the MPI standard for parallelization. Both real and complex arithmetic are supported, with single, double and quadruple precision.

Matrix Toolkit Java (MTJ) is an open-source Java software library for performing numerical linear algebra. The library contains a full set of standard linear algebra operations for dense matrices based on BLAS and LAPACK code. A partial set of sparse operations is provided through the Templates project. The library can be configured to run as a pure Java library or to use BLAS machine-optimized code through the Java Native Interface.

Efficient Java Matrix Library (EJML) is a linear algebra library for manipulating real/complex/dense/sparse matrices. Its design goals are: (1) to be as computationally and memory efficient as possible for both small and large matrices, and (2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
