Nonnegative rank (linear algebra)

In linear algebra, the nonnegative rank of a nonnegative matrix is a concept similar to the usual linear rank of a real matrix, but with the added requirement that certain coefficients and entries of the vectors and matrices involved must be nonnegative.

For example, the linear rank of a matrix is the smallest number of vectors such that every column of the matrix can be written as a linear combination of those vectors. For the nonnegative rank, the vectors are additionally required to have nonnegative entries, and the coefficients in the linear combinations must be nonnegative as well.

Formal definition

There are several equivalent definitions, all modifying the definition of the linear rank slightly. Apart from the definition given above, there is the following: the nonnegative rank of a nonnegative m×n matrix A is the smallest number q such that there exist a nonnegative m×q matrix B and a nonnegative q×n matrix C with A = BC (the usual matrix product). To obtain the linear rank, drop the condition that B and C must be nonnegative.
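
To make the factorization definition concrete, here is a minimal Python/NumPy sketch (the function name certifies_nonnegative_rank is ours, purely for illustration) that checks whether a given pair (B, C) certifies rank+(A) ≤ q:

import numpy as np

def certifies_nonnegative_rank(A, B, C, tol=1e-12):
    # A valid pair (B, C), with B of shape (m, q) and C of shape (q, n),
    # both entrywise nonnegative and satisfying A = BC, certifies that
    # the nonnegative rank of A is at most q.
    return bool((B >= 0).all() and (C >= 0).all()
                and np.allclose(A, B @ C, atol=tol))

# Trivial certificate: A = I @ A shows rank+(A) <= m for any nonnegative A.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(certifies_nonnegative_rank(A, np.eye(2), A))  # True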

Further, the nonnegative rank is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively:

rank+(A) = min{ q : A = R1 + R2 + ⋯ + Rq, rank(Rj) = 1, Rj ≥ 0 for all j },

where Rj ≥ 0 stands for "Rj is entrywise nonnegative". [1] (To obtain the usual linear rank, drop the condition that the Rj have to be nonnegative.)
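
The two definitions agree because a matrix product expands into a sum of outer products: BC = Σj B[:, j] C[j, :], and each outer product of nonnegative vectors is a nonnegative rank-one matrix. A quick NumPy check with arbitrary illustrative data:

import numpy as np

B = np.array([[1.0, 0.0],
              [2.0, 1.0]])   # nonnegative factor, shape (m, q)
C = np.array([[1.0, 1.0],
              [0.0, 3.0]])   # nonnegative factor, shape (q, n)

# Each term is a nonnegative rank-one matrix; their sum recovers B @ C.
terms = [np.outer(B[:, j], C[j, :]) for j in range(B.shape[1])]
print(np.allclose(sum(terms), B @ C))  # True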

Given a nonnegative m×n matrix A, the nonnegative rank of A satisfies

rank(A) ≤ rank+(A) ≤ min(m, n),

where rank(A) denotes the usual linear rank of A.

A fallacy

The rank of the matrix A is the largest number of columns which are linearly independent, i.e., such that none of the selected columns can be written as a linear combination of the other selected columns. It is not true that adding nonnegativity to this characterization gives the nonnegative rank: the nonnegative rank is in general only less than or equal to the largest number of columns such that no selected column can be written as a nonnegative linear combination of the other selected columns.

Connection with the linear rank

It is always true that rank(A) ≤ rank+(A). In fact rank+(A) = rank(A) holds whenever rank(A) ≤ 2.

In the case rank(A) ≥ 3, however, rank(A) < rank+(A) is possible. For example, the matrix

A = ⎛ 1 1 0 0 ⎞
    ⎜ 0 1 1 0 ⎟
    ⎜ 0 0 1 1 ⎟
    ⎝ 1 0 0 1 ⎠

satisfies rank(A) = 3 < 4 = rank+(A).
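
The linear rank of this matrix is easy to verify numerically (the claim rank+(A) = 4 requires a separate combinatorial argument; see Thomas's solution [2]):

import numpy as np

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])
print(np.linalg.matrix_rank(A))  # 3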

These two results (including the 4×4 matrix example above) were first provided by Thomas in a response [2] to a question posed in 1973 by Berman and Plemmons. [3]

Computing the nonnegative rank

The nonnegative rank of a matrix can be determined algorithmically. [4]

It has been proved that deciding whether rank+(A) = rank(A) is NP-hard. [5]

Obvious questions concerning the complexity of nonnegative rank computation remain unanswered to date. For example, the complexity of determining the nonnegative rank of matrices of fixed rank k is unknown for k > 2.
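
In practice, an upper bound rank+(A) ≤ q can still be certified numerically by searching for an exact rank-q nonnegative factorization. The sketch below uses Lee–Seung multiplicative updates, a standard NMF heuristic, not the exact algorithm of Cohen and Rothblum [4]: a residual that reaches (numerical) zero for some q certifies rank+(A) ≤ q, while a residual bounded away from zero proves nothing.

import numpy as np

def nmf_residual(A, q, iters=5000, seed=0):
    # Attempt A ~ W @ H with W (m x q) and H (q x n) entrywise nonnegative,
    # using Lee-Seung multiplicative updates; returns the final residual.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, q))
    H = rng.random((q, n))
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return np.linalg.norm(A - W @ H)

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)
for q in (3, 4):
    # Since rank+(A) = 4, no rank-3 nonnegative factorization can drive
    # the residual to zero; for q = 4 an exact factorization exists.
    print(q, nmf_residual(A, q))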

Ancillary facts

Nonnegative rank has important applications in combinatorial optimization: [6] the minimum number of facets of an extension of a polyhedron P is equal to the nonnegative rank of its so-called slack matrix. [7]
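
For illustration, the slack matrix of a polytope P = {x : Fx ≤ g} with vertices v1, …, vn has entries S[i, j] = g_i − f_i · v_j, the slack of vertex v_j in the i-th facet inequality. A small sketch for the unit square (the data below are ours, chosen for illustration):

import numpy as np

# Unit square [0, 1]^2 written as {x : Fx <= g} with four facets.
F = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
g = np.array([0, 1, 0, 1], dtype=float)
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # vertices

# Slack matrix: S[i, j] = g[i] - F[i] . V[j]; it is nonnegative
# because every vertex satisfies every facet inequality.
S = g[:, None] - F @ V.T
print(S)

Up to reordering its rows, this slack matrix is exactly the 4×4 example above, so its nonnegative rank is 4: by Yannakakis's result [6], every extension of the square needs at least four facets.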

References

  1. Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. SIAM, 1994.
  2. L. B. Thomas. "Solution to Problem 73-14, Rank Factorization of Nonnegative Matrices". SIAM Review 15(3):393–394, 1974.
  3. A. Berman and R. J. Plemmons. "Rank Factorization of Nonnegative Matrices". SIAM Review 15(3):655, 1973.
  4. J. Cohen and U. Rothblum. "Nonnegative ranks, decompositions and factorizations of nonnegative matrices". Linear Algebra and its Applications, 190:149–168, 1993.
  5. Stephen Vavasis. "On the complexity of nonnegative matrix factorization". SIAM Journal on Optimization, 20(3):1364–1377, 2009.
  6. Mihalis Yannakakis. "Expressing combinatorial optimization problems by linear programs". J. Comput. Syst. Sci., 43(3):441–466, 1991.
  7. See this blog post, Archived 2014-10-07 at the Wayback Machine.