Nonnegative matrix

In mathematics, a nonnegative matrix, written

$\mathbf{X} \geq 0,$

is a matrix in which all the elements are equal to or greater than zero, that is,

$x_{ij} \geq 0 \quad \forall\, i, j.$
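
As a concrete illustration of this definition, here is a minimal NumPy sketch; the example matrix and the helper name is_nonnegative are illustrative, not part of the original article:

```python
import numpy as np

def is_nonnegative(M: np.ndarray) -> bool:
    """Entrywise test: every element of M is >= 0."""
    return bool((M >= 0).all())

X = np.array([[1.0, 0.0],
              [2.5, 3.0]])   # all entries >= 0, so X is a nonnegative matrix

print(is_nonnegative(X))                      # True
print(is_nonnegative(np.array([[1, -1]])))    # False: contains a negative entry
```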

A positive matrix is a matrix in which all the elements are strictly greater than zero. The set of positive matrices is the interior of the set of all non-negative matrices. While such matrices are commonly found, the term "positive matrix" is only occasionally used, due to possible confusion with positive-definite matrices, which are different. A matrix that is both non-negative and positive semidefinite is called a doubly non-negative matrix.
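
These neighboring notions can be tested the same way. The following sketch distinguishes strict positivity from double non-negativity; the tolerance and function names are assumptions made for illustration:

```python
import numpy as np

def is_positive(M: np.ndarray) -> bool:
    """Every entry strictly greater than zero."""
    return bool((M > 0).all())

def is_doubly_nonnegative(M: np.ndarray, tol: float = 1e-10) -> bool:
    """Entrywise nonnegative AND symmetric positive semidefinite."""
    if not np.allclose(M, M.T) or (M < 0).any():
        return False
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 1 and 3, all entries nonnegative

print(is_positive(A))             # True: all entries > 0
print(is_doubly_nonnegative(A))   # True: nonnegative and PSD
```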

A rectangular non-negative matrix can be approximated by a product of two other non-negative matrices via non-negative matrix factorization.
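
For instance, scikit-learn's NMF estimator computes such a factorization numerically. This sketch assumes scikit-learn is installed; the matrix dimensions and rank are chosen arbitrarily:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
V = np.abs(rng.randn(6, 4))          # an arbitrary nonnegative 6x4 matrix

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)           # 6x2 nonnegative factor
H = model.components_                # 2x4 nonnegative factor

print((W >= 0).all() and (H >= 0).all())   # True by construction
print(np.linalg.norm(V - W @ H))           # approximation error (not exact)
```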

Eigenvalues and eigenvectors of square positive matrices are described by the Perron–Frobenius theorem.
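A short numerical illustration of the theorem's conclusions for a positive matrix; the matrix itself is an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # all entries strictly positive

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))       # eigenvalue of largest magnitude
perron_root = eigvals[k].real        # Perron-Frobenius: it is real
v = eigvecs[:, k].real
v = v / v.sum()                      # rescale so the eigenvector is positive

print(perron_root)                   # approx. 3.618, the spectral radius
print((v > 0).all())                 # True: strictly positive components
```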

Properties

Inversion

The inverse of any non-singular M-matrix (a matrix whose off-diagonal entries are non-positive and whose eigenvalues have nonnegative real parts) is a non-negative matrix. If the non-singular M-matrix is also symmetric, then it is called a Stieltjes matrix.
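
A small sketch of the first statement, using the standard representation of a non-singular M-matrix as A = sI − B with B entrywise nonnegative and s larger than the spectral radius of B; the particular B and s are chosen for illustration:

```python
import numpy as np

B = np.array([[0.0, 1.0],
              [2.0, 0.0]])           # entrywise nonnegative
s = 3.0                              # exceeds rho(B) = sqrt(2)

A = s * np.eye(2) - B                # a non-singular M-matrix
A_inv = np.linalg.inv(A)             # equals (1/7) * [[3, 1], [2, 3]]

print((A_inv >= 0).all())            # True: the inverse is nonnegative
```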

The inverse of a non-negative matrix is usually not non-negative. The exception is the non-negative monomial matrices: a non-negative matrix has a non-negative inverse if and only if it is a (non-negative) monomial matrix. In particular, the inverse of a positive matrix is neither positive nor even non-negative for dimension n > 1, since positive matrices are not monomial.
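
The following sketch contrasts the two cases; both matrices are arbitrary examples:

```python
import numpy as np

# A nonnegative monomial matrix: exactly one positive entry per row/column.
M = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0],
              [4.0, 0.0, 0.0]])
print((np.linalg.inv(M) >= 0).all())   # True: inverse is again nonnegative

# A positive matrix with n > 1 never has a nonnegative inverse.
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
print((np.linalg.inv(P) >= 0).all())   # False: inverse is [[2, -1], [-1, 1]]
```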

Specializations

There are a number of groups of matrices that form specializations of non-negative matrices, e.g. stochastic matrices, doubly stochastic matrices, and symmetric non-negative matrices.
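
For example, the stochastic cases add row (and column) sum constraints on top of non-negativity. A sketch with illustrative helper names:

```python
import numpy as np

def is_row_stochastic(M: np.ndarray) -> bool:
    """Nonnegative entries with every row summing to 1."""
    return bool((M >= 0).all() and np.allclose(M.sum(axis=1), 1.0))

def is_doubly_stochastic(M: np.ndarray) -> bool:
    """Row-stochastic with every column also summing to 1."""
    return is_row_stochastic(M) and bool(np.allclose(M.sum(axis=0), 1.0))

S = np.array([[0.5, 0.5],
              [0.1, 0.9]])

print(is_row_stochastic(S))      # True
print(is_doubly_stochastic(S))   # False: column sums are 0.6 and 1.4
```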
