In mathematics, especially linear algebra, an M-matrix is a matrix whose off-diagonal entries are less than or equal to zero (i.e., it is a Z-matrix) and whose eigenvalues have nonnegative real parts. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices (i.e. matrices whose inverses belong to the class of positive matrices). [1] The name M-matrix appears to have been chosen originally by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive. [2]
An M-matrix is commonly defined as follows:
Definition: Let A be an n × n real Z-matrix. That is, A = (aij) where aij ≤ 0 for all i ≠ j, 1 ≤ i, j ≤ n. Then matrix A is also an M-matrix if it can be expressed in the form A = sI − B, where B = (bij) with bij ≥ 0 for all 1 ≤ i, j ≤ n, s is at least as large as the maximum of the moduli of the eigenvalues of B (i.e. s ≥ ρ(B)), and I is an identity matrix.
For the non-singularity of A, according to the Perron–Frobenius theorem, it must be the case that s > ρ(B). Also, for a non-singular M-matrix, the diagonal elements aii of A must be positive. Here we will further characterize only the class of non-singular M-matrices.
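To make the definition concrete, the following minimal sketch (not part of the original text; the example matrix is chosen arbitrarily and NumPy is assumed) takes s to be the largest diagonal entry of a small Z-matrix A, forms B = sI − A, and compares s with the spectral radius of B:

```python
import numpy as np

# Illustrative check of the definition A = sI - B for an arbitrarily chosen Z-matrix.
A = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])          # a Z-matrix: off-diagonal entries <= 0

s = A.diagonal().max()                      # smallest s for which B = sI - A is nonnegative
B = s * np.eye(3) - A                       # B >= 0 entrywise
rho = max(abs(np.linalg.eigvals(B)))        # spectral radius of B

print(np.all(B >= 0))                       # True: B is nonnegative
print(s > rho)                              # True: s = 3 > rho(B) = sqrt(2),
                                            # so A is a non-singular M-matrix
```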
Many statements that are equivalent to this definition of non-singular M-matrices are known, and any one of these statements can serve as a starting definition of a non-singular M-matrix. [3] For example, Plemmons lists 40 such equivalences. [4] These characterizations have been categorized by Plemmons in terms of their relations to the properties of: (1) positivity of principal minors, (2) inverse-positivity and splittings, (3) stability, and (4) semipositivity and diagonal dominance. It makes sense to categorize the properties in this way because the statements within a particular group are related to each other even when matrix A is an arbitrary matrix, and not necessarily a Z-matrix. Here we mention a few characterizations from each category.
Below, ≥ denotes the element-wise order (not the usual positive semidefinite order on matrices). That is, for any real matrices A, B of size m × n, we write A ≥ B (or A > B) if aij ≥ bij (or aij > bij) for all i, j.
Let A be an n × n real Z-matrix. Then the following statements are equivalent to A being a non-singular M-matrix (a numerical check of a few of these conditions is sketched after the list):
Positivity of principal minors: all the principal minors of A are positive; equivalently, every real eigenvalue of A is positive, and A + D is non-singular for every nonnegative diagonal matrix D.
Inverse-positivity and splittings: A is inverse-positive, that is, A⁻¹ exists and A⁻¹ ≥ 0; equivalently, A is monotone (Ax ≥ 0 implies x ≥ 0), and A has a convergent regular splitting, that is, a representation A = M − N with M⁻¹ ≥ 0, N ≥ 0 and M⁻¹N convergent.
Stability: A is positive stable, that is, the real part of every eigenvalue of A is positive; equivalently, there exists a positive diagonal matrix D such that AD + DAᵀ is positive definite.
Semipositivity and diagonal dominance: A is semipositive, that is, there exists x > 0 with Ax > 0; equivalently, there exists x ≥ 0 with Ax > 0, and there exists a positive diagonal matrix D such that AD has all positive row sums (i.e. AD is strictly diagonally dominant).
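As an illustration (a hedged numerical sketch, assuming NumPy; the example matrix and tolerance are chosen for demonstration only), a few of the conditions above can be tested directly on a small Z-matrix:

```python
import numpy as np
from itertools import combinations

# Numerically test several of the equivalent characterizations on one example matrix.
A = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])
n = A.shape[0]

# Positivity of principal minors: every principal minor of A is positive.
minors_positive = all(
    np.linalg.det(A[np.ix_(idx, idx)]) > 0
    for k in range(1, n + 1)
    for idx in combinations(range(n), k)
)

# Inverse-positivity: A^{-1} exists and is entrywise nonnegative (small tolerance for rounding).
inverse_positive = np.all(np.linalg.inv(A) >= -1e-12)

# Stability: A is positive stable, i.e. every eigenvalue has positive real part.
positive_stable = np.all(np.linalg.eigvals(A).real > 0)

# Semipositivity: some x > 0 satisfies Ax > 0; try x = A^{-1} * ones, which is positive
# whenever A is inverse-positive, and then Ax = ones > 0 by construction.
x = np.linalg.solve(A, np.ones(n))
semipositive = np.all(x > 0)

print(minors_positive, inverse_positive, positive_stable, semipositive)  # all True
```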
The primary contributions to M-matrix theory have come mainly from mathematicians and economists. M-matrices are used in mathematics to establish bounds on eigenvalues and to establish convergence criteria for iterative methods for the solution of large sparse systems of linear equations. M-matrices arise naturally in some discretizations of differential operators, such as the Laplacian, and as such are well studied in scientific computing. M-matrices also occur in the study of solutions to the linear complementarity problem. Linear complementarity problems arise in linear and quadratic programming, computational mechanics, and in the problem of finding the equilibrium point of a bimatrix game. Lastly, M-matrices occur in the study of finite Markov chains in probability theory and in operations research areas such as queueing theory. Meanwhile, economists have studied M-matrices in connection with gross substitutability, the stability of a general equilibrium and Leontief's input–output analysis in economic systems. The condition of positivity of all principal minors is also known as the Hawkins–Simon condition in the economic literature. [5] In engineering, M-matrices also occur in the problems of Lyapunov stability and feedback control in control theory and are related to Hurwitz matrices. In computational biology, M-matrices occur in the study of population dynamics.
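As a brief illustration of the connection to discretized differential operators and iterative methods (a sketch only; the grid size and the Jacobi splitting are chosen for demonstration), the standard finite-difference discretization of the 1-D Laplacian is a non-singular M-matrix, and its Jacobi splitting is a convergent regular splitting:

```python
import numpy as np

# The second-difference matrix tridiag(-1, 2, -1) discretizes the 1-D Laplacian.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

print(np.all(np.linalg.inv(A) >= -1e-12))   # True: A is inverse-positive, hence an M-matrix

D = np.diag(np.diag(A))                     # Jacobi splitting A = D - N with D^{-1} >= 0, N >= 0
N = D - A
rho = max(abs(np.linalg.eigvals(np.linalg.solve(D, N))))  # spectral radius of D^{-1} N
print(rho < 1)                              # True: the Jacobi iteration converges for this A
```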