M-matrix

In mathematics, especially linear algebra, an M-matrix is a matrix whose off-diagonal entries are less than or equal to zero (i.e., it is a Z-matrix) and whose eigenvalues have nonnegative real parts. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices (i.e. matrices whose inverses are entrywise nonnegative). [1] The name M-matrix seems to have been originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive. [2]

Characterizations

An M-matrix is commonly defined as follows:

Definition: Let A be an n × n real Z-matrix. That is, A = (aij) where aij ≤ 0 for all i ≠ j, 1 ≤ i, j ≤ n. Then matrix A is also an M-matrix if it can be expressed in the form A = sI − B, where B = (bij) with bij ≥ 0 for all 1 ≤ i, j ≤ n, s is at least as large as the maximum of the moduli of the eigenvalues of B (the spectral radius ρ(B)), and I is the identity matrix.

For the non-singularity of A, according to the Perron–Frobenius theorem, it must be the case that s > ρ(B). Also, for a non-singular M-matrix, the diagonal elements aii of A must be positive. Here we will further characterize only the class of non-singular M-matrices.
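As a rough numerical illustration of this definition (a sketch, not taken from the cited references; the function name, the tolerance, and the choice of s as the largest diagonal entry are ad hoc), one can form B = sI − A and compare s with the spectral radius of B:

```python
import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """Test the definition A = s*I - B with B >= 0 and s > rho(B).

    For a Z-matrix A, choosing s as the largest diagonal entry makes
    B = s*I - A entrywise nonnegative; A is then a non-singular M-matrix
    exactly when s exceeds the spectral radius of B.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):          # A must be a Z-matrix
        return False
    s = np.max(np.diag(A))              # smallest shift making B nonnegative
    B = s * np.eye(n) - A               # B >= 0 by the Z-matrix property
    rho = np.max(np.abs(np.linalg.eigvals(B)))
    return bool(s > rho + tol)

# A strictly diagonally dominant Z-matrix with positive diagonal entries:
A = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  3.0]])
print(is_nonsingular_m_matrix(A))       # True
```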

Many statements that are equivalent to this definition of non-singular M-matrices are known, and any one of these statements can serve as a starting definition of a non-singular M-matrix. [3] For example, Plemmons lists 40 such equivalences. [4] These characterizations have been categorized by Plemmons in terms of their relations to the properties of: (1) positivity of principal minors, (2) inverse-positivity and splittings, (3) stability, and (4) semipositivity and diagonal dominance. It makes sense to categorize the properties in this way because the statements within a particular group are related to each other even when matrix A is an arbitrary matrix and not necessarily a Z-matrix. Here we mention a few characterizations from each category.

Equivalences

Below, ≥ denotes the element-wise order (not the usual positive semidefinite order on matrices). That is, for any real matrices A, B of size m × n, we write A ≥ B (or A > B) if aij ≥ bij (or aij > bij) for all i, j.
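For concreteness, a minimal sketch of this element-wise comparison (the matrices are arbitrary examples):

```python
import numpy as np

# Element-wise order: A >= B means every entry of A is at least the
# corresponding entry of B (this is not the Loewner / semidefinite order).
A = np.array([[2.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0, -1.0],
              [0.0,  1.0]])
print(np.all(A >= B))   # True:  a_ij >= b_ij for every i, j
print(np.all(A > B))    # False: the entries in position (2, 1) are equal
```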

Let A be an n × n real Z-matrix. Then the following statements are equivalent to A being a non-singular M-matrix:

Positivity of principal minors

All principal minors of A are positive; for a Z-matrix it already suffices that all leading principal minors are positive. Equivalently, every real eigenvalue of A is positive.

Inverse-positivity and splittings

A is inverse-positive; that is, A is non-singular and the inverse of A is entrywise nonnegative. Equivalently, A is monotone (Ax ≥ 0 implies x ≥ 0), and A has a convergent regular splitting A = M − N, meaning that the inverse of M is entrywise nonnegative, N ≥ 0, and ρ(M⁻¹N) < 1.

Stability

A is positive stable; that is, every eigenvalue of A has positive real part. Equivalently, there exists a positive diagonal matrix D such that AD + DAᵀ is positive definite.

Semipositivity and diagonal dominance

A is semipositive; that is, there exists a vector x > 0 with Ax > 0. Equivalently, there exists a positive diagonal matrix D such that AD is strictly diagonally dominant with positive diagonal entries.
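The sketch below numerically spot-checks one statement from each of the four categories above for a small Z-matrix. The helper function, the tolerance, and the choice of test vector in the semipositivity check are illustrative assumptions, not taken from the cited references.

```python
import numpy as np
from itertools import combinations

def check_characterizations(A, tol=1e-10):
    """Test a few equivalent characterizations of a non-singular M-matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]

    # (1) Positivity of principal minors.
    minors_positive = all(
        np.linalg.det(A[np.ix_(idx, idx)]) > tol
        for k in range(1, n + 1)
        for idx in combinations(range(n), k)
    )

    # (2) Inverse-positivity: A is invertible and its inverse is >= 0 entrywise.
    try:
        inverse_positive = bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:
        inverse_positive = False

    # (3) Stability: every eigenvalue of A has positive real part.
    positive_stable = bool(np.all(np.linalg.eigvals(A).real > tol))

    # (4) Semipositivity: some x > 0 with Ax > 0 (here x solves Ax = (1,...,1)).
    try:
        x = np.linalg.solve(A, np.ones(n))
        semipositive = bool(np.all(x > tol))
    except np.linalg.LinAlgError:
        semipositive = False

    return minors_positive, inverse_positive, positive_stable, semipositive

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(check_characterizations(A))   # (True, True, True, True)
```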

Applications

Contributions to M-matrix theory have come mainly from mathematicians and economists. In mathematics, M-matrices are used to establish bounds on eigenvalues and convergence criteria for iterative methods for solving large sparse systems of linear equations. M-matrices arise naturally in some discretizations of differential operators, such as the Laplacian, and as such are well studied in scientific computing. M-matrices also occur in the study of solutions of the linear complementarity problem; linear complementarity problems arise in linear and quadratic programming, in computational mechanics, and in the problem of finding the equilibrium points of a bimatrix game. Finally, M-matrices occur in the study of finite Markov chains in probability theory and in operations research topics such as queueing theory.

Economists have studied M-matrices in connection with gross substitutability, the stability of a general equilibrium, and Leontief's input–output analysis of economic systems. The condition that all principal minors be positive is also known as the Hawkins–Simon condition in the economics literature. [5] In engineering, M-matrices also occur in problems of Lyapunov stability and feedback control in control theory, and are related to Hurwitz matrices. In computational biology, M-matrices occur in the study of population dynamics.
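As a small illustration of the remark about discretized differential operators, the standard three-point finite-difference matrix for the one-dimensional Laplacian with Dirichlet boundary conditions is a non-singular M-matrix (a minimal sketch; the size n is arbitrary):

```python
import numpy as np

n = 6
# Second-difference matrix tridiag(-1, 2, -1): a Z-matrix by construction.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

print(np.all(L - np.diag(np.diag(L)) <= 0))    # Z-matrix: True
print(np.all(np.linalg.inv(L) >= 0))           # inverse is entrywise nonnegative: True
print(np.all(np.linalg.eigvals(L).real > 0))   # all eigenvalues have positive real part: True
```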

See also

References

  1. Fujimoto, Takao; Ranade, Ravindra (2004), "Two Characterizations of Inverse-Positive Matrices: The Hawkins–Simon Condition and the Le Chatelier–Braun Principle" (PDF), Electronic Journal of Linear Algebra, 11: 59–65.
  2. Berman, Abraham; Plemmons, Robert J. (1994), Nonnegative Matrices in the Mathematical Sciences, Philadelphia: Society for Industrial and Applied Mathematics, pp. 134, 161 (Thm. 2.3 and Note 6.1 of Chapter 6), ISBN 0-89871-321-8.
  3. Fiedler, M.; Pták, V. (1962), "On matrices with non-positive off-diagonal elements and positive principal minors", Czechoslovak Mathematical Journal, 12 (3): 382–400, doi:10.21136/CMJ.1962.100526.
  4. Plemmons, R. J. (1977), "M-Matrix Characterizations. I: Nonsingular M-Matrices", Linear Algebra and its Applications, 18 (2): 175–188, doi:10.1016/0024-3795(77)90073-8.
  5. Nikaido, H. (1970), Introduction to Sets and Mappings in Modern Economics, New York: Elsevier, pp. 13–19, ISBN 0-444-10038-5.