Metzler matrix

In mathematics, a Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero):

$x_{ij} \geq 0 \quad \text{for all } i \neq j.$

It is named after the American economist Lloyd Metzler.

Metzler matrices appear in the stability analysis of time-delayed differential equations and in positive linear dynamical systems. Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form M + aI, which are nonnegative whenever M is a Metzler matrix and the scalar a is sufficiently large.
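
A minimal sketch of this shift, in Python with NumPy (the matrix M below is a made-up example, not from the source):

```python
import numpy as np

# A hypothetical Metzler matrix: nonnegative off-diagonal entries,
# unconstrained (here negative) diagonal entries.
M = np.array([[-4.0,  1.0,  0.5],
              [ 2.0, -3.0,  0.0],
              [ 0.0,  1.5, -2.0]])

# Choosing a >= max_i(-M_ii) makes every entry of M + a*I nonnegative,
# so results about nonnegative matrices can be transferred to M.
a = max(0.0, -M.diagonal().min())
N = M + a * np.eye(3)
print(N)
print(np.all(N >= 0))  # True
```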

Definition and terminology

In mathematics, especially linear algebra, a matrix is called Metzler, quasipositive (or quasi-positive), or essentially nonnegative if all of its elements are nonnegative except for those on the main diagonal, which are unconstrained. That is, a Metzler matrix is any matrix $A = (a_{ij})$ which satisfies

$a_{ij} \geq 0 \quad \text{for all } i \neq j.$
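
For example, the $2 \times 2$ matrix

$\begin{pmatrix} -1 & 2 \\ 0.5 & -3 \end{pmatrix}$

is Metzler: the off-diagonal entries 2 and 0.5 are nonnegative, while the diagonal entries are negative.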

Metzler matrices are also sometimes referred to as $Z^{(-)}$-matrices, as a Z-matrix is equivalent to a negated quasipositive matrix.

Properties

The exponential of a Metzler (or quasipositive) matrix is a nonnegative matrix because of the corresponding property for the exponential of a nonnegative matrix: writing $e^{M} = e^{-a}\, e^{M + aI}$ with $a$ large enough that $M + aI$ is nonnegative, the right-hand side is a nonnegative scalar times a nonnegative matrix. This is natural once one observes that the generator matrices of continuous-time finite-state Markov processes are always Metzler matrices, and that probability distributions are always nonnegative.
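
A quick numerical check of this property, in Python with SciPy (reusing the hypothetical matrix M from the sketch above):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical Metzler matrix (negative diagonal, nonnegative off-diagonal).
M = np.array([[-4.0,  1.0,  0.5],
              [ 2.0, -3.0,  0.0],
              [ 0.0,  1.5, -2.0]])

# The matrix exponential of a Metzler matrix is entrywise nonnegative.
E = expm(M)
print(E)
print(np.all(E >= 0))  # True
```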

A Metzler matrix has an eigenvector in the nonnegative orthant because of the corresponding property for nonnegative matrices: applying the Perron–Frobenius theorem to $M + aI$ for sufficiently large $a$ yields a nonnegative eigenvector of $M + aI$, which is also an eigenvector of $M$.
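
A sketch of that computation in Python (again with the made-up matrix M; the sign flip only normalizes the eigenvector returned by the solver):

```python
import numpy as np

M = np.array([[-4.0,  1.0,  0.5],
              [ 2.0, -3.0,  0.0],
              [ 0.0,  1.5, -2.0]])

# The eigenvalue of M with the largest real part is the Perron eigenvalue
# of M + a*I shifted by -a; its eigenvector lies in the nonnegative orthant.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
v = eigvecs[:, k].real
v = v if v.sum() >= 0 else -v  # choose the nonnegative representative
print(v)
print(np.all(v >= -1e-12))  # True, up to floating-point tolerance
```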

