Weakly chained diagonally dominant matrix

Venn diagram showing the containment of weakly chained diagonally dominant (WCDD) matrices relative to weakly diagonally dominant (WDD) and strictly diagonally dominant (SDD) matrices.

In mathematics, the weakly chained diagonally dominant matrices are a family of nonsingular matrices that include the strictly diagonally dominant matrices.


Definition

Preliminaries

We say that row $i$ of a complex matrix $A = (a_{ij})$ is strictly diagonally dominant (SDD) if $|a_{ii}| > \sum_{j \neq i} |a_{ij}|$. We say that $A$ is SDD if all of its rows are SDD. Weakly diagonally dominant (WDD) is defined with $\geq$ instead.

The directed graph associated with an $n \times n$ complex matrix $A = (a_{ij})$ is given by the vertices $\{1, \ldots, n\}$ and edges defined as follows: there exists an edge from $i$ to $j$ if and only if $a_{ij} \neq 0$.

Definition

A complex square matrix $A$ is said to be weakly chained diagonally dominant (WCDD) if $A$ is WDD and, for each row $i_1$ that is not SDD, there exists a walk $i_1 \rightarrow i_2 \rightarrow \cdots \rightarrow i_k$ in the directed graph of $A$ ending at an SDD row $i_k$.
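
The definition suggests a direct computational test: mark the SDD rows, then check that every non-SDD row can reach some SDD row by a walk in the directed graph of $A$ (equivalently, that every row is reachable from the SDD rows along reversed edges). The following is a minimal sketch of such a test, not taken from the cited references; the function name is_wcdd and the use of NumPy are illustrative assumptions.

    import numpy as np
    from collections import deque

    def is_wcdd(A, tol=0.0):
        """Check whether the square matrix A is weakly chained diagonally dominant."""
        A = np.asarray(A, dtype=complex)
        diag = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - diag

        # Every row must be at least weakly diagonally dominant.
        if np.any(diag < off - tol):
            return False

        # Rows that are strictly diagonally dominant.
        sdd = diag > off + tol

        # Breadth-first search on the reversed graph, starting from the SDD rows:
        # row i can reach an SDD row by a walk iff it is reachable from an SDD row
        # along reversed edges (the edge i -> j exists when A[i, j] != 0).
        reachable = sdd.copy()
        queue = deque(np.flatnonzero(sdd))
        while queue:
            j = queue.popleft()
            for i in np.flatnonzero(np.abs(A[:, j]) > tol):  # rows i with an edge i -> j
                if not reachable[i]:
                    reachable[i] = True
                    queue.append(i)

        return bool(reachable.all())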

Example

The directed graph associated with the WCDD matrix in the example. The first row, which is SDD, is highlighted. Note that regardless of which node $i$ we start at, we can find a walk $i \rightarrow (i-1) \rightarrow (i-2) \rightarrow \cdots \rightarrow 1$.

The matrix

$$A = \begin{pmatrix} 1 & & & & \\ -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{pmatrix}$$

is WCDD: every row is WDD, the first row is the only SDD row, and from any row $i > 1$ the walk $i \rightarrow (i-1) \rightarrow \cdots \rightarrow 1$ in the associated directed graph ends at the SDD first row.
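
As a quick check (reusing the illustrative is_wcdd sketch from the Definition section), one can confirm numerically that a finite instance of this matrix is WCDD, and that it ceases to be WCDD once its only SDD row is destroyed:

    import numpy as np

    n = 5
    A = np.eye(n) - np.eye(n, k=-1)   # 1's on the diagonal, -1's on the subdiagonal
    print(is_wcdd(A))                 # True: only the first row is SDD, but every
                                      # other row reaches it via i -> i-1 -> ... -> 1

    B = A.copy()
    B[0, n - 1] = -1.0                # now no row is SDD at all
    print(is_wcdd(B))                 # False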

Properties

Nonsingularity

A WCDD matrix is nonsingular. [1]

Proof: [2] Let $A = (a_{ij})$ be an $n \times n$ WCDD matrix. Suppose there exists a nonzero $x$ in the null space of $A$. Without loss of generality, let $i_1$ be such that $|x_{i_1}| \geq |x_j|$ for all $j$. Since $A$ is WCDD, we may pick a walk $i_1 \rightarrow i_2 \rightarrow \cdots \rightarrow i_k$ ending at an SDD row $i_k$.

Taking moduli on both sides of

$$-a_{i_1 i_1} x_{i_1} = \sum_{j \neq i_1} a_{i_1 j} x_j$$

and applying the triangle inequality yields

$$|a_{i_1 i_1}| \, |x_{i_1}| \leq \sum_{j \neq i_1} |a_{i_1 j}| \, |x_j| \leq \sum_{j \neq i_1} |a_{i_1 j}| \, |x_{i_1}|,$$

and hence row $i_1$ is not SDD. Moreover, since $A$ is WDD, the above chain of inequalities holds with equality, so that $|x_j| = |x_{i_1}|$ whenever $a_{i_1 j} \neq 0$. In particular, $|x_{i_2}| = |x_{i_1}|$. Repeating this argument with $i_2$, $i_3$, etc., we find that $i_k$ is not SDD, a contradiction.

Recalling that an irreducible matrix is one whose associated directed graph is strongly connected, a trivial corollary of the above is that an irreducibly diagonally dominant matrix (i.e., an irreducible WDD matrix with at least one SDD row) is nonsingular. [3]

Relationship with nonsingular M-matrices

The following are equivalent: [4]

  1. $A$ is a nonsingular WDD M-matrix;
  2. $A$ is a nonsingular WDD L-matrix;
  3. $A$ is a WCDD L-matrix.

Here, an L-matrix is a matrix whose diagonal entries are positive and whose off-diagonal entries are nonpositive.

In fact, WCDD L-matrices were studied (by James H. Bramble and B. E. Hubbard) as early as 1964 in a journal article [5] in which they appear under the alternate name of matrices of positive type.
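
Given the equivalence above, checking whether a WDD matrix is a nonsingular M-matrix reduces to checking the sign pattern of an L-matrix together with the WCDD property. A minimal sketch along these lines, reusing the illustrative is_wcdd helper from the Definition section (this is only a demonstration of the equivalence, not the algorithm of [4]):

    import numpy as np

    def is_wdd_nonsingular_m_matrix(A, tol=0.0):
        """A WDD matrix is a nonsingular M-matrix iff it is a WCDD L-matrix."""
        A = np.asarray(A, dtype=float)
        off_diagonal = A - np.diag(np.diag(A))
        is_l_matrix = np.all(np.diag(A) > tol) and np.all(off_diagonal <= tol)
        return bool(is_l_matrix and is_wcdd(A))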

Moreover, if $A$ is an $n \times n$ WCDD L-matrix, we can bound its inverse as follows: [6]

$$\|A^{-1}\|_{\infty} \leq \sum_{i=1}^{n} \left[ a_{ii} \prod_{j=1}^{i} (1 - u_j) \right]^{-1} \quad \text{where} \quad u_i = \frac{1}{a_{ii}} \sum_{j=i+1}^{n} |a_{ij}|.$$

Note that $u_n$ is always zero and that the right-hand side of the bound above is $+\infty$ whenever one or more of the constants $u_i$ is one.
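
As an illustration, the right-hand side of the bound above can be evaluated directly and compared against the true norm of the inverse. This is a minimal sketch assuming the bound exactly as stated; it is not code from the cited references.

    import numpy as np

    def wcdd_inverse_bound(A):
        """Evaluate the stated upper bound on ||A^{-1}||_inf for a WCDD L-matrix A."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        d = np.diag(A)
        # u_i = (1 / a_ii) * sum_{j > i} |a_ij|; note that u_n is zero.
        u = np.array([np.abs(A[i, i + 1:]).sum() / d[i] for i in range(n)])
        factors = np.cumprod(1.0 - u)            # prod_{j <= i} (1 - u_j)
        if np.any(factors <= 0.0):
            return np.inf                        # the bound degenerates when some u_j = 1
        return float(np.sum(1.0 / (d * factors)))

    # Example: the discrete Laplacian tridiag(-1, 2, -1) is a WCDD L-matrix.
    n = 10
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    print(wcdd_inverse_bound(A) >= np.linalg.norm(np.linalg.inv(A), np.inf))  # True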

Tighter bounds for the inverse of a WCDD L-matrix are known. [7] [8] [9] [10]

Applications

Due to their relationship with M-matrices (see above), WCDD matrices appear often in practical applications. An example is given below.

Monotone numerical schemes

WCDD L-matrices arise naturally from monotone approximation schemes for partial differential equations.

For example, consider the one-dimensional Poisson problem

  for  

with Dirichlet boundary conditions . Letting be a numerical grid (for some positive that divides unity), a monotone finite difference scheme for the Poisson problem takes the form of

  where  

and

Note that is a WCDD L-matrix.
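
A minimal sketch of this construction (the helper name poisson_1d is an illustrative assumption, and is_wcdd refers to the sketch in the Definition section) assembles the scheme, confirms the WCDD property, and solves for the numerical solution:

    import numpy as np

    def poisson_1d(f, h):
        """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by the monotone scheme above."""
        m = round(1.0 / h) - 1                    # number of interior grid points
        x = h * np.arange(1, m + 1)               # interior grid {h, 2h, ..., 1 - h}
        A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        assert is_wcdd(A)                         # the scheme's matrix is a WCDD L-matrix
        u = np.linalg.solve(A / h**2, f(x))
        return x, u

    # Example with f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
    x, u = poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), h=0.01)
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error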


References

  1. Shivakumar, P. N.; Chew, Kim Ho (1974). "A Sufficient Condition for Nonvanishing of Determinants" (PDF). Proceedings of the American Mathematical Society. 43 (1): 63. doi: 10.1090/S0002-9939-1974-0332820-0 . ISSN   0002-9939.
  2. Azimzadeh, Parsiad; Forsyth, Peter A. (2016). "Weakly Chained Matrices, Policy Iteration, and Impulse Control". SIAM Journal on Numerical Analysis. 54 (3): 1341–1364. arXiv: 1510.03928 . doi:10.1137/15M1043431. ISSN   0036-1429.
  3. Horn, Roger A.; Johnson, Charles R. (1990). Matrix Analysis. Cambridge University Press, Cambridge.
  4. Azimzadeh, Parsiad (2019). "A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-Matrix". Mathematics of Computation. 88 (316): 783–800. arXiv: 1701.06951 . Bibcode:2017arXiv170106951A. doi:10.1090/mcom/3347.
  5. Bramble, James H.; Hubbard, B. E. (1964). "On a finite difference analogue of an elliptic problem which is neither diagonally dominant nor of non-negative type". Journal of Mathematical Physics. 43: 117–132. doi:10.1002/sapm1964431117.
  6. Shivakumar, P. N.; Williams, Joseph J.; Ye, Qiang; Marinov, Corneliu A. (1996). "On Two-Sided Bounds Related to Weakly Diagonally Dominant M-Matrices with Application to Digital Circuit Dynamics". SIAM Journal on Matrix Analysis and Applications. 17 (2): 298–312. doi:10.1137/S0895479894276370. ISSN   0895-4798.
  7. Cheng, Guang-Hui; Huang, Ting-Zhu (2007). "An upper bound for $\|A^{-1}\|_{\infty}$ of strictly diagonally dominant M-matrices". Linear Algebra and Its Applications. 426 (2–3): 667–673. doi:10.1016/j.laa.2007.06.001. ISSN 0024-3795.
  8. Li, Wen (2008). "The infinity norm bound for the inverse of nonsingular diagonal dominant matrices". Applied Mathematics Letters. 21 (3): 258–263. doi:10.1016/j.aml.2007.03.018. ISSN   0893-9659.
  9. Wang, Ping (2009). "An upper bound for $\|A^{-1}\|_{\infty}$ of strictly diagonally dominant M-matrices". Linear Algebra and Its Applications. 431 (5–7): 511–517. doi:10.1016/j.laa.2009.02.037. ISSN 0024-3795.
  10. Huang, Ting-Zhu; Zhu, Yan (2010). "Estimation of $\|A^{-1}\|_{\infty}$ for weakly chained diagonally dominant M-matrices". Linear Algebra and Its Applications. 432 (2–3): 670–677. doi:10.1016/j.laa.2009.09.012. ISSN 0024-3795.