Drazin inverse

In mathematics, the Drazin inverse, named after Michael P. Drazin, is a kind of generalized inverse of a matrix.

Let $A$ be a square matrix. The index of $A$ is the least nonnegative integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$. The Drazin inverse of $A$ is the unique matrix $A^D$ that satisfies

$$A^{k+1} A^D = A^k, \qquad A^D A A^D = A^D, \qquad A A^D = A^D A.$$

It is not a generalized inverse in the classical sense, since $A A^D A \neq A$ in general.
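
As a concrete illustration of the definition, here is a minimal NumPy sketch (not part of the original article) that computes the index by comparing ranks of successive powers; the function name `matrix_index` and the tolerance are illustrative choices.

```python
import numpy as np

def matrix_index(A, tol=1e-10):
    """Return the index of a square matrix A: the least nonnegative
    integer k with rank(A^{k+1}) == rank(A^k)."""
    n = A.shape[0]
    power = np.eye(n)                      # A^0
    prev_rank = np.linalg.matrix_rank(power, tol=tol)
    for k in range(n + 1):
        power = power @ A                  # now A^{k+1}
        rank = np.linalg.matrix_rank(power, tol=tol)
        if rank == prev_rank:
            return k
        prev_rank = rank
    return n  # the index of an n-by-n matrix never exceeds n

# Example: a nilpotent single Jordan block has index equal to its size.
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(matrix_index(N))  # 3
```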

If $A = \begin{pmatrix} B & 0 \\ 0 & N \end{pmatrix}$, where $B$ is invertible with inverse $B^{-1}$ and $N$ is a nilpotent matrix, then

$$A^D = \begin{pmatrix} B^{-1} & 0 \\ 0 & 0 \end{pmatrix}.$$
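
The block-diagonal case translates directly into a small numerical check. The following NumPy sketch uses arbitrary illustrative choices of an invertible block $B$ and a nilpotent block $N$, builds $A$ and the claimed $A^D$, and verifies the three defining identities.

```python
import numpy as np

# Illustrative invertible block B and nilpotent block N.
B = np.array([[2., 1.],
              [0., 3.]])
N = np.array([[0., 1.],
              [0., 0.]])
Z = np.zeros((2, 2))

A  = np.block([[B, Z], [Z, N]])                    # A = diag(B, N)
AD = np.block([[np.linalg.inv(B), Z], [Z, Z]])     # A^D = diag(B^{-1}, 0)

k = 2  # index of A: B is invertible and N is nilpotent of index 2

# The three defining identities of the Drazin inverse:
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                  np.linalg.matrix_power(A, k)))   # A^{k+1} A^D = A^k
print(np.allclose(AD @ A @ AD, AD))                # A^D A A^D = A^D
print(np.allclose(A @ AD, AD @ A))                 # A A^D = A^D A
```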

The hyper-power sequence is

$$A_{i+1} := A_i + A_i \, (I - A A_i);$$

for convergence notice that

$$A A_{i+1} = I - (I - A A_i)^2.$$

For $A_0 := \alpha A$, or any regular $A_0$ with $A_0 A = A A_0$ chosen such that $\|A A_0 - (A A_0)^2\| < 1$, the sequence tends to its Drazin inverse,

$$A_i \to A^D.$$
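
A minimal sketch of the recursion is given below. It assumes an index-one matrix whose Drazin inverse is known in closed form and an illustrative starting value $A_0 = \alpha A$ with $\alpha = 0.1$; neither choice is prescribed by the text.

```python
import numpy as np

# Index-one example: A has eigenvalues 2 and 0, and its Drazin (group)
# inverse is known in closed form, A^D = A / 4.
A = np.array([[0., 1.],
              [0., 2.]])
AD_exact = A / 4

alpha = 0.1              # illustrative scaling for the starting value A_0 = alpha * A
Ai = alpha * A
I = np.eye(2)

for _ in range(30):
    Ai = Ai + Ai @ (I - A @ Ai)   # A_{i+1} = A_i + A_i (I - A A_i)

print(np.allclose(Ai, AD_exact))  # True: the iterates converge to A^D here
```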

Jordan normal form and Jordan-Chevalley decomposition

As the definition of the Drazin inverse is invariant under matrix conjugation, writing $A = P J P^{-1}$, where $J$ is in Jordan normal form, implies that $A^D = P J^D P^{-1}$. The Drazin inverse is then the operation that maps invertible Jordan blocks to their inverses, and nilpotent Jordan blocks to zero.
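
To illustrate, the following sketch (an illustrative construction, not from the article) builds $J$ from one invertible and one nilpotent Jordan block, forms $J^D$ by inverting the former and zeroing the latter, conjugates by an arbitrary invertible $P$, and checks the defining identities.

```python
import numpy as np

# J has one invertible Jordan block (eigenvalue 3) and one nilpotent block (eigenvalue 0).
J3 = np.array([[3., 1.],
               [0., 3.]])
J0 = np.array([[0., 1.],
               [0., 0.]])
Z = np.zeros((2, 2))
J = np.block([[J3, Z], [Z, J0]])

# J^D inverts the invertible block and sends the nilpotent block to zero.
JD = np.block([[np.linalg.inv(J3), Z], [Z, Z]])

# Conjugate by an arbitrary invertible P (random illustrative choice).
rng = np.random.default_rng(0)
P = rng.normal(size=(4, 4))
A  = P @ J  @ np.linalg.inv(P)
AD = P @ JD @ np.linalg.inv(P)

k = 2  # index of A (the nilpotent Jordan block has size 2)
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, np.linalg.matrix_power(A, k)))
print(np.allclose(AD @ A @ AD, AD))
print(np.allclose(A @ AD, AD @ A))
```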

More generally, we may define the Drazin inverse over any perfect field by using the Jordan-Chevalley decomposition $A = A_s + A_n$, where $A_s$ is semisimple, $A_n$ is nilpotent, and the two operators commute. The two terms can be block diagonalized with blocks corresponding to the kernel and cokernel of $A_s$. The Drazin inverse in the same basis is then defined to be zero on the kernel of $A_s$ and equal to the inverse of $A$ on the cokernel of $A_s$.
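
As a small worked example (the matrix is an illustrative choice, not taken from the article):

$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}}_{A_s} + \underbrace{\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}}_{A_n},$$

with $A_s$ semisimple, $A_n$ nilpotent, and $A_s A_n = A_n A_s$. The kernel of $A_s$ is spanned by $e_3$, and on the complementary block $A$ restricts to the invertible matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, so

$$A^D = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

which is zero on the kernel of $A_s$ and inverts $A$ on its cokernel; the three defining identities can be checked directly (here the index is $1$).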

