Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix $N$ such that

$$N^k = 0$$

for some positive integer $k$. The smallest such $k$ is called the index of $N$, [1] sometimes the degree of $N$.

More generally, a nilpotent transformation is a linear transformation $L$ of a vector space such that $L^k = 0$ for some positive integer $k$ (and thus, $L^j = 0$ for all $j \geq k$). [2] [3] [4] Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples

Example 1

The matrix

$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

is nilpotent with index 2, since $A^2 = 0$.

Example 2

More generally, any $n$-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index at most $n$ [citation needed]. For example, the matrix

$$B = \begin{bmatrix} 0 & 2 & 1 & 6 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

is nilpotent, with

$$B^2 = \begin{bmatrix} 0 & 0 & 2 & 7 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}; \qquad B^3 = \begin{bmatrix} 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}; \qquad B^4 = 0.$$

The index of $B$ is therefore 4.

Example 3

Although the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example,

$$C = \begin{bmatrix} 5 & -3 & 2 \\ 15 & -9 & 6 \\ 10 & -6 & 4 \end{bmatrix}, \qquad C^2 = 0,$$

although the matrix has no zero entries.

Example 4

Additionally, any matrix of the form

$$\begin{bmatrix} a_1 & a_1 & \cdots & a_1 \\ a_2 & a_2 & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ -a_1 - a_2 - \cdots - a_{n-1} & -a_1 - a_2 - \cdots - a_{n-1} & \cdots & -a_1 - a_2 - \cdots - a_{n-1} \end{bmatrix},$$

such as

$$\begin{bmatrix} 5 & 5 & 5 \\ 6 & 6 & 6 \\ -11 & -11 & -11 \end{bmatrix}$$

or

$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 \\ 4 & 4 & 4 & 4 \\ -7 & -7 & -7 & -7 \end{bmatrix},$$

squares to zero.
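One way to see why such matrices square to zero (a short derivation added here for illustration): every column equals the vector $v = (a_1, \ldots, a_{n-1}, -(a_1 + \cdots + a_{n-1}))^{\mathsf T}$, whose entries sum to zero, so the matrix can be written as the rank-one product $v\mathbf{1}^{\mathsf T}$, where $\mathbf{1}$ is the all-ones column vector, and

$$(v\mathbf{1}^{\mathsf T})^2 = v\,(\mathbf{1}^{\mathsf T} v)\,\mathbf{1}^{\mathsf T} = \Big(\sum_i v_i\Big)\, v\mathbf{1}^{\mathsf T} = 0.$$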

Example 5

Perhaps some of the most striking examples of nilpotent matrices are square matrices of the form:

The first few of which are:

These matrices are nilpotent, but no power of them below the index has any zero entries. [5]

Example 6

Consider the linear space of polynomials of a bounded degree. The derivative operator is a linear map on this space. Applying the derivative to a polynomial lowers its degree, so applying it repeatedly eventually yields zero. Therefore, on such a space, the derivative is representable by a nilpotent matrix.
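For instance, on polynomials of degree at most 3, written in the basis $(1, x, x^2, x^3)$ (a concrete illustration; this particular matrix is not taken from the cited sources), differentiation is represented by

$$D = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad D^4 = 0,$$

so $D$ is nilpotent with index 4.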

Characterization

For an $n \times n$ square matrix $N$ with real (or complex) entries, the following are equivalent:

- $N$ is nilpotent.
- The characteristic polynomial of $N$ is $\det(\lambda I - N) = \lambda^n$.
- The minimal polynomial of $N$ is $\lambda^k$ for some positive integer $k \leq n$.
- The only complex eigenvalue of $N$ is 0.
- $\operatorname{tr}(N^k) = 0$ for all $k > 0$.

The last theorem holds true for matrices over any field of characteristic 0 or sufficiently large characteristic. (cf. Newton's identities)

This theorem has several consequences, including:

- The index of an $n \times n$ nilpotent matrix is always less than or equal to $n$. For example, every $2 \times 2$ nilpotent matrix squares to zero.
- The determinant and trace of a nilpotent matrix are always zero. Consequently, a nilpotent matrix cannot be invertible.
- The only nilpotent diagonalizable matrix is the zero matrix.

See also: Jordan–Chevalley decomposition#Nilpotency criterion.
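Nilpotency can also be checked numerically by testing whether the $n$-th power of the matrix vanishes (by the consequences above, the index never exceeds $n$). The following is a minimal sketch using NumPy; the helper name is_nilpotent is illustrative, not from the article:

```python
import numpy as np

def is_nilpotent(A: np.ndarray) -> bool:
    """Test nilpotency of a square matrix by checking whether A**n == 0.

    By the characterization above, an n x n matrix is nilpotent
    exactly when its n-th power is the zero matrix, so a single
    matrix power suffices.
    """
    n = A.shape[0]
    return np.allclose(np.linalg.matrix_power(A, n), 0.0)

# The 3 x 3 matrix from Example 3: no zero entries, yet C**2 = 0.
C = np.array([[5, -3, 2],
              [15, -9, 6],
              [10, -6, 4]])
print(is_nilpotent(C))  # True
```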

Classification

Consider the (upper) shift matrix:

$$S = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}.$$

This matrix has 1s along the superdiagonal and 0s everywhere else. As a linear transformation, the shift matrix "shifts" the components of a vector one position to the left, with a zero appearing in the last position:

$$S(x_1, x_2, \ldots, x_n) = (x_2, \ldots, x_n, 0).$$

[6]

This matrix is nilpotent with degree $n$, and is the canonical nilpotent matrix.

Specifically, if $N$ is any nilpotent matrix, then $N$ is similar to a block diagonal matrix of the form

$$\begin{bmatrix} S_1 & 0 & \cdots & 0 \\ 0 & S_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & S_r \end{bmatrix}$$

where each of the blocks $S_1, S_2, \ldots, S_r$ is a shift matrix (possibly of different sizes). This form is a special case of the Jordan canonical form for matrices. [7]

For example, any nonzero 2 × 2 nilpotent matrix is similar to the matrix

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$

That is, if $N$ is any nonzero 2 × 2 nilpotent matrix, then there exists a basis $b_1, b_2$ such that $Nb_1 = 0$ and $Nb_2 = b_1$.

This classification theorem holds for matrices over any field. (It is not necessary for the field to be algebraically closed.)
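The block structure can be computed explicitly with a computer algebra system. Below is a minimal sketch using SymPy, applied to the matrix from Example 3; the variable names are illustrative, and the ordering of the blocks in the output may differ:

```python
from sympy import Matrix

# The nilpotent matrix from Example 3 (index 2, no zero entries).
C = Matrix([[5, -3, 2],
            [15, -9, 6],
            [10, -6, 4]])

# jordan_form returns P and J with C == P * J * P**-1.
P, J = C.jordan_form()
print(J)
# Expected: one 2 x 2 shift block and one 1 x 1 zero block, e.g.
# Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
```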

Flag of subspaces

A nilpotent transformation $L$ on $\mathbb{R}^n$ naturally determines a flag of subspaces

$$\{0\} \subset \ker L \subset \ker L^2 \subset \cdots \subset \ker L^{q-1} \subset \ker L^q = \mathbb{R}^n$$

and a signature

$$0 = n_0 < n_1 < n_2 < \cdots < n_{q-1} < n_q = n, \qquad n_i = \dim \ker L^i.$$

The signature characterizes $L$ up to an invertible linear transformation. Furthermore, it satisfies the inequalities

$$n_{j+1} - n_j \leq n_j - n_{j-1}, \qquad \text{for all } j = 1, \ldots, q-1.$$

Conversely, any sequence of natural numbers satisfying these inequalities is the signature of a nilpotent transformation.
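For instance, for the $n \times n$ shift matrix $S$ introduced above (an illustration added here), $\ker S^i$ is spanned by the first $i$ standard basis vectors, so the flag is

$$\{0\} \subset \operatorname{span}\{e_1\} \subset \operatorname{span}\{e_1, e_2\} \subset \cdots \subset \operatorname{span}\{e_1, \ldots, e_n\} = \mathbb{R}^n$$

and the signature is $(n_1, \ldots, n_n) = (1, 2, \ldots, n)$, which satisfies the inequalities with equality.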

Additional properties

- If $N$ is nilpotent of index $k$, then $I + N$ and $I - N$ are invertible; for example, $(I - N)^{-1} = I + N + N^2 + \cdots + N^{k-1}$, a finite Neumann series.
- If $N$ is nilpotent, then $\det(I + N) = 1$.
- Every singular matrix can be written as a product of nilpotent matrices. [8]
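A small numerical check of the first property, using the $4 \times 4$ shift matrix as the nilpotent example (a minimal sketch with NumPy, added here for illustration):

```python
import numpy as np

# 4 x 4 upper shift matrix: 1s on the superdiagonal, nilpotent of index 4.
S = np.eye(4, k=1)

# Finite Neumann series: (I - S)^{-1} = I + S + S^2 + S^3.
neumann = sum(np.linalg.matrix_power(S, i) for i in range(4))

print(np.allclose(np.linalg.inv(np.eye(4) - S), neumann))  # True
```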

Generalizations

A linear operator $T$ is locally nilpotent if for every vector $v$, there exists a positive integer $k$ such that

$$T^k(v) = 0.$$

For operators on a finite-dimensional vector space, local nilpotence is equivalent to nilpotence. In infinite dimensions the two notions differ: the derivative operator on the space of all polynomials is locally nilpotent, since every individual polynomial is annihilated by some power of it, but it is not nilpotent, because no single power annihilates every polynomial.
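A minimal sketch of this distinction in code, representing a polynomial by its list of coefficients (the helper name derivative is an assumption for this sketch, not from the article):

```python
def derivative(coeffs):
    """Differentiate c0 + c1*x + c2*x**2 + ... given as [c0, c1, c2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:]

# The derivative operator is locally nilpotent on all polynomials:
# each individual polynomial reaches the zero polynomial (here, the
# empty coefficient list) after finitely many applications ...
p = [7, 0, 3, 5]              # 7 + 3x^2 + 5x^3, degree 3
for _ in range(4):
    p = derivative(p)
print(p)                       # [] -- the zero polynomial

# ... but no single power of the operator annihilates every polynomial,
# since a polynomial of degree d survives d applications.
```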

Notes

  1. Herstein (1975, p. 294)
  2. Beauregard & Fraleigh (1973, p. 312)
  3. Herstein (1975, p. 268)
  4. Nering (1970, p. 274)
  5. Mercer, Idris D. (31 October 2005). "Finding "nonobvious" nilpotent matrices" (PDF). idmercer.com. Self-published. Retrieved 5 April 2023.
  6. Beauregard & Fraleigh (1973, p. 312)
  7. Beauregard & Fraleigh (1973, pp. 312, 313)
  8. R. Sullivan, "Products of nilpotent matrices", Linear and Multilinear Algebra, Vol. 56, No. 3.


References

Beauregard, Raymond A.; Fraleigh, John B. (1973). A First Course in Linear Algebra: with Optional Introduction to Groups, Rings, and Fields. Boston: Houghton Mifflin.

Herstein, I. N. (1975). Topics in Algebra (2nd ed.). John Wiley & Sons.

Nering, Evar D. (1970). Linear Algebra and Matrix Theory (2nd ed.). New York: Wiley.