Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix $N$ such that

$$N^k = 0$$

for some positive integer $k$. The smallest such $k$ is called the index of $N$, [1] sometimes the degree of $N$.

More generally, a nilpotent transformation is a linear transformation $L$ of a vector space such that $L^k = 0$ for some positive integer $k$ (and thus, $L^j = 0$ for all $j \geq k$). [2] [3] [4] Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples

Example 1

The matrix

$$N = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

is nilpotent with index 2, since $N^2 = 0$.
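
To check nilpotency in code, here is a minimal Python/NumPy sketch (the helper name `nilpotency_index` is ours, not a library routine; exact integer arithmetic is assumed):

```python
import numpy as np

def nilpotency_index(mat):
    """Return the smallest k with mat^k = 0, or None if mat is not
    nilpotent. The index of an n x n nilpotent matrix is at most n
    (see Characterization below), so the search can stop there."""
    n = mat.shape[0]
    power = np.eye(n, dtype=mat.dtype)
    for k in range(1, n + 1):
        power = power @ mat
        if not power.any():  # every entry is exactly zero
            return k
    return None

N = np.array([[0, 1],
              [0, 0]])
print(nilpotency_index(N))  # 2
```

For floating-point matrices, the exact-zero test should be replaced by a tolerance check such as `np.allclose(power, 0)`.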

Example 2

More generally, any $n$-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index at most $n$ [ citation needed ]. For example, the matrix

$$B = \begin{bmatrix} 0 & 2 & 1 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix}$$

is nilpotent, with

$$B^2 = \begin{bmatrix} 0 & 0 & 6 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad B^3 = 0.$$

The index of $B$ is therefore 3.

Example 3

Although the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example, the matrix

$$M = \begin{bmatrix} 5 & -3 & 2 \\ 15 & -9 & 6 \\ 10 & -6 & 4 \end{bmatrix}$$

satisfies $M^2 = 0$, although the matrix has no zero entries.

Example 4

Additionally, any matrices of the form

$$\begin{bmatrix} a_1 & a_1 & \cdots & a_1 \\ a_2 & a_2 & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ -a_1 - a_2 - \cdots - a_{n-1} & -a_1 - a_2 - \cdots - a_{n-1} & \cdots & -a_1 - a_2 - \cdots - a_{n-1} \end{bmatrix}$$

such as

$$\begin{bmatrix} 5 & 5 \\ -5 & -5 \end{bmatrix}$$

or

$$\begin{bmatrix} 6 & 6 & 6 \\ 1 & 1 & 1 \\ -7 & -7 & -7 \end{bmatrix}$$

square to zero: each row is constant and the rows sum to the zero row, so every entry of the square is $a_i (a_1 + a_2 + \cdots + a_n) = 0$.

Example 5

Perhaps some of the most striking examples of nilpotent matrices are $n \times n$ square matrices of the form:

$$\begin{bmatrix} 2 & 2 & \cdots & 2 & 1-n \\ n+2 & 1 & \cdots & 1 & -n \\ 1 & n+2 & \cdots & 1 & -n \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & \cdots & n+2 & -n \end{bmatrix}$$

The first few of these are:

$$\begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}, \qquad \begin{bmatrix} 2 & 2 & -2 \\ 5 & 1 & -3 \\ 1 & 5 & -3 \end{bmatrix}, \qquad \begin{bmatrix} 2 & 2 & 2 & -3 \\ 6 & 1 & 1 & -4 \\ 1 & 6 & 1 & -4 \\ 1 & 1 & 6 & -4 \end{bmatrix}, \qquad \ldots$$

These matrices are nilpotent, but no power of them below the index has any zero entries. [5]

Example 6

Consider the linear space of polynomials of bounded degree. The derivative operator is a linear map. Applying the derivative to a polynomial decreases its degree by one, so applying it iteratively eventually yields zero. Therefore, on such a space, the derivative is representable by a nilpotent matrix.
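
A minimal NumPy sketch of this, assuming the monomial basis $1, x, x^2, x^3$ on polynomials of degree at most 3:

```python
import numpy as np

# Matrix of d/dx on polynomials of degree <= 3 in the monomial
# basis (1, x, x^2, x^3): the derivative of x^k is k * x^(k-1).
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

p = np.array([1, 1, 1, 1])  # p(x) = 1 + x + x^2 + x^3

print(D @ p)      # [1 2 3 0], i.e. 1 + 2x + 3x^2
print(D @ D @ p)  # [2 6 0 0], i.e. 2 + 6x
print(np.linalg.matrix_power(D, 4))  # the zero matrix: D has index 4
```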

Characterization

For an $n \times n$ square matrix $N$ with real (or complex) entries, the following are equivalent:

  - $N$ is nilpotent.
  - The characteristic polynomial of $N$ is $\det(xI - N) = x^n$.
  - The minimal polynomial of $N$ is $x^k$ for some positive integer $k \leq n$.
  - The only complex eigenvalue of $N$ is 0.
  - $\operatorname{tr}(N^k) = 0$ for all $k > 0$.

The last condition holds true for matrices over any field of characteristic 0 or sufficiently large characteristic. (cf. Newton's identities)

This theorem has several consequences, including:

  - The index of an $n \times n$ nilpotent matrix is always less than or equal to $n$.
  - The determinant and trace of a nilpotent matrix are always zero; consequently, a nilpotent matrix cannot be invertible.
  - The only nilpotent diagonalizable matrix is the zero matrix.
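
A quick NumPy illustration of the trace criterion on the matrix $M$ from Example 3 (a throwaway check, not a library routine):

```python
import numpy as np

M = np.array([[ 5, -3, 2],
              [15, -9, 6],
              [10, -6, 4]])

# tr(M^k) = 0 for k = 1, ..., n is equivalent to nilpotency
# over the real or complex numbers.
n = M.shape[0]
power = np.eye(n, dtype=int)
for k in range(1, n + 1):
    power = power @ M
    print(k, np.trace(power))  # prints 0 for every k

print(np.linalg.eigvals(M))  # all eigenvalues are 0 (up to round-off)
```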

See also: Jordan–Chevalley decomposition#Nilpotency criterion.

Classification

Consider the (upper) shift matrix:

$$S = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}$$

This matrix has 1s along the superdiagonal and 0s everywhere else. As a linear transformation, the shift matrix "shifts" the components of a vector one position to the left, with a zero appearing in the last position:

$$S(x_1, x_2, \ldots, x_n) = (x_2, \ldots, x_n, 0).$$

[6]

This matrix is nilpotent with degree $n$, and is the canonical nilpotent matrix.

Specifically, if $N$ is any nilpotent matrix, then $N$ is similar to a block diagonal matrix of the form

$$\begin{bmatrix} S_1 & 0 & \cdots & 0 \\ 0 & S_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & S_r \end{bmatrix}$$

where each of the blocks $S_1, S_2, \ldots, S_r$ is a shift matrix (possibly of different sizes). This form is a special case of the Jordan canonical form for matrices. [7]

For example, any nonzero 2 × 2 nilpotent matrix is similar to the matrix

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

That is, if $N$ is any nonzero 2 × 2 nilpotent matrix, then there exists a basis $b_1, b_2$ such that $Nb_1 = 0$ and $Nb_2 = b_1$.

This classification theorem holds for matrices over any field. (It is not necessary for the field to be algebraically closed.)
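
To make the 2 × 2 case concrete, here is a short NumPy sketch (our own construction; it assumes $N$ is nonzero and nilpotent, and that the chosen $b_2$ satisfies $Nb_2 \neq 0$):

```python
import numpy as np

# A nonzero 2x2 nilpotent matrix (the form of Example 4 with a1 = 5).
N = np.array([[ 5.0,  5.0],
              [-5.0, -5.0]])

# Pick any b2 with N @ b2 != 0 and set b1 = N @ b2. Then
# N @ b1 = N^2 @ b2 = 0 and N @ b2 = b1, as in the text.
b2 = np.array([1.0, 0.0])
b1 = N @ b2

P = np.column_stack([b1, b2])     # change-of-basis matrix
print(np.linalg.inv(P) @ N @ P)   # [[0. 1.] [0. 0.]]
```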

Flag of subspaces

A nilpotent transformation $L$ on $\mathbb{R}^n$ naturally determines a flag of subspaces

$$\{0\} \subset \ker L \subset \ker L^2 \subset \cdots \subset \ker L^{q-1} \subset \ker L^q = \mathbb{R}^n$$

and a signature

$$0 = n_0 < n_1 < n_2 < \cdots < n_{q-1} < n_q = n, \qquad n_i = \dim \ker L^i.$$

The signature characterizes $L$ up to an invertible linear transformation. Furthermore, it satisfies the inequalities

$$n_{j+1} - n_j \leq n_j - n_{j-1}, \qquad \text{for all } j = 1, \ldots, q-1.$$

Conversely, any sequence of natural numbers satisfying these inequalities is the signature of a nilpotent transformation.
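
The signature can be read off numerically from matrix ranks, since $n_i = \dim \ker L^i = n - \operatorname{rank}(L^i)$. A minimal NumPy sketch (the helper name `signature` is ours; the input is assumed nilpotent, otherwise the loop would not terminate):

```python
import numpy as np

def signature(L):
    """Signature (n_1, ..., n_q) of a nilpotent matrix L, using
    n_i = dim ker L^i = n - rank(L^i)."""
    n = L.shape[0]
    sig, power = [], np.eye(n)
    while not sig or sig[-1] < n:
        power = power @ L
        sig.append(n - np.linalg.matrix_rank(power))
    return sig

S = np.diag([1.0, 1.0, 1.0], k=1)  # 4x4 shift matrix
print(signature(S))                # [1, 2, 3, 4]
```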

Additional properties

  - If $N$ is nilpotent of index $k$, then $I + N$ and $I - N$ are invertible; for example, $(I - N)^{-1} = I + N + N^2 + \cdots + N^{k-1}$, a geometric series that terminates because $N^k = 0$.
  - If $N$ is nilpotent, then $\det(I + N) = 1$.
  - Every singular matrix can be written as a product of nilpotent matrices. [8]

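A quick NumPy verification of the geometric-series inverse (a throwaway check using the index-3 matrix from Example 2):

```python
import numpy as np

N = np.array([[0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])  # nilpotent with index 3
I = np.eye(3)

# (I - N)^(-1) = I + N + N^2, since N^3 = 0 truncates the series.
geometric = I + N + N @ N
print(np.allclose(np.linalg.inv(I - N), geometric))  # True
print(np.linalg.det(I + N))                          # 1.0
```
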
Generalizations

A linear operator $T$ is locally nilpotent if for every vector $v$, there exists a positive integer $k$ such that

$$T^k(v) = 0.$$

For operators on a finite-dimensional vector space, local nilpotence is equivalent to nilpotence. On an infinite-dimensional space the two notions differ: the derivative operator on the space of all polynomials is locally nilpotent, since every individual polynomial is annihilated by sufficiently many derivatives, but no single power of the operator annihilates every polynomial.
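
This distinction can be seen in a small Python sketch (our own representation of polynomials as finite coefficient lists, lowest degree first):

```python
def derivative(coeffs):
    """d/dx of a polynomial given as coefficients [c0, c1, c2, ...]."""
    d = [k * c for k, c in enumerate(coeffs)][1:]
    while d and d[-1] == 0:  # normalize: drop trailing zeros
        d.pop()
    return d

def local_index(coeffs):
    """Smallest k with T^k(v) = 0 for this particular polynomial v."""
    k = 0
    while coeffs:
        coeffs = derivative(coeffs)
        k += 1
    return k

# Each polynomial is annihilated by finitely many derivatives ...
print(local_index([1, 1, 1]))       # 3  (degree 2)
print(local_index([0] * 10 + [1]))  # 11 (degree 10)
# ... but no single k works for all polynomials at once, so the
# operator is locally nilpotent without being nilpotent.
```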

Notes

  1. Herstein (1975, p. 294)
  2. Beauregard & Fraleigh (1973, p. 312)
  3. Herstein (1975, p. 268)
  4. Nering (1970, p. 274)
  5. Mercer, Idris D. (31 October 2005). "Finding 'nonobvious' nilpotent matrices" (PDF). idmercer.com. Retrieved 5 April 2023.
  6. Beauregard & Fraleigh (1973, p. 312)
  7. Beauregard & Fraleigh (1973, pp. 312, 313)
  8. Sullivan, R. "Products of nilpotent matrices". Linear and Multilinear Algebra, Vol. 56, No. 3.

References

  - Beauregard, Raymond A.; Fraleigh, John B. (1973). A First Course in Linear Algebra: with Optional Introduction to Groups, Rings, and Fields. Boston: Houghton Mifflin.
  - Herstein, I. N. (1975). Topics in Algebra (2nd ed.). John Wiley & Sons.
  - Nering, Evar D. (1970). Linear Algebra and Matrix Theory (2nd ed.). New York: Wiley.