Characteristic polynomial

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation, [1] [2] [3] is the equation obtained by equating the characteristic polynomial to zero.

In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix. [4]

Motivation

In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the corresponding eigenvalue is the measure of the resulting change of magnitude of the vector.

More precisely, suppose the transformation is represented by a square matrix $A.$ Then an eigenvector $\mathbf{v}$ and the corresponding eigenvalue $\lambda$ must satisfy the equation $A\mathbf{v} = \lambda \mathbf{v},$ or, equivalently (since $\lambda \mathbf{v} = \lambda I \mathbf{v}$), $(\lambda I - A)\mathbf{v} = \mathbf{0},$ where $I$ is the identity matrix, and $\mathbf{v} \neq \mathbf{0}$ (although the zero vector satisfies this equation for every $\lambda,$ it is not considered an eigenvector).

It follows that the matrix $(\lambda I - A)$ must be singular, and its determinant $\det(\lambda I - A)$ must be zero.

In other words, the eigenvalues of $A$ are the roots of $\det(xI - A),$ which is a monic polynomial in $x$ of degree $n$ if $A$ is an $n \times n$ matrix. This polynomial is the characteristic polynomial of $A.$

Formal definition

Consider an $n \times n$ matrix $A.$ The characteristic polynomial of $A,$ denoted by $p_A(t),$ is the polynomial defined by [5] $$p_A(t) = \det(tI - A)$$ where $I$ denotes the $n \times n$ identity matrix.

Some authors define the characteristic polynomial to be $\det(A - tI).$ That polynomial differs from the one defined here by a sign $(-1)^n,$ so it makes no difference for properties like having as roots the eigenvalues of $A$; however the definition above always gives a monic polynomial, whereas the alternative definition is monic only when $n$ is even.

Examples

To compute the characteristic polynomial of the matrix $$A = \begin{pmatrix} 2 & 1 \\ -1 & 0 \end{pmatrix},$$ the determinant of the following is computed: $$tI - A = \begin{pmatrix} t - 2 & -1 \\ 1 & t \end{pmatrix}$$ and found to be $(t-2)t - 1(-1) = t^2 - 2t + 1,$ the characteristic polynomial of $A.$

Another example uses hyperbolic functions of a hyperbolic angle $\varphi.$ For the matrix take $$A = \begin{pmatrix} \cosh(\varphi) & \sinh(\varphi) \\ \sinh(\varphi) & \cosh(\varphi) \end{pmatrix}.$$ Its characteristic polynomial is $$\det(tI - A) = (t - \cosh(\varphi))^2 - \sinh^2(\varphi) = t^2 - 2t\cosh(\varphi) + 1.$$
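Both computations can be reproduced with a computer algebra system. The following is a minimal sketch assuming SymPy is available; its Matrix.charpoly method uses the same monic convention $\det(tI - A)$ as this article.

```python
# Verifying the two examples above with SymPy (assumes SymPy is installed).
from sympy import Matrix, symbols, cosh, sinh, simplify

t, phi = symbols('t phi')

# First example: the matrix [[2, 1], [-1, 0]].
A = Matrix([[2, 1], [-1, 0]])
print(A.charpoly(t).as_expr())            # t**2 - 2*t + 1

# Hyperbolic example: cosh^2(phi) - sinh^2(phi) = 1 collapses the constant term.
B = Matrix([[cosh(phi), sinh(phi)], [sinh(phi), cosh(phi)]])
print(simplify(B.charpoly(t).as_expr()))  # t**2 - 2*t*cosh(phi) + 1
```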

Properties

The characteristic polynomial $p_A(t)$ of an $n \times n$ matrix is monic (its leading coefficient is $1$) and its degree is $n.$ The most important fact about the characteristic polynomial was already mentioned in the motivational paragraph: the eigenvalues of $A$ are precisely the roots of $p_A(t)$ (this also holds for the minimal polynomial of $A,$ but its degree may be less than $n$). All coefficients of the characteristic polynomial are polynomial expressions in the entries of the matrix. In particular its constant coefficient $p_A(0)$ is $\det(-A) = (-1)^n \det(A),$ the coefficient of $t^n$ is one, and the coefficient of $t^{n-1}$ is $\operatorname{tr}(-A) = -\operatorname{tr}(A),$ where $\operatorname{tr}(A)$ is the trace of $A.$ (The signs given here correspond to the formal definition given in the previous section; for the alternative definition these would instead be $\det(A)$ and $(-1)^{n-1}\operatorname{tr}(A)$ respectively. [6])

For a $2 \times 2$ matrix $A,$ the characteristic polynomial is thus given by $$t^2 - \operatorname{tr}(A)\,t + \det(A).$$

Using the language of exterior algebra, the characteristic polynomial of an $n \times n$ matrix $A$ may be expressed as $$p_A(t) = \sum_{k=0}^{n} t^{n-k} (-1)^k \operatorname{tr}\left(\textstyle\bigwedge^k A\right)$$ where $\operatorname{tr}\left(\bigwedge^k A\right)$ is the trace of the $k$th exterior power of $A,$ which has dimension $\binom{n}{k}.$ This trace may be computed as the sum of all principal minors of $A$ of size $k.$ The recursive Faddeev–LeVerrier algorithm computes these coefficients more efficiently.
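The Faddeev–LeVerrier recursion fits in a few lines of code. The following is a sketch assuming NumPy; the helper name charpoly_coeffs is illustrative, not a library function.

```python
# A minimal sketch of the Faddeev-LeVerrier recursion, assuming NumPy.
import numpy as np

def charpoly_coeffs(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(tI - A), highest degree first."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)        # M_0 = 0
    coeffs = [1.0]                           # leading coefficient of t^n
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{n-k+1} I
        coeffs.append(-np.trace(A @ M) / k)  # c_{n-k} = -tr(A M_k) / k
    return coeffs

print(charpoly_coeffs(np.array([[2.0, 1.0], [-1.0, 0.0]])))  # [1.0, -2.0, 1.0]
```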

When the characteristic of the field of the coefficients is $0,$ each such trace may alternatively be computed as a single determinant, that of the $k \times k$ matrix, $$\operatorname{tr}\left(\textstyle\bigwedge^k A\right) = \frac{1}{k!} \begin{vmatrix} \operatorname{tr}A & k-1 & 0 & \cdots & 0 \\ \operatorname{tr}A^2 & \operatorname{tr}A & k-2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ \operatorname{tr}A^{k-1} & \operatorname{tr}A^{k-2} & \cdots & \cdots & 1 \\ \operatorname{tr}A^k & \operatorname{tr}A^{k-1} & \cdots & \cdots & \operatorname{tr}A \end{vmatrix}.$$

The Cayley–Hamilton theorem states that replacing $t$ by $A$ in the characteristic polynomial (interpreting the resulting powers as matrix powers, and the constant term $c$ as $c$ times the identity matrix) yields the zero matrix: $p_A(A) = 0.$ Informally speaking, every matrix satisfies its own characteristic equation. This statement is equivalent to saying that the minimal polynomial of $A$ divides the characteristic polynomial of $A.$
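The theorem is easy to illustrate numerically, here on the example matrix from above. This sketch assumes NumPy, whose np.poly function returns the coefficients of the characteristic polynomial of a square array.

```python
# Numerical illustration of the Cayley-Hamilton theorem (a sketch using NumPy).
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 0.0]])
coeffs = np.poly(A)                    # [1, -2, 1]: coefficients of det(tI - A)

# Evaluate p_A(A) with matrix powers; the constant term multiplies the identity.
n = A.shape[0]
p_of_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
print(np.allclose(p_of_A, np.zeros((n, n))))   # True: A satisfies its own equation
```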

Two similar matrices have the same characteristic polynomial. The converse, however, is not true in general: two matrices with the same characteristic polynomial need not be similar. For example, the $2 \times 2$ zero matrix and the matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ both have characteristic polynomial $t^2,$ but they are not similar.

A matrix $A$ and its transpose have the same characteristic polynomial. $A$ is similar to a triangular matrix if and only if its characteristic polynomial can be completely factored into linear factors over the field $K$ containing its entries (the same is true with the minimal polynomial instead of the characteristic polynomial). In this case $A$ is similar to a matrix in Jordan normal form.

Characteristic polynomial of a product of two matrices

If $A$ and $B$ are two square $n \times n$ matrices, then the characteristic polynomials of $AB$ and $BA$ coincide: $$p_{AB}(t) = p_{BA}(t).$$

When $A$ is non-singular this result follows from the fact that $AB$ and $BA$ are similar: $$BA = A^{-1}(AB)A.$$

For the case where both $A$ and $B$ are singular, the desired identity is an equality between polynomials in $t$ and the coefficients of the matrices. Thus, to prove this equality, it suffices to prove that it is verified on a non-empty open subset (for the usual topology, or, more generally, for the Zariski topology) of the space of all the coefficients. As the non-singular matrices form such an open subset of the space of all matrices, this proves the result.

More generally, if $A$ is a matrix of order $m \times n$ and $B$ is a matrix of order $n \times m,$ then $AB$ is an $m \times m$ matrix and $BA$ is an $n \times n$ matrix, and one has $$p_{BA}(t) = t^{n-m} p_{AB}(t).$$

To prove this, one may suppose $n > m,$ by exchanging, if needed, $A$ and $B.$ Then, by bordering $A$ on the bottom by $n - m$ rows of zeros, and $B$ on the right by $n - m$ columns of zeros, one gets two $n \times n$ matrices $A'$ and $B'$ such that $B'A' = BA$ and $A'B'$ is equal to $AB$ bordered by $n - m$ rows and columns of zeros. The result follows from the case of square matrices, by comparing the characteristic polynomials of $A'B'$ and $AB.$
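The identity for rectangular factors can also be checked numerically, up to floating-point error. The following sketch assumes NumPy and uses the fact that multiplying a polynomial by $t^{n-m}$ just appends $n - m$ zero coefficients.

```python
# Sketch checking p_BA(t) = t^(n-m) p_AB(t) for A of size m x n and B of size n x m.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

p_AB = np.poly(A @ B)          # degree m polynomial, coefficients highest first
p_BA = np.poly(B @ A)          # degree n polynomial

# t^(n-m) * p_AB(t) has the same coefficients with n-m trailing zeros appended.
print(np.allclose(p_BA, np.concatenate([p_AB, np.zeros(n - m)])))  # True
```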

Characteristic polynomial of $A^k$

If $\lambda$ is an eigenvalue of a square matrix $A$ with eigenvector $\mathbf{v},$ then $\lambda^k$ is an eigenvalue of $A^k$ because $$A^k \mathbf{v} = A^{k-1} A \mathbf{v} = \lambda A^{k-1} \mathbf{v} = \cdots = \lambda^k \mathbf{v}.$$

The multiplicities can be shown to agree as well, and this generalizes to any polynomial in place of $t^k$: [7]

Theorem. Let $A$ be a square $n \times n$ matrix and let $f(t)$ be a polynomial. If the characteristic polynomial of $A$ has a factorization $$p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)$$ then the characteristic polynomial of the matrix $f(A)$ is given by $$p_{f(A)}(t) = (t - f(\lambda_1))(t - f(\lambda_2)) \cdots (t - f(\lambda_n)).$$

That is, the algebraic multiplicity of $\mu$ in $f(A)$ equals the sum of algebraic multiplicities of $\lambda$ in $A$ over $\lambda$ such that $f(\lambda) = \mu.$ In particular, $\operatorname{tr}(f(A)) = \textstyle\sum_{i=1}^{n} f(\lambda_i)$ and $\det(f(A)) = \textstyle\prod_{i=1}^{n} f(\lambda_i).$ Here a polynomial $f(t) = t^3 + 1,$ for example, is evaluated on a matrix $A$ simply as $f(A) = A^3 + I.$

The theorem applies to matrices and polynomials over any field or commutative ring. [8] However, the assumption that $p_A(t)$ has a factorization into linear factors is not always true, unless the matrix is over an algebraically closed field such as the complex numbers.
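As a concrete check of the theorem over the real numbers, the following sketch assumes NumPy and uses $f(t) = t^3 + 1$ on a matrix with eigenvalues $2$ and $-1.$

```python
# Checking that the eigenvalues of f(A) are f applied to those of A, for f(t) = t^3 + 1.
import numpy as np

A = np.array([[0.0, 1.0], [2.0, 1.0]])    # characteristic polynomial (t - 2)(t + 1)
f_of_A = np.linalg.matrix_power(A, 3) + np.eye(2)

eigs_A = np.linalg.eigvals(A)
eigs_fA = np.linalg.eigvals(f_of_A)
print(np.allclose(np.sort(eigs_fA), np.sort(eigs_A**3 + 1)))  # True

# In particular tr(f(A)) = f(2) + f(-1) = 9 and det(f(A)) = f(2) * f(-1) = 0.
print(np.isclose(np.trace(f_of_A), 9.0), np.isclose(np.linalg.det(f_of_A), 0.0))
```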

Proof

This proof only applies to matrices and polynomials over the complex numbers (or any algebraically closed field). In that case, the characteristic polynomial of any square matrix can always be factorized as $$p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)$$ where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A,$ possibly repeated. Moreover, the Jordan decomposition theorem guarantees that any square matrix $A$ can be decomposed as $A = S^{-1} U S,$ where $S$ is an invertible matrix and $U$ is upper triangular with $\lambda_1, \ldots, \lambda_n$ on the diagonal (with each eigenvalue repeated according to its algebraic multiplicity). (The Jordan normal form has stronger properties, but these are sufficient; alternatively the Schur decomposition can be used, which is less popular but somewhat easier to prove.)

Let $f(t) = \sum_i \alpha_i t^i.$ Then $$f(A) = \textstyle\sum_i \alpha_i (S^{-1} U S)^i = \textstyle\sum_i \alpha_i S^{-1} U^i S = S^{-1} f(U) S.$$ For an upper triangular matrix $U$ with diagonal $\lambda_1, \ldots, \lambda_n,$ the matrix $U^i$ is upper triangular with diagonal $\lambda_1^i, \ldots, \lambda_n^i,$ and hence $f(U)$ is upper triangular with diagonal $f(\lambda_1), \ldots, f(\lambda_n).$ Therefore, the eigenvalues of $f(U)$ are $f(\lambda_1), \ldots, f(\lambda_n).$ Since $f(A) = S^{-1} f(U) S$ is similar to $f(U),$ it has the same eigenvalues, with the same algebraic multiplicities.

Secular function and secular equation

Secular function

The term secular function has been used for what is now called characteristic polynomial (in some literature the term secular function is still used). The term comes from the fact that the characteristic polynomial was used to calculate secular perturbations (on a time scale of a century, that is, slow compared to annual motion) of planetary orbits, according to Lagrange's theory of oscillations.

Secular equation

Secular equation may have several meanings. In linear algebra it is sometimes used in place of characteristic equation. In astronomy it is the algebraic or numerical expression of the magnitude of the inequalities in a planet's motion that remain after the inequalities of a short period have been allowed for. [9] In molecular orbital calculations relating to the energy of the electron and its wave function it is also used instead of the characteristic equation.

For general associative algebras

The above definition of the characteristic polynomial of a matrix $A \in M_n(K)$ with entries in a field $K$ generalizes without any changes to the case when $K$ is just a commutative ring. Garibaldi (2004) defines the characteristic polynomial for elements of an arbitrary finite-dimensional (associative, but not necessarily commutative) algebra over a field $K$ and proves the standard properties of the characteristic polynomial in this generality.
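For instance, the definition works unchanged when the entries lie in a polynomial ring rather than a field. The following sketch assumes SymPy and takes entries in $\mathbb{Z}[x]$; since the matrix is upper triangular, the result factors as $(t - x)(t - x^2).$

```python
# Characteristic polynomial with entries in the commutative ring Z[x] (assumes SymPy).
from sympy import Matrix, symbols, expand

x, t = symbols('x t')
M = Matrix([[x, 1], [0, x**2]])
p = M.charpoly(t).as_expr()
print(expand(p))                      # t**2 - t*x**2 - t*x + x**3
print(expand((t - x) * (t - x**2)))   # the same polynomial, factored form
```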

References

  1. Guillemin, Ernst (1953). Introductory Circuit Theory. Wiley. pp. 366, 541. ISBN 0471330663.
  2. Forsythe, George E.; Motzkin, Theodore (January 1952). "An Extension of Gauss' Transformation for Improving the Condition of Systems of Linear Equations" (PDF). Mathematics of Computation. 6 (37): 18–34. doi:10.1090/S0025-5718-1952-0048162-0. Retrieved 3 October 2020.
  3. Frank, Evelyn (1946). "On the zeros of polynomials with complex coefficients". Bulletin of the American Mathematical Society. 52 (2): 144–157. doi:10.1090/S0002-9904-1946-08526-2.
  4. "Characteristic Polynomial of a Graph – Wolfram MathWorld". Retrieved August 26, 2011.
  5. Roman, Steven (1992). Advanced Linear Algebra (2nd ed.). Springer. p. 137. ISBN 3540978372.
  6. Theorem 4 in these lecture notes.
  7. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. pp. 108–109, Section 2.4.2. ISBN 978-0-521-54823-6.
  8. Lang, Serge (1993). Algebra. New York: Springer. p. 567, Theorem 3.10. ISBN 978-1-4613-0041-0. OCLC 852792828.
  9. "secular equation". Retrieved January 21, 2010.