Jordan matrix

In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and one 1), where each block along the diagonal, called a Jordan block, has the following form:

$$\begin{bmatrix}
\lambda & 1 & 0 & \cdots & 0 \\
0 & \lambda & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \lambda & 1 \\
0 & 0 & 0 & 0 & \lambda
\end{bmatrix}$$

Definition

Every Jordan block is specified by its dimension n and its eigenvalue λ ∈ R, and is denoted as Jλ,n. It is an n × n matrix of zeroes everywhere except for the diagonal, which is filled with λ, and the superdiagonal, which is composed of ones.

Any block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix. This (n1 + ⋯ + nr) × (n1 + ⋯ + nr) square matrix, consisting of r diagonal blocks, can be compactly indicated as Jλ1,n1 ⊕ ⋯ ⊕ Jλr,nr or diag(Jλ1,n1, …, Jλr,nr), where the i-th Jordan block is Jλi,ni.

For example, the matrix

$$J = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & i & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & i & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & i & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 7
\end{bmatrix}$$

is a 10 × 10 Jordan matrix with a 3 × 3 block with eigenvalue 0, two 2 × 2 blocks with eigenvalue the imaginary unit i, and a 3 × 3 block with eigenvalue 7. Its Jordan-block structure is written as either J0,3 ⊕ Ji,2 ⊕ Ji,2 ⊕ J7,3 or diag(J0,3, Ji,2, Ji,2, J7,3).
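A matrix of this form can be assembled numerically as a direct sum of its blocks. The following is a minimal sketch using NumPy and SciPy; the helper jordan_block is ad hoc (not a library routine), and the blocks match the example above.

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, n):
    """n-by-n Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

# J_{0,3} (+) J_{i,2} (+) J_{i,2} (+) J_{7,3}: the 10 x 10 example above
J = block_diag(jordan_block(0, 3),
               jordan_block(1j, 2),
               jordan_block(1j, 2),
               jordan_block(7, 3))

print(J.shape)     # (10, 10)
print(np.diag(J))  # eigenvalues 0, 0, 0, i, i, i, i, 7, 7, 7 along the diagonal
```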

Linear algebra

Any n × n square matrix A whose elements are in an algebraically closed field K is similar to a Jordan matrix J, also in K^(n×n), which is unique up to a permutation of its diagonal blocks. J is called the Jordan normal form of A and corresponds to a generalization of the diagonalization procedure.[1][2][3] A diagonalizable matrix is similar, in fact, to a special case of Jordan matrix: the matrix whose blocks are all 1 × 1.[4][5][6]

More generally, given a Jordan matrix J = Jλ1,m1 ⊕ Jλ2,m2 ⊕ ⋯ ⊕ JλN,mN, that is, whose kth diagonal block, 1 ≤ k ≤ N, is the Jordan block Jλk,mk and whose diagonal elements λk may not all be distinct, the geometric multiplicity of λ ∈ K for the matrix J, indicated as gmul_J λ, corresponds to the number of Jordan blocks whose eigenvalue is λ. The index of an eigenvalue λ for J, indicated as idx_J λ, is defined as the dimension of the largest Jordan block associated to that eigenvalue.

The same goes for all the matrices A similar to J, so idx_A λ can be defined accordingly with respect to the Jordan normal form of A for any of its eigenvalues λ ∈ spec A. In this case one can check that the index of λ for A is equal to its multiplicity as a root of the minimal polynomial of A (whereas, by definition, its algebraic multiplicity for A, mul_A λ, is its multiplicity as a root of the characteristic polynomial of A, det(A − xI) ∈ K[x]). An equivalent necessary and sufficient condition for A to be diagonalizable in K is that all of its eigenvalues have index equal to 1; that is, its minimal polynomial has only simple roots.

Note that knowing a matrix's spectrum with all of its algebraic/geometric multiplicities and indices does not always allow for the computation of its Jordan normal form (this may be a sufficient condition only for spectrally simple, usually low-dimensional matrices). Indeed, determining the Jordan normal form is generally a computationally challenging task. From the vector space point of view, the Jordan normal form is equivalent to finding a direct-sum decomposition of the domain into the generalized eigenspaces represented by the Jordan blocks, for which the associated generalized eigenvectors form a basis.
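Symbolic computation sidesteps the numerical sensitivity, at least for small exact matrices. A minimal sketch with SymPy, using an ad hoc 2 × 2 defective matrix (a double eigenvalue 2 with only one eigenvector, so its Jordan normal form is the single block J2,2):

```python
from sympy import Matrix, simplify

# Ad hoc defective example: characteristic polynomial (x - 2)^2,
# geometric multiplicity 1, so the Jordan form is one 2 x 2 block.
A = Matrix([[3, 1],
            [-1, 1]])

P, J = A.jordan_form()                  # exact symbolic computation
print(J)                                # Matrix([[2, 1], [0, 2]])
print(simplify(P * J * P.inv() - A))    # zero matrix, so A = P J P^(-1)
```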

Functions of matrices

Let A ∈ ℂ^(n×n) (that is, an n × n complex matrix) and C ∈ GL_n(ℂ) be the change-of-basis matrix to the Jordan normal form of A; that is, A = C⁻¹JC. Now let f(z) be a holomorphic function on an open set Ω such that spec A ⊂ Ω ⊆ ℂ; that is, the spectrum of the matrix is contained inside the domain of holomorphy of f. Let

$$f(z) = \sum_{h=0}^{\infty} a_h (z - z_0)^h$$

be the power series expansion of f around z0 ∈ Ω, which will hereinafter be supposed to be 0 for simplicity's sake. The matrix f(A) is then defined via the following formal power series

$$f(A) = \sum_{h=0}^{\infty} a_h A^h$$

and is absolutely convergent with respect to the Euclidean norm of ℂ^(n×n). To put it another way, f(A) converges absolutely for every square matrix whose spectral radius is less than the radius of convergence of f around 0, and it converges uniformly on any compact subset of ℂ^(n×n) satisfying this property in the matrix Lie group topology.
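As a concrete illustration of the convergence criterion, take f(z) = 1/(1 − z), whose power series around 0 has radius of convergence 1: the series Σ A^k then converges to (I − A)⁻¹ whenever the spectral radius satisfies ρ(A) < 1. A small numerical sketch (the matrix A is an arbitrary example with ρ(A) = 0.5):

```python
import numpy as np

A = np.array([[0.4, 0.3],
              [0.1, 0.2]])
rho = max(abs(np.linalg.eigvals(A)))      # spectral radius, 0.5 here

# Partial sums of f(A) = sum_k A^k, convergent because rho < 1
S, term = np.zeros_like(A), np.eye(2)
for _ in range(200):
    S += term
    term = term @ A

print(rho)
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))   # True
```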

The Jordan normal form allows the computation of functions of matrices without explicitly computing an infinite series, which is one of the main achievements of Jordan matrices. Using the facts that the kth power (k ∈ ℕ₀) of a block diagonal matrix is the block diagonal matrix whose blocks are the kth powers of the respective blocks, that is,

$$\left( A_1 \oplus A_2 \oplus \cdots \oplus A_r \right)^k = A_1^k \oplus A_2^k \oplus \cdots \oplus A_r^k,$$

and that A^k = C⁻¹J^kC, the above matrix power series becomes

$$f(A) = C^{-1} f(J) C = C^{-1} \left( \bigoplus_{k=1}^{N} f\left(J_{\lambda_k, m_k}\right) \right) C$$

where the last series need not be computed explicitly via power series of every Jordan block. In fact, if λ ∈ Ω, any holomorphic function of a Jordan block f(Jλ,n) = f(λI + Z) has a finite power series around λI because Z^n = 0. Here, Z is the nilpotent part of Jλ,n, and Z^k has all 0's except 1's along the kth superdiagonal. Thus f(Jλ,n) is the following upper triangular matrix:

$$f(J_{\lambda,n}) = \sum_{k=0}^{n-1} \frac{f^{(k)}(\lambda)}{k!} Z^k =
\begin{bmatrix}
f(\lambda) & f'(\lambda) & \dfrac{f''(\lambda)}{2} & \cdots & \dfrac{f^{(n-1)}(\lambda)}{(n-1)!} \\
0 & f(\lambda) & f'(\lambda) & \cdots & \dfrac{f^{(n-2)}(\lambda)}{(n-2)!} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & f(\lambda) & f'(\lambda) \\
0 & 0 & \cdots & 0 & f(\lambda)
\end{bmatrix}$$

As a consequence of this, the computation of any function of a matrix is straightforward whenever its Jordan normal form and its change-of-basis matrix are known. For example, using f(z) = 1/z, the inverse of Jλ,n is:

$$J_{\lambda,n}^{-1} = \sum_{k=0}^{n-1} \frac{(-Z)^k}{\lambda^{k+1}} =
\begin{bmatrix}
\lambda^{-1} & -\lambda^{-2} & \lambda^{-3} & \cdots & (-1)^{n-1}\lambda^{-n} \\
0 & \lambda^{-1} & -\lambda^{-2} & \cdots & (-1)^{n-2}\lambda^{-(n-1)} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda^{-1} & -\lambda^{-2} \\
0 & 0 & \cdots & 0 & \lambda^{-1}
\end{bmatrix}$$
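The finite series for a single Jordan block can be checked directly in code: summing f^(k)(λ) Z^k / k! for f(z) = 1/z reproduces the inverse above. A sketch under these assumptions (the helper functions are ad hoc):

```python
import numpy as np
from math import factorial

def jordan_block(lam, n):
    """n-by-n Jordan block with eigenvalue lam."""
    return lam * np.eye(n) + np.eye(n, k=1)

def f_of_jordan_block(derivs_at_lam, n):
    """Evaluate sum_{k<n} f^(k)(lam)/k! * Z^k, Z being the nilpotent part."""
    Z = np.eye(n, k=1)
    out, Zk = np.zeros((n, n)), np.eye(n)
    for k in range(n):
        out += derivs_at_lam[k] / factorial(k) * Zk
        Zk = Zk @ Z
    return out

lam, n = 3.0, 4
# For f(z) = 1/z, the derivatives are f^(k)(lam) = (-1)^k k! / lam^(k+1)
derivs = [(-1) ** k * factorial(k) / lam ** (k + 1) for k in range(n)]

F = f_of_jordan_block(derivs, n)
print(np.allclose(F, np.linalg.inv(jordan_block(lam, n))))   # True
```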

Also, spec f(A) = f(spec A); that is, every eigenvalue λ ∈ spec A corresponds to the eigenvalue f(λ) ∈ spec f(A), but it has, in general, different algebraic multiplicity, geometric multiplicity and index. However, the algebraic multiplicity may be computed as follows:

$$\operatorname{mul}_{f(A)} f(\lambda) = \sum_{\mu \in \operatorname{spec} A \cap f^{-1}(f(\lambda))} \operatorname{mul}_A \mu.$$

The function f(T) of a linear transformation T between vector spaces can be defined in a similar way according to the holomorphic functional calculus, where Banach space and Riemann surface theories play a fundamental role. In the case of finite-dimensional spaces, both theories perfectly match.

Dynamical systems

Now suppose a (complex) dynamical system is simply defined by the equation

$$\dot{\mathbf{z}}(t) = A(\mathbf{c})\,\mathbf{z}(t), \qquad \mathbf{z}(0) = \mathbf{z}_0 \in \mathbb{C}^n,$$

where z(t) is the (n-dimensional) curve parametrization of an orbit on the Riemann surface of the dynamical system, whereas A(c) is an n × n complex matrix whose elements are complex functions of a d-dimensional parameter c ∈ ℂ^d.

Even if A depends continuously on the parameter c, that is, A ∈ C⁰(ℂ^d, ℂ^(n×n)), the Jordan normal form of the matrix is continuously deformed almost everywhere on ℂ^d but, in general, not everywhere: there is some critical submanifold of ℂ^d on which the Jordan form abruptly changes its structure whenever the parameter crosses or simply "travels" around it (monodromy). Such changes mean that several Jordan blocks (either belonging to different eigenvalues or not) join to a unique Jordan block, or vice versa (that is, one Jordan block splits into two or more different ones). Many aspects of bifurcation theory for both continuous and discrete dynamical systems can be interpreted with the analysis of functional Jordan matrices.

In terms of the tangent space dynamics, this means that the decomposition of the dynamical system's phase space changes and, for example, different orbits gain periodicity, or lose it, or shift from a certain kind of periodicity to another (such as period-doubling, cf. logistic map).

In a sentence, the qualitative behaviour of such a dynamical system may substantially change under the versal deformation of the Jordan normal form of A(c).

Linear ordinary differential equations

The simplest example of a dynamical system is a system of linear, constant-coefficient, ordinary differential equations; that is, let A ∈ ℂ^(n×n) and z0 ∈ ℂ^n:

$$\dot{\mathbf{z}}(t) = A\,\mathbf{z}(t), \qquad \mathbf{z}(0) = \mathbf{z}_0,$$

whose direct closed-form solution involves computation of the matrix exponential:

$$\mathbf{z}(t) = e^{tA}\,\mathbf{z}_0.$$
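Numerically, the closed-form solution is usually evaluated with a matrix exponential routine rather than via the Jordan form itself; a minimal sketch with SciPy (the matrix A and initial state z0 are arbitrary illustrative data):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary constant-coefficient system
z0 = np.array([1.0, 0.0])

def z(t):
    """z(t) = exp(tA) z0 solves dz/dt = A z with z(0) = z0."""
    return expm(t * A) @ z0

print(z(0.0))   # [1. 0.]
print(z(1.0))   # state after one time unit
```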

Another way, provided the solution is restricted to the local Lebesgue space of n-dimensional vector fields, z ∈ L¹_loc(ℝ₊)^n, is to use its Laplace transform Z(s) = L[z](s). In this case

$$\mathbf{Z}(s) = \left( sI - A \right)^{-1} \mathbf{z}_0.$$

The matrix function (A − sI)⁻¹ is called the resolvent matrix of the differential operator d/dt − A. It is meromorphic with respect to the complex parameter s since its matrix elements are rational functions whose denominator is equal, for all of them, to det(A − sI). Its polar singularities are the eigenvalues of A, whose order equals their index for it; that is, ord_(A − sI)⁻¹ λ = idx_A λ.
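The pole order of the resolvent can be checked symbolically on a small example; for the ad hoc defective matrix used earlier (single eigenvalue 2 of index 2), every entry of (A − sI)⁻¹ has a pole of order 2 at s = 2. A sketch with SymPy:

```python
from sympy import Matrix, symbols, eye, factor, simplify

s = symbols('s')
A = Matrix([[3, 1],
            [-1, 1]])                    # single eigenvalue 2, index 2

R = (A - s * eye(2)).inv().applyfunc(simplify)   # resolvent matrix (A - sI)^(-1)
print(factor((A - s * eye(2)).det()))    # (s - 2)**2
print(R)                                 # each entry has a pole of order 2 at s = 2
```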

Notes

  1. Beauregard & Fraleigh (1973, pp. 310–316)
  2. Golub & Van Loan (1996, p. 317)
  3. Nering (1970, pp. 118–127)
  4. Beauregard & Fraleigh (1973, pp. 270–274)
  5. Golub & Van Loan (1996, p. 316)
  6. Nering (1970, pp. 113–118)

References

Beauregard, Raymond A.; Fraleigh, John B. (1973). A First Course in Linear Algebra: with Optional Introduction to Groups, Rings, and Fields. Boston: Houghton Mifflin.

Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press.

Nering, Evar D. (1970). Linear Algebra and Matrix Theory (2nd ed.). New York: Wiley.