Canonical basis

In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context. Examples include the Kazhdan–Lusztig basis of an Iwahori–Hecke algebra, Lusztig's canonical basis of a quantized enveloping algebra, and, in linear algebra, a basis of generalized eigenvectors composed entirely of Jordan chains; these cases are treated below.

Representation theory

The canonical basis for the irreducible representations of a quantized enveloping algebra of type $ADE$ and also for the plus part of that algebra was introduced by Lusztig [2] by two methods: an algebraic one (using a braid group action and PBW bases) and a topological one (using intersection cohomology). Specializing the parameter $q$ to $q = 1$ yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was not known earlier. Specializing the parameter $q$ to $q = 0$ yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations was considered independently by Kashiwara; [3] it is sometimes called the crystal basis. The definition of the canonical basis was extended to the Kac–Moody setting by Kashiwara [4] (by an algebraic method) and by Lusztig [5] (by a topological method).

There is a general concept underlying these bases:

Consider the ring of integral Laurent polynomials $\mathcal{Z} := \mathbb{Z}[v, v^{-1}]$ with its two subrings $\mathcal{Z}^{\pm} := \mathbb{Z}[v^{\pm 1}]$ and the automorphism $\overline{\,\cdot\,}$ defined by $\overline{v} := v^{-1}$.

A precanonical structure on a free $\mathcal{Z}$-module $F$ consists of:

- a standard basis $(t_i)_{i \in I}$ of $F$,
- an interval finite partial order on $I$, that is, $(-\infty, i] := \{ j \in I \mid j \leq i \}$ is finite for all $i \in I$, and
- a dualization operation, that is, a bijection $F \to F$ of order two that is $\overline{\,\cdot\,}$-semilinear and will be denoted by $\overline{\,\cdot\,}$ as well.

If a precanonical structure is given, then one can define the $\mathcal{Z}^{\pm}$ submodule $F^{\pm} := \sum_{i \in I} \mathcal{Z}^{\pm} t_i$ of $F$.

A canonical basis of the precanonical structure is then a $\mathcal{Z}$-basis $(c_i)_{i \in I}$ of $F$ that satisfies:

- $\overline{c_i} = c_i$, and
- $c_i \in \sum_{j \leq i} \mathcal{Z}^{+} t_j$ and $c_i \equiv t_i \mod vF^{+}$

for all $i \in I$.

One can show that there exists at most one canonical basis for each precanonical structure. [6] A sufficient condition for existence is that the polynomials $r_{ij} \in \mathcal{Z}$ defined by $\overline{t_j} = \sum_i r_{ij} t_i$ satisfy $r_{ii} = 1$ and $r_{ij} \neq 0 \implies i \leq j$.

A canonical basis induces an isomorphism from $F^{+} \cap \overline{F^{+}} = \sum_i \mathbb{Z} c_i$ to $F^{+}/vF^{+}$.
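
Under the sufficient condition above, the canonical basis can be computed by a triangular recursion: proceeding up the partial order, the coefficient of $t_i$ in $c_j$ is the unique element $q \in v\mathbb{Z}[v]$ with $q - \overline{q}$ equal to an already-determined bar-antiinvariant Laurent polynomial. The following is a minimal sketch of that recursion in Python with sympy; the encoding of the bar involution as an upper unitriangular matrix $R$ over $\mathbb{Z}[v, v^{-1}]$ and the helper names (`bar`, `positive_part`, `canonical_basis`) are illustrative choices, not from the source.

```python
import sympy as sp

v = sp.symbols('v')

def bar(p):
    # The involution of Z[v, 1/v] sending v to 1/v.
    return sp.expand(sp.sympify(p).subs(v, 1 / v))

def positive_part(a):
    # For a Laurent polynomial a with bar(a) = -a and no constant term,
    # return the sum q of its terms lying in v*Z[v]; then q - bar(q) = a.
    out = sp.Integer(0)
    for term in sp.expand(a).as_ordered_terms():
        _, k = term.as_coeff_exponent(v)
        if k > 0:
            out += term
    return out

def canonical_basis(R):
    # R is the matrix of the bar involution on the standard basis:
    # bar(t_j) = sum_i R[i, j] * t_i, with the index set listed in a linear
    # order refining the partial order, so R is upper unitriangular.
    n = R.shape[0]
    Q = sp.eye(n)  # column j will hold c_j = sum_i Q[i, j] * t_i
    for j in range(n):
        for i in range(j - 1, -1, -1):
            # Bar-invariance forces Q[i, j] - bar(Q[i, j]) to equal the
            # already-known contribution from indices strictly above i.
            a = sp.expand(sum(R[i, k] * bar(Q[k, j]) for k in range(i + 1, j + 1)))
            Q[i, j] = positive_part(a)
    return Q
```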

Hecke algebras

Let $(W, S)$ be a Coxeter group. The corresponding Iwahori–Hecke algebra $H$ has the standard basis $(T_w)_{w \in W}$, the group is partially ordered by the Bruhat order, which is interval finite, and $H$ has a dualization operation defined by $\overline{T_w} := T_{w^{-1}}^{-1}$. This is a precanonical structure on $H$ that satisfies the sufficient condition above, and the corresponding canonical basis of $H$ is the Kazhdan–Lusztig basis

$$C'_w = v^{-\ell(w)} \sum_{y \leq w} P_{y,w}(v^2)\, T_y$$

with $P_{y,w}$ being the Kazhdan–Lusztig polynomials.
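
As a toy check of the recursion sketched above, take $W = S_2 = \{e, s\}$. Assuming the normalization of the quadratic relation in which $\overline{T_s} = T_s^{-1} = T_s + (v - v^{-1})T_e$ (this is what makes the matrix of the bar involution unitriangular), the canonical basis element is $T_s + vT_e$:

```python
# Continuing the sketch above (uses v, bar, canonical_basis).
# Rank-1 Iwahori-Hecke algebra, standard basis t_0 = T_e, t_1 = T_s,
# with bar(T_s) = T_s + (v - 1/v) * T_e in the assumed normalization.
R = sp.Matrix([[1, v - 1/v],
               [0, 1]])

Q = canonical_basis(R)
print(Q[:, 1].T)   # [v, 1]: the canonical element is T_s + v*T_e

# Bar-invariance check: the coefficients of bar(c_s) equal those of c_s.
c = Q[:, 1]
barred = (R * c.applyfunc(bar)).expand()
assert barred == c.expand()
```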

Linear algebra

If we are given an n × n matrix $A$ and wish to find a matrix $J$ in Jordan normal form, similar to $A$, we are interested only in sets of linearly independent generalized eigenvectors of $A$. A matrix in Jordan normal form is an "almost diagonal matrix", that is, as close to diagonal as possible. A diagonal matrix is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector.

Every n × n matrix $A$ possesses n linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity $\mu$, then $A$ will have $\mu$ linearly independent generalized eigenvectors corresponding to $\lambda$.

For any given n × n matrix $A$, there are infinitely many ways to pick the n linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that $A$ is similar to a matrix in Jordan normal form. In particular,

Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.

Thus, once we have determined that a generalized eigenvector $x_m$ of rank m is in a canonical basis, it follows that the m − 1 vectors $x_{m-1}, x_{m-2}, \ldots, x_1$ that are in the Jordan chain generated by $x_m$ are also in the canonical basis. [7]
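
Concretely, the chain generated by $x_m$ is obtained by repeatedly applying $A - \lambda I$, since $x_j = (A - \lambda I)x_{j+1}$. Below is a minimal numpy sketch; the function name `jordan_chain` is an illustrative choice, and the floating-point zero test via `np.allclose` is only dependable for well-scaled matrices.

```python
import numpy as np

def jordan_chain(A, lam, x_m):
    """Return the Jordan chain [x_1, ..., x_m] generated by a generalized
    eigenvector x_m of rank m for the eigenvalue lam, where
    x_j = (A - lam*I) @ x_{j+1} and x_1 is an ordinary eigenvector."""
    N = A - lam * np.eye(A.shape[0])
    chain = [np.asarray(x_m, dtype=float)]
    while not np.allclose(N @ chain[0], 0.0):
        chain.insert(0, N @ chain[0])
    return chain
```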

Computation

Let $\lambda_i$ be an eigenvalue of $A$ of algebraic multiplicity $\mu_i$. First, find the ranks (matrix ranks) of the matrices $(A - \lambda_i I), (A - \lambda_i I)^2, \ldots, (A - \lambda_i I)^{m_i}$. The integer $m_i$ is determined to be the first integer for which $(A - \lambda_i I)^{m_i}$ has rank $n - \mu_i$ (n being the number of rows or columns of $A$, that is, $A$ is n × n).

Now define

$$\rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots, m_i).$$

The variable $\rho_k$ designates the number of linearly independent generalized eigenvectors of rank k (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue $\lambda_i$ that will appear in a canonical basis for $A$. Note that

$$\operatorname{rank}(A - \lambda_i I)^0 = \operatorname{rank}(I) = n.$$

Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector). [8]
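
A minimal numerical sketch of this bookkeeping is below; the function name `chain_structure` is an illustrative choice, and `numpy.linalg.matrix_rank` makes floating-point rank decisions, so the sketch is only reliable for small, well-conditioned matrices.

```python
import numpy as np

def chain_structure(A, lam, mu):
    """For an eigenvalue lam of algebraic multiplicity mu, return
    (ranks, rho), where ranks[k-1] = rank((A - lam*I)^k) and rho[k-1] is
    the number of rank-k generalized eigenvectors in a canonical basis."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    ranks = [n]                       # rank of (A - lam*I)^0 = I
    power = np.eye(n)
    while ranks[-1] > n - mu:         # stop at the first power with rank n - mu
        power = power @ N
        ranks.append(np.linalg.matrix_rank(power))
    rho = [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]
    return ranks[1:], rho             # sum(rho) == mu
```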

Example

This example illustrates a canonical basis with two Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order. [9] The matrix

$$A = \begin{pmatrix} 4 & 1 & 1 & 0 & 0 & -1 \\ 0 & 4 & 2 & 0 & 0 & 1 \\ 0 & 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 5 \end{pmatrix}$$

has eigenvalues $\lambda_1 = 4$ and $\lambda_2 = 5$ with algebraic multiplicities $\mu_1 = 4$ and $\mu_2 = 2$, but geometric multiplicities $\gamma_1 = 1$ and $\gamma_2 = 1$.

For $\lambda_1 = 4$ we have $n - \mu_1 = 6 - 4 = 2$:

$(A - 4I)$ has rank 5,
$(A - 4I)^2$ has rank 4,
$(A - 4I)^3$ has rank 3,
$(A - 4I)^4$ has rank 2.

Therefore $m_1 = 4$, and

$\rho_4 = \operatorname{rank}(A - 4I)^3 - \operatorname{rank}(A - 4I)^4 = 3 - 2 = 1,$
$\rho_3 = \operatorname{rank}(A - 4I)^2 - \operatorname{rank}(A - 4I)^3 = 4 - 3 = 1,$
$\rho_2 = \operatorname{rank}(A - 4I)^1 - \operatorname{rank}(A - 4I)^2 = 5 - 4 = 1,$
$\rho_1 = \operatorname{rank}(A - 4I)^0 - \operatorname{rank}(A - 4I)^1 = 6 - 5 = 1.$

Thus, a canonical basis for $A$ will have, corresponding to $\lambda_1 = 4$, one generalized eigenvector each of ranks 4, 3, 2 and 1.

For $\lambda_2 = 5$ we have $n - \mu_2 = 6 - 2 = 4$:

$(A - 5I)$ has rank 5,
$(A - 5I)^2$ has rank 4.

Therefore $m_2 = 2$, and

$\rho_2 = \operatorname{rank}(A - 5I)^1 - \operatorname{rank}(A - 5I)^2 = 5 - 4 = 1,$
$\rho_1 = \operatorname{rank}(A - 5I)^0 - \operatorname{rank}(A - 5I)^1 = 6 - 5 = 1.$

Thus, a canonical basis for $A$ will have, corresponding to $\lambda_2 = 5$, one generalized eigenvector each of ranks 2 and 1.

A canonical basis for $A$ is a set $\{x_1, x_2, x_3, x_4, y_1, y_2\}$ consisting of a Jordan chain $x_1, x_2, x_3, x_4$ for $\lambda_1 = 4$ and a Jordan chain $y_1, y_2$ for $\lambda_2 = 5$.

$x_1$ is the ordinary eigenvector associated with $\lambda_1$; $x_2$, $x_3$ and $x_4$ are generalized eigenvectors of ranks 2, 3 and 4 associated with $\lambda_1$. $y_1$ is the ordinary eigenvector associated with $\lambda_2$; $y_2$ is a generalized eigenvector of rank 2 associated with $\lambda_2$.

A matrix $J$ in Jordan normal form, similar to $A$, is obtained as follows:

$$J = M^{-1} A M = \begin{pmatrix} 4 & 1 & 0 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 & 0 \\ 0 & 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 5 \end{pmatrix},$$

where the matrix $M = (x_1 \; x_2 \; x_3 \; x_4 \; y_1 \; y_2)$ is a generalized modal matrix for $A$ and $AM = MJ$. [10]
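
As a check, the whole example can be reproduced with sympy, whose `jordan_form` method returns a generalized modal matrix together with the Jordan form; this is a sketch assuming the 6 × 6 matrix shown above.

```python
import sympy as sp

# The example matrix from above.
A = sp.Matrix([
    [4, 1, 1, 0, 0, -1],
    [0, 4, 2, 0, 0,  1],
    [0, 0, 4, 1, 0,  0],
    [0, 0, 0, 4, 0,  0],
    [0, 0, 0, 0, 5,  1],
    [0, 0, 0, 0, 0,  5],
])

M, J = A.jordan_form()         # A = M * J * M**-1; columns of M are Jordan chains
assert A == M * J * M.inv()
print(J)                       # one 4x4 Jordan block for 4, one 2x2 block for 5

# The rank patterns quoted in the example, for lambda_1 = 4 and lambda_2 = 5:
print([((A - 4 * sp.eye(6)) ** k).rank() for k in range(1, 5)])   # [5, 4, 3, 2]
print([((A - 5 * sp.eye(6)) ** k).rank() for k in range(1, 3)])   # [5, 4]
```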

Notes

  1. Bronson (1970, p. 196)
  2. Lusztig (1990)
  3. Kashiwara (1990)
  4. Kashiwara (1991)
  5. Lusztig (1991)
  6. Lusztig (1993, p. 194)
  7. Bronson (1970, pp. 196, 197)
  8. Bronson (1970, pp. 197, 198)
  9. Nering (1970, pp. 122, 123)
  10. Bronson (1970, p. 203)


References

- Bronson, Richard (1970). Matrix Methods: An Introduction. New York: Academic Press.
- Kashiwara, Masaki (1990). "Crystalizing the q-analogue of universal enveloping algebras". Communications in Mathematical Physics. 133 (2): 249–260.
- Kashiwara, Masaki (1991). "On crystal bases of the q-analogue of universal enveloping algebras". Duke Mathematical Journal. 63 (2): 465–516.
- Lusztig, George (1990). "Canonical bases arising from quantized enveloping algebras". Journal of the American Mathematical Society. 3 (2): 447–498.
- Lusztig, George (1991). "Quivers, perverse sheaves, and quantized enveloping algebras". Journal of the American Mathematical Society. 4 (2): 365–421.
- Lusztig, George (1993). Introduction to Quantum Groups. Boston: Birkhäuser.
- Nering, Evar D. (1970). Linear Algebra and Matrix Theory (2nd ed.). New York: Wiley.