In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as $\mathbb{C} = \mathbb{R}[x]/(x^2 + 1)$, which corresponds to $i^2 = -1$. Then a representation of C is a real vector space V, together with an action of C on V (a map $\mathbb{C} \to \operatorname{End}(V)$). Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
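A minimal numerical sketch of this example, assuming NumPy and taking for J the standard rotation by 90°, might look like the following; the only point being checked is that $J^2 = -I$, so that sending i to J really does define an action of C on $\mathbb{R}^2$.

```python
import numpy as np

# J represents i; a linear complex structure must satisfy J^2 = -I.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotation by 90 degrees
I = np.eye(2)
assert np.allclose(J @ J, -I)

# The complex scalar a + bi then acts on a vector v in R^2 as (a*I + b*J) v.
v = np.array([2.0, 3.0])
print((1.0 * I + 2.0 * J) @ v)   # the action of 1 + 2i on v
```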
Another important basic class of examples consists of representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in $k$ variables over the field $K$ is concretely a $K$-vector space with $k$ commuting operators, and is often denoted $K[T_1, \dots, T_k]$, meaning the representation of the abstract algebra $K[x_1, \dots, x_k]$ where $x_i \mapsto T_i$.
A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
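As an illustrative sketch of why common eigenvectors exist (assuming NumPy, a hypothetical matrix A with distinct eigenvalues, and B chosen as a polynomial in A so that the two commute), every eigenvector of A is automatically an eigenvector of B, which is the first step toward simultaneous triangularisation.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = A @ A - 4.0 * A + np.eye(2)    # B = A^2 - 4A + I, so A and B commute
assert np.allclose(A @ B, B @ A)

# A has distinct eigenvalues, so each of its eigenvectors is also one of B.
_, eigvecs = np.linalg.eig(A)
for k in range(2):
    v = eigvecs[:, k]
    w = B @ v
    assert abs(v[0] * w[1] - v[1] * w[0]) < 1e-10   # w is parallel to v
```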
Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by $K[T]$ and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
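A short symbolic sketch of the single-variable case, using SymPy and a hypothetical 2×2 matrix T that is not diagonalizable: the structure theorem is reflected in the single Jordan block returned by jordan_form.

```python
import sympy as sp

# T has characteristic polynomial (x - 2)^2 but T - 2I is nonzero,
# so its K[T]-module structure is a single 2x2 Jordan block for eigenvalue 2.
T = sp.Matrix([[3, 1],
               [-1, 1]])

P, J = T.jordan_form()        # T = P * J * P**-1
print(J)                      # Matrix([[2, 1], [0, 2]])
assert T == P * J * P.inv()
```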
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue to an algebra representation is, rather than a single scalar, a one-dimensional representation $\lambda\colon A \to R$ (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analogs of an eigenvector and an eigenspace are called a weight vector and a weight space.
The case of the eigenvalue of a single operator corresponds to the algebra $R[T]$, and a map of algebras $R[T] \to R$ is determined by which scalar it maps the generator $T$ to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing $A \times M \to M$ is bilinear, "which multiple" is an $A$-linear functional of $A$ (an algebra map $A \to R$), namely the weight. In symbols, a weight vector is a vector $m \in M$ such that $am = \lambda(a)\, m$ for all elements $a \in A$, for some linear functional $\lambda$ – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
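A numerical sketch of the single-operator case (NumPy, with a hypothetical operator T and eigenvector v): the weight sends each element p(T) of the algebra $R[T]$ to the scalar $p(\lambda)$, so v is a weight vector for the whole algebra, not just for T itself.

```python
import numpy as np

T = np.array([[2.0, 0.0],
              [1.0, 3.0]])
v = np.array([0.0, 1.0])           # T v = 3 v, so lam = 3
lam = 3.0
assert np.allclose(T @ v, lam * v)

# The weight is the algebra map p(T) |-> p(lam); check it on p(x) = x^2 + 5x + 1.
p_of_T = T @ T + 5.0 * T + np.eye(2)
assert np.allclose(p_of_T @ v, (lam**2 + 5.0 * lam + 1.0) * v)
```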
Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra – equivalently, it vanishes on the derived algebra. In terms of matrices, if $v$ is a common eigenvector of operators $T$ and $U$, then $TUv = UTv$ (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must lie in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra $K[T_1, \dots, T_k]$ in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a $k$-tuple of scalars $\lambda = (\lambda_1, \dots, \lambda_k)$ corresponding to the eigenvalue of each matrix, and hence geometrically to a point in $k$-space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
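For the polynomial algebra acting by commuting matrices, this can be made concrete in a few lines (a sketch assuming NumPy and two hypothetical commuting matrices): a simultaneous eigenvector is a weight vector, and its weight is the tuple of eigenvalues, i.e. a point in $k$-space.

```python
import numpy as np

# Two commuting operators T1 and T2 (here both diagonal in the standard basis).
T1 = np.diag([1.0, 4.0, 6.0])
T2 = np.diag([2.0, 5.0, 5.0])
assert np.allclose(T1 @ T2, T2 @ T1)

v = np.array([0.0, 1.0, 0.0])      # a simultaneous eigenvector
assert np.allclose(T1 @ v, 4.0 * v) and np.allclose(T2 @ v, 5.0 * v)
weight = (4.0, 5.0)                # the weight of v: a point in 2-dimensional space
```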
As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on $k$ generators, it corresponds geometrically to an algebraic variety in $k$-dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
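A quick sketch of this last point (NumPy, with a hypothetical idempotent matrix P): a representation of the quotient algebra $K[x]/(x^2 - x)$ is a matrix satisfying $P^2 = P$, and every eigenvalue of P lies on the corresponding variety, i.e. satisfies $\lambda^2 - \lambda = 0$.

```python
import numpy as np

# P is idempotent, so x |-> P defines a representation of K[x]/(x^2 - x).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

# Every eigenvalue (weight) satisfies the defining equation x^2 - x = 0.
for lam in np.linalg.eigvals(P):
    assert np.isclose(lam**2 - lam, 0.0)
```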
In mathematics, an associative algebra A over a field K is an algebraic structure with compatible operations of addition, multiplication, and scalar multiplication by elements of K. The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over the field K. A standard first example of a K-algebra is a ring of square matrices over a field K, with the usual matrix multiplication.
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
In linear algebra, the Cayley–Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation.
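A numerical spot-check of the theorem (a sketch using NumPy; np.poly applied to a square matrix returns the coefficients of its characteristic polynomial, highest degree first):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

coeffs = np.poly(A)                 # characteristic polynomial of A
n = A.shape[0]
p_of_A = sum(c * matrix_power(A, n - i) for i, c in enumerate(coeffs))
assert np.allclose(p_of_A, np.zeros((n, n)))   # A satisfies its own characteristic equation
```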
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. An example of a 2×2 diagonal matrix is $\begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$, while an example of a 3×3 diagonal matrix is $\begin{bmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{bmatrix}$. An identity matrix of any size, or any multiple of it, is a diagonal matrix.
In mathematics, the spectrum of a matrix is the set of its eigenvalues. More generally, if $T\colon V \to V$ is a linear operator on any finite-dimensional vector space, its spectrum is the set of scalars $\lambda$ such that $T - \lambda I$ is not invertible. The determinant of the matrix equals the product of its eigenvalues. Similarly, the trace of the matrix equals the sum of its eigenvalues. From this point of view, we can define the pseudo-determinant of a singular matrix to be the product of its nonzero eigenvalues.
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of vector spaces of finite dimension is the characteristic polynomial of the matrix of the endomorphism over any basis; it does not depend on the choice of a basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating to zero the characteristic polynomial.
In linear algebra, a square matrix $A$ is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $P^{-1}AP = D$, or equivalently $A = PDP^{-1}$. For a finite-dimensional vector space $V$, a linear map $T\colon V \to V$ is called diagonalizable if there exists an ordered basis of $V$ consisting of eigenvectors of $T$. These definitions are equivalent: if $T$ has a matrix representation $A = PDP^{-1}$ as above, then the column vectors of $P$ form a basis consisting of eigenvectors of $T$, and the diagonal entries of $D$ are the corresponding eigenvalues of $T$; with respect to this eigenvector basis, $T$ is represented by $D$. Diagonalization is the process of finding the above $P$ and $D$.
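A short sketch of diagonalization in NumPy (assuming the hypothetical matrix A below is in fact diagonalizable, which np.linalg.eig does not itself guarantee):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors of A
D = np.diag(eigvals)               # diagonal matrix of the eigenvalues
assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^{-1}
```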
In the mathematical field of representation theory, a weight of an algebra A over a field F is an algebra homomorphism from A to F, or equivalently, a one-dimensional representation of A over F. It is the algebra analogue of a multiplicative character of a group. The importance of the concept, however, stems from its application to representations of Lie algebras and hence also to representations of algebraic and Lie groups. In this context, a weight of a representation is a generalization of the notion of an eigenvalue, and the corresponding eigenspace is called a weight space.
In linear algebra, a Jordan normal form, also known as a Jordan canonical form or JCF, is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal, and with identical diagonal entries to the left and below them.
In mathematics, and in particular the theory of group representations, the regular representation of a group G is the linear representation afforded by the group action of G on itself by translation.
In mathematics, a Casimir element is a distinguished element of the center of the universal enveloping algebra of a Lie algebra. A prototypical example is the squared angular momentum operator, which is a Casimir element of the three-dimensional rotation group.
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations of a group G and φ is a linear transformation from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N and φ is a self-map; in particular, any element of the center of a group must act as a scalar operator on M. The lemma is named after Issai Schur who used it to prove Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which is due to Jacques Dixmier.
In the study of the representation theory of Lie groups, the study of representations of SU(2) is fundamental to the study of representations of semisimple Lie groups. It is the first case of a Lie group that is both a compact group and a non-abelian group. The first condition implies the representation theory is discrete: representations are direct sums of a collection of basic irreducible representations. The second means that there will be irreducible representations in dimensions greater than 1.
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by $\lambda$, is the factor by which the eigenvector is scaled.
In linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimension of the matrix below is 2 × 3, because there are two rows and three columns: $\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$.
In linear algebra, two matrices $A$ and $B$ are said to commute if $AB = BA$, or equivalently if their commutator $AB - BA$ is zero. A set of matrices is said to commute if they commute pairwise, meaning that every pair of matrices in the set commute with each other.
In mathematics, particularly in linear algebra and applications, matrix analysis is the study of matrices and their algebraic properties. Some particular topics out of many include: operations defined on matrices, functions of matrices, and the eigenvalues of matrices.
In mathematics, the representation theory of semisimple Lie algebras is one of the crowning achievements of the theory of Lie groups and Lie algebras. The theory was worked out mainly by E. Cartan and H. Weyl and, because of that, is also known as the Cartan–Weyl theory. The theory gives the structural description and classification of the finite-dimensional representations of a semisimple Lie algebra; in particular, it gives a way to parametrize irreducible finite-dimensional representations of a semisimple Lie algebra, the result known as the theorem of the highest weight.