In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory, and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depend on the context.
For example, if G is a finite group, then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are {0} and V itself (such representations are also called irreducible). Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations, provided the characteristic of the base field does not divide the order of the group. So for finite groups satisfying this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility. For example, Weyl's theorem on complete reducibility says that a finite-dimensional representation of a semisimple Lie algebra over a field of characteristic zero is semisimple.
A square matrix, or equivalently a linear operator T on a finite-dimensional vector space V, is said to be simple if its only invariant subspaces under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are those of size 1 by 1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable.
These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories.
If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those that contain no proper nontrivial subspaces: these are exactly the one-dimensional vector spaces. It is then a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple.
A square matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T-invariant subspace has a complementary T-invariant subspace. [1] [2] This is equivalent to the minimal polynomial of T being square-free.
For vector spaces over an algebraically closed field F, semi-simplicity of a matrix is equivalent to diagonalizability. [1] This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space.
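The square-free criterion can be checked computationally. The following Python sketch (using sympy; the helper names `poly_at_matrix` and `is_semisimple` are illustrative, not from the source) uses the fact that, over an algebraically closed field, an operator is semi-simple exactly when it is annihilated by the square-free part of its characteristic polynomial:

```python
import sympy as sp

x = sp.symbols('x')

def poly_at_matrix(p, A):
    # Evaluate the polynomial p(x) at the square matrix A by Horner's rule.
    coeffs = sp.Poly(p, x).all_coeffs()
    n = A.shape[0]
    result = sp.zeros(n, n)
    for c in coeffs:
        result = result * A + c * sp.eye(n)
    return result

def is_semisimple(A):
    # Over C, A is semi-simple iff its minimal polynomial is square-free,
    # which holds iff A is annihilated by the square-free part of its
    # characteristic polynomial p, namely p / gcd(p, p').
    p = A.charpoly(x).as_expr()
    radical = sp.quo(p, sp.gcd(p, sp.diff(p, x)), x)
    return poly_at_matrix(radical, A) == sp.zeros(A.shape[0], A.shape[0])

A = sp.Matrix([[2, 0], [0, 3]])   # diagonal, hence semi-simple
B = sp.Matrix([[1, 1], [0, 1]])   # Jordan block: minimal polynomial (x-1)^2, not semi-simple
```

For B the square-free part of (x-1)^2 is x-1, and B - I is nonzero, matching the fact that a nontrivial Jordan block is not diagonalizable.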
For a fixed ring R, a nontrivial R-module M is simple if it has no submodules other than 0 and M. An R-module M is semi-simple if every R-submodule of M is an R-module direct summand of M (the trivial module 0 is semi-simple, but not simple). This is equivalent to M being a direct sum of simple modules (the trivial module is the empty direct sum). Finally, R is called a semi-simple ring if it is semi-simple as an R-module. As it turns out, this is equivalent to requiring that any finitely generated R-module M is semi-simple. [3]
Examples of semi-simple rings include fields and, more generally, finite direct products of fields. For a finite group G, Maschke's theorem asserts that the group ring R[G] over some ring R is semi-simple if and only if R is semi-simple and |G| is invertible in R. Since the theory of modules over R[G] is the same as the representation theory of G on R-modules, this fact is an important dichotomy: it makes modular representation theory, i.e., the case where the characteristic of R divides |G|, more difficult than the case where it does not, for example when R is a field of characteristic zero. By the Artin–Wedderburn theorem, a unital Artinian ring R is semisimple if and only if it is isomorphic to a finite product $\prod_{i=1}^r M_{n_i}(D_i)$, where each $D_i$ is a division ring and $M_n(D)$ denotes the ring of $n$-by-$n$ matrices with entries in $D$.
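A small numerical illustration of Maschke's theorem and Artin–Wedderburn (an illustrative sketch, not from the source): over the complex numbers, the group algebra of the cyclic group Z/3 decomposes into three one-dimensional simple modules, one per character, which can be seen by diagonalizing the regular representation of a generator.

```python
import numpy as np

# Regular representation of Z/3 over C: a generator acts on the group
# algebra C[Z/3] = C^3 as the cyclic shift matrix S (e_k -> e_{k+1 mod 3}).
S = np.roll(np.eye(3), 1, axis=0)

# The three characters k -> w^{jk} (w a primitive cube root of unity) span
# one-dimensional invariant lines; the normalized character matrix F is
# unitary and diagonalizes S, exhibiting C[Z/3] = C x C x C as predicted by
# Artin-Wedderburn (three 1x1 "matrix rings" over the division ring C).
w = np.exp(2j * np.pi / 3)
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)
D = np.conj(F).T @ S @ F   # F is unitary, so this equals F^{-1} S F
```

The diagonal of D consists of the three cube roots of unity, i.e., the values of the characters on the chosen generator.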
An operator T is semi-simple in the sense above if and only if the subalgebra generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple.
As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence

$0 \to M' \to M \to M'' \to 0$

of modules over a semi-simple ring must split, i.e., $M \cong M' \oplus M''$. From the point of view of homological algebra, this means that there are no non-trivial extensions. The ring Z of integers is not semi-simple: Z is not the direct sum of nZ and Z/n (for n ≥ 2).
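As a tiny illustrative check (not from the source), one can see concretely that the sequence 0 → Z → Z → Z/2 → 0, with the first map being multiplication by 2, does not split: a Z-linear section s of the quotient map is determined by a = s(1), which must satisfy 2a = s(0) = 0 in Z while being odd, and no integer does both.

```python
# A splitting of the quotient map Z -> Z/2 would be a Z-linear section
# s: Z/2 -> Z, determined by a := s(1), with 2a = s(1 + 1) = s(0) = 0 in Z
# and a odd (so that a maps back to 1 in Z/2). A finite search illustrates
# that no such integer exists; in general 2a = 0 forces a = 0, which is even.
def section_exists(bound=1000):
    return any(2 * a == 0 and a % 2 == 1 for a in range(-bound, bound + 1))
```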
Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C. Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. For example, R-modules and R-linear maps between them form a category, for any ring R.
An abelian category [4] C is called semi-simple if there is a collection of simple objects in C, i.e., objects with no subobject other than the zero object 0 and the object itself, such that any object X is a direct sum (i.e., coproduct or, equivalently, product) of finitely many simple objects. It follows from Schur's lemma that the endomorphism ring $\operatorname{End}(X) = \operatorname{Hom}(X, X)$ of any object in a semi-simple category is a product of matrix rings over division rings, i.e., semi-simple.
Moreover, a ring R is semi-simple if and only if the category of finitely generated R-modules is semisimple.
An example from Hodge theory is the category of polarizable pure Hodge structures, i.e., pure Hodge structures equipped with a suitable positive definite bilinear form. The presence of this so-called polarization causes the category of polarizable Hodge structures to be semi-simple. [5] Another example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k modulo an adequate equivalence relation. As was conjectured by Grothendieck and shown by Jannsen, this category is semi-simple if and only if the equivalence relation is numerical equivalence. [6] This fact is a conceptual cornerstone in the theory of motives.
Semisimple abelian categories also arise from a combination of a t-structure and a (suitably related) weight structure on a triangulated category. [7]
One can ask whether the category of finite-dimensional representations of a group or a Lie algebra is semisimple, that is, whether every finite-dimensional representation decomposes as a direct sum of irreducible representations. The answer, in general, is no. For example, the representation of the additive group $\mathbb{R}$ given by

$x \mapsto \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}$
is not a direct sum of irreducibles. [8] (There is precisely one nontrivial invariant subspace, the span of the first standard basis vector.) On the other hand, if G is compact, then every finite-dimensional representation V of G admits an inner product with respect to which the action of G is unitary, showing that V decomposes as a sum of irreducibles. [9] Similarly, if $\mathfrak{g}$ is a complex semisimple Lie algebra, every finite-dimensional representation of $\mathfrak{g}$ is a sum of irreducibles. [10] Weyl's original proof of this used the unitarian trick: every such $\mathfrak{g}$ is the complexification of the Lie algebra of a simply connected compact Lie group K. Since K is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of K and of $\mathfrak{g}$. [11] Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of $\mathfrak{g}$ directly by algebraic means, as in Section 10.3 of Hall's book.
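The failure of semisimplicity in this example can be verified numerically. The following Python sketch (an illustration, not from the source) checks that for the representation x ↦ [[1, x], [0, 1]], the line spanned by the first basis vector is invariant, while no complementary line is:

```python
import numpy as np

def rho(t):
    # the (non-semisimple) representation t -> [[1, t], [0, 1]] of (R, +)
    return np.array([[1.0, t], [0.0, 1.0]])

e1 = np.array([1.0, 0.0])
assert np.allclose(rho(3.7) @ e1, e1)   # span(e1) is an invariant line

# A complement of span(e1) is a line spanned by some v = (c, 1). It would be
# invariant only if rho(1) v stayed on the line, i.e. det([v, rho(1) v]) = 0;
# here that determinant is identically -1, so no invariant complement exists.
for c in np.linspace(-5.0, 5.0, 11):
    v = np.array([c, 1.0])
    d = np.linalg.det(np.column_stack([v, rho(1.0) @ v]))
    assert abs(d) > 0.5
```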
See also: Fusion category (fusion categories are semisimple).
In mathematics, a Lie algebra is a vector space together with an operation called the Lie bracket, an alternating bilinear map $\mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$, $(x, y) \mapsto [x, y]$, that satisfies the Jacobi identity. Otherwise said, a Lie algebra is an algebra over a field where the multiplication operation is now called the Lie bracket and has two additional properties: it is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x and y is denoted $[x, y]$. The Lie bracket does not need to be associative, meaning that a Lie algebra can be non-associative. Given an associative algebra, a Lie bracket can be (and often is) defined through the commutator: setting $[x, y] = xy - yx$ correctly defines a Lie bracket in addition to the already existing multiplication operation.
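The commutator construction can be checked directly; the following Python sketch (an illustration, not from the source) verifies that the commutator bracket on square matrices is alternating and satisfies the Jacobi identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(X, Y):
    # the commutator bracket [X, Y] = XY - YX on the associative matrix algebra
    return X @ Y - Y @ X

X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

alt = bracket(X, X)                      # alternating: [X, X] = 0
jac = (bracket(X, bracket(Y, Z))         # Jacobi identity:
       + bracket(Y, bracket(Z, X))       # [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0
       + bracket(Z, bracket(X, Y)))
```

Both `alt` and `jac` vanish identically; this holds for any associative algebra, not just matrices.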
In the mathematical field of representation theory, a weight of an algebra A over a field F is an algebra homomorphism from A to F, or equivalently, a one-dimensional representation of A over F. It is the algebra analogue of a multiplicative character of a group. The importance of the concept, however, stems from its application to representations of Lie algebras and hence also to representations of algebraic and Lie groups. In this context, a weight of a representation is a generalization of the notion of an eigenvalue, and the corresponding eigenspace is called a weight space.
In mathematics and theoretical physics, a representation of a Lie group is a linear action of a Lie group on a vector space. Equivalently, a representation is a smooth homomorphism of the group into the group of invertible operators on the vector space. Representations play an important role in the study of continuous symmetry. A great deal is known about such representations, a basic tool in their study being the use of the corresponding 'infinitesimal' representations of Lie algebras.
In the mathematical field of representation theory, a Lie algebra representation or representation of a Lie algebra is a way of writing a Lie algebra as a set of matrices in such a way that the Lie bracket is given by the commutator. In the language of physics, one looks for a vector space V together with a collection of operators on V satisfying some fixed set of commutation relations, such as those satisfied by the angular momentum operators.
In mathematics, a linear algebraic group is a subgroup of the group of invertible matrices that is defined by polynomial equations. An example is the orthogonal group, defined by the relation $M^{\mathsf{T}} M = I$ where $M^{\mathsf{T}}$ is the transpose of M.
In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N, i.e. φ is a self-map; in particular, any element of the center of a group must act as a scalar operator on M. The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which are due to Jacques Dixmier and Daniel Quillen.
In mathematics, a compact (topological) group is a topological group whose topology realizes it as a compact topological space. Compact groups are a natural generalization of finite groups with the discrete topology and have properties that carry over in significant fashion. Compact groups have a well-understood theory, in relation to group actions and representation theory.
In mathematics, a reductive group is a type of linear algebraic group over a field. One definition is that a connected linear algebraic group G over a perfect field is reductive if it has a representation with finite kernel which is a direct sum of irreducible representations. Reductive groups include some of the most important groups in mathematics, such as the general linear group GL(n) of invertible matrices, the special orthogonal group SO(n), and the symplectic group Sp(2n). Simple algebraic groups and (more generally) semisimple algebraic groups are reductive.
In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras.
Verma modules, named after Daya-Nand Verma, are objects in the representation theory of Lie algebras, a branch of mathematics.
In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if $\pi \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is a finite-dimensional representation of a solvable Lie algebra, then there is a flag of invariant subspaces $0 = V_0 \subset V_1 \subset \cdots \subset V_n = V$ with $\dim V_i = i$, meaning that $\pi(X)(V_i) \subseteq V_i$ for each $X \in \mathfrak{g}$ and each $i$.
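Lie's theorem can be illustrated with the solvable Lie algebra of upper-triangular matrices, for which the invariant flag is the standard one. A minimal numpy sketch (an illustration, not from the source):

```python
import numpy as np

# The Lie algebra of upper-triangular 3x3 matrices is solvable, and the flag
# in Lie's theorem is the standard flag V_i = span(e_1, ..., e_i). Any
# upper-triangular X maps each V_i into itself, because the block of X
# below row i in the first i columns vanishes.
rng = np.random.default_rng(1)
X = np.triu(rng.standard_normal((3, 3)))   # random upper-triangular element

preserved = all(np.allclose(X[i:, :i], 0) for i in (1, 2, 3))
```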
In mathematics, the Harish-Chandra isomorphism, introduced by Harish-Chandra (1951), is an isomorphism of commutative rings constructed in the theory of Lie algebras. The isomorphism maps the center of the universal enveloping algebra of a reductive Lie algebra to the elements of the symmetric algebra of a Cartan subalgebra that are invariant under the Weyl group .
In mathematics, a linear operator T on a vector space is semisimple if every T-invariant subspace has a complementary T-invariant subspace; in other words, the vector space is a semisimple representation of the operator T. Equivalently, a linear operator is semisimple if the minimal polynomial of it is a product of distinct irreducible polynomials.
Schur–Weyl duality is a mathematical theorem in representation theory that relates irreducible finite-dimensional representations of the general linear and symmetric groups. It is named after two pioneers of representation theory of Lie groups, Issai Schur, who discovered the phenomenon, and Hermann Weyl, who popularized it in his books on quantum mechanics and classical groups as a way of classifying representations of unitary and general linear groups.
Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations. The theory of matrices and linear operators is well understood, so representations of more abstract objects in terms of familiar linear-algebraic objects help glean properties and sometimes simplify calculations in more abstract theories.
In algebra, Weyl's theorem on complete reducibility is a fundamental result in the theory of Lie algebra representations. Let $\mathfrak{g}$ be a semisimple Lie algebra over a field of characteristic zero. The theorem states that every finite-dimensional module over $\mathfrak{g}$ is semisimple as a module, i.e., a direct sum of simple modules.
This is a glossary of representation theory in mathematics.
In mathematics, specifically in representation theory, a semisimple representation is a linear representation of a group or an algebra that is a direct sum of simple representations. It is an example of the general mathematical notion of semisimplicity.
This is a glossary for the terminology applied in the mathematical theories of Lie groups and Lie algebras. For the topics in the representation theory of Lie groups and Lie algebras, see Glossary of representation theory. Because of the lack of other options, the glossary also includes some generalizations such as quantum group.
In mathematics, the representation theory of semisimple Lie algebras is one of the crowning achievements of the theory of Lie groups and Lie algebras. The theory was worked out mainly by E. Cartan and H. Weyl and, because of that, is also known as the Cartan–Weyl theory. The theory gives the structural description and classification of the finite-dimensional representations of a semisimple Lie algebra; in particular, it gives a way to parametrize the irreducible finite-dimensional representations of a semisimple Lie algebra, the result known as the theorem of the highest weight.