Frobenius theorem (real division algebras)

In mathematics, more specifically in abstract algebra, the Frobenius theorem, proved by Ferdinand Georg Frobenius in 1877, characterizes the finite-dimensional associative division algebras over the real numbers. According to the theorem, every such algebra is isomorphic to one of the following:

- R (the real numbers)
- C (the complex numbers)
- H (the quaternions)

These algebras have real dimension 1, 2, and 4, respectively. Of these three algebras, R and C are commutative, but H is not.

Proof

The main ingredients for the following proof are the Cayley–Hamilton theorem and the fundamental theorem of algebra.

Introducing some notation

Let D be the division algebra in question, and let n be its dimension. We identify the real multiples of 1 with R. When we write a ≤ 0 for an element a of D, we tacitly assume that a is contained in R. For any z in C, define the real quadratic polynomial

Q(z; x) = x² − 2 Re(z) x + |z|².

Note that if z ∈ C ∖ R then Q(z; x) is irreducible over R: its discriminant is 4 Re(z)² − 4|z|² = −4 Im(z)² < 0.
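
For instance, Q(i; x) = x² + 1 and Q(3 + 2i; x) = x² − 6x + 13. The following minimal sympy sketch (an illustration, not part of the original article) builds Q(z; x) for a non-real z and confirms that it has no real roots:

```python
import sympy as sp

x = sp.symbols('x')

def Q(z):
    """The real quadratic Q(z; x) = x^2 - 2 Re(z) x + |z|^2."""
    return sp.expand(x**2 - 2*sp.re(z)*x + sp.Abs(z)**2)

z = 3 + 2*sp.I                # any non-real z
q = Q(z)                      # x**2 - 6*x + 13
print(sp.discriminant(q, x))  # -16, equal to -4*Im(z)**2 < 0
print(sp.real_roots(q))       # []: no real roots, so q is irreducible over R
```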

The claim

The key to the argument is the following

Claim. The set V of all elements a of D such that a² ≤ 0 is a vector subspace of D of dimension n − 1. Moreover D = R ⊕ V as R-vector spaces, which implies that V generates D as an algebra.

Proof of Claim: Pick a in D, and let p(x) denote the characteristic polynomial of the R-linear map D → D given by left multiplication by a. By the fundamental theorem of algebra, we can write

p(x) = (x − t₁) ⋯ (x − tᵣ)(x − z₁)(x − z̄₁) ⋯ (x − zₛ)(x − z̄ₛ),

where the tᵢ are real and the zⱼ are non-real complex numbers, occurring in conjugate pairs because p(x) has real coefficients.

We can rewrite p(x) in terms of the polynomials Q(z; x):

p(x) = (x − t₁) ⋯ (x − tᵣ) Q(z₁; x) ⋯ Q(zₛ; x),

since (x − zⱼ)(x − z̄ⱼ) = Q(zⱼ; x).
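
For example (an illustrative sympy sketch, not from the article), a real polynomial with one real root and two conjugate pairs of non-real roots factors over R into one linear factor and two irreducible quadratics of the form Q(z; x):

```python
import sympy as sp

x = sp.symbols('x')
# real root t1 = 2; non-real roots z = -1 ± 2i and z = ±i
p = sp.expand((x - 2) * (x**2 + 2*x + 5) * (x**2 + 1))

# The factors are (x - 2), Q(-1 + 2i; x) = x**2 + 2*x + 5,
# and Q(i; x) = x**2 + 1 (up to ordering).
print(sp.factor_list(p))
```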

Since zⱼ ∈ C ∖ R, the polynomials Q(zⱼ; x) are all irreducible over R. By the Cayley–Hamilton theorem, p(a) = 0, and because D is a division algebra (hence has no zero divisors), it follows that either a − tᵢ = 0 for some i or that Q(zⱼ; a) = 0 for some j. The first case implies that a is real. In the second case, it follows that Q(zⱼ; x) is the minimal polynomial of a. Because p(x) has the same complex roots as the minimal polynomial and because it is real, it follows that

p(x) = Q(zⱼ; x)ᵏ = (x² − 2 Re(zⱼ) x + |zⱼ|²)ᵏ

for some k; comparing degrees gives n = 2k.

Since p(x) is the characteristic polynomial of a, the coefficient of x²ᵏ⁻¹ in p(x) is tr(a) up to a sign. Therefore, reading off the above equation: tr(a) = 0 if and only if Re(zⱼ) = 0; in other words, tr(a) = 0 if and only if a² = −|zⱼ|² < 0.

So V is precisely the set of all a with tr(a) = 0 (for non-real a this is the equivalence just established, while a nonzero real a satisfies a² > 0 and tr(a) = na ≠ 0). In particular, V is a vector subspace. The rank–nullity theorem then implies that V has dimension n − 1, since it is the kernel of the nonzero linear functional tr : D → R. Since R and V are disjoint (i.e. R ∩ V = {0}), and their dimensions sum to n, we have that D = R ⊕ V.
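
As a concrete check with D = C (so n = 2): in the basis {1, i}, left multiplication by a = i is a rotation by 90 degrees. The numpy sketch below (an illustration, not from the article; L_i is just a name for that matrix) verifies that its characteristic polynomial is Q(i; x) and that its trace vanishes:

```python
import numpy as np

# Left multiplication by a = i on C ≅ R², basis {1, i}: i·1 = i, i·i = -1.
L_i = np.array([[0.0, -1.0],
                [1.0,  0.0]])

print(np.trace(L_i))  # 0.0: a = i lies in V
print(np.poly(L_i))   # [1. 0. 1.]: characteristic polynomial x² + 1 = Q(i; x)
print(L_i @ L_i)      # minus the identity: a² = -1 = -|i|² < 0
```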

The finish

For a, b in V define B(a, b) = (−ab − ba)/2. Because of the identity (a + b)² − a² − b² = ab + ba, whose left-hand side is real whenever a, b, a + b lie in V, it follows that B(a, b) is real. Furthermore, since B(a, a) = −a² and a² ≤ 0, we have B(a, a) > 0 for a ≠ 0. Thus B is a positive-definite symmetric bilinear form, in other words, an inner product on V.
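
In H, for instance, V is spanned by i, j, k and B is the standard Euclidean inner product on that span. The following numpy sketch (illustrative, not from the article; qmul is an ad-hoc helper for the Hamilton product) checks this on sample elements:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def B(a, b):
    """B(a, b) = (-ab - ba)/2."""
    return -(qmul(a, b) + qmul(b, a)) / 2

a = np.array([0.0, 1.0, 2.0, 3.0])   # a = i + 2j + 3k, an element of V
b = np.array([0.0, -1.0, 0.5, 4.0])  # b = -i + 0.5j + 4k
print(B(a, b))  # [12. 0. 0. 0.]: a real scalar, the dot product of a and b
print(B(a, a))  # [14. 0. 0. 0.]: |a|² > 0, so B is positive definite
```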

Let W be a subspace of V that generates D as an algebra and which is minimal with respect to this property. Let e₁, ..., eₖ be an orthonormal basis of W with respect to B. Then orthonormality implies that:

eᵢ² = −1,   eᵢeⱼ = −eⱼeᵢ for i ≠ j.

If k = 0, then D is isomorphic to R.

If k = 1, then D is generated by 1 and e₁ subject to the relation e₁² = −1. Hence it is isomorphic to C.

If k = 2, it has been shown above that D is generated by 1, e₁, e₂ subject to the relations

e₁² = e₂² = −1,   e₁e₂ = −e₂e₁,   (e₁e₂)(e₁e₂) = −1.

These are precisely the relations for H.
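
These relations can be verified in a standard 2×2 complex matrix model of H (a numpy sketch for illustration, not part of the article; the particular matrices for e₁ and e₂ are one common choice):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
e1 = np.array([[1j, 0], [0, -1j]])               # plays the role of i
e2 = np.array([[0, 1], [-1, 0]], dtype=complex)  # plays the role of j

assert np.allclose(e1 @ e1, -I2)         # e1² = -1
assert np.allclose(e2 @ e2, -I2)         # e2² = -1
assert np.allclose(e1 @ e2, -(e2 @ e1))  # e1·e2 = -e2·e1
e12 = e1 @ e2                            # plays the role of k
assert np.allclose(e12 @ e12, -I2)       # (e1·e2)² = -1
print("all quaternion relations hold")
```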

If k > 2, then D cannot be a division algebra. Assume that k > 2. Let u = e₁e₂eₖ. Using the relations above, it is easy to see that u² = 1 (this only works if k > 2, so that e₁, e₂, eₖ are three distinct pairwise anticommuting elements). If D were a division algebra, 0 = u² − 1 = (u − 1)(u + 1) would imply u = ±1, which in turn means eₖ = ∓e₁e₂, and so e₁, ..., eₖ₋₁ would generate D. This contradicts the minimality of W.
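
The computation u² = 1 uses only the relations eᵢ² = −1 and eᵢeⱼ = −eⱼeᵢ, so it can be checked mechanically by bringing words in the generators into a normal form. The short Python sketch below (illustrative; reduce_word is an ad-hoc helper, and the index tuple (1, 2, 3) stands for e₁e₂eₖ in the case k = 3) does exactly that:

```python
def reduce_word(indices):
    """Reduce a product e_{i1}...e_{im} to ±(strictly increasing word),
    using e_i e_j = -e_j e_i for i != j and e_i e_i = -1."""
    sign, word = 1, list(indices)
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(word) - 1:
            if word[i] > word[i + 1]:
                # swapping adjacent distinct generators costs a sign
                word[i], word[i + 1] = word[i + 1], word[i]
                sign, changed = -sign, True
            elif word[i] == word[i + 1]:
                # e_i e_i = -1: drop the pair and flip the sign
                del word[i:i + 2]
                sign, changed = -sign, True
            else:
                i += 1
    return sign, tuple(word)

u = (1, 2, 3)                     # u = e1 e2 e3, the case k = 3
print(reduce_word(u + u))         # (1, ()): u² = +1
print(reduce_word((1, 2, 1, 2)))  # (-1, ()): by contrast, (e1 e2)² = -1
```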
