Split-octonion

In mathematics, the split-octonions are an 8-dimensional nonassociative algebra over the real numbers. Unlike the standard octonions, they contain non-zero elements which are non-invertible. Also the signatures of their quadratic forms differ: the split-octonions have a split signature (4,4) whereas the octonions have a positive-definite signature (8,0).

Up to isomorphism, the octonions and the split-octonions are the only two 8-dimensional composition algebras over the real numbers. They are also the only two octonion algebras over the real numbers. Split-octonion algebras analogous to the split-octonions can be defined over any field.

Definition

Cayley–Dickson construction

The octonions and the split-octonions can be obtained from the Cayley–Dickson construction by defining a multiplication on pairs of quaternions. We introduce a new imaginary unit ℓ and write a pair of quaternions (a, b) in the form a + ℓb. The product is defined by the rule: [1]

    (a + \ell b)(c + \ell d) = (ac + \lambda \bar{d} b) + \ell (da + b \bar{c}),

where λ = ℓ².

If λ is chosen to be −1, we get the octonions. If, instead, it is taken to be +1, we get the split-octonions. One can also obtain the split-octonions via a Cayley–Dickson doubling of the split-quaternions; in that case either choice of λ (±1) gives the split-octonions.
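
As a rough illustration of the construction (not taken from the cited sources), the doubling can be sketched in Python, with a quaternion stored as a NumPy 4-vector and a (split-)octonion as a pair of quaternions; the function names q_mul, q_conj and cd_mul are invented for the sketch.

    import numpy as np

    def q_mul(a, b):
        """Hamilton product of quaternions given as arrays [w, x, y, z]."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def q_conj(a):
        """Quaternion conjugate: negate the imaginary part."""
        return np.array([a[0], -a[1], -a[2], -a[3]])

    def cd_mul(x, y, lam):
        """Cayley-Dickson product of pairs (a, b), (c, d) of quaternions:
        (a, b)(c, d) = (ac + lam*conj(d)b, da + b*conj(c)).
        lam = -1 gives the octonions, lam = +1 the split-octonions."""
        a, b = x
        c, d = y
        return (q_mul(a, c) + lam * q_mul(q_conj(d), b),
                q_mul(d, a) + q_mul(b, q_conj(c)))

    # Example: ell corresponds to the pair (0, 1); ell*ell = (lam, 0), i.e. ell^2 = lam.
    ell = (np.zeros(4), np.array([1.0, 0.0, 0.0, 0.0]))
    print(cd_mul(ell, ell, lam=+1)[0])   # [1. 0. 0. 0.]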

Multiplication table

[Figure (SplitFanoPlane): a mnemonic for the products of the split-octonions.]

A basis for the split-octonions is given by the set {1, i, j, k, ℓ, ℓi, ℓj, ℓk}.

Every split-octonion x can be written as a linear combination of the basis elements,

    x = x_0 + x_1 i + x_2 j + x_3 k + x_4 \ell + x_5 \ell i + x_6 \ell j + x_7 \ell k,

with real coefficients x_a.

By linearity, multiplication of split-octonions is completely determined by the following multiplication table:

(row = multiplier, column = multiplicand)

           1     i     j     k     ℓ     ℓi    ℓj    ℓk
    1      1     i     j     k     ℓ     ℓi    ℓj    ℓk
    i      i     −1    k     −j    ℓi    −ℓ    −ℓk   ℓj
    j      j     −k    −1    i     ℓj    ℓk    −ℓ    −ℓi
    k      k     j     −i    −1    ℓk    −ℓj   ℓi    −ℓ
    ℓ      ℓ     −ℓi   −ℓj   −ℓk   1     −i    −j    −k
    ℓi     ℓi    ℓ     −ℓk   ℓj    i     1     k     −j
    ℓj     ℓj    ℓk    ℓ     −ℓi   j     −k    1     i
    ℓk     ℓk    −ℓj   ℓi    ℓ     k     j     −i    1
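
The table can be regenerated directly from the Cayley–Dickson rule. The short script below (continuing the q_mul/cd_mul sketch from the Definition section, with invented helper names) prints the 8 × 8 table for λ = +1.

    labels = ["1", "i", "j", "k", "l", "li", "lj", "lk"]

    def basis(n):
        """n-th basis element as a pair of quaternions: 1, i, j, k, l, li, lj, lk."""
        q = np.zeros(4)
        q[n % 4] = 1.0
        return (q, np.zeros(4)) if n < 4 else (np.zeros(4), q)

    def name(pair):
        """Render a signed basis element back into a label such as '-lk'."""
        coeffs = np.concatenate(pair)
        n = int(np.flatnonzero(coeffs)[0])
        return ("-" if coeffs[n] < 0 else "") + labels[n]

    for r in range(8):
        row = [name(cd_mul(basis(r), basis(c), lam=+1)) for c in range(8)]
        print(f"{labels[r]:>3}: " + " ".join(f"{s:>4}" for s in row))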

A convenient mnemonic is given by the diagram at the right, which represents the multiplication table for the split-octonions. This one is derived from its parent octonion (one of 480 possible), which is defined by:

    e_i e_j = -\delta_{ij} e_0 + \varepsilon_{ijk} e_k,

where \delta_{ij} is the Kronecker delta and \varepsilon_{ijk} is the Levi-Civita symbol, taking the value +1 on the seven oriented triples ijk that define the chosen parent octonion multiplication, together with

    e_0 e_i = e_i e_0 = e_i, \qquad e_0 e_0 = e_0,

with e_0 the scalar element and i, j, k = 1, ..., 7.
The red arrows indicate the direction reversals imposed by negating the lower-right quadrant of the parent table, which produces a split-octonion with the multiplication table shown above.

Conjugate, norm and inverse

The conjugate of a split-octonion x is given by

    \bar{x} = x_0 - x_1 i - x_2 j - x_3 k - x_4 \ell - x_5 \ell i - x_6 \ell j - x_7 \ell k,

just as for the octonions.

The quadratic form on x is given by

    N(x) = \bar{x} x = (x_0^2 + x_1^2 + x_2^2 + x_3^2) - (x_4^2 + x_5^2 + x_6^2 + x_7^2).

This quadratic form N(x) is an isotropic quadratic form since there are non-zero split-octonions x with N(x) = 0. With N, the split-octonions form a pseudo-Euclidean space of eight dimensions over R, sometimes written R^{4,4} to denote the signature of the quadratic form.

If N(x) ≠ 0, then x has a (two-sided) multiplicative inverse x^{-1} given by

    x^{-1} = \bar{x} / N(x).
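
For concreteness, here is a small self-contained Python sketch of the conjugate, the quadratic form and the inverse in terms of the eight real coefficients (the function names are invented; only NumPy is assumed):

    import numpy as np

    SIGNS = np.array([1, -1, -1, -1, -1, -1, -1, -1])

    def conj(x):
        """Conjugate of a split-octonion given by its coefficients (x0, ..., x7)
        in the basis 1, i, j, k, l, li, lj, lk."""
        return x * SIGNS

    def norm_form(x):
        """Quadratic form N(x) = (x0^2+x1^2+x2^2+x3^2) - (x4^2+x5^2+x6^2+x7^2)."""
        return float(np.sum(x[:4]**2) - np.sum(x[4:]**2))

    def inverse(x):
        """Two-sided inverse conj(x)/N(x); defined only when N(x) != 0."""
        n = norm_form(x)
        if n == 0:
            raise ZeroDivisionError("null split-octonion: no inverse")
        return conj(x) / n

    x = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 0])   # x = 1 + l, nonzero but null
    print(norm_form(x))                          # 0.0, so x has no inverse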

Properties

The split-octonions, like the octonions, are noncommutative and nonassociative. Also like the octonions, they form a composition algebra since the quadratic form N is multiplicative. That is,

    N(xy) = N(x) N(y).

The split-octonions satisfy the Moufang identities and so form an alternative algebra. Therefore, by Artin's theorem, the subalgebra generated by any two elements is associative. The set of all invertible elements (i.e. those elements for which N(x) ≠ 0) forms a Moufang loop.
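
As a quick numerical illustration (continuing the cd_mul sketch above, with λ = +1 and N(a + ℓb) = N(a) − N(b)), the composition rule and the left alternative law can be spot-checked on random elements:

    rng = np.random.default_rng(0)

    def rand_so():
        """Random split-octonion as a pair of random quaternions."""
        return (rng.normal(size=4), rng.normal(size=4))

    def N(x):
        """Quadratic form of a pair (a, b): |a|^2 - |b|^2 (lambda = +1)."""
        a, b = x
        return float(np.sum(a**2) - np.sum(b**2))

    x, y = rand_so(), rand_so()
    xy = cd_mul(x, y, +1)
    print(np.isclose(N(xy), N(x) * N(y)))        # True: N is multiplicative

    lhs = cd_mul(x, cd_mul(x, y, +1), +1)        # x(xy)
    rhs = cd_mul(cd_mul(x, x, +1), y, +1)        # (xx)y
    print(all(np.allclose(l, r) for l, r in zip(lhs, rhs)))  # True: left alternative law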

The automorphism group of the split-octonions is a 14-dimensional Lie group, the split real form of the exceptional simple Lie group G2.

Zorn's vector-matrix algebra

Since the split-octonions are nonassociative they cannot be represented by ordinary matrices (matrix multiplication is always associative). Zorn found a way to represent them as "matrices" containing both scalars and vectors using a modified version of matrix multiplication. [2] Specifically, define a vector-matrix to be a 2×2 matrix of the form [3] [4] [5] [6]

    \begin{pmatrix} a & \mathbf{v} \\ \mathbf{w} & b \end{pmatrix},

where a and b are real numbers and v and w are vectors in R^3. Define multiplication of these matrices by the rule

    \begin{pmatrix} a & \mathbf{v} \\ \mathbf{w} & b \end{pmatrix}
    \begin{pmatrix} a' & \mathbf{v}' \\ \mathbf{w}' & b' \end{pmatrix}
    =
    \begin{pmatrix} aa' + \mathbf{v} \cdot \mathbf{w}' & a\mathbf{v}' + b'\mathbf{v} - \mathbf{w} \times \mathbf{w}' \\
                    a'\mathbf{w} + b\mathbf{w}' + \mathbf{v} \times \mathbf{v}' & bb' + \mathbf{w} \cdot \mathbf{v}' \end{pmatrix},
where · and × are the ordinary dot product and cross product of 3-vectors. With addition and scalar multiplication defined as usual the set of all such matrices forms a nonassociative unital 8-dimensional algebra over the reals, called Zorn's vector-matrix algebra.
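
A minimal Python sketch of this product (not taken from the cited sources), representing the vector-matrix with entries a, v, w, b as a tuple of two scalars and two NumPy 3-vectors; zorn_mul is an invented name:

    import numpy as np

    def zorn_mul(A, B):
        """Zorn vector-matrix product, built from the dot and cross products
        of 3-vectors; A and B are tuples (a, v, w, b)."""
        a, v, w, b = A
        a2, v2, w2, b2 = B
        return (a * a2 + np.dot(v, w2),
                a * v2 + b2 * v - np.cross(w, w2),
                a2 * w + b * w2 + np.cross(v, v2),
                b * b2 + np.dot(w, v2))

The identity element is the tuple (1, 0, 0, 1), corresponding to the ordinary 2×2 identity matrix.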

Define the "determinant" of a vector-matrix by the rule

.

This determinant is a quadratic form on Zorn's algebra which satisfies the composition rule:

    \det(AB) = \det(A) \det(B).
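
Continuing the zorn_mul sketch above, the "determinant" and the composition rule can be spot-checked numerically (zorn_det is an invented helper name):

    def zorn_det(A):
        """The "determinant" ab - v.w of a vector-matrix (a, v, w, b)."""
        a, v, w, b = A
        return a * b - np.dot(v, w)

    # Spot-check det(AB) = det(A) det(B) on random vector-matrices.
    rng = np.random.default_rng(1)
    A = (rng.normal(), rng.normal(size=3), rng.normal(size=3), rng.normal())
    B = (rng.normal(), rng.normal(size=3), rng.normal(size=3), rng.normal())
    print(np.isclose(zorn_det(zorn_mul(A, B)), zorn_det(A) * zorn_det(B)))   # True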

Zorn's vector-matrix algebra is, in fact, isomorphic to the algebra of split-octonions. Write a split-octonion in the form

    x = (a + \mathbf{v}) + \ell (b + \mathbf{w}),

where a and b are real numbers and v and w are pure imaginary quaternions regarded as vectors in R^3. The isomorphism from the split-octonions to Zorn's algebra is given by

    x \mapsto \phi(x) = \begin{pmatrix} a + b & \mathbf{w} - \mathbf{v} \\ \mathbf{v} + \mathbf{w} & a - b \end{pmatrix}.

This isomorphism preserves the norm since N(x) = \det \phi(x).
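
Continuing the earlier sketches (zorn_mul and zorn_det from this section, cd_mul and norm_form from the previous ones), the norm-preservation and multiplicativity of this map can be spot-checked numerically; phi is an invented name for the isomorphism:

    def phi(x):
        """Map coefficients (x0, ..., x7) in the basis 1, i, j, k, l, li, lj, lk
        to the Zorn vector-matrix tuple (a + b, w - v, v + w, a - b)."""
        a, b = x[0], x[4]
        v, w = x[1:4], x[5:8]
        return (a + b, w - v, v + w, a - b)

    rng = np.random.default_rng(2)
    x, y = rng.normal(size=8), rng.normal(size=8)
    print(np.isclose(norm_form(x), zorn_det(phi(x))))        # True: N(x) = det(phi(x))

    # phi is multiplicative: phi(x)phi(y) = phi(xy).
    xy = np.concatenate(cd_mul((x[:4], x[4:]), (y[:4], y[4:]), +1))
    lhs, rhs = zorn_mul(phi(x), phi(y)), phi(xy)
    print(all(np.allclose(l, r) for l, r in zip(lhs, rhs)))  # True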

Applications

Split-octonions are used in the description of physical law. For example:

  - The equations of electrodynamics can be reformulated in terms of split-octonions. [7]
  - Split-octonions have been used to study non-associativity, supersymmetry and hidden variables in quantum mechanics. [8]
  - The Dirac equation (the equation of motion of a free spin-1/2 particle, such as an electron or a proton) can be expressed using split-octonion arithmetic. [9]
  - The problem of a ball rolling without slipping on a ball three times its radius has the split real form of the exceptional group G2 as its symmetry group, a fact that can be described using the split-octonions. [10]

References

  1. Kevin McCrimmon (2004) A Taste of Jordan Algebras, page 158, Universitext, Springer ISBN 0-387-95447-3 MR 2014924
  2. Max Zorn (1931) "Alternativkörper und quadratische Systeme", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 9(3/4): 395–402
  3. Nathan Jacobson (1962) Lie Algebras, page 142, Interscience Publishers.
  4. Schafer, Richard D. (1966). An Introduction to Nonassociative Algebras. Academic Press. pp. 52–6. ISBN 0-486-68813-5.
  5. Lowell J. Page (1963) "Jordan Algebras", pages 144–186 in Studies in Modern Algebra, edited by A.A. Albert, Mathematical Association of America; Zorn’s vector-matrix algebra on page 180
  6. Arthur A. Sagle & Ralph E. Walde (1973) Introduction to Lie Groups and Lie Algebras, page 199, Academic Press
  7. M. Gogberashvili (2006) "Octonionic Electrodynamics", Journal of Physics A 39: 7099–7104. doi:10.1088/0305-4470/39/22/020
  8. V. Dzhunushaliev (2008) "Non-associativity, supersymmetry and hidden variables", Journal of Mathematical Physics 49: 042108. doi:10.1063/1.2907868; arXiv:0712.1647
  9. B. Wolk, Adv. Appl. Clifford Algebras 27(4), 3225 (2017).
  10. J. Baez and J. Huerta, "G2 and the rolling ball", Trans. Amer. Math. Soc. 366: 5257–5293 (2014); arXiv:1205.2447.

Further reading