Quaternionic matrix

A quaternionic matrix is a matrix whose elements are quaternions.

Matrix operations

The quaternions form a noncommutative ring, and therefore addition and multiplication can be defined for quaternionic matrices as for matrices over any ring.

Addition. The sum of two quaternionic matrices A and B is defined in the usual way by element-wise addition:

$(A + B)_{ij} = A_{ij} + B_{ij}.$

Multiplication. The product of two quaternionic matrices A and B also follows the usual definition for matrix multiplication. For it to be defined, the number of columns of A must equal the number of rows of B. Then the entry in the ith row and jth column of the product is formed from the ith row of the first matrix and the jth column of the second matrix. Specifically:

$(AB)_{ij} = \sum_{k} A_{ik} B_{kj}.$

For example, for

$U = \begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix}, \qquad V = \begin{pmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{pmatrix},$

the product is

$UV = \begin{pmatrix} u_{11}v_{11} + u_{12}v_{21} & u_{11}v_{12} + u_{12}v_{22} \\ u_{21}v_{11} + u_{22}v_{21} & u_{21}v_{12} + u_{22}v_{22} \end{pmatrix}.$

Since quaternionic multiplication is noncommutative, care must be taken to preserve the order of the factors when computing the product of matrices.
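As a concrete illustration (not drawn from the cited sources), the following Python sketch implements the Hamilton product on a hand-rolled Quaternion class and uses it to multiply quaternionic matrices given as nested lists; the names Quaternion and matmul are illustrative only. Already for 1 × 1 matrices it shows why the factor order must be preserved: (i)(j) = (k) while (j)(i) = (−k).

```python
# Minimal sketch of quaternionic matrix multiplication; the Quaternion
# class and matmul helper are illustrative, not a standard library API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    a: float  # real part
    b: float  # coefficient of i
    c: float  # coefficient of j
    d: float  # coefficient of k

    def __add__(self, other):
        return Quaternion(self.a + other.a, self.b + other.b,
                          self.c + other.c, self.d + other.d)

    def __mul__(self, other):
        # Hamilton product; noncommutative in general.
        return Quaternion(
            self.a*other.a - self.b*other.b - self.c*other.c - self.d*other.d,
            self.a*other.b + self.b*other.a + self.c*other.d - self.d*other.c,
            self.a*other.c - self.b*other.d + self.c*other.a + self.d*other.b,
            self.a*other.d + self.b*other.c - self.c*other.b + self.d*other.a,
        )

def matmul(A, B):
    """Product of quaternionic matrices given as nested lists of Quaternions.

    The factor order A[i][k] * B[k][j] is kept as written, because
    quaternion multiplication is noncommutative.
    """
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    zero = Quaternion(0, 0, 0, 0)
    C = [[zero for _ in range(p)] for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = zero
            for k in range(m):
                s = s + A[i][k] * B[k][j]
            C[i][j] = s
    return C

# Example: with the 1x1 matrices A = (i) and B = (j), AB = (k) but BA = (-k).
i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
print(matmul([[i]], [[j]])[0][0])  # Quaternion(a=0, b=0, c=0, d=1)  ->  k
print(matmul([[j]], [[i]])[0][0])  # Quaternion(a=0, b=0, c=0, d=-1) -> -k
```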

The identity for this multiplication is, as expected, the diagonal matrix I = diag(1, 1, ..., 1). Multiplication follows the usual laws of associativity and distributivity. The trace of a matrix is defined as the sum of the diagonal elements, but in general

$\operatorname{tr}(AB) \neq \operatorname{tr}(BA).$
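As a simple illustration, already the 1 × 1 matrices A = (i) and B = (j) give

$\operatorname{tr}(AB) = ij = k, \qquad \operatorname{tr}(BA) = ji = -k,$

so the two traces differ.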

Left scalar multiplication and right scalar multiplication are defined by

$(qA)_{ij} = q\,A_{ij}, \qquad (Aq)_{ij} = A_{ij}\,q.$

Again, since multiplication is not commutative, some care must be taken with the order of the factors. [1]
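For instance, left and right multiplication of the 1 × 1 matrix A = (i) by the scalar q = j give different results: qA = (ji) = (−k), whereas Aq = (ij) = (k).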

Determinants

There is no natural way to define a determinant for (square) quaternionic matrices so that the values of the determinant are quaternions. [2] Complex-valued determinants can be defined, however. [3] The quaternion a + bi + cj + dk can be represented as the 2×2 complex matrix

$\begin{pmatrix} a + bi & c + di \\ -c + di & a - bi \end{pmatrix}.$

This defines a map $\Psi_{mn}$ from the m by n quaternionic matrices to the 2m by 2n complex matrices by replacing each entry in the quaternionic matrix by its 2 by 2 complex representation. The complex-valued determinant of a square quaternionic matrix A is then defined as det(Ψ(A)). Many of the usual laws for determinants hold; in particular, an n by n matrix is invertible if and only if its determinant is nonzero.
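A minimal NumPy sketch of this complex-valued determinant, assuming the 2 × 2 representation given above; the helper names psi_entry, psi and complex_det are illustrative, not from the cited sources.

```python
# Sketch: build Psi(A) as a block matrix of 2x2 complex representations,
# then take the ordinary complex determinant det(Psi(A)).
import numpy as np

def psi_entry(a, b, c, d):
    """2x2 complex representation of the quaternion a + bi + cj + dk."""
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

def psi(A):
    """Apply Psi to a quaternionic matrix given as an m x n nested list of
    (a, b, c, d) tuples, producing a 2m x 2n complex matrix."""
    return np.block([[psi_entry(*q) for q in row] for row in A])

def complex_det(A):
    """Complex-valued determinant det(Psi(A)) of a square quaternionic matrix."""
    return np.linalg.det(psi(A))

# Example: the 1x1 matrix (j) maps to [[0, 1], [-1, 0]], whose determinant
# is 1, so this matrix is invertible.
print(complex_det([[(0, 0, 1, 0)]]))  # (1+0j)
```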

Applications

Quaternionic matrices are used in quantum mechanics [4] and in the treatment of multibody problems. [5]

References

  1. Tapp, Kristopher (2005). Matrix Groups for Undergraduates. AMS Bookstore. pp. 11 ff. ISBN 0-8218-3785-0.
  2. Aslaksen, Helmer (1996). "Quaternionic determinants". The Mathematical Intelligencer. 18 (3): 57–65. doi:10.1007/BF03024312. S2CID 13958298.
  3. Study, E. (1920). "Zur Theorie der linearen Gleichungen" [On the theory of linear equations]. Acta Mathematica (in German). 42 (1): 1–61. doi:10.1007/BF02404401.
  4. Rösch, N. (1983). "Time-reversal symmetry, Kramers' degeneracy and the algebraic eigenvalue problem". Chemical Physics. 80 (1–2): 1–5. doi:10.1016/0301-0104(83)85163-5.
  5. Gürlebeck, Klaus; Sprössig, Wolfgang (1997). "Quaternionic matrices". Quaternionic and Clifford Calculus for Physicists and Engineers. Wiley. pp. 32–34. ISBN 978-0-471-96200-7.