Matrix field

In abstract algebra, a matrix field is a field with matrices as elements. In field theory there are two types of fields: finite fields and infinite fields. There are several examples of matrix fields of different characteristic and cardinality.

There is a finite matrix field of cardinality p for each prime p. One can find several finite matrix fields of characteristic p for any given prime number p. In general, corresponding to each finite field there is a matrix field. Since any two finite fields of equal cardinality are isomorphic, the elements of any finite field can be represented by matrices.[1]
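
For instance, a standard way to realize a finite field as matrices is to take powers of the companion matrix of an irreducible polynomial over $\mathbb{F}_p$. The following minimal sketch (an illustration, not taken from the cited reference; the variable names are arbitrary) exhibits a four-element matrix field built from x² + x + 1 over $\mathbb{F}_2$, with all matrix arithmetic taken modulo 2:

    import numpy as np

    # Companion matrix of x^2 + x + 1, which is irreducible over F_2.
    C = np.array([[0, 1],
                  [1, 1]])
    I = np.eye(2, dtype=int)

    # The four matrices {0, I, C, C^2} form a field isomorphic to GF(4).
    C2 = (C @ C) % 2
    assert np.array_equal(C2, (C + I) % 2)             # C^2 = C + I, as x^2 = x + 1 in GF(4)
    assert np.array_equal((C @ C2) % 2, I)             # C * C^2 = I, so C^2 is the inverse of C
    assert np.array_equal((C @ C2) % 2, (C2 @ C) % 2)  # multiplication is commutative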

Contrary to the general case of matrix multiplication, multiplication is commutative in a matrix field (when the usual operations are used). Since matrix addition and multiplication already have all the properties required of field operations except for commutativity of multiplication and the existence of multiplicative inverses, one way to verify that a set of matrices is a field under the usual matrix addition and multiplication is to check whether

  1. the set is closed under addition, subtraction and multiplication;
  2. the neutral element for matrix addition (that is, the zero matrix) is included;
  3. multiplication is commutative;
  4. the set contains a multiplicative identity (note that this does not have to be the identity matrix); and
  5. each matrix that is not the zero matrix has a multiplicative inverse.
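
For a finite set of matrices these conditions can be spot-checked mechanically. The sketch below is an illustration rather than part of the original article (the function name is_matrix_field is arbitrary); it performs all five checks with NumPy, taking the arithmetic modulo a prime p so that a finite set can actually be closed under the operations:

    import numpy as np
    from itertools import product

    def is_matrix_field(elements, p):
        """Check the five conditions for a finite set of square matrices,
        with all arithmetic taken modulo the prime p."""
        elements = [np.array(M) % p for M in elements]
        n = elements[0].shape[0]
        zero = np.zeros((n, n), dtype=int)

        def member(M):
            return any(np.array_equal(M % p, E) for E in elements)

        # 2. the zero matrix is included
        if not member(zero):
            return False
        # 1. closure under addition, subtraction and multiplication,
        # 3. commutativity of multiplication
        for A, B in product(elements, repeat=2):
            if not (member(A + B) and member(A - B) and member(A @ B)):
                return False
            if not np.array_equal((A @ B) % p, (B @ A) % p):
                return False
        # 4. a multiplicative identity exists in the set
        identities = [E for E in elements
                      if all(np.array_equal((E @ A) % p, A) for A in elements)]
        if not identities:
            return False
        one = identities[0]
        # 5. every nonzero element has a multiplicative inverse in the set
        for A in elements:
            if np.array_equal(A, zero):
                continue
            if not any(np.array_equal((A @ B) % p, one) for B in elements):
                return False
        return True

    # Example: the scalar matrices {0, I, 2I} with arithmetic mod 3
    I2 = np.eye(2, dtype=int)
    print(is_matrix_field([0 * I2, I2, 2 * I2], p=3))  # True

Applied to the four-element set {0, I, C, C²} from the earlier sketch (with p = 2), the same function likewise returns True.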

Examples

1. Take the set of all n×n matrices of the form

$$A_\alpha = \begin{pmatrix} \alpha & \alpha & \cdots & \alpha \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}$$

with $\alpha \in \mathbb{R}$, that is, matrices filled with zeroes except for the first row, which is filled with the same real constant $\alpha$. These matrices are commutative for multiplication: since only the first row of $A_\beta$ is nonzero and every entry of the first row of $A_\alpha$ equals $\alpha$,

$$A_\alpha A_\beta = A_{\alpha\beta} = A_{\beta\alpha} = A_\beta A_\alpha.$$

The multiplicative identity is $A_1$, the matrix whose first row consists of ones (note that it is not the identity matrix).

The multiplicative inverse of a matrix $A_\alpha$ with $\alpha \neq 0$ is given by $A_\alpha^{-1} = A_{1/\alpha}$.

It is easy to see that this matrix field is isomorphic to the field of real numbers under the map $\alpha \mapsto A_\alpha$.
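
A short numerical check of this example (an illustrative sketch; the helper name A and the choice n = 3 are arbitrary):

    import numpy as np

    def A(alpha, n=3):
        """Matrix with the first row filled with alpha and zeroes elsewhere."""
        M = np.zeros((n, n))
        M[0, :] = alpha
        return M

    a, b = 2.5, -4.0
    assert np.allclose(A(a) @ A(b), A(a * b))     # multiplication mirrors multiplication in R
    assert np.allclose(A(a) @ A(b), A(b) @ A(a))  # and is commutative
    assert np.allclose(A(a) @ A(1.0), A(a))       # A(1) is the multiplicative identity
    assert np.allclose(A(a) @ A(1 / a), A(1.0))   # A(1/a) is the multiplicative inverse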

2. The set of matrices of the form

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix},$$

where $a$ and $b$ range over the field of real numbers, forms a matrix field which is isomorphic to the field of complex numbers: $a$ corresponds to the real part of the number, while $b$ corresponds to the imaginary part. So a complex number $a + bi$ is represented as

$$a + bi \longleftrightarrow \begin{pmatrix} a & -b \\ b & a \end{pmatrix}.$$

One can easily verify that the matrix $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, which represents the imaginary unit $i$, satisfies $J^2 = -I$:

$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},$$

and also, by computing a matrix exponential, that Euler's identity $e^{i\pi} + 1 = 0$ is valid:

$$\exp\!\left( \pi \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right) + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
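
The same identities can be checked numerically (an illustrative sketch; the helper name C and the sample values are arbitrary), using scipy.linalg.expm for the matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    def C(a, b):
        """Matrix representing the complex number a + bi."""
        return np.array([[a, -b],
                         [b,  a]], dtype=float)

    I = C(1, 0)  # the multiplicative identity (here it is the identity matrix)
    J = C(0, 1)  # the matrix playing the role of the imaginary unit i

    # i^2 = -1
    assert np.allclose(J @ J, -I)

    # Multiplication of these matrices mirrors complex multiplication:
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    a, b, c, d = 1.0, 2.0, -3.0, 0.5
    assert np.allclose(C(a, b) @ C(c, d), C(a * c - b * d, a * d + b * c))

    # Euler's identity e^{i*pi} + 1 = 0, via the matrix exponential
    assert np.allclose(expm(np.pi * J) + I, np.zeros((2, 2)), atol=1e-12)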

References

  1. Lidl, Rudolf; Niederreiter, Harald (1986). Introduction to Finite Fields and Their Applications (1st ed.). Cambridge University Press. ISBN 0-521-30706-6.