Smith normal form

In mathematics, the Smith normal form (sometimes abbreviated SNF [1]) is a normal form that can be defined for any matrix (not necessarily square) with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and can be obtained from the original matrix by multiplying on the left and right by invertible square matrices. In particular, the integers are a PID, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely generated modules over a PID, and in particular for deducing the structure of a quotient of a free module. It is named after the Irish mathematician Henry John Stephen Smith.

Definition

Let $A$ be a nonzero $m \times n$ matrix over a principal ideal domain $R$. There exist invertible $m \times m$ and $n \times n$ matrices $S, T$ (with entries in $R$) such that the product $S A T$ is

$$\begin{pmatrix}
\alpha_1 &          &        &          &   &        &   \\
         & \alpha_2 &        &          &   &        &   \\
         &          & \ddots &          &   &        &   \\
         &          &        & \alpha_r &   &        &   \\
         &          &        &          & 0 &        &   \\
         &          &        &          &   & \ddots &   \\
         &          &        &          &   &        & 0
\end{pmatrix}$$

and the diagonal elements $\alpha_i$ satisfy $\alpha_i \mid \alpha_{i+1}$ for all $1 \le i < r$. This is the Smith normal form of the matrix $A$. The elements $\alpha_i$ are unique up to multiplication by a unit and are called the elementary divisors, invariants, or invariant factors. They can be computed (up to multiplication by a unit) as

$$\alpha_i = \frac{d_i(A)}{d_{i-1}(A)},$$

where $d_i(A)$ (called the $i$-th determinant divisor) equals the greatest common divisor of the determinants of all $i \times i$ minors of the matrix $A$, and $d_0(A) := 1$.

Example: For a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the Smith normal form is $\begin{pmatrix} \alpha_1 & 0 \\ 0 & \alpha_2 \end{pmatrix}$ with $\alpha_1 = d_1 = \gcd(a, b, c, d)$ and $\alpha_1 \alpha_2 = d_2 = ad - bc$ (all up to multiplication by a unit).
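The following minimal Python sketch (illustrative only; it enumerates every minor, so it is practical only for very small matrices) computes the determinant divisors $d_i$ and the invariant factors directly from this formula, using the integer matrix of the worked example further below:

```python
from itertools import combinations
from math import gcd

def det(m):
    """Integer determinant by cofactor expansion along the first row
    (fine for the tiny matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def determinant_divisors(a):
    """d_i = gcd of the determinants of all i x i minors of the integer matrix a."""
    rows, cols = len(a), len(a[0])
    ds = []
    for i in range(1, min(rows, cols) + 1):
        g = 0
        for rs in combinations(range(rows), i):
            for cs in combinations(range(cols), i):
                g = gcd(g, abs(det([[a[r][c] for c in cs] for r in rs])))
        ds.append(g)
    return ds

a = [[2, 4, 4], [-6, 6, 12], [10, 4, 16]]       # the matrix of the example below
d = determinant_divisors(a)                     # [2, 4, 624]
# alpha_i = d_i / d_{i-1} with d_0 = 1 (assumes all d_i are nonzero, i.e. full rank)
alphas = [d[0]] + [d[i] // d[i - 1] for i in range(1, len(d))]
print(alphas)                                   # [2, 2, 156]
```

The algorithm described in the next section reaches the same result without enumerating minors.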

Algorithm

The first goal is to find invertible square matrices $S$ and $T$ such that the product $S A T$ is diagonal. This is the hardest part of the algorithm. Once diagonality is achieved, it becomes relatively easy to put the matrix into Smith normal form. Phrased more abstractly, the goal is to show that, thinking of $A$ as a map from $R^n$ (the free $R$-module of rank $n$) to $R^m$ (the free $R$-module of rank $m$), there are isomorphisms $S \colon R^m \to R^m$ and $T \colon R^n \to R^n$ such that $S \cdot A \cdot T$ has the simple form of a diagonal matrix. The matrices $S$ and $T$ can be found by starting out with identity matrices of the appropriate size, and modifying $S$ each time a row operation is performed on $A$ in the algorithm by the corresponding column operation (for example, if row $i$ is added to row $j$ of $A$, then column $j$ should be subtracted from column $i$ of $S$ to retain the product invariant), and similarly modifying $T$ for each column operation performed. Since row operations are left-multiplications and column operations are right-multiplications, this preserves the invariant $A = S' \cdot A' \cdot T'$, where $A', S', T'$ denote current values and $A$ denotes the original matrix; eventually the matrix $A'$ in this invariant becomes diagonal. Only invertible row and column operations are performed, so $S'$ and $T'$ remain invertible, and the matrices required by the definition can be read off from them.
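A minimal Python sketch of this bookkeeping over the integers (the helper functions are illustrative, not a fixed API) shows how a single row operation on $A$ is compensated by a column operation on $S$ so that the product $S \cdot A \cdot T$ stays equal to the original matrix:

```python
def add_row(a, i, j, c):
    """Row operation on A: add c times row i to row j (invertible over the integers)."""
    a[j] = [x + c * y for x, y in zip(a[j], a[i])]

def compensate_on_s(s, i, j, c):
    """Matching operation on S: subtract c times column j from column i,
    i.e. right-multiply S by the inverse of the row operation just applied to A."""
    for row in s:
        row[i] -= c * row[j]

def matmul(x, y):
    return [[sum(x[r][k] * y[k][c] for k in range(len(y)))
             for c in range(len(y[0]))] for r in range(len(x))]

a0 = [[2, 4], [6, 8]]                 # original matrix
a = [row[:] for row in a0]            # working copy being diagonalized
s = [[1, 0], [0, 1]]
t = [[1, 0], [0, 1]]

add_row(a, 0, 1, -3)                  # row 1 of A gets -3 times row 0 added
compensate_on_s(s, 0, 1, -3)          # keep the invariant S * A * T == A0
assert matmul(matmul(s, a), t) == a0
```

Column operations on $A$ are mirrored on $T$ in the same way, by applying the inverse row operation on the left of $T$.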

For $a \in R \setminus \{0\}$, write $\delta(a)$ for the number of prime factors of $a$ (these exist and are unique since any PID is also a unique factorization domain). In particular, $R$ is also a Bézout domain, so it is a gcd domain and the gcd of any two elements satisfies a Bézout's identity.

To put a matrix into Smith normal form, one can repeatedly apply the following, where $t$ loops from 1 to $\min(m, n)$.

Step I: Choosing a pivot

Choose $j_t$ to be the smallest column index of $A$ with a non-zero entry, starting the search at column index $j_{t-1} + 1$ if $t > 1$.

We wish to have $a_{t, j_t} \neq 0$; if this is the case this step is complete, otherwise there is by assumption some $k > t$ with $a_{k, j_t} \neq 0$, and we can exchange rows $t$ and $k$, thereby obtaining $a_{t, j_t} \neq 0$.

Our chosen pivot is now at position $(t, j_t)$.

Step II: Improving the pivot

If there is an entry at position $(k, j_t)$ such that $a_{t, j_t} \nmid a_{k, j_t}$, then, letting $\beta = \gcd(a_{t, j_t}, a_{k, j_t})$, we know by the Bézout property that there exist $\sigma, \tau$ in $R$ such that

$$a_{t, j_t} \cdot \sigma + a_{k, j_t} \cdot \tau = \beta.$$
By left-multiplication with an appropriate invertible matrix $L$, it can be achieved that row $t$ of the matrix product is the sum of $\sigma$ times the original row $t$ and $\tau$ times the original row $k$, that row $k$ of the product is another linear combination of those original rows, and that all other rows are unchanged. Explicitly, if $\sigma$ and $\tau$ satisfy the above equation, then for $\alpha = a_{t, j_t}/\beta$ and $\gamma = a_{k, j_t}/\beta$ (which divisions are possible by the definition of $\beta$) one has

$$\sigma \cdot \alpha + \tau \cdot \gamma = 1,$$

so that the matrix

$$L_0 = \begin{pmatrix} \sigma & \tau \\ -\gamma & \alpha \end{pmatrix}$$

is invertible, with inverse

$$\begin{pmatrix} \alpha & -\tau \\ \gamma & \sigma \end{pmatrix}.$$
Now $L$ can be obtained by fitting $L_0$ into rows and columns $t$ and $k$ of the identity matrix. By construction the matrix obtained after left-multiplying by $L$ has entry $\beta$ at position $(t, j_t)$ (and due to our choice of $\alpha$ and $\gamma$ it also has an entry 0 at position $(k, j_t)$, which is useful though not essential for the algorithm). This new entry $\beta$ divides the entry $a_{t, j_t}$ that was there before, and so in particular $\delta(\beta) < \delta(a_{t, j_t})$; therefore repeating these steps must eventually terminate. One ends up with a matrix having an entry at position $(t, j_t)$ that divides all entries in column $j_t$.
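A short Python sketch of this pivot improvement over the integers (illustrative helper names; the PID here is $\mathbb{Z}$) computes Bézout coefficients by the extended Euclidean algorithm and replaces rows $t$ and $k$ by the unimodular combination built from $L_0$:

```python
def bezout(x, y):
    """Extended Euclid: return (g, sigma, tau) with sigma*x + tau*y == g == gcd(x, y) >= 0."""
    old_r, r = x, y
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    if old_r < 0:                          # normalise the sign of the gcd
        old_r, old_s, old_t = -old_r, -old_s, -old_t
    return old_r, old_s, old_t

def improve_pivot(a, t, k, j):
    """Replace rows t and k of a by unimodular combinations so that the entry at
    position (t, j) becomes beta = gcd(a[t][j], a[k][j]) and the entry at (k, j)
    becomes 0; this is left-multiplication by the matrix L of Step II."""
    beta, sigma, tau = bezout(a[t][j], a[k][j])
    alpha, gamma = a[t][j] // beta, a[k][j] // beta   # so sigma*alpha + tau*gamma == 1
    row_t = [sigma * p + tau * q for p, q in zip(a[t], a[k])]
    row_k = [-gamma * p + alpha * q for p, q in zip(a[t], a[k])]
    a[t], a[k] = row_t, row_k

a = [[18, 5], [-16, 3]]
improve_pivot(a, 0, 1, 0)
print(a)          # [[2, 8], [0, 67]] -- position (0, 0) now holds gcd(18, -16) = 2
```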

Step III: Eliminating entries

Finally, adding appropriate multiples of row $t$ (by left-multiplication with an appropriate invertible matrix), it can be achieved that all entries in column $j_t$ except for that at position $(t, j_t)$ are zero. However, to make the matrix fully diagonal we need to eliminate nonzero entries in the row of position $(t, j_t)$ as well. This can be achieved by repeating the steps in Step II for columns instead of rows, and using multiplication on the right by the transpose of the obtained matrix $L$. In general this will result in the zero entries from the prior application of Step III becoming nonzero again.

However, notice that each application of Step II for either rows or columns must continue to reduce the value of $\delta(a_{t, j_t})$, and so the process must eventually stop after some number of iterations, leading to a matrix where the entry at position $(t, j_t)$ is the only non-zero entry in both its row and column.

At this point, only the block of $A$ to the lower right of $(t, j_t)$ needs to be diagonalized, and conceptually the algorithm can be applied recursively, treating this block as a separate matrix. In other words, we can increment $t$ by one and go back to Step I.

Final step

Applying the steps described above to the remaining non-zero columns of the resulting matrix (if any), we get an $m \times n$ matrix with column indices $j_1 < \ldots < j_r$, where $r \le \min(m, n)$. The matrix entries at positions $(l, j_l)$ are non-zero, and every other entry is zero.

Now we can move the null columns of this matrix to the right, so that the nonzero entries are on positions $(i, i)$ for $1 \le i \le r$. For short, set $\alpha_i$ for the element at position $(i, i)$.

The condition of divisibility of diagonal entries might not be satisfied. For any index $i < r$ for which $\alpha_i \nmid \alpha_{i+1}$, one can repair this shortcoming by operations on rows and columns $i$ and $i+1$ only: first add column $i+1$ to column $i$ to get an entry $\alpha_{i+1}$ in column $i$ without disturbing the entry $\alpha_i$ at position $(i, i)$, and then apply a row operation to make the entry at position $(i, i)$ equal to $\beta = \gcd(\alpha_i, \alpha_{i+1})$ as in Step II; finally proceed as in Step III to make the matrix diagonal again. Since the new entry at position $(i+1, i+1)$ is a linear combination of the original $\alpha_i, \alpha_{i+1}$, it is divisible by $\beta$.

The value $\delta(\alpha_1) + \cdots + \delta(\alpha_r)$ does not change by the above operation (it is $\delta$ of the determinant of the upper $r \times r$ submatrix), whence that operation does diminish (by moving prime factors to the right) the value of

$$\sum_{j=1}^{r} (r - j)\,\delta(\alpha_j).$$
So after finitely many applications of this operation no further application is possible, which means that we have obtained $\alpha_1 \mid \alpha_2 \mid \cdots \mid \alpha_r$ as desired.
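A small $2 \times 2$ case illustrates this repair. Starting from $\begin{pmatrix} 4 & 0 \\ 0 & 6 \end{pmatrix}$, where $4 \nmid 6$: adding column 2 to column 1 gives $\begin{pmatrix} 4 & 0 \\ 6 & 6 \end{pmatrix}$; since $\gcd(4, 6) = 2 = (-1) \cdot 4 + 1 \cdot 6$, the row operation of Step II with $\sigma = -1$, $\tau = 1$ (so $\alpha = 2$, $\gamma = 3$) yields $\begin{pmatrix} 2 & 6 \\ 0 & 12 \end{pmatrix}$; finally subtracting 3 times column 1 from column 2 as in Step III gives $\begin{pmatrix} 2 & 0 \\ 0 & 12 \end{pmatrix}$, in which $2 \mid 12$ as required, and indeed $\delta(4) + \delta(6) = \delta(2) + \delta(12)$.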

Since all row and column manipulations involved in the process are invertible, this shows that there exist invertible $m \times m$ and $n \times n$ matrices $S, T$ so that the product $S A T$ satisfies the definition of a Smith normal form. In particular, this shows that the Smith normal form exists, which was assumed without proof in the definition.

Applications

The Smith normal form is useful for computing the homology of a chain complex when the chain modules of the chain complex are finitely generated. For instance, in topology, it can be used to compute the homology of a finite simplicial complex or CW complex over the integers, because the boundary maps in such a complex are just integer matrices. It can also be used to determine the invariant factors that occur in the structure theorem for finitely generated modules over a principal ideal domain, which includes the fundamental theorem of finitely generated abelian groups.
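For example, the Klein bottle has a CW structure with one 0-cell, two 1-cells $a, b$ and one 2-cell attached along the word $a b a b^{-1}$, so its cellular chain complex is $\mathbb{Z} \xrightarrow{\partial_2} \mathbb{Z}^2 \xrightarrow{\partial_1} \mathbb{Z}$ with $\partial_1 = 0$ and $\partial_2 = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$. The matrix $\partial_2$ is already in Smith normal form, with single invariant factor 2, and reading off the cokernel gives $H_1 \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$.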

The Smith normal form is also used in control theory to compute transmission and blocking zeros of a transfer function matrix. [2]

Example

As an example, we will find the Smith normal form of the following matrix over the integers.

$$\begin{pmatrix} 2 & 4 & 4 \\ -6 & 6 & 12 \\ 10 & 4 & 16 \end{pmatrix}$$

The following matrices are intermediate steps of the algorithm applied to the above matrix: first the pivot 2 is used to clear the first row and column (Step III), and then the pivot of the remaining block is improved to $\gcd(18, -16) = 2$ as in Step II, which also clears the rest of its column.

$$\begin{pmatrix} 2 & 0 & 0 \\ 0 & 18 & 24 \\ 0 & -16 & -4 \end{pmatrix}
\qquad
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 20 \\ 0 & 0 & 156 \end{pmatrix}$$

So the Smith normal form is

$$\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 156 \end{pmatrix}$$

and the invariant factors are 2, 2 and 156.
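For integer matrices, the whole procedure can be sketched in a few dozen lines of Python. The sketch below is a simplified variant of Steps I–III (it clears rows and columns by Euclidean remainders rather than by the explicit $2 \times 2$ block $L_0$) followed by the final divisibility repair, so it is not an exact transcription of the operations above:

```python
from math import gcd

def smith_normal_form(a):
    """Smith normal form of a nonzero integer matrix given as a list of lists.
    Simplified sketch: clear the row and column of each pivot by Euclidean
    reduction, then enforce the divisibility chain on the diagonal."""
    a = [row[:] for row in a]
    rows, cols = len(a), len(a[0])

    def swap_rows(i, j):
        a[i], a[j] = a[j], a[i]

    def swap_cols(i, j):
        for row in a:
            row[i], row[j] = row[j], row[i]

    for t in range(min(rows, cols)):
        # Step I: bring some nonzero entry of the remaining block to position (t, t).
        pivot = next(((i, j) for j in range(t, cols)
                      for i in range(t, rows) if a[i][j] != 0), None)
        if pivot is None:
            break                                  # the rest of the matrix is zero
        swap_rows(t, pivot[0])
        swap_cols(t, pivot[1])
        # Steps II and III: reduce until the pivot is the only nonzero entry in its
        # row and column; |a[t][t]| strictly decreases at each swap, so this stops.
        while (any(a[i][t] for i in range(t + 1, rows))
               or any(a[t][j] for j in range(t + 1, cols))):
            for i in range(t + 1, rows):           # clear column t with row operations
                q = a[i][t] // a[t][t]
                a[i] = [x - q * y for x, y in zip(a[i], a[t])]
                if a[i][t] != 0:                   # nonzero remainder: make it the new pivot
                    swap_rows(t, i)
            for j in range(t + 1, cols):           # clear row t with column operations
                q = a[t][j] // a[t][t]
                for row in a:
                    row[j] -= q * row[t]
                if a[t][j] != 0:
                    swap_cols(t, j)
    # Final step: repair the divisibility chain by replacing consecutive diagonal
    # entries with their gcd and lcm until each one divides the next.
    changed = True
    while changed:
        changed = False
        for i in range(min(rows, cols) - 1):
            x, y = a[i][i], a[i + 1][i + 1]
            if x != 0 and y % x != 0:
                g = gcd(x, y)
                a[i][i], a[i + 1][i + 1] = g, x * y // g
                changed = True
    for i in range(min(rows, cols)):
        a[i][i] = abs(a[i][i])                     # normalise signs on the diagonal
    return a

print(smith_normal_form([[2, 4, 4], [-6, 6, 12], [10, 4, 16]]))
# [[2, 0, 0], [0, 2, 0], [0, 0, 156]]
```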

Run-time complexity

The Smith normal form of an $N \times N$ integer matrix $A$ can be computed in time polynomial in $N$ and in the bit size of the entries of $A$. [3] If the matrix is sparse, the computation is typically much faster.

Similarity

The Smith normal form can be used to determine whether or not matrices with entries over a common field $K$ are similar. Specifically, two matrices $A$ and $B$ are similar if and only if the characteristic matrices $xI - A$ and $xI - B$ have the same Smith normal form (working in the PID $K[x]$).

For example, with

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad
B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$

$A$ and $B$ are similar because the Smith normal forms of their characteristic matrices match (both are $\operatorname{diag}(1, (x-1)^2)$), but they are not similar to $C$ because the Smith normal form of its characteristic matrix, $\operatorname{diag}(x-1, x-1)$, does not match.
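A sketch of this similarity test using SymPy (the matrices are those of the example above; the helper functions are illustrative) computes the invariant factors of the characteristic matrices via determinant divisors over $\mathbb{Q}[x]$:

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')

def determinant_divisors(m):
    """d_i = monic gcd of all i x i minors of a polynomial matrix over Q[x]."""
    ds = []
    for i in range(1, m.rows + 1):
        g = None
        for rs in combinations(range(m.rows), i):
            for cs in combinations(range(m.cols), i):
                minor = m.extract(list(rs), list(cs)).det()
                g = minor if g is None else sp.gcd(g, minor)
        ds.append(sp.Poly(g, x).monic().as_expr())
    return ds

def invariant_factors(a):
    """Invariant factors of the characteristic matrix x*I - a."""
    d = determinant_divisors(x * sp.eye(a.rows) - a)
    return [d[0]] + [sp.cancel(d[i] / d[i - 1]) for i in range(1, len(d))]

A = sp.Matrix([[1, 1], [0, 1]])
B = sp.Matrix([[1, 0], [1, 1]])
C = sp.eye(2)

print(invariant_factors(A))   # [1, x**2 - 2*x + 1]
print(invariant_factors(B))   # [1, x**2 - 2*x + 1]
print(invariant_factors(C))   # [x - 1, x - 1]
```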

Notes

  1. Stanley, Richard P. (2016). "Smith normal form in combinatorics". Journal of Combinatorial Theory, Series A. 144: 476–495. arXiv:1602.00166. doi:10.1016/j.jcta.2016.06.013. S2CID 14400632.
  2. Maciejowski, Jan M. (1989). Multivariable Feedback Design. Wokingham, England: Addison-Wesley. ISBN 0201182432. OCLC 19456124.
  3. "Computation time of Smith normal form in Maple". MathOverflow. Retrieved 2024-04-05.
