Matrix equivalence

In linear algebra, two rectangular m-by-n matrices A and B are called equivalent if

$B = Q^{-1} A P$

for some invertible n-by-n matrix P and some invertible m-by-m matrix Q. Equivalent matrices represent the same linear transformation $V \to W$ under two different choices of a pair of bases of V and W, with P and Q being the change of basis matrices in V and W respectively.
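The defining relation is easy to check numerically. The following Python sketch (illustrative only; the particular matrices, seed, and use of NumPy's matrix_rank as a rank oracle are assumptions, not from the cited source) builds $B = Q^{-1} A P$ from random invertible P and Q and confirms that equivalence preserves rank:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 3, 4
A = rng.integers(-3, 4, size=(m, n)).astype(float)

# Random invertible P (n-by-n) and Q (m-by-m); a random real matrix is
# invertible with probability 1, but we verify to be safe.
P = rng.standard_normal((n, n))
Q = rng.standard_normal((m, m))
assert np.linalg.det(P) != 0 and np.linalg.det(Q) != 0

# B is equivalent to A by definition: B = Q^{-1} A P
B = np.linalg.inv(Q) @ A @ P

print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))  # True
```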

The notion of equivalence should not be confused with that of similarity, which is defined only for square matrices and is much more restrictive (similar matrices are certainly equivalent, but equivalent square matrices need not be similar). Similarity corresponds to matrices representing the same endomorphism $V \to V$ under two different choices of a single basis of V, used both for initial vectors and their images.

Properties

Matrix equivalence is an equivalence relation on the space of rectangular matrices.

For two rectangular matrices of the same size, their equivalence can also be characterized by the following conditions:

  - The matrices can be transformed into one another by a combination of elementary row and column operations.
  - The matrices have the same rank.

If matrices are row equivalent then they are also matrix equivalent. However, the converse does not hold; matrices that are matrix equivalent are not necessarily row equivalent. This makes matrix equivalence a generalization of row equivalence. [1]
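The distinction can be seen concretely. In the following SymPy sketch (an illustrative check, not from the cited source), the two matrices have equal rank, so they are matrix equivalent, yet their reduced row echelon forms differ, so they are not row equivalent:

```python
from sympy import Matrix

A = Matrix([[1, 0], [0, 0]])
B = Matrix([[0, 1], [0, 0]])

# Same rank: A and B are matrix equivalent.
print(A.rank() == B.rank())        # True

# Different reduced row echelon forms (hence different row spaces):
# A and B are not row equivalent.
print(A.rref()[0] == B.rref()[0])  # False
```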

Canonical form

The rank property yields an intuitive canonical form for matrices of the equivalence class of rank $k$:

$\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}$,

where the number of 1s on the diagonal is equal to $k$. This is a special case of the Smith normal form, which generalizes this concept from vector spaces to free modules over principal ideal domains. Thus:

Theorem: Any m×n matrix of rank k is matrix equivalent to the m×n matrix that is all zeros except that the first k diagonal entries are ones. [1]

Corollary: Matrix equivalence classes are characterized by rank: two matrices of the same size are matrix equivalent if and only if they have the same rank. [1]
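A minimal sketch of the canonical representative follows (the helper canonical_form is a hypothetical illustration; computing the actual change-of-basis matrices P and Q is omitted):

```python
import numpy as np

def canonical_form(m, n, k):
    """The m-by-n matrix that is all zeros except ones in the
    first k diagonal entries."""
    D = np.zeros((m, n))
    D[np.arange(k), np.arange(k)] = 1.0
    return D

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row = 2 * first row, so rank 1
k = np.linalg.matrix_rank(A)
D = canonical_form(*A.shape, k)

print(D)                              # [[1. 0. 0.], [0. 0. 0.]]
print(np.linalg.matrix_rank(D) == k)  # True: same equivalence class as A
```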

2×2 matrices

2×2 matrices have only three possible ranks: zero, one, or two. This means all 2×2 matrices fit into one of three matrix equivalence classes: [1]

$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

This means all 2×2 matrices are equivalent to one of these matrices. There is only one rank-zero matrix, but the other two classes have infinitely many members; the representative matrices above are the simplest member of each class.
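Classifying a 2×2 matrix therefore reduces to computing its rank, as in this small sketch (illustrative, not from the cited source):

```python
import numpy as np

# Simplest representative of each 2-by-2 equivalence class, keyed by rank.
representatives = {
    0: np.zeros((2, 2)),
    1: np.array([[1.0, 0.0], [0.0, 0.0]]),
    2: np.eye(2),
}

def representative(A):
    """Return the canonical representative of A's equivalence class."""
    return representatives[np.linalg.matrix_rank(A)]

print(representative(np.array([[2.0, 4.0], [1.0, 2.0]])))  # rank-1 class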

Matrix similarity

Matrix similarity is a special case of matrix equivalence: if two matrices are similar, then they are also equivalent. However, the converse is not true. [2] For example, the following two matrices are equivalent but not similar; both have rank 2, so they are equivalent, but the identity matrix is similar only to itself:

$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$
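The example can be verified with SymPy (an illustrative check, not from the cited source): the two matrices have the same rank, hence are equivalent, but their Jordan normal forms differ, hence they are not similar:

```python
from sympy import Matrix, eye

A = eye(2)                    # the 2-by-2 identity matrix
B = Matrix([[1, 1], [0, 1]])  # a shear matrix, also of rank 2

print(A.rank() == B.rank())   # True: equivalent

_, JA = A.jordan_form()       # Jordan form of the identity is itself
_, JB = B.jordan_form()       # Jordan form of B is a single Jordan block
print(JA == JB)               # False: not similar
```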


References

  1. Hefferon, Jim. Linear Algebra (4th ed.). pp. 270–272. This article incorporates text from this source, which is available under the CC BY-SA 3.0 license.
  2. Hefferon, Jim. Linear Algebra (4th ed.). p. 405. This article incorporates text from this source, which is available under the CC BY-SA 3.0 license.