In linear algebra, two rectangular m-by-n matrices A and B are called equivalent if
$$B = Q^{-1} A P$$
for some invertible n-by-n matrix P and some invertible m-by-m matrix Q. Equivalent matrices represent the same linear transformation V → W under two different choices of a pair of bases of V and W, with P and Q being the change-of-basis matrices in V and W respectively.
The notion of equivalence should not be confused with that of similarity, which is only defined for square matrices, and is much more restrictive (similar matrices are certainly equivalent, but equivalent square matrices need not be similar). That notion corresponds to matrices representing the same endomorphism V → V under two different choices of a single basis of V, used both for initial vectors and their images.
Matrix equivalence is an equivalence relation on the space of rectangular matrices.
For two rectangular matrices of the same size, equivalence can also be characterized by either of the following conditions: the matrices can be transformed into one another by a combination of elementary row and column operations, or, equivalently, the matrices have the same rank.
If matrices are row equivalent then they are also matrix equivalent. However, the converse does not hold; matrices that are matrix equivalent are not necessarily row equivalent. This makes matrix equivalence a generalization of row equivalence. [1]
The rank property yields an intuitive canonical form for matrices of the equivalence class of rank $k$ as
$$\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix},$$
where the number of $1$s on the diagonal is equal to $k$. This is a special case of the Smith normal form, which generalizes this concept from vector spaces to free modules over principal ideal domains. Thus:
Theorem: Any m×n matrix of rank k is matrix equivalent to the m×n matrix that is all zeros except that the first k diagonal entries are ones. [1] Corollary: Matrix equivalence classes are characterized by rank: two same-sized matrices are matrix equivalent if and only if they have the same rank. [1]
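As a small illustrative sketch (not part of the original text, and assuming NumPy is available), the canonical form and the rank test can be checked numerically; the helper names below are hypothetical.

```python
import numpy as np

def canonical_form(m, n, k):
    """m-by-n matrix that is all zeros except ones in the first k diagonal entries."""
    D = np.zeros((m, n))
    D[:k, :k] = np.eye(k)
    return D

def are_matrix_equivalent(A, B):
    """Two same-sized matrices are matrix equivalent iff they have the same rank."""
    return A.shape == B.shape and np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])                               # rank 1
print(np.linalg.matrix_rank(A))                            # 1
print(canonical_form(2, 3, 1))                             # [[1. 0. 0.], [0. 0. 0.]]
print(are_matrix_equivalent(A, canonical_form(2, 3, 1)))   # True
```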
A 2×2 matrix has only three possible ranks: zero, one, or two. Thus every 2×2 matrix falls into one of three matrix equivalence classes, represented by [1]
$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Every 2×2 matrix is equivalent to exactly one of these. The zero-rank class contains only the zero matrix, but the other two classes have infinitely many members; the representative matrices above are the simplest member of each class.
Matrix similarity is a special case of matrix equivalence: if two matrices are similar, then they are also equivalent. However, the converse is not true. [2] For example, the matrices
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
are equivalent but not similar: both have rank 2, so by the corollary above they are equivalent, but the identity matrix is similar only to itself, so they cannot be similar.
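A minimal numerical check of this example (a sketch of my own, assuming NumPy): taking Q = I and P = B exhibits the equivalence B = Q⁻¹AP explicitly, while conjugating the identity by any invertible P always returns the identity, never B.

```python
import numpy as np

A = np.eye(2)                       # identity matrix, rank 2
B = np.array([[1., 1.],
              [0., 1.]])            # rank 2, but not similar to the identity

# Equivalent: B = Q^{-1} A P with the explicit choice Q = I, P = B.
Q, P = np.eye(2), B
print(np.allclose(np.linalg.inv(Q) @ A @ P, B))                 # True

# Not similar: for any invertible P, P^{-1} A P stays equal to A (the identity).
P_any = np.array([[2., 1.],
                  [1., 1.]])
print(np.allclose(np.linalg.inv(P_any) @ A @ P_any, A))         # True
```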
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
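As a brief illustration of the invertibility criterion (my own sketch, assuming NumPy), a nonzero determinant signals that the inverse exists:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
print(np.linalg.det(A))          # 1.0 -> nonzero, so A is invertible
print(np.linalg.inv(A))          # the inverse exists

S = np.array([[1., 2.],
              [2., 4.]])
print(np.linalg.det(S))          # ~0.0 -> singular, no inverse
```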
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together.
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
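A minimal sketch of the dimension rule (assuming NumPy): the inner dimensions must agree, and the product inherits the outer dimensions.

```python
import numpy as np

A = np.ones((2, 3))   # 2 rows, 3 columns
B = np.ones((3, 4))   # 3 rows, 4 columns: columns of A match rows of B
C = A @ B             # product has A's row count and B's column count
print(C.shape)        # (2, 4)
```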
In linear algebra, an n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that $AB = BA = I_n$, where $I_n$ denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by $A^{-1}$. Matrix inversion is the process of finding the matrix which, when multiplied by the original matrix, gives the identity matrix.
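A quick numerical check of the defining condition (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[4., 7.],
              [2., 6.]])
B = np.linalg.inv(A)                       # the (multiplicative) inverse A^{-1}
print(np.allclose(A @ B, np.eye(2)))       # True: A B = I_2
print(np.allclose(B @ A, np.eye(2)))       # True: B A = I_2
```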
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra. In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. Scalar multiplication is the multiplication of a vector by a scalar, and is to be distinguished from inner product of two vectors.
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
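As an illustrative sketch (assuming NumPy; the helper name `minor` is my own), a first minor is the determinant of the submatrix left after deleting one row and one column:

```python
import numpy as np

def minor(A, i, j):
    """Determinant of A with row i and column j removed (a first minor)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
print(minor(A, 0, 0))    # det of [[5, 6], [8, 10]] = 2
```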
In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix , often called the pseudoinverse, is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. The terms pseudoinverse and generalized inverse are sometimes used as synonyms for the Moore–Penrose inverse of a matrix, but sometimes applied to other elements of algebraic structures which share some but not all properties expected for an inverse element.
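As a hedged illustration (assuming NumPy), `numpy.linalg.pinv` computes the Moore–Penrose inverse, and two of the defining Penrose conditions can be verified directly:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])              # tall, non-square matrix
A_pinv = np.linalg.pinv(A)            # Moore-Penrose pseudoinverse

print(np.allclose(A @ A_pinv @ A, A))              # A A+ A = A
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))    # A+ A A+ = A+
```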
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (lower diagonal), and the supradiagonal (upper diagonal). For example, the following matrix is tridiagonal:
$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a specialization of the tensor product from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.
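A minimal sketch (assuming NumPy): `numpy.kron` replaces each entry of the first matrix with that entry times the second matrix, producing a block matrix.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
K = np.kron(A, B)        # 4x4 block matrix: each entry a_ij becomes the block a_ij * B
print(K)
```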
In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
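As a small sketch (assuming SciPy is available), `scipy.linalg.circulant` builds such a matrix from its first column, and each row is the previous row rotated one place to the right:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1, 2, 3, 4])      # first column of the circulant matrix
C = circulant(c)                # each row is the row above rotated right by one element
print(C)
```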
In linear algebra, a nilpotent matrix is a square matrix N such that $N^k = 0$ for some positive integer $k$.
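A minimal example (a sketch assuming NumPy): the 3×3 shift matrix below is strictly upper triangular, and its third power is the zero matrix.

```python
import numpy as np

N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])            # strictly upper triangular, hence nilpotent
print(np.linalg.matrix_power(N, 3))  # zero matrix: N is nilpotent of index 3
```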
In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and, in some sense, an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm it is an element of a Lie group, and the logarithm is the corresponding element of the vector space of the Lie algebra.
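As a hedged illustration (assuming SciPy), `scipy.linalg.logm` returns one matrix logarithm, and exponentiating it recovers the original matrix:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[2., 1.],
              [0., 2.]])
L = logm(A)                          # one logarithm of A
print(np.allclose(expm(L), A))       # True: exp(log(A)) recovers A
```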
In linear algebra, two matrices are row equivalent if one can be changed to the other by a sequence of elementary row operations. Alternatively, two m × n matrices are row equivalent if and only if they have the same row space. The concept is most commonly applied to matrices that represent systems of linear equations, in which case two matrices of the same size are row equivalent if and only if the corresponding homogeneous systems have the same set of solutions, or equivalently the matrices have the same null space.
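As an illustrative sketch (assuming SymPy), row equivalence can be tested by comparing reduced row echelon forms, which is equivalent to the same-row-space criterion above:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])
B = Matrix([[1, 2, 3],
            [3, 6, 9]])

# Two matrices are row equivalent iff they share the same reduced row echelon form.
print(A.rref()[0] == B.rref()[0])    # True: both reduce to [[1, 2, 3], [0, 0, 0]]
```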
In mathematics, a positive map is a map between C*-algebras that sends positive elements to positive elements. A completely positive map is one which satisfies a stronger, more robust condition.
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition was introduced by the Polish astronomer Tadeusz Banachiewicz in 1938. To quote: "It appears that Gauss and Doolittle applied the method [of elimination] only to symmetric equations. More recent authors, for example, Aitken, Banachiewicz, Dwyer, and Crout … have emphasized the use of the method, or variations of it, in connection with non-symmetric problems … Banachiewicz … saw the point … that the basic problem is really one of matrix factorization, or “decomposition” as he called it." It is also sometimes referred to as LR decomposition.
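A minimal sketch of the factorization (assuming SciPy): `scipy.linalg.lu` returns a permutation matrix together with the lower and upper triangular factors.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4., 3.],
              [6., 3.]])
P, L, U = lu(A)                      # A = P @ L @ U with a permutation matrix P
print(np.allclose(P @ L @ U, A))     # True
print(L)                             # unit lower triangular factor
print(U)                             # upper triangular factor
```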
A Frobenius matrix is a special kind of square matrix from numerical analysis. A matrix is a Frobenius matrix if it has the following three properties: all entries on the main diagonal are ones; the entries below the main diagonal of at most one column are arbitrary; every other entry is zero.
In mathematics, a generalized Clifford algebra (GCA) is a unital associative algebra that generalizes the Clifford algebra. It goes back to the work of Hermann Weyl, who utilized and formalized the clock and shift operators introduced by J. J. Sylvester (1882) and organized by Cartan (1898) and Schwinger.