In linear algebra, a branch of mathematics, a (multiplicative) compound matrix is a matrix whose entries are all minors, of a given size, of another matrix.[1][2][3][4] Compound matrices are closely related to exterior algebras,[5] and their computation appears in a wide array of problems, such as the analysis of nonlinear time-varying dynamical systems and generalizations of positive systems, cooperative systems and contracting systems.[4][6]
Let $A$ be an $m \times n$ matrix with real or complex entries. If $I$ is a subset of size $r$ of $\{1, \ldots, m\}$ and $J$ is a subset of size $s$ of $\{1, \ldots, n\}$, then the $(I, J)$-submatrix of $A$, written $A_{I,J}$, is the submatrix formed from $A$ by retaining only the rows indexed by $I$ and the columns indexed by $J$. If $r = s$, then $\det A_{I,J}$ is the $(I, J)$-minor of $A$.
The $r$-th compound matrix of $A$, denoted $C_r(A)$, is defined as follows. If $r > \min(m, n)$, then $C_r(A)$ is the unique $0 \times 0$ matrix. Otherwise, $C_r(A)$ has size $\binom{m}{r} \times \binom{n}{r}$. Its rows and columns are indexed by the $r$-element subsets of $\{1, \ldots, m\}$ and $\{1, \ldots, n\}$, respectively, in lexicographic order. The entry corresponding to subsets $I$ and $J$ is the minor $\det A_{I,J}$.
In some applications of compound matrices, the precise ordering of the rows and columns is unimportant. For this reason, some authors do not specify how the rows and columns are to be ordered.[7]
For example, consider the $3 \times 4$ matrix

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{pmatrix}.$$

The rows are indexed by $\{1, 2, 3\}$ and the columns by $\{1, 2, 3, 4\}$. Therefore, the rows of $C_2(A)$ are indexed by the sets

$$\{1, 2\}, \quad \{1, 3\}, \quad \{2, 3\}$$

and the columns are indexed by

$$\{1, 2\}, \quad \{1, 3\}, \quad \{1, 4\}, \quad \{2, 3\}, \quad \{2, 4\}, \quad \{3, 4\}.$$

Using absolute value bars to denote determinants, the second compound matrix is

$$C_2(A) = \begin{pmatrix}
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{14} \\ a_{21} & a_{24} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{14} \\ a_{22} & a_{24} \end{vmatrix} &
\begin{vmatrix} a_{13} & a_{14} \\ a_{23} & a_{24} \end{vmatrix} \\[1ex]
\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{14} \\ a_{31} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{14} \\ a_{32} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{13} & a_{14} \\ a_{33} & a_{34} \end{vmatrix} \\[1ex]
\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} &
\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{21} & a_{24} \\ a_{31} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{22} & a_{24} \\ a_{32} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{23} & a_{24} \\ a_{33} & a_{34} \end{vmatrix}
\end{pmatrix}.$$
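The definition translates directly into code. The following Python/NumPy sketch is illustrative and not part of the original article: the helper name `compound` and the sample matrix entries are arbitrary choices, and `itertools.combinations` is used because it enumerates subsets in exactly the lexicographic order required above.

```python
import itertools

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r x r minors of A, with row and column
    subsets enumerated in lexicographic order."""
    m, n = A.shape
    row_sets = list(itertools.combinations(range(m), r))
    col_sets = list(itertools.combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in col_sets]
                     for I in row_sets])

# Illustrative 3 x 4 matrix; C_2(A) is then 3 x 6, matching binom(3,2) x binom(4,2).
A = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.],
              [9., 1., 0., 2.]])
print(compound(A, 2).shape)  # (3, 6)
print(compound(A, 2))
```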
Let $c$ be a scalar, $A$ be an $m \times n$ matrix, and $B$ be an $n \times p$ matrix. For $k$ a positive integer, let $I_k$ denote the $k \times k$ identity matrix. The transpose of a matrix $M$ will be written $M^T$, and the conjugate transpose by $M^*$. Then:[8]

- $C_0(A) = I_1$, a $1 \times 1$ identity matrix.
- $C_1(A) = A$.
- $C_r(cA) = c^r C_r(A)$.
- If $A$ has rank $r$, then $C_r(A)$ has rank $1$; if $\operatorname{rank} A < r$, then $C_r(A) = 0$.
- $C_r(A^T) = C_r(A)^T$.
- $C_r(A^*) = C_r(A)^*$.
- $C_r(AB) = C_r(A)\,C_r(B)$ (the Cauchy–Binet formula).
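A few of these identities can be sanity-checked numerically. The sketch below repeats the `compound` helper from the earlier example; the random test matrices and the seed are arbitrary.

```python
import itertools

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
c, r = 2.5, 2
assert np.allclose(compound(c * A, r), c**r * compound(A, r))  # C_r(cA) = c^r C_r(A)
assert np.allclose(compound(A.T, r), compound(A, r).T)         # C_r(A^T) = C_r(A)^T
Z = A + 1j * rng.standard_normal((3, 4))
assert np.allclose(compound(Z.conj().T, r), compound(Z, r).conj().T)  # C_r(A*) = C_r(A)*
```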
Assume in addition that $A$ is a square matrix of size $n$. Then:[9]

- $C_n(A) = \det A$.
- $C_r(I_n) = I_{\binom{n}{r}}$.
- If $A$ is diagonal, upper triangular, or lower triangular, then so is $C_r(A)$.
- If $A$ is symmetric, Hermitian, normal, or unitary, then so is $C_r(A)$.
- If $A$ is invertible, then so is $C_r(A)$, and $C_r(A^{-1}) = C_r(A)^{-1}$.
- (Sylvester–Franke theorem) $\det C_r(A) = (\det A)^{\binom{n-1}{r-1}}$.
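The square-matrix properties, including the Sylvester–Franke theorem, can be verified in the same style. Again this is only a sketch with an arbitrary random test matrix.

```python
import itertools
import math

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(1)
n, r = 4, 2
A = rng.standard_normal((n, n))
assert np.allclose(compound(np.eye(n), r), np.eye(math.comb(n, r)))  # C_r(I_n) = I
assert np.allclose(compound(np.linalg.inv(A), r),
                   np.linalg.inv(compound(A, r)))                    # C_r(A^-1) = C_r(A)^-1
# Sylvester-Franke theorem: det C_r(A) = (det A)^binom(n-1, r-1)
assert np.isclose(np.linalg.det(compound(A, r)),
                  np.linalg.det(A) ** math.comb(n - 1, r - 1))
```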
Give $\mathbf{R}^n$ the standard coordinate basis $e_1, \ldots, e_n$. The $r$-th exterior power of $\mathbf{R}^n$ is the vector space

$$\bigwedge^r \mathbf{R}^n$$

whose basis consists of the formal symbols

$$e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_r},$$

where

$$i_1 < i_2 < \cdots < i_r.$$
Suppose that $A$ is an $m \times n$ matrix. Then $A$ corresponds to a linear transformation

$$A \colon \mathbf{R}^n \to \mathbf{R}^m.$$

Taking the $r$-th exterior power of this linear transformation determines a linear transformation

$$\bigwedge^r A \colon \bigwedge^r \mathbf{R}^n \to \bigwedge^r \mathbf{R}^m.$$

The matrix corresponding to this linear transformation (with respect to the above bases of the exterior powers) is $C_r(A)$. Taking exterior powers is a functor, which means that[12]

$$\bigwedge^r (AB) = \left(\bigwedge^r A\right)\left(\bigwedge^r B\right).$$

This corresponds to the formula $C_r(AB) = C_r(A)\,C_r(B)$. It is closely related to, and is a strengthening of, the Cauchy–Binet formula.
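Functoriality, i.e. $C_r(AB) = C_r(A)\,C_r(B)$, is easy to test numerically, including on rectangular matrices. A sketch in the same style as the earlier ones, with arbitrary random inputs:

```python
import itertools

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))  # m x n
B = rng.standard_normal((5, 4))  # n x p
r = 2
# Cauchy-Binet / functoriality: C_r(AB) = C_r(A) C_r(B)
assert np.allclose(compound(A @ B, r), compound(A, r) @ compound(B, r))
```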
Let $A$ be an $n \times n$ matrix. Recall that its $r$-th higher adjugate matrix $\operatorname{adj}_r(A)$ is the $\binom{n}{r} \times \binom{n}{r}$ matrix whose $(I, J)$ entry is

$$(-1)^{\sigma(I) + \sigma(J)} \det A_{J^c,\,I^c},$$

where, for any set $K$ of integers, $\sigma(K)$ is the sum of the elements of $K$. The adjugate of $A$ is its 1st higher adjugate and is denoted $\operatorname{adj}(A)$. The generalized Laplace expansion formula implies

$$C_r(A)\operatorname{adj}_r(A) = \operatorname{adj}_r(A)\, C_r(A) = (\det A)\, I_{\binom{n}{r}}.$$

If $A$ is invertible, then

$$\operatorname{adj}_r(A) = (\det A)\, C_r(A)^{-1} = (\det A)\, C_r(A^{-1}).$$

A concrete consequence of this is Jacobi's formula for the minors of an inverse matrix:

$$\det\big((A^{-1})_{I,J}\big) = (-1)^{\sigma(I) + \sigma(J)}\, \frac{\det A_{J^c,\,I^c}}{\det A}.$$
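The defining formula for $\operatorname{adj}_r$ and the two displayed identities can be checked directly. In this sketch the helper name `higher_adjugate` is an illustrative choice; indices are 0-based internally, while $\sigma$ is computed with the 1-based labels used in the text.

```python
import itertools
import math

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

def higher_adjugate(A, r):
    """adj_r(A): (I, J) entry is (-1)^(sigma(I)+sigma(J)) det A_{J^c, I^c}."""
    n = A.shape[0]
    subs = list(itertools.combinations(range(n), r))
    comp = lambda K: tuple(sorted(set(range(n)) - set(K)))  # complement subset
    sigma = lambda K: sum(k + 1 for k in K)                 # 1-based element sum
    return np.array([[(-1.) ** (sigma(I) + sigma(J))
                      * np.linalg.det(A[np.ix_(comp(J), comp(I))])
                      for J in subs] for I in subs])

rng = np.random.default_rng(3)
n, r = 4, 2
A = rng.standard_normal((n, n))
# generalized Laplace expansion: C_r(A) adj_r(A) = (det A) I
assert np.allclose(compound(A, r) @ higher_adjugate(A, r),
                   np.linalg.det(A) * np.eye(math.comb(n, r)))
# Jacobi's formula for the minors of the inverse, entry by entry
assert np.allclose(compound(np.linalg.inv(A), r),
                   higher_adjugate(A, r) / np.linalg.det(A))
```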
Adjugates can also be expressed in terms of compounds. Let $S$ denote the sign matrix, the diagonal matrix of size $\binom{n}{r}$ whose $(I, I)$ entry is $(-1)^{\sigma(I)}$, the subsets $I$ being taken in lexicographic order:

$$S = \operatorname{diag}\!\big((-1)^{\sigma(I)}\big)_{I},$$

and let $J$ denote the exchange matrix of the same size, with ones on the antidiagonal and zeros elsewhere:

$$J = \begin{pmatrix} 0 & \cdots & 0 & 1 \\ \vdots & & 1 & 0 \\ 0 & & & \vdots \\ 1 & 0 & \cdots & 0 \end{pmatrix}.$$

Then Jacobi's theorem states that the $r$-th higher adjugate matrix is:[13][14]

$$\operatorname{adj}_r(A) = S J\, C_{n-r}(A)^T\, J S.$$

It follows immediately from Jacobi's theorem that, up to signs and a reversal of the index order, the classical adjugate is the transpose of the $(n-1)$-th compound:

$$\operatorname{adj}(A) = S J\, C_{n-1}(A)^T\, J S.$$
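Jacobi's theorem as stated above (with $S$ and $J$ of size $\binom{n}{r}$) can be spot-checked numerically. This sketch verifies the $r = 1$ case against the classical adjugate $\operatorname{adj}(A) = (\det A) A^{-1}$ of an invertible matrix; the test matrix is arbitrary.

```python
import itertools
import math

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(4)
n, r = 4, 1
A = rng.standard_normal((n, n))
subs = list(itertools.combinations(range(n), r))
S = np.diag([(-1.) ** sum(i + 1 for i in I) for I in subs])  # sign matrix diag((-1)^sigma(I))
Jx = np.eye(math.comb(n, r))[::-1]                           # exchange matrix (antidiagonal ones)
adj_r = S @ Jx @ compound(A, n - r).T @ Jx @ S               # Jacobi's theorem
assert np.allclose(adj_r, np.linalg.det(A) * np.linalg.inv(A))  # equals adj(A) when r = 1
```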
Taking adjugates and compounds does not commute. However, compounds of adjugates can be expressed using adjugates of compounds, and vice versa. From the identities

$$C_r(A)\, C_r(\operatorname{adj} A) = C_r(A \operatorname{adj} A) = C_r\big((\det A) I_n\big) = (\det A)^r I_{\binom{n}{r}},$$

$$C_r(A)\operatorname{adj}\big(C_r(A)\big) = \det\big(C_r(A)\big)\, I_{\binom{n}{r}}$$

and the Sylvester–Franke theorem, we deduce

$$C_r(\operatorname{adj} A) = (\det A)^{\,r - \binom{n-1}{r-1}} \operatorname{adj}\big(C_r(A)\big).$$

The same technique leads to an additional identity,

$$\operatorname{adj}\big(C_r(A)\big) = (\det A)^{\binom{n-1}{r-1} - 1} \operatorname{adj}_r(A).$$
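Both interchange identities can be confirmed numerically for invertible $A$. In this sketch the adjugate of an invertible matrix $M$ is computed as $\det(M)\,M^{-1}$; the helper names are illustrative.

```python
import itertools
import math

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)  # adjugate of an invertible matrix

rng = np.random.default_rng(5)
n, r = 4, 2
A = rng.standard_normal((n, n))
d, b = np.linalg.det(A), math.comb(n - 1, r - 1)
# C_r(adj A) = (det A)^(r - binom(n-1, r-1)) adj(C_r(A))
assert np.allclose(compound(adj(A), r), d ** (r - b) * adj(compound(A, r)))
# adj(C_r(A)) = (det A)^(binom(n-1, r-1) - 1) adj_r(A), with adj_r(A) = det(A) C_r(A)^{-1}
assert np.allclose(adj(compound(A, r)),
                   d ** (b - 1) * (d * np.linalg.inv(compound(A, r))))
```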
Compound and adjugate matrices appear when computing determinants of linear combinations of matrices. It is elementary to check that if $A$ and $B$ are $n \times n$ matrices then

$$\det(A + B) = \sum_{r=0}^{n} \;\sum_{|I| = |J| = r} (-1)^{\sigma(I) + \sigma(J)} \det A_{I,J}\, \det B_{I^c\!,\,J^c},$$

where the inner sum ranges over all pairs of $r$-element subsets $I, J$ of $\{1, \ldots, n\}$.
It is also true that:[15][16]

$$\det(A + B) = \sum_{r=0}^{n} \operatorname{tr}\!\big(C_r(A) \operatorname{adj}_r(B)\big).$$

This has the immediate consequence

$$\det(A + x I_n) = \sum_{r=0}^{n} x^{\,n-r} \operatorname{tr} C_r(A),$$

which expresses the coefficients of the characteristic polynomial of $A$ in terms of traces of its compound matrices.
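Both determinant expansions can be tested with the helpers from the previous sketches; the matrices and the value of $x$ below are arbitrary.

```python
import itertools

import numpy as np

def compound(A, r):
    m, n = A.shape
    rows, cols = (list(itertools.combinations(range(k), r)) for k in (m, n))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

def higher_adjugate(A, r):
    n = A.shape[0]
    subs = list(itertools.combinations(range(n), r))
    comp = lambda K: tuple(sorted(set(range(n)) - set(K)))
    sigma = lambda K: sum(k + 1 for k in K)
    return np.array([[(-1.) ** (sigma(I) + sigma(J))
                      * np.linalg.det(A[np.ix_(comp(J), comp(I))])
                      for J in subs] for I in subs])

rng = np.random.default_rng(6)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
# det(A + B) as a sum of traces over all orders r = 0, ..., n
total = sum(np.trace(compound(A, r) @ higher_adjugate(B, r)) for r in range(n + 1))
assert np.isclose(total, np.linalg.det(A + B))
# consequence: the coefficients of det(A + x I) are the traces tr C_r(A)
x = 0.7
assert np.isclose(np.linalg.det(A + x * np.eye(n)),
                  sum(x ** (n - r) * np.trace(compound(A, r)) for r in range(n + 1)))
```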
In general, computing compound matrices is expensive: $C_r(A)$ has $\binom{m}{r}\binom{n}{r}$ entries, each the determinant of an $r \times r$ submatrix. Nonetheless, there are some efficient algorithms available for real matrices with special structure.[17]