Compound matrix

In linear algebra, a branch of mathematics, a (multiplicative) compound matrix is a matrix whose entries are all minors, of a given size, of another matrix. [1] [2] [3] [4] Compound matrices are closely related to exterior algebras, [5] and their computation appears in a wide array of problems, such as in the analysis of nonlinear time-varying dynamical systems and generalizations of positive systems, cooperative systems and contracting systems. [4] [6]

Definition

Let A be an m×n matrix with real or complex entries. [note 1] If I is a subset of size r of {1, ..., m} and J is a subset of size s of {1, ..., n}, then the (I, J)-submatrix of A, written $A_{I,J}$, is the submatrix formed from A by retaining only those rows indexed by I and those columns indexed by J. If r = s, then $\det A_{I,J}$ is the (I, J)-minor of A.

The rth compound matrix of A, denoted $C_r(A)$, is defined as follows. If r > min(m, n), then $C_r(A)$ is the unique 0×0 matrix. Otherwise, $C_r(A)$ has size $\binom{m}{r} \times \binom{n}{r}$. Its rows and columns are indexed by the r-element subsets of {1, ..., m} and {1, ..., n}, respectively, in their lexicographic order. The entry corresponding to subsets I and J is the minor $\det A_{I,J}$.

In some applications of compound matrices, the precise ordering of the rows and columns is unimportant. For this reason, some authors do not specify how the rows and columns are to be ordered. [7]

For example, consider the 3×4 matrix

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{pmatrix}.$$

The rows are indexed by {1, 2, 3} and the columns by {1, 2, 3, 4}. Therefore, the rows of $C_2(A)$ are indexed by the sets

$$\{1, 2\},\quad \{1, 3\},\quad \{2, 3\}$$

and the columns are indexed by

$$\{1, 2\},\quad \{1, 3\},\quad \{1, 4\},\quad \{2, 3\},\quad \{2, 4\},\quad \{3, 4\}.$$

Using absolute value bars to denote determinants, the second compound matrix is

$$C_2(A) = \begin{pmatrix}
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{14} \\ a_{21} & a_{24} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{14} \\ a_{22} & a_{24} \end{vmatrix} &
\begin{vmatrix} a_{13} & a_{14} \\ a_{23} & a_{24} \end{vmatrix} \\[4pt]
\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{11} & a_{14} \\ a_{31} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{12} & a_{14} \\ a_{32} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{13} & a_{14} \\ a_{33} & a_{34} \end{vmatrix} \\[4pt]
\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} &
\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{21} & a_{24} \\ a_{31} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
\begin{vmatrix} a_{22} & a_{24} \\ a_{32} & a_{34} \end{vmatrix} &
\begin{vmatrix} a_{23} & a_{24} \\ a_{33} & a_{34} \end{vmatrix}
\end{pmatrix}.$$
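The definition translates directly into a short computation. The following Python sketch (illustrative only; the helper name compound is ours, not a library routine, and NumPy is assumed) builds the rth compound matrix entry by entry, relying on the fact that itertools.combinations yields index subsets in lexicographic order:

    import numpy as np
    from itertools import combinations

    def compound(A, r):
        # rth compound matrix of A; rows and columns follow the
        # lexicographic order of r-element index subsets (0-based here).
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        if r > min(m, n):
            # by convention, the compound is the unique 0x0 matrix
            return np.zeros((0, 0))
        rows = list(combinations(range(m), r))
        cols = list(combinations(range(n), r))
        C = np.empty((len(rows), len(cols)))
        for i, I in enumerate(rows):
            for j, J in enumerate(cols):
                # the (I, J) entry is the minor det A_{I,J}
                C[i, j] = np.linalg.det(A[np.ix_(I, J)])
        return C

For a numeric 3×4 array, compound(A, 2) returns the 3×6 matrix of 2×2 minors displayed symbolically above.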

Properties

Let c be a scalar, A be an m×n matrix, and B be an n×p matrix. For k a positive integer, let $I_k$ denote the k×k identity matrix. The transpose of a matrix M will be written $M^{\mathsf T}$, and the conjugate transpose by $M^*$. Then: [8]

  - $C_1(A) = A$.
  - $C_r(cA) = c^r\, C_r(A)$.
  - $C_r(A^{\mathsf T}) = C_r(A)^{\mathsf T}$ and $C_r(A^*) = C_r(A)^*$.
  - $C_r(I_k) = I_{\binom{k}{r}}$.
  - $C_r(AB) = C_r(A)\, C_r(B)$, a restatement of the Cauchy–Binet formula.

Assume in addition that A is a square matrix of size n. Then: [9]

  - $C_n(A) = \det A$.
  - If A is invertible, then $C_r(A^{-1}) = C_r(A)^{-1}$.
  - If A is diagonal, upper or lower triangular, symmetric, Hermitian, normal, positive definite, orthogonal, or unitary, then so is $C_r(A)$.
  - $\det C_r(A) = (\det A)^{\binom{n-1}{r-1}}$ (the Sylvester–Franke theorem). [10] [11]
  - If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of A, then the eigenvalues of $C_r(A)$ are the products $\lambda_{i_1} \cdots \lambda_{i_r}$ with $i_1 < \cdots < i_r$. In particular, $\operatorname{tr} C_r(A)$ is the rth elementary symmetric polynomial of the eigenvalues of A.
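A minimal numerical sanity check of two of these properties, reusing the compound sketch above (the random test matrix and seed are of course arbitrary):

    import numpy as np
    from math import comb
    from itertools import combinations

    n, r = 4, 2
    A = np.random.default_rng(0).standard_normal((n, n))

    # Sylvester-Franke theorem: det C_r(A) = (det A)^binom(n-1, r-1)
    assert np.isclose(np.linalg.det(compound(A, r)),
                      np.linalg.det(A) ** comb(n - 1, r - 1))

    # tr C_r(A) is the rth elementary symmetric polynomial of the eigenvalues
    eig = np.linalg.eigvals(A)
    e_r = sum(np.prod(eig[list(I)]) for I in combinations(range(n), r))
    assert np.isclose(np.trace(compound(A, r)), e_r)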

Relation to exterior powers

Give $\mathbb{R}^n$ the standard coordinate basis $e_1, \ldots, e_n$. The rth exterior power of $\mathbb{R}^n$ is the vector space

$$\bigwedge\nolimits^{r} \mathbb{R}^n$$

whose basis consists of the formal symbols

$$e_{i_1} \wedge \cdots \wedge e_{i_r},$$

where

$$i_1 < i_2 < \cdots < i_r.$$

Suppose that A is an m×n matrix. Then A corresponds to a linear transformation

$$A \colon \mathbb{R}^n \to \mathbb{R}^m.$$

Taking the rth exterior power of this linear transformation determines a linear transformation

$$\bigwedge\nolimits^{r} A \colon \bigwedge\nolimits^{r} \mathbb{R}^n \to \bigwedge\nolimits^{r} \mathbb{R}^m.$$

The matrix corresponding to this linear transformation (with respect to the above bases of the exterior powers) is $C_r(A)$. Taking exterior powers is a functor, which means that [12]

$$\bigwedge\nolimits^{r}(AB) = \left(\bigwedge\nolimits^{r} A\right)\left(\bigwedge\nolimits^{r} B\right).$$

This corresponds to the formula $C_r(AB) = C_r(A)\, C_r(B)$. It is closely related to, and is a strengthening of, the Cauchy–Binet formula.
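Since multiplicativity holds even for rectangular factors, it can be checked in the same style; each entry of the asserted identity below is an instance of the Cauchy–Binet formula (again an illustrative sketch using the compound helper from above):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 5))
    B = rng.standard_normal((5, 4))

    # C_2(AB) = C_2(A) C_2(B); entrywise this is the Cauchy-Binet expansion
    assert np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))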

Relation to adjugate matrices

Let A be an n×n matrix. Recall that its rth higher adjugate matrix $\operatorname{adj}_r(A)$ is the $\binom{n}{r} \times \binom{n}{r}$ matrix whose (I, J) entry is

$$(-1)^{\sigma(I)+\sigma(J)}\, \det A_{J^c,\, I^c},$$

where, for any set K of integers, σ(K) is the sum of the elements of K, and $K^c$ denotes its complement in {1, ..., n}. The adjugate of A is its 1st higher adjugate and is denoted adj(A). The generalized Laplace expansion formula implies

$$C_r(A)\, \operatorname{adj}_r(A) = \operatorname{adj}_r(A)\, C_r(A) = (\det A)\, I_{\binom{n}{r}}.$$
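The higher adjugate is equally direct to realize in code. A minimal sketch (the helper name higher_adjugate is ours; it follows the entry formula above, with σ taken over 1-based indices, so only the parity of the 0-based index sums matters), followed by a check of the Laplace-expansion identity:

    import numpy as np
    from itertools import combinations

    def higher_adjugate(A, r):
        # rth higher adjugate: (I, J) entry is
        # (-1)^(sigma(I) + sigma(J)) * det A_{J^c, I^c}
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        subsets = list(combinations(range(n), r))
        M = np.empty((len(subsets), len(subsets)))
        for i, I in enumerate(subsets):
            for j, J in enumerate(subsets):
                Ic = [k for k in range(n) if k not in I]
                Jc = [k for k in range(n) if k not in J]
                # shifting to 1-based indices adds 2r to sigma(I) + sigma(J),
                # which leaves the sign unchanged
                sign = (-1.0) ** (sum(I) + sum(J))
                M[i, j] = sign * np.linalg.det(A[np.ix_(Jc, Ic)])
        return M

    n, r = 4, 2
    A = np.random.default_rng(2).standard_normal((n, n))
    lhs = compound(A, r) @ higher_adjugate(A, r)
    assert np.allclose(lhs, np.linalg.det(A) * np.eye(lhs.shape[0]))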
If A is invertible, then

$$\operatorname{adj}_r(A) = (\det A)\, C_r(A)^{-1} = (\det A)\, C_r\!\left(A^{-1}\right).$$

A concrete consequence of this is Jacobi's formula for the minors of an inverse matrix:

$$\det\!\left(A^{-1}\right)_{I,J} = (-1)^{\sigma(I)+\sigma(J)}\, \frac{\det A_{J^c,\, I^c}}{\det A}.$$
Adjugates can also be expressed in terms of compounds. Let S denote the sign matrix, the diagonal matrix whose (I, I) entry is $(-1)^{\sigma(I)}$:

$$S = \operatorname{diag}\!\left((-1)^{\sigma(I)}\right)_{|I| = r},$$

and let J denote the exchange matrix, with ones on the antidiagonal and zeros elsewhere:

$$J = \begin{pmatrix} & & 1 \\ & \iddots & \\ 1 & & \end{pmatrix}.$$

Conjugation by J implements the order-reversing correspondence $I \mapsto I^c$ between r-subsets and (n−r)-subsets in lexicographic order. Then Jacobi's theorem states that the rth higher adjugate matrix is: [13] [14]

$$\operatorname{adj}_r(A) = S\, J\, C_{n-r}(A)^{\mathsf T}\, J\, S.$$
It follows immediately from Jacobi's theorem, by taking r = 1, that the classical adjugate is expressible through the (n−1)th compound:

$$\operatorname{adj}(A) = S\, J\, C_{n-1}(A)^{\mathsf T}\, J\, S.$$
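Jacobi's theorem, as formulated here, can be verified numerically with the helpers sketched above (an illustrative check, with S and J built exactly as defined in this section):

    import numpy as np
    from itertools import combinations

    n, r = 4, 2
    A = np.random.default_rng(3).standard_normal((n, n))
    subsets = list(combinations(range(n), r))

    # S: diagonal of (-1)^sigma(I), with sigma over 1-based indices
    S = np.diag([(-1.0) ** (sum(I) + r) for I in subsets])
    # J: exchange matrix (ones on the antidiagonal)
    J = np.fliplr(np.eye(len(subsets)))

    assert np.allclose(S @ J @ compound(A, n - r).T @ J @ S,
                       higher_adjugate(A, r))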

Taking adjugates and compounds does not commute. However, compounds of adjugates can be expressed using adjugates of compounds, and vice versa. From the identities

$$C_r(A)\, C_r(\operatorname{adj}(A)) = \det(A)^r\, I_{\binom{n}{r}}, \qquad C_r(A)\, \operatorname{adj}(C_r(A)) = \det(C_r(A))\, I_{\binom{n}{r}},$$

and the Sylvester-Franke theorem, we deduce that for invertible A,

$$C_r(\operatorname{adj}(A)) = \det(A)^{\,r - \binom{n-1}{r-1}}\, \operatorname{adj}(C_r(A)).$$

The same technique leads to an additional identity,

$$\operatorname{adj}_r(\operatorname{adj}(A)) = \det(A)^{\,n-r-1}\, C_r(A).$$
Compound and adjugate matrices appear when computing determinants of linear combinations of matrices. It is elementary to check that if A and B are n×n matrices then

$$\det(A + B) = \sum_{r=0}^{n}\ \sum_{|I| = |J| = r} (-1)^{\sigma(I)+\sigma(J)}\, \det A_{I,J}\, \det B_{I^c,\, J^c}.$$

It is also true that: [15] [16]

$$\det(A + B) = \sum_{r=0}^{n} \operatorname{tr}\!\left(C_r(A)\, \operatorname{adj}_r(B)\right).$$

This has the immediate consequence

$$\det(A + \lambda I_n) = \sum_{r=0}^{n} \lambda^{\,n-r}\, \operatorname{tr} C_r(A).$$
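Both expansion formulas lend themselves to the same kind of numerical check (an illustrative sketch reusing the compound and higher_adjugate helpers from above; by convention the r = 0 terms involve 1×1 matrices with entries 1 and det B):

    import numpy as np

    n = 4
    rng = np.random.default_rng(4)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    # det(A + B) as a sum of traces over all orders r
    total = sum(np.trace(compound(A, r) @ higher_adjugate(B, r))
                for r in range(n + 1))
    assert np.isclose(np.linalg.det(A + B), total)

    # coefficients of det(A + t I_n) are the traces of the compounds of A
    t = 0.7
    assert np.isclose(np.linalg.det(A + t * np.eye(n)),
                      sum(t ** (n - r) * np.trace(compound(A, r))
                          for r in range(n + 1)))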

Numerical computation

In general, computing compound matrices is expensive: $C_r(A)$ has $\binom{m}{r}\binom{n}{r}$ entries, each an r×r determinant, so the cost grows combinatorially with the size of the input. Nonetheless, there are efficient algorithms available for real matrices with special structure. [17]

Notes

  1. The definition, and the purely algebraic part of the theory, of compound matrices requires only that the matrix have entries in a commutative ring. In this case, the matrix corresponds to a homomorphism of finitely generated free modules.

Citations

  1. DeAlba, Luz M., "Determinants and Eigenvalues", in Hogben, Leslie (ed.), Handbook of Linear Algebra, 2nd edition, CRC Press, 2013, ISBN 978-1-4665-0729-6, p. 4-4.
  2. Gantmacher, F. R., The Theory of Matrices, volume I, Chelsea Publishing Company, 1959, ISBN 978-0-8218-1376-8, p. 20.
  3. Horn, Roger A. and Johnson, Charles R., Matrix Analysis, 2nd edition, Cambridge University Press, 2013, ISBN 978-0-521-54823-6, p. 21.
  4. Muldowney, James S. (1990). "Compound matrices and ordinary differential equations". Rocky Mountain Journal of Mathematics. 20 (4): 857–872. doi:10.1216/rmjm/1181073047. ISSN 0035-7596.
  5. Boutin, D.L.; Gleeson, R.F.; Williams, R.M. (1996). Wedge Theory / Compound Matrices: Properties and Applications (PDF) (Technical report). Office of Naval Research. NAWCADPAX–96-220-TR. Archived (PDF) from the original on January 16, 2021.
  6. Bar-Shalom, Eyal; Dalin, Omri; Margaliot, Michael (2023-03-15). "Compound matrices in systems and control theory: a tutorial". Mathematics of Control, Signals, and Systems. 35 (3): 467–521. arXiv:2204.00676. doi:10.1007/s00498-023-00351-8. ISSN 0932-4194. S2CID 247939832.
  7. Kung, Rota, and Yan, p. 305.
  8. Horn and Johnson, p. 22.
  9. Horn and Johnson, pp. 22, 93, 147, 233.
  10. Tornheim, Leonard (1952). "The Sylvester–Franke Theorem". The American Mathematical Monthly. 59 (6): 389–391. doi:10.2307/2306811. ISSN 0002-9890. JSTOR 2306811.
  11. Flanders, Harley (1953). "A Note on the Sylvester–Franke Theorem". The American Mathematical Monthly. 60: 543–545. MR 0057835.
  12. Kung, Joseph P.S.; Rota, Gian-Carlo; Yan, Catherine H., Combinatorics: The Rota Way, Cambridge University Press, 2009, p. 306. ISBN 9780521883894.
  13. Nambiar, K.K.; Sreevalsan, S. (2001). "Compound matrices and three celebrated theorems". Mathematical and Computer Modelling. 34 (3–4): 251–255. doi:10.1016/S0895-7177(01)00058-9. ISSN 0895-7177.
  14. Price, G. B. (1947). "Some Identities in the Theory of Determinants". The American Mathematical Monthly. 54 (2): 75–90. doi:10.2307/2304856. ISSN 0002-9890. JSTOR 2304856.
  15. Prells, Uwe; Friswell, Michael I.; Garvey, Seamus D. (2003-02-08). "Use of geometric algebra: compound matrices and the determinant of the sum of two matrices". Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 459 (2030): 273–285. doi:10.1098/rspa.2002.1040. ISSN 1364-5021. S2CID 73593788.
  16. Horn and Johnson, p. 29.
  17. Kravvaritis, Christos; Mitrouli, Marilena (2009-02-01). "Compound matrices: properties, numerical issues and analytical computations" (PDF). Numerical Algorithms. 50 (2): 155. doi:10.1007/s11075-008-9222-7. ISSN 1017-1398. S2CID 16067358.
