In linear algebra, a minor of a matrix A is the determinant of a smaller square matrix obtained from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i, j) minor, or a first minor [1]) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted $M_{i,j}$. The (i, j) cofactor is obtained by multiplying the minor by $(-1)^{i+j}$.
To illustrate these definitions, consider the following 3 × 3 matrix:

$$\begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix}$$

To compute the minor $M_{2,3}$ and the cofactor $C_{2,3}$, we find the determinant of the above matrix with row 2 and column 3 removed:

$$M_{2,3} = \det\begin{pmatrix} 1 & 4 \\ -1 & 9 \end{pmatrix} = 9 - (-4) = 13.$$

So the cofactor of the (2, 3) entry is

$$C_{2,3} = (-1)^{2+3}\, M_{2,3} = -13.$$
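This computation is easy to check numerically. The sketch below uses NumPy; the helper names first_minor and cofactor are our own illustrative choices (not library functions), and the indices are 0-based, so the (2, 3) entry above corresponds to i = 1, j = 2.

```python
import numpy as np

def first_minor(A, i, j):
    # Determinant of A with row i and column j removed (0-based indices).
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    # (i, j) cofactor: the first minor times (-1)**(i + j).
    return (-1) ** (i + j) * first_minor(A, i, j)

A = np.array([[ 1.0, 4.0,  7.0],
              [ 3.0, 0.0,  5.0],
              [-1.0, 9.0, 11.0]])

print(first_minor(A, 1, 2))  # 13.0 (the minor M_{2,3}, up to rounding)
print(cofactor(A, 1, 2))     # -13.0 (the cofactor C_{2,3})
```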
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n − k)-th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of $\binom{m}{k}\binom{n}{k}$ minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix. [2] [3]
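The count $\binom{m}{k}\binom{n}{k}$ simply reflects the independent choices of k rows and k columns, which can be enumerated directly. The helper minors_of_order below is an illustrative sketch, not a standard routine:

```python
import itertools
import numpy as np

def minors_of_order(A, k):
    # Map each (row choice, column choice) to the corresponding k x k minor.
    m, n = A.shape
    return {
        (rows, cols): np.linalg.det(A[np.ix_(rows, cols)])
        for rows in itertools.combinations(range(m), k)
        for cols in itertools.combinations(range(n), k)
    }

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
minors = minors_of_order(A, 2)
print(len(minors))   # C(2,2) * C(3,2) = 3 minors of order 2
print(minors)
```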
Let $I = (i_1, i_2, \ldots, i_k)$, with $1 \le i_1 < i_2 < \cdots < i_k \le m$, and $J = (j_1, j_2, \ldots, j_k)$, with $1 \le j_1 < j_2 < \cdots < j_k \le n$, be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of indexes. The minor corresponding to these choices of indexes is denoted, depending on the source, by $\det_{I,J} A$, $[A]_{I,J}$, $M_{I,J}$, $M_{i_1 i_2 \ldots i_k,\, j_1 j_2 \ldots j_k}$, or $M_{(i),(j)}$ (where the (i) denotes the sequence of indexes I, etc.). Also, there are two conventions in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors [4] mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and columns whose indexes are in J, whereas some other authors mean by a minor associated to I and J the determinant of the matrix formed from the original matrix by deleting the rows in I and columns in J; [2] which notation is used should always be checked. In this article, we use the inclusive definition of choosing the elements from rows of I and columns of J. The exceptional case is the case of the first minor or the (i, j)-minor described above; in that case, the exclusive meaning is standard everywhere in the literature and is used in this article also.
The complement, $B_{ijk\ldots,\,pqr\ldots}$, of a minor, $M_{ijk\ldots,\,pqr\ldots}$, of a square matrix, A, is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with $M_{ijk\ldots,\,pqr\ldots}$ have been removed. The complement of the first minor of an element $a_{ij}$ is merely that element. [5]
The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix $A = (a_{ij})$, the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining $C_{ij} = (-1)^{i+j} M_{ij}$, the cofactor expansion along the j-th column gives:

$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij}C_{ij}.$$

The cofactor expansion along the i-th row gives:

$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij}C_{ij}.$$
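As a sanity check of the row expansion, here is a deliberately naive recursive determinant; det_laplace is our own illustrative helper, runs in factorial time, and is only meant to mirror the formula, not to be used in practice.

```python
import numpy as np

def det_laplace(A):
    # Cofactor expansion along the first row (i = 0 in 0-based indexing).
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)
    return total

A = np.array([[ 1.0, 4.0,  7.0],
              [ 3.0, 0.0,  5.0],
              [-1.0, 9.0, 11.0]])
print(det_laplace(A), np.linalg.det(A))   # the two values agree
```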
One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix):

$$C = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}$$

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

$$A^{-1} = \frac{1}{\det(A)}\, C^{\mathsf T}.$$
The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
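A direct transcription of these formulas might look as follows; cofactor_matrix is our own helper name, and the final comparison against numpy.linalg.inv is only a numerical spot check.

```python
import numpy as np

def cofactor_matrix(A):
    # C[i, j] = (-1)**(i + j) times the (i, j) first minor of A.
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
adjugate = cofactor_matrix(A).T              # transpose of the cofactor matrix
A_inv = adjugate / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```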
The above formula can be generalized as follows: Let $I = (i_1, i_2, \ldots, i_k)$ and $J = (j_1, j_2, \ldots, j_k)$, with $1 \le i_1 < i_2 < \cdots < i_k \le n$ and $1 \le j_1 < j_2 < \cdots < j_k \le n$, be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then [6]

$$\det(A) = \sum_{I} (-1)^{\sum_{s=1}^{k} i_s + \sum_{s=1}^{k} j_s}\, [A]_{I,J}\, [A]_{I',J'},$$

where the column sequence J is fixed and the sum extends over all ordered sequences I of k row indices, I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for the J and J′), and $[A]_{I,J}$ denotes the determinant of the submatrix of A formed by choosing the rows of the index set I and columns of the index set J. Also, $[A]_{I,J} = \det\left( (a_{i_p, j_q})_{p, q = 1, \ldots, k} \right)$. A simple proof can be given using the wedge product. Indeed,
$$e_{j_1} \wedge \cdots \wedge e_{j_k} \wedge e_{j'_1} \wedge \cdots \wedge e_{j'_{n-k}} = \pm\, e_1 \wedge \cdots \wedge e_n,$$

where $e_1, \ldots, e_n$ are the basis vectors. Acting by A on both sides, one gets

$$\pm \det(A)\, e_1 \wedge \cdots \wedge e_n = (A e_{j_1}) \wedge \cdots \wedge (A e_{j_k}) \wedge (A e_{j'_1}) \wedge \cdots \wedge (A e_{j'_{n-k}}) = \sum_{I} \pm\, [A]_{I,J}\, [A]_{I',J'}\; e_1 \wedge \cdots \wedge e_n.$$

The sign can be worked out to be $(-1)^{\sum_{s=1}^{k} i_s + \sum_{s=1}^{k} j_s}$, so the sign is determined by the sums of elements in I and J.
If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero.
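On small matrices this characterization can be verified by brute force; rank_by_minors below is an illustrative sketch (its cost grows combinatorially) and the tolerance is an arbitrary choice.

```python
import itertools
import numpy as np

def rank_by_minors(A, tol=1e-10):
    # Largest k such that some k x k minor of A is (numerically) non-zero.
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row
              [1.0, 0.0, 1.0]])
print(rank_by_minors(A), np.linalg.matrix_rank(A))   # both print 2
```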
We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write $[A]_{I,J}$ for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements and J is a subset of {1, ..., p} with k elements. Then

$$[AB]_{I,J} = \sum_{K} [A]_{I,K}\, [B]_{K,J},$$

where the sum extends over all subsets K of {1, ..., n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula.
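The identity is easy to test numerically on a random example; the helper minor and the specific index sets below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

def minor(M, rows, cols):
    # Determinant of the submatrix picked out by the given rows and columns.
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # m = 3, n = 4
B = rng.standard_normal((4, 2))   # n = 4, p = 2
I, J, k = (0, 2), (0, 1), 2       # row and column choices (0-based)

lhs = minor(A @ B, I, J)
rhs = sum(minor(A, I, K) * minor(B, K, J)
          for K in itertools.combinations(range(4), k))
print(np.isclose(lhs, rhs))       # True
```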
A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the k-th exterior power map.
If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix

$$\begin{pmatrix} 1 & 4 \\ 3 & -1 \\ 2 & 1 \end{pmatrix}$$

are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product

$$(\mathbf{e}_1 + 3\mathbf{e}_2 + 2\mathbf{e}_3) \wedge (4\mathbf{e}_1 - \mathbf{e}_2 + \mathbf{e}_3),$$

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating, $\mathbf{e}_i \wedge \mathbf{e}_i = 0$, and antisymmetric, $\mathbf{e}_i \wedge \mathbf{e}_j = -\mathbf{e}_j \wedge \mathbf{e}_i$, we can simplify this expression to

$$-13\, \mathbf{e}_1 \wedge \mathbf{e}_2 - 7\, \mathbf{e}_1 \wedge \mathbf{e}_3 + 5\, \mathbf{e}_2 \wedge \mathbf{e}_3,$$

where the coefficients agree with the minors computed earlier.
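The agreement between the wedge coefficients and the row-pair minors can be confirmed directly with the example matrix above:

```python
import numpy as np

M = np.array([[1.0,  4.0],
              [3.0, -1.0],
              [2.0,  1.0]])

# 2 x 2 minors from row pairs (1,2), (1,3), (2,3) -- the wedge coefficients.
pairs = [(0, 1), (0, 2), (1, 2)]
print([float(np.linalg.det(M[list(p), :])) for p in pairs])   # approximately [-13.0, -7.0, 5.0]
```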
In some books, instead of cofactor the term adjunct is used. [7] Moreover, it is denoted as $A_{ij}$ and defined in the same way as cofactor:

$$A_{ij} = (-1)^{i+j} M_{ij}.$$

Using this notation the inverse matrix is written this way (here the matrix being inverted is written M to avoid confusion with the adjuncts $A_{ij}$):

$$M^{-1} = \frac{1}{\det(M)} \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}$$
Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
In linear algebra, the rank of a matrix A is the dimension of the vector space generated by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension.
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of the matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729.
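A minimal sketch of the rule in code, assuming a square system with non-zero determinant; cramer_solve is our own illustrative helper:

```python
import numpy as np

def cramer_solve(A, b):
    # Solve Ax = b by replacing one column of A at a time (assumes det(A) != 0).
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # column i replaced by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b), np.linalg.solve(A, b))   # both give [0.8, 1.4]
```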
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
In linear algebra, the adjugate or classical adjoint of a square matrix A, adj(A), is the transpose of its cofactor matrix. It is occasionally known as adjunct matrix, or "adjoint", though that normally refers to a different concept, the adjoint operator which for a matrix is the conjugate transpose.
In linear algebra, an invertible matrix is a square matrix which has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by an inverse to undo the operation. An invertible matrix multiplied by its inverse yields the identity matrix. Invertible matrices are the same size as their inverse.
In mathematics, the exterior algebra or Grassmann algebra of a vector space V is an associative algebra that contains V and has a product, called the exterior product or wedge product and denoted with ∧, such that v ∧ v = 0 for every vector v in V. The exterior algebra is named after Hermann Grassmann, and the names of the product come from the "wedge" symbol ∧ and the fact that the product of two elements of V is "outside" V.
In mathematics, the Hessian matrix, Hessian or Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or, ambiguously, by ∇2.
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.
In mathematics, the determinant of an m-by-m skew-symmetric matrix can always be written as the square of a polynomial in the matrix entries, a polynomial with integer coefficients that only depends on m. When m is odd, the polynomial is zero, and when m is even, it is a nonzero polynomial of degree m/2, and is unique up to multiplication by ±1. The convention on skew-symmetric tridiagonal matrices, given below in the examples, then determines one specific polynomial, called the Pfaffian polynomial. The value of this polynomial, when applied to the entries of a skew-symmetric matrix, is called the Pfaffian of that matrix. The term Pfaffian was introduced by Cayley, who indirectly named them after Johann Friedrich Pfaff.
In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem named after Gustav Kirchhoff is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time from the determinant of a submatrix of the graph's Laplacian matrix; specifically, the number is equal to any cofactor of the Laplacian matrix. Kirchhoff's theorem is a generalization of Cayley's formula which provides the number of spanning trees in a complete graph.
In mathematics, Dodgson condensation or method of contractants is a method of computing the determinants of square matrices. It is named for its inventor, Charles Lutwidge Dodgson (better known by his pseudonym Lewis Carroll, the popular author), who discovered it in 1866. The method in the case of an n × n matrix is to construct an (n − 1) × (n − 1) matrix, then an (n − 2) × (n − 2) matrix, and so on, finishing with a 1 × 1 matrix, which has one entry, the determinant of the original matrix.
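A compact sketch of the condensation step follows; dodgson_determinant is our own illustrative helper and, like the classical method, it breaks down when an interior entry becomes zero (in practice one permutes rows or columns to avoid this).

```python
import numpy as np

def dodgson_determinant(A):
    # Repeatedly condense: each entry of the next matrix is a 2 x 2 "connected minor"
    # of the current one, divided by the interior entry of the matrix two steps back.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    prev, cur = np.ones((n + 1, n + 1)), A      # dummy all-ones matrix starts the recursion
    while cur.shape[0] > 1:
        m = cur.shape[0] - 1
        nxt = np.empty((m, m))
        for i in range(m):
            for j in range(m):
                nxt[i, j] = (cur[i, j] * cur[i + 1, j + 1]
                             - cur[i, j + 1] * cur[i + 1, j]) / prev[i + 1, j + 1]
        prev, cur = cur, nxt
    return cur[0, 0]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(dodgson_determinant(A), np.linalg.det(np.array(A, dtype=float)))   # both approximately -3.0
```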
In mathematics, a totally positive matrix is a square matrix in which all the minors are positive: that is, the determinant of every square submatrix is a positive number. A totally positive matrix has all entries positive, so it is also a positive matrix; and it has all principal minors positive. A symmetric totally positive matrix is therefore also positive-definite. A totally non-negative matrix is defined similarly, except that all the minors must be non-negative. Some authors use "totally positive" to include all totally non-negative matrices.
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i, the Laplace expansion along the i-th row is the equality

$$\det(B) = \sum_{j=1}^{n} (-1)^{i+j}\, b_{ij}\, m_{ij},$$

where $b_{ij}$ is the entry of the i-th row and j-th column of B, and $m_{ij}$ is the determinant of the submatrix obtained by removing the i-th row and the j-th column of B. Similarly, the Laplace expansion along the j-th column is the equality

$$\det(B) = \sum_{i=1}^{n} (-1)^{i+j}\, b_{ij}\, m_{ij}.$$

(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)
In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix $K^{(m,n)}$ is the nm × mn matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

$$K^{(m,n)}\, \operatorname{vec}(A) = \operatorname{vec}(A^{\mathsf T}).$$
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
In linear algebra, a branch of mathematics, a (multiplicative) compound matrix is a matrix whose entries are all minors, of a given size, of another matrix. Compound matrices are closely related to exterior algebras, and their computation appears in a wide array of problems, such as in the analysis of nonlinear time-varying dynamical systems and generalizations of positive systems, cooperative systems and contracting systems.
In control theory, Ackermann's formula is a control system design method, due to Jürgen Ackermann, for solving the pole allocation problem for time-invariant systems. One of the primary problems in control system design is the creation of controllers that will change the dynamics of a system by changing the eigenvalues of the matrix representing the dynamics of the closed-loop system. This is equivalent to changing the poles of the associated transfer function in the case that there is no cancellation of poles and zeros.