Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if $A$ is a symmetric matrix, then for any invertible matrix $S$, the number of positive, negative and zero eigenvalues (called the inertia of the matrix) of $D = S^{\mathrm{T}} A S$ is constant. This result is particularly useful when $D$ is diagonal, as the inertia of a diagonal matrix can easily be obtained by looking at the signs of its diagonal elements.
This property is named after James Joseph Sylvester, who published its proof in 1852.[1][2]
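A quick numerical check of the statement (a minimal NumPy sketch; the helper function inertia and the randomly drawn matrices are illustrative only):

```python
import numpy as np

def inertia(M, tol=1e-10):
    """Return (n_plus, n_minus, n_zero) for a real symmetric matrix M."""
    w = np.linalg.eigvalsh(M)          # real eigenvalues of a symmetric matrix
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                      # symmetric coefficient matrix A
S = rng.standard_normal((4, 4))        # invertible with probability 1
B = S.T @ A @ S                        # congruent to A

print(inertia(A), inertia(B))          # the two triples agree
```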
Let $A$ be a symmetric square matrix of order $n$ with real entries. Any non-singular matrix $S$ of the same size is said to transform $A$ into another symmetric matrix $B = S^{\mathrm{T}} A S$, also of order $n$, where $S^{\mathrm{T}}$ is the transpose of $S$. It is also said that matrices $A$ and $B$ are congruent. If $A$ is the coefficient matrix of some quadratic form on $\mathbb{R}^n$, then $B$ is the matrix for the same form after the change of basis defined by $S$.
A symmetric matrix $A$ can always be transformed in this way into a diagonal matrix $D$ which has only entries $0$, $+1$ and $-1$ along the diagonal. Sylvester's law of inertia states that the number of diagonal entries of each kind is an invariant of $A$, i.e. it does not depend on the matrix $S$ used.
The number of $+1$s, denoted $n_+$, is called the positive index of inertia of $A$, and the number of $-1$s, denoted $n_-$, is called the negative index of inertia. The number of $0$s, denoted $n_0$, is the dimension of the null space of $A$, known as the nullity of $A$. These numbers satisfy the obvious relation $n_0 + n_+ + n_- = n$.
The difference, $\operatorname{sgn}(A) = n_+ - n_-$, is usually called the signature of $A$. (However, some authors use that term for the triple $(n_0, n_+, n_-)$ consisting of the nullity and the positive and negative indices of inertia of $A$; for a non-degenerate form of a given dimension these are equivalent data, but in general the triple yields more data.)
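For example, a form with coefficient matrix $\operatorname{diag}(+1, -1, -1, -1)$ has $n_+ = 1$, $n_- = 3$ and $n_0 = 0$, hence signature $1 - 3 = -2$, or $(0, 1, 3)$ in the triple convention.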
If the matrix $A$ has the property that every principal upper left minor $\Delta_k$, $k = 1, \ldots, n$, is non-zero, then the negative index of inertia is equal to the number of sign changes in the sequence $\Delta_0 = 1, \Delta_1, \Delta_2, \ldots, \Delta_n$.
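A small NumPy sketch of this criterion (the helper negative_index_from_minors and the example matrix are illustrative; the criterion requires all leading principal minors to be non-zero):

```python
import numpy as np

def negative_index_from_minors(A):
    """Count sign changes in the sequence 1, Delta_1, ..., Delta_n of
    leading principal minors; valid only when every minor is non-zero."""
    n = A.shape[0]
    minors = [1.0] + [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    assert all(abs(m) > 1e-12 for m in minors), "criterion needs non-zero minors"
    return sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, 1.0]])
print(negative_index_from_minors(A))             # 1 sign change in 1, 2, -7, -9
print(int(np.sum(np.linalg.eigvalsh(A) < 0)))    # 1 negative eigenvalue
```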
The law can also be stated as follows: two symmetric square matrices of the same size have the same number of positive, negative and zero eigenvalues if and only if they are congruent[3] ($B = S^{\mathrm{T}} A S$ for some non-singular $S$).
The positive and negative indices of a symmetric matrix $A$ are also the numbers of positive and negative eigenvalues of $A$. Any symmetric real matrix $A$ has an eigendecomposition of the form $Q E Q^{\mathrm{T}}$, where $E$ is a diagonal matrix containing the eigenvalues of $A$, and $Q$ is an orthogonal square matrix whose columns are the corresponding eigenvectors. The matrix $E$ can be written $E = W D W$, where $D$ is diagonal with entries $0$, $+1$ or $-1$, and $W$ is diagonal with $W_{ii} = \sqrt{|E_{ii}|}$ (when $E_{ii} = 0$, the entry $W_{ii}$ may be taken to be $1$ so that $W$ remains non-singular). The matrix $S = Q W$ then satisfies $S D S^{\mathrm{T}} = Q W D W Q^{\mathrm{T}} = Q E Q^{\mathrm{T}} = A$, so $A$ is congruent to $D$, whose inertia can be read off its diagonal.
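The construction can be sketched numerically as follows (assuming the randomly drawn $A$ is non-singular, which holds generically):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                      # real symmetric A, generically non-singular

evals, Q = np.linalg.eigh(A)           # A = Q E Q^T with E = diag(evals), Q orthogonal
D = np.diag(np.sign(evals))            # diagonal entries +1 / -1 (0 for null directions)
W = np.diag(np.sqrt(np.abs(evals)))    # W_ii = sqrt(|E_ii|)
S = Q @ W                              # non-singular as long as A is

print(np.allclose(S @ D @ S.T, A))     # True: A = S D S^T, so A is congruent to D
```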
In the context of quadratic forms, a real quadratic form $Q$ in $n$ variables (or on an $n$-dimensional real vector space) can by a suitable change of basis (by a non-singular linear transformation from $x$ to $y$) be brought to the diagonal form
$Q = a_1 y_1^2 + a_2 y_2^2 + \cdots + a_n y_n^2$
with each $a_i \in \{0, +1, -1\}$. Sylvester's law of inertia states that the number of coefficients of a given sign is an invariant of $Q$, i.e., it does not depend on a particular choice of diagonalizing basis. Expressed geometrically, the law of inertia says that all maximal subspaces on which the restriction of the quadratic form is positive definite (respectively, negative definite) have the same dimension. These dimensions are the positive and negative indices of inertia.
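As a simple illustration, the form $Q(x_1, x_2) = x_1^2 + 4 x_1 x_2 + x_2^2$ becomes, after completing the square, $(x_1 + 2x_2)^2 - 3x_2^2 = y_1^2 - y_2^2$ with $y_1 = x_1 + 2x_2$ and $y_2 = \sqrt{3}\,x_2$; hence $n_+ = n_- = 1$, $n_0 = 0$ and the signature is $0$, in agreement with the eigenvalues $3$ and $-1$ of the coefficient matrix $\begin{pmatrix}1 & 2\\ 2 & 1\end{pmatrix}$.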
Sylvester's law of inertia is also valid if $A$ and $B$ have complex entries. In this case, it is said that $A$ and $B$ are *-congruent if and only if there exists a non-singular complex matrix $S$ such that $B = S^* A S$, where $S^*$ denotes the conjugate transpose. In the complex scenario, a way to state Sylvester's law of inertia is that if $A$ and $B$ are Hermitian matrices, then $A$ and $B$ are *-congruent if and only if they have the same inertia, the definition of which is still valid as the eigenvalues of Hermitian matrices are always real numbers.
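A minimal NumPy sketch of the Hermitian case (random matrices chosen for illustration; eigvalsh is used because Hermitian matrices have real eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2                  # Hermitian: real eigenvalues
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = S.conj().T @ A @ S                    # *-congruent to A (S generically non-singular)

for M in (A, B):
    w = np.linalg.eigvalsh(M)             # eigvalsh assumes Hermitian input
    print(int(np.sum(w > 0)), int(np.sum(w < 0)))   # same counts for A and B
```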
Ostrowski proved a quantitative generalization of Sylvester's law of inertia:[4][5] if $A$ and $B = S^* A S$ are *-congruent Hermitian matrices, and their eigenvalues $\lambda_k(A)$ and $\lambda_k(B)$ are arranged in the same (say, non-increasing) order, then $\lambda_k(B) = \theta_k \lambda_k(A)$ for $k = 1, \ldots, n$, where the $\theta_k$ are real numbers such that $\lambda_n(S^* S) \le \theta_k \le \lambda_1(S^* S)$.
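A numerical spot-check of these bounds (an illustrative sketch assuming the random $A$ and $S$ are non-singular; both spectra are sorted in the same order before forming the ratios):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                         # Hermitian (here real symmetric)
S = rng.standard_normal((n, n))           # generically non-singular
B = S.T @ A @ S                           # *-congruent to A

lam_A = np.sort(np.linalg.eigvalsh(A))    # both spectra in the same (ascending) order
lam_B = np.sort(np.linalg.eigvalsh(B))
theta = lam_B / lam_A                     # the factors theta_k (A assumed non-singular)
bounds = np.linalg.eigvalsh(S.T @ S)      # eigenvalues of S^T S
print(theta.min() >= bounds.min() - 1e-9,
      theta.max() <= bounds.max() + 1e-9) # True True
```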
A theorem due to Ikramov generalizes the law of inertia to any normal matrices $A$ and $B$:[6] if $A$ and $B$ are normal matrices, then $A$ and $B$ are congruent if and only if they have the same number of eigenvalues on each open ray from the origin in the complex plane.