Matrix congruence

In mathematics, two square matrices A and B over a field are called congruent if there exists an invertible matrix P over the same field such that

$P^\mathsf{T} A P = B$

where "T" denotes the matrix transpose. Matrix congruence is an equivalence relation.

Matrix congruence arises when considering the effect of change of basis on the Gram matrix attached to a bilinear form or quadratic form on a finite-dimensional vector space: two matrices are congruent if and only if they represent the same bilinear form with respect to different bases.
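
The correspondence can be sketched numerically in the same style (illustrative form and basis; the columns of $P$ express the new basis in the old one): the Gram matrix computed entry by entry in the new basis equals the congruence transform of the old one.

```python
import numpy as np

# Matrix of a bilinear form B(u, v) = u^T M v in the standard basis.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Columns of P express a new basis in terms of the standard one.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Gram matrix in the new basis, entry by entry: G[i, j] = B(p_i, p_j).
G = np.array([[P[:, i] @ M @ P[:, j] for j in range(2)]
              for i in range(2)])

# It equals the congruence transform of M.
assert np.allclose(G, P.T @ M @ P)
```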

Note that Halmos defines congruence in terms of conjugate transpose (with respect to a complex inner product space) rather than transpose,[1] but this definition has not been adopted by most other authors.

Congruence over the reals

Sylvester's law of inertia states that two congruent symmetric matrices with real entries have the same numbers of positive, negative, and zero eigenvalues. That is, the number of eigenvalues of each sign is an invariant of the associated quadratic form.[2]
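
A numerical sketch of this invariance (the helper inertia below is an ad hoc function, not a library routine):

```python
import numpy as np

def inertia(S, tol=1e-10):
    """Counts of positive, negative, and zero eigenvalues of a
    real symmetric matrix S."""
    eigs = np.linalg.eigvalsh(S)
    return (int(np.sum(eigs > tol)),
            int(np.sum(eigs < -tol)),
            int(np.sum(np.abs(eigs) <= tol)))

A = np.diag([5.0, 1.0, -2.0])    # inertia (2, 1, 0)
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0]])  # invertible: det(P) = 9
B = P.T @ A @ P                  # congruent to A

assert inertia(A) == inertia(B) == (2, 1, 0)
```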

Related Research Articles

In mathematics, any vector space $V$ has a corresponding dual vector space consisting of all linear forms on $V$, together with the vector space structure of pointwise addition and scalar multiplication by constants.

Linear algebra

Linear algebra is the branch of mathematics concerning linear equations such as $a_1 x_1 + \cdots + a_n x_n = b$, linear maps such as $(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n$, and their representations in vector spaces and through matrices.

In linear algebra, the rank of a matrix A is the dimension of the vector space generated by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
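
For instance, a short NumPy sketch with an illustrative matrix, checking that row rank and column rank agree:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row
              [0.0, 1.0, 1.0]])

# Column rank equals row rank, so A and its transpose have the same rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```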

Vector space

In mathematics and physics, a vector space is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers.

In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of elements on the main diagonal of A. The trace is only defined for a square matrix.
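
A minimal check with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 7.0],
              [4.0, 5.0]])

# tr(A) is the sum of the main-diagonal entries: 1 + 5.
assert np.trace(A) == A[0, 0] + A[1, 1] == 6.0
```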

In mathematics, a Clifford algebra is a unital associative algebra generated by a vector space equipped with a quadratic form. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford (1845–1879).

Square matrix

In mathematics, a square matrix is a matrix with the same number of rows and columns. An $n$-by-$n$ matrix is known as a square matrix of order $n$. Any two square matrices of the same order can be added and multiplied.

Transpose

In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of a matrix $A$ to produce another matrix, often denoted $A^\mathsf{T}$.
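
A small sketch with an illustrative matrix, checking the index swap:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3

# The transpose swaps row and column indices: (A^T)[j, i] == A[i, j].
assert A.T.shape == (3, 2)
assert all(A.T[j, i] == A[i, j] for i in range(2) for j in range(3))
```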

Orthogonal group

In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of n × n orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact.

In mathematics, particularly in linear algebra, a skew-symmetric matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition $A^\mathsf{T} = -A$.
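
A quick check on an illustrative example:

```python
import numpy as np

A = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  4.0],
              [ 1.0, -4.0,  0.0]])

# Skew-symmetric: the transpose equals the negative.
assert np.array_equal(A.T, -A)
```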

In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: $a_{ij} = \overline{a_{ji}}$.
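
An illustrative check:

```python
import numpy as np

H = np.array([[2.0 + 0.0j, 1.0 - 3.0j],
              [1.0 + 3.0j, 5.0 + 0.0j]])

# Hermitian: H equals its own conjugate transpose.
assert np.array_equal(H, H.conj().T)
```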

In mathematics, a quadratic form is a polynomial with terms all of degree two. For example, $4x^2 + 2xy - 3y^2$ is a quadratic form in the variables $x$ and $y$.
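
Such a form can be evaluated through its symmetric matrix, as in this illustrative sketch:

```python
import numpy as np

# q(x, y) = 4x^2 + 2xy - 3y^2 written as v^T A v with a symmetric A
# (the cross term 2xy is split evenly between the off-diagonal entries).
A = np.array([[4.0,  1.0],
              [1.0, -3.0]])

def q(x, y):
    v = np.array([x, y])
    return v @ A @ v

assert q(1.0, 2.0) == 4.0 + 4.0 - 12.0  # = -4.0
```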

This is an outline of topics related to linear algebra, the branch of mathematics concerning linear equations and linear maps and their representations in vector spaces and through matrices.

In mathematics, a bilinear form is a bilinear map $V \times V \to K$ on a vector space $V$ over a field $K$. In other words, a bilinear form is a function $B : V \times V \to K$ that is linear in each argument separately: $B(u + u', v) = B(u, v) + B(u', v)$ and $B(\lambda u, v) = \lambda B(u, v)$, and likewise $B(u, v + v') = B(u, v) + B(u, v')$ and $B(u, \lambda v) = \lambda B(u, v)$.
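
A sketch with an illustrative matrix representation, checking linearity in the first argument:

```python
import numpy as np

# A bilinear form on R^2 given by a matrix M: B(u, v) = u^T M v.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def B(u, v):
    return u @ M @ v

u   = np.array([1.0, 2.0])
u2  = np.array([3.0, -1.0])
v   = np.array([0.5, 4.0])
lam = 2.5

# Linearity in the first argument; the second argument is analogous.
assert np.isclose(B(u + u2, v), B(u, v) + B(u2, v))
assert np.isclose(B(lam * u, v), lam * B(u, v))
```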

In mathematics, the signature (v, p, r) of a metric tensor g (or equivalently, a real quadratic form thought of as a real symmetric bilinear form on a finite-dimensional vector space) is the number (counted with multiplicity) of positive, negative and zero eigenvalues of the real symmetric matrix gab of the metric tensor with respect to a basis. In relativistic physics, v conventionally represents the number of time or virtual dimensions, and p the number of space or physical dimensions. Alternatively, it can be defined as the dimensions of a maximal positive and null subspace. By Sylvester's law of inertia these numbers do not depend on the choice of basis and thus can be used to classify the metric. The signature is often denoted by a pair of integers (v, p) implying r = 0, or as an explicit list of signs of eigenvalues such as (+, −, −, −) or (−, +, +, +) for the signatures (1, 3, 0) and (3, 1, 0), respectively.
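
An illustrative computation for the Minkowski metric:

```python
import numpy as np

# The Minkowski metric diag(1, -1, -1, -1) has signature (1, 3, 0).
g = np.diag([1.0, -1.0, -1.0, -1.0])
eigs = np.linalg.eigvalsh(g)

signature = (int(np.sum(eigs > 0)),
             int(np.sum(eigs < 0)),
             int(np.sum(eigs == 0)))
assert signature == (1, 3, 0)
```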

In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way; there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.

Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if $A$ is the symmetric matrix that defines the quadratic form, and $S$ is any invertible matrix such that $D = SAS^\mathsf{T}$ is diagonal, then the number of negative elements in the diagonal of $D$ is always the same, for all such $S$; and the same goes for the number of positive elements.
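
A sketch with an illustrative matrix, diagonalizing by two different congruences and comparing the signs on the diagonal:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # eigenvalues 3 and -1

# One diagonalizing congruence D = S A S^T, via the eigenvector matrix.
_, Q = np.linalg.eigh(A)
S1 = Q.T
D1 = S1 @ A @ S1.T

# Another: rescaling the rows of S1 changes the diagonal entries of D,
# but not their signs.
S2 = np.diag([0.5, 3.0]) @ S1
D2 = S2 @ A @ S2.T

def diagonal_signs(D):
    return sorted(np.sign(np.round(np.diag(D), 12)))

assert diagonal_signs(D1) == diagonal_signs(D2) == [-1.0, 1.0]
```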

In linear algebra, it is often important to know which vectors have their directions unchanged by a given linear transformation. An eigenvector or characteristic vector is such a vector. Thus an eigenvector $\mathbf{v}$ of a linear transformation $T$ is scaled by a constant factor $\lambda$ when the linear transformation is applied to it: $T(\mathbf{v}) = \lambda \mathbf{v}$. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor $\lambda$.
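
An illustrative check with NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Applying A to an eigenvector only scales it: A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```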

In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class.

Matrix (mathematics)

In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

References

  1. Halmos, Paul R. (1958). Finite-Dimensional Vector Spaces. Van Nostrand. p. 134.
  2. Sylvester, J. J. (1852). "A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares" (PDF). Philosophical Magazine. IV: 138–142. Retrieved 2007-12-30.