Symmetric matrix


Symmetry of a 5×5 matrix

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally,

$$A \text{ is symmetric} \iff A = A^{\mathsf T}.$$


Because equal matrices have equal dimensions, only square matrices can be symmetric.

The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if $a_{ij}$ denotes the entry in the $i$th row and $j$th column, then $A$ is symmetric if and only if

$$a_{ji} = a_{ij} \quad \text{for all indices } i \text{ and } j.$$

Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator [1] expressed in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.

Example

The following $3 \times 3$ matrix $A$ is symmetric:

$$A = \begin{bmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix},$$

since $A = A^{\mathsf T}$.
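The defining property can be checked numerically; the following minimal sketch (using NumPy, with the example matrix above) simply compares the matrix with its transpose:

```python
import numpy as np

# The example matrix above: entries mirror across the main diagonal.
A = np.array([[1, 7, 3],
              [7, 4, 5],
              [3, 5, 2]])

print(np.array_equal(A, A.T))  # True, so A is symmetric
```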

Properties

Basic properties

  1. The sum and difference of two symmetric matrices are again symmetric.
  2. The product of two symmetric matrices $A$ and $B$ is symmetric if and only if $A$ and $B$ commute, i.e., $AB = BA$.
  3. For any positive integer $k$, $A^k$ is symmetric if $A$ is symmetric.
  4. If $A^{-1}$ exists, it is symmetric if and only if $A$ is symmetric.

Decomposition into symmetric and skew-symmetric

Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let $\mathrm{Mat}_n$ denote the space of $n \times n$ matrices. If $\mathrm{Sym}_n$ denotes the space of $n \times n$ symmetric matrices and $\mathrm{Skew}_n$ the space of $n \times n$ skew-symmetric matrices, then $\mathrm{Mat}_n = \mathrm{Sym}_n + \mathrm{Skew}_n$ and $\mathrm{Sym}_n \cap \mathrm{Skew}_n = \{0\}$, i.e.

$$\mathrm{Mat}_n = \mathrm{Sym}_n \oplus \mathrm{Skew}_n,$$

where $\oplus$ denotes the direct sum. Let $X \in \mathrm{Mat}_n$; then

$$X = \tfrac{1}{2}\left(X + X^{\mathsf T}\right) + \tfrac{1}{2}\left(X - X^{\mathsf T}\right).$$

Notice that $\tfrac{1}{2}\left(X + X^{\mathsf T}\right) \in \mathrm{Sym}_n$ and $\tfrac{1}{2}\left(X - X^{\mathsf T}\right) \in \mathrm{Skew}_n$. This is true for every square matrix $X$ with entries from any field whose characteristic is different from 2.

An $n \times n$ symmetric matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars (the number of entries above the main diagonal).
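As an illustration of the decomposition, here is a minimal NumPy sketch (the matrix X is an arbitrary placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))   # arbitrary real square matrix

S = (X + X.T) / 2                 # symmetric part
K = (X - X.T) / 2                 # skew-symmetric part

assert np.allclose(S, S.T)        # S lies in Sym_n
assert np.allclose(K, -K.T)       # K lies in Skew_n
assert np.allclose(X, S + K)      # X = S + K (Toeplitz decomposition)
```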

Matrix congruent to a symmetric matrix

Any matrix congruent to a symmetric matrix is again symmetric: if $X$ is a symmetric matrix, then so is $A X A^{\mathsf T}$ for any matrix $A$.
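A quick numerical check of this congruence property (a sketch; X and A below are arbitrary illustrative matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
X = (X + X.T) / 2                 # X is symmetric
A = rng.standard_normal((4, 4))   # A is an arbitrary matrix

B = A @ X @ A.T                   # matrix congruent to X
assert np.allclose(B, B.T)        # B is again symmetric
```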

Symmetry implies normality

A (real-valued) symmetric matrix is necessarily a normal matrix, since $A A^{\mathsf T} = A^2 = A^{\mathsf T} A$.

Real symmetric matrices

Denote by $\langle \cdot, \cdot \rangle$ the standard inner product on $\mathbb{R}^n$. The real $n \times n$ matrix $A$ is symmetric if and only if

$$\langle A x, y \rangle = \langle x, A y \rangle \quad \text{for all } x, y \in \mathbb{R}^n.$$

Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator $A$ and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, since each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.

The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix $A$ there exists a real orthogonal matrix $Q$ such that $D = Q^{\mathsf T} A Q$ is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.
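NumPy's `eigh` routine, which is intended for symmetric (or Hermitian) input, returns exactly such an orthogonal diagonalization; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                       # real symmetric matrix

eigenvalues, Q = np.linalg.eigh(A)      # columns of Q are orthonormal eigenvectors
D = np.diag(eigenvalues)

assert np.allclose(Q.T @ Q, np.eye(5))  # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, D)      # Q^T A Q is diagonal
```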

If $A$ and $B$ are $n \times n$ real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix: [2] there exists a basis of $\mathbb{R}^n$ such that every element of the basis is an eigenvector for both $A$ and $B$.
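The following sketch illustrates simultaneous diagonalization by building two commuting symmetric matrices from a shared orthonormal eigenbasis (an illustrative construction, not a general algorithm for finding the common basis):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal matrix

D1 = np.diag([1., 2., 3., 4.])
D2 = np.diag([5., 6., 7., 8.])
A = Q @ D1 @ Q.T                      # symmetric
B = Q @ D2 @ Q.T                      # symmetric, shares eigenvectors with A

assert np.allclose(A @ B, B @ A)      # A and B commute
assert np.allclose(Q.T @ A @ Q, D1)   # the same Q diagonalizes both
assert np.allclose(Q.T @ B @ Q, D2)
```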

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix $D$ above, and therefore $D$ is uniquely determined by $A$ up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.

Complex symmetric matrices

A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if $A$ is a complex symmetric matrix, there is a unitary matrix $U$ such that $U A U^{\mathsf T}$ is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians. [3] [4]

In fact, the matrix $B = A^\dagger A$ is Hermitian and positive semi-definite, so there is a unitary matrix $V$ such that $V^\dagger B V$ is diagonal with non-negative real entries. Thus $C = V^{\mathsf T} A V$ is complex symmetric with $C^\dagger C$ real. Writing $C = X + iY$ with $X$ and $Y$ real symmetric matrices, $C^\dagger C = X^2 + Y^2 + i(XY - YX)$. Thus $XY = YX$. Since $X$ and $Y$ commute, there is a real orthogonal matrix $W$ such that both $W X W^{\mathsf T}$ and $W Y W^{\mathsf T}$ are diagonal. Setting $U = W V^{\mathsf T}$ (a unitary matrix), the matrix $U A U^{\mathsf T}$ is complex diagonal. Pre-multiplying $U$ by a suitable diagonal unitary matrix (which preserves unitarity of $U$), the diagonal entries of $U A U^{\mathsf T}$ can be made to be real and non-negative as desired. To construct this matrix, we express the diagonal matrix as $U A U^{\mathsf T} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \ldots, r_n e^{i\theta_n})$. The matrix we seek is simply given by $D = \operatorname{diag}(e^{-i\theta_1/2}, e^{-i\theta_2/2}, \ldots, e^{-i\theta_n/2})$. Clearly $D U A U^{\mathsf T} D = \operatorname{diag}(r_1, r_2, \ldots, r_n)$ as desired, so we make the modification $U' = D U$. Since the squares of the $r_i$ are the eigenvalues of $A^\dagger A$, they coincide with the singular values of $A$. (Note that, regarding the eigendecomposition of a complex symmetric matrix $A$, the Jordan normal form of $A$ may not be diagonal, so $A$ may not be diagonalizable by any similarity transformation.)
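In practice the Autonne–Takagi factorization can be obtained from a singular value decomposition rather than from the construction above. The sketch below (NumPy, assuming the singular values of A are distinct so that the matrix D it forms is indeed diagonal) is one such route, not the proof's construction:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (A + A.T) / 2                        # complex symmetric: A == A.T (not Hermitian)

U, s, Vh = np.linalg.svd(A)              # A = U @ diag(s) @ Vh
D = U.conj().T @ Vh.T                    # diagonal unitary when the singular values are distinct
W = U @ np.diag(np.sqrt(np.diag(D)))     # unitary

# Takagi form: A = W @ diag(s) @ W.T, so (W^H) A (W^H)^T = diag(s) is real and non-negative.
assert np.allclose(W @ np.diag(s) @ W.T, A)
```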

Decomposition

Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices. [5]

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.
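SciPy exposes this factorization as `scipy.linalg.polar`; a brief sketch for a real matrix (the random matrix here is almost surely non-singular):

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))          # generic real square matrix

R, P = polar(A)                          # A = R @ P
assert np.allclose(R @ R.T, np.eye(4))   # R is orthogonal
assert np.allclose(P, P.T)               # P is symmetric positive (semi-)definite
assert np.allclose(A, R @ P)
```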

The Cholesky decomposition states that every real positive-definite symmetric matrix $A$ is the product of a lower-triangular matrix $L$ and its transpose,

$$A = L L^{\mathsf T}.$$
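A one-line check with NumPy's built-in Cholesky routine (the matrix below is made positive definite by construction):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive-definite

L = np.linalg.cholesky(A)            # lower-triangular factor
assert np.allclose(A, L @ L.T)
```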

If the matrix is symmetric indefinite, it may still be decomposed as $P A P^{\mathsf T} = L D L^{\mathsf T}$, where $P$ is a permutation matrix (arising from the need to pivot), $L$ a lower unit triangular matrix, and $D$ is a direct sum of symmetric $1 \times 1$ and $2 \times 2$ blocks; this is called the Bunch–Kaufman decomposition. [6]
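SciPy's `scipy.linalg.ldl` computes such a pivoted symmetric-indefinite factorization (built on LAPACK's symmetric-indefinite routines); a minimal sketch:

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                    # symmetric, in general indefinite

L, D, perm = ldl(A)                  # A = L @ D @ L.T
assert np.allclose(L @ D @ L.T, A)   # D is block diagonal with 1x1 and 2x2 blocks
# L[perm] is lower unit triangular; perm records the pivoting permutation.
```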

A general (complex) symmetric matrix may be defective and thus not diagonalizable. If $A$ is diagonalizable it may be decomposed as

$$A = Q \Lambda Q^{\mathsf T},$$

where $Q$ is an orthogonal matrix ($Q Q^{\mathsf T} = I$) and $\Lambda$ is a diagonal matrix of the eigenvalues of $A$. In the special case that $A$ is real symmetric, then $Q$ and $\Lambda$ are also real. To see orthogonality, suppose $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1$, $\lambda_2$. Then

$$\lambda_1 \langle \mathbf{x}, \mathbf{y} \rangle = \langle A \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A \mathbf{y} \rangle = \lambda_2 \langle \mathbf{x}, \mathbf{y} \rangle.$$

Since $\lambda_1$ and $\lambda_2$ are distinct, we have $\langle \mathbf{x}, \mathbf{y} \rangle = 0$.
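A tiny numerical illustration of this argument, using a general eigensolver (which does not itself enforce orthogonality) on a symmetric matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])           # real symmetric, distinct eigenvalues

w, v = np.linalg.eig(A)            # general eigensolver, no symmetry assumed
x, y = v[:, 0], v[:, 1]
print(abs(x @ y) < 1e-12)          # True: eigenvectors for distinct eigenvalues are orthogonal
```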

Hessian

Symmetric $n \times n$ matrices of real functions appear as the Hessians of twice differentiable functions of $n$ real variables (the continuity of the second derivative is not needed, despite common belief to the opposite [7]).

Every quadratic form $q$ on $\mathbb{R}^n$ can be uniquely written in the form $q(\mathbf{x}) = \mathbf{x}^{\mathsf T} A \mathbf{x}$ with a symmetric $n \times n$ matrix $A$. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of $\mathbb{R}^n$, "looks like"

$$q(x_1, \ldots, x_n) = \sum_{i=1}^{n} \lambda_i x_i^2$$

with real numbers $\lambda_i$. This considerably simplifies the study of quadratic forms, as well as the study of the level sets $\{\mathbf{x} : q(\mathbf{x}) = 1\}$, which are generalizations of conic sections.
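A short numerical sketch of this change of variables (the matrix and the vector below are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])             # symmetric matrix of the quadratic form q(x) = x^T A x

lam, Q = np.linalg.eigh(A)           # eigenvalues (1 and 3) and orthonormal eigenvectors
x = np.array([0.3, -1.2])
y = Q.T @ x                          # coordinates of x in the orthonormal eigenbasis

q_original = x @ A @ x
q_diagonal = np.sum(lam * y**2)      # lambda_1 * y_1^2 + lambda_2 * y_2^2
assert np.isclose(q_original, q_diagonal)
```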

This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.

Symmetrizable matrix

An $n \times n$ matrix $A$ is said to be symmetrizable if there exists an invertible diagonal matrix $D$ and a symmetric matrix $S$ such that $A = DS$.

The transpose of a symmetrizable matrix is symmetrizable, since $A^{\mathsf T} = (DS)^{\mathsf T} = SD = D^{-1}(DSD)$ and $DSD$ is symmetric. The matrix $A = (a_{ij})$ is symmetrizable if and only if the following conditions are met:

  1. $a_{ij} = 0$ implies $a_{ji} = 0$ for all $1 \le i < j \le n$,
  2. $a_{i_1 i_2} a_{i_2 i_3} \cdots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \cdots a_{i_1 i_k}$ for any finite sequence $(i_1, i_2, \ldots, i_k)$.
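A small numerical illustration of the definition and the two conditions (the diagonal matrix D and symmetric matrix S below are arbitrary illustrative choices):

```python
import numpy as np

D = np.diag([1., 2., 3.])            # invertible diagonal matrix
S = np.array([[1., 1., 2.],
              [1., 5., 3.],
              [2., 3., 4.]])         # symmetric matrix
A = D @ S                            # A is symmetrizable by construction

# Condition 1: a_ij = 0 exactly where a_ji = 0 (trivially true here, no zero entries).
assert np.array_equal(A == 0, A.T == 0)

# Condition 2 for the cycle (i1, i2, i3) = (0, 1, 2): both products equal 36.
assert np.isclose(A[0, 1] * A[1, 2] * A[2, 0],
                  A[1, 0] * A[2, 1] * A[0, 2])
```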

See also

Other types of symmetry or pattern in square matrices have special names; see, for example, skew-symmetric, persymmetric, centrosymmetric, circulant, Hankel, and Toeplitz matrices.

See also symmetry in mathematics.

Notes

  1. Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.
  2. Bellman, Richard (1997). Introduction to Matrix Analysis (2nd ed.). SIAM. ISBN 08-9871-399-4.
  3. Horn & Johnson 2013, pp. 263, 278.
  4. See:
  5. Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. doi:10.2307/2323471. JSTOR 2323471.
  6. Golub, G.H.; van Loan, C.F. (1996). Matrix Computations. Johns Hopkins University Press. ISBN 0-8018-5413-X. OCLC 34515797.
  7. Dieudonné, Jean A. (1969). "Theorem (8.12.2)". Foundations of Modern Analysis. Academic Press. p. 180. ISBN 0-12-215550-5. OCLC 576465.


References

Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press.