In mathematics, Specht's theorem gives a necessary and sufficient condition for two complex matrices to be unitarily equivalent. It is named after Wilhelm Specht, who proved the theorem in 1940. [1]
Two matrices A and B with complex number entries are said to be unitarily equivalent if there exists a unitary matrix U such that B = U*AU. [2] Two matrices which are unitarily equivalent are also similar. Two similar matrices represent the same linear map, but with respect to a different basis; unitary equivalence corresponds to a change from one orthonormal basis to another orthonormal basis.
If A and B are unitarily equivalent, then tr AA* = tr BB*, where tr denotes the trace (in other words, the Frobenius norm is a unitary invariant). This follows from the cyclic invariance of the trace: if B = U*AU, then tr BB* = tr U*AUU*A*U = tr AUU*A*UU* = tr AA*, where the second equality is cyclic invariance. [3]
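This invariance is easy to check numerically. The following sketch, assuming NumPy is available (the variable names are illustrative), builds a random complex matrix A, takes a random unitary U from a QR factorization, forms B = U*AU, and compares the two traces.

import numpy as np

rng = np.random.default_rng(seed=0)
n = 4

# A random complex matrix A.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A random unitary matrix U, taken from the QR factorization of a random complex matrix.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# B = U*AU is unitarily equivalent to A.
B = U.conj().T @ A @ U

# The two traces agree up to floating-point rounding.
print(np.trace(A @ A.conj().T))  # tr AA*
print(np.trace(B @ B.conj().T))  # tr BB*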
Thus, tr AA* = tr BB* is a necessary condition for unitary equivalence, but it is not sufficient. Specht's theorem gives infinitely many necessary conditions which together are also sufficient. The formulation of the theorem uses the following definition. A word in two variables, say x and y, is an expression of the form

W(x, y) = x^{m1} y^{n1} x^{m2} y^{n2} ⋯ x^{mp},

where m1, n1, m2, n2, …, mp are non-negative integers. The degree of this word is

m1 + n1 + m2 + n2 + ⋯ + mp.
Specht's theorem: Two matrices A and B are unitarily equivalent if and only if tr W(A, A*) = tr W(B, B*) for all words W. [4]
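As an illustrative sketch of the quantities appearing in the theorem, the following helper evaluates tr W(A, A*) for a word W(x, y) = x^{m1} y^{n1} ⋯ x^{mp} encoded by its list of exponents; it assumes NumPy, and the function name and the encoding of a word as an exponent list are choices made for this example, not a standard API.

import numpy as np

def trace_of_word(A, exponents):
    # Compute tr W(A, A*) for W(x, y) = x^{m1} y^{n1} x^{m2} y^{n2} ...,
    # where exponents = [m1, n1, m2, n2, ...]: even positions are powers of
    # x = A, odd positions are powers of y = A*.
    Astar = A.conj().T
    W = np.eye(A.shape[0], dtype=complex)
    for i, e in enumerate(exponents):
        W = W @ np.linalg.matrix_power(A if i % 2 == 0 else Astar, e)
    return np.trace(W)

# Example: the word x y x^2 y^2, i.e. tr A A* A^2 (A*)^2, of degree 1 + 1 + 2 + 2 = 6.
A = np.array([[1, 2j], [0, 3]], dtype=complex)
print(trace_of_word(A, [1, 1, 2, 2]))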
The theorem gives an infinite number of trace identities, but it can be reduced to a finite subset. Let n denote the size of the matrices A and B. For the case n = 2, the following three conditions are sufficient: [5]

tr A = tr B,
tr A² = tr B²,
tr AA* = tr BB*.
For n = 3, the following seven conditions are sufficient:

tr A = tr B,
tr A² = tr B²,
tr AA* = tr BB*,
tr A³ = tr B³,
tr A²A* = tr B²B*,
tr A²(A*)² = tr B²(B*)²,
tr A²(A*)²AA* = tr B²(B*)²BB*.
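A minimal numerical sketch of how such a finite list of conditions could be checked, reusing a word-trace helper like the one above (the word encoding and the tolerance are assumptions made for this example): the seven words for n = 3 are x, x², xy, x³, x²y, x²y², and x²y²xy, and two unitarily equivalent 3 × 3 matrices satisfy all seven identities.

import numpy as np

def trace_of_word(A, exponents):
    # tr W(A, A*); exponents alternate between powers of A and powers of A*.
    W = np.eye(A.shape[0], dtype=complex)
    for i, e in enumerate(exponents):
        W = W @ np.linalg.matrix_power(A if i % 2 == 0 else A.conj().T, e)
    return np.trace(W)

# The seven words listed above, encoded as alternating exponents of x and y.
WORDS_3x3 = [[1], [2], [1, 1], [3], [2, 1], [2, 2], [2, 2, 1, 1]]

def traces_match(A, B, words, tol=1e-9):
    return all(abs(trace_of_word(A, w) - trace_of_word(B, w)) < tol for w in words)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
B = U.conj().T @ A @ U

print(traces_match(A, B, WORDS_3x3))  # True for unitarily equivalent A and B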
For general n, it suffices to show that tr W(A, A*) = tr W(B, B*) for all words of degree at most

n√(2n²/(n − 1) + 1/4) + n/2 − 2.
It has been conjectured that this can be reduced to an expression linear in n. [8]
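For orientation, the bound above grows roughly like √2 · n^(3/2), so it is not linear in n. The following small sketch (the function name is illustrative) evaluates the bound for a few matrix sizes.

import math

def specht_degree_bound(n):
    # n * sqrt(2 n^2 / (n - 1) + 1/4) + n/2 - 2, for n >= 2.
    return n * math.sqrt(2 * n**2 / (n - 1) + 0.25) + n / 2 - 2

for n in (2, 3, 4, 10):
    print(n, round(specht_degree_bound(n), 2))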
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|.
In mathematics, a Lie algebra is a vector space g together with an operation called the Lie bracket, an alternating bilinear map g × g → g, that satisfies the Jacobi identity. Otherwise said, a Lie algebra is an algebra over a field where the multiplication operation is now called the Lie bracket and has two additional properties: it is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x and y is denoted [x, y]. The Lie bracket does not need to be associative, meaning that a Lie algebra can be non-associative. Given an associative algebra, a Lie bracket can be, and often is, defined through the commutator: setting [x, y] = xy − yx correctly defines a Lie bracket in addition to the already existing multiplication operation.
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma, they are occasionally denoted by tau when used in connection with isospin symmetries.
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number x^T M x is positive for every nonzero real column vector x, where x^T is the transpose of x. More generally, a Hermitian matrix M is positive-definite if the real number z*Mz is positive for every nonzero complex column vector z, where z* denotes the conjugate transpose of z.
In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of elements on the main diagonal of A. The trace is only defined for a square matrix.
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A*: A*A = AA*.
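As a brief numerical illustration, assuming NumPy (the example matrix is arbitrary), normality can be tested by comparing A*A with AA*.

import numpy as np

A = np.array([[1, 1j], [1j, 1]])  # an arbitrary example matrix

# A is normal exactly when A*A equals AA*.
print(np.allclose(A.conj().T @ A, A @ A.conj().T))  # True for this A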
In linear algebra, the Cayley–Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation.
In mathematics, the unitary group of degree n, denoted U(n), is the group of n × n unitary matrices, with the group operation of matrix multiplication. The unitary group is a subgroup of the general linear group GL(n, C). Hyperorthogonal group is an archaic name for the unitary group, especially over finite fields. For the group of unitary matrices with determinant 1, see Special unitary group.
In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j; in matrix form, A = A*.
In mathematics, especially functional analysis, a normal operator on a complex Hilbert space H is a continuous linear operator N : H → H that commutes with its Hermitian adjoint N*, that is: NN* = N*N.
In mathematics, an algebra over a field is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
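As a small illustration, assuming NumPy (whose np.poly returns the coefficients of the characteristic polynomial of a square matrix, computed from its eigenvalues), the trace and determinant appear among the coefficients.

import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])

# Coefficients of det(tI - A) = t^2 - (tr A) t + det A, highest degree first.
print(np.poly(A))                      # [ 1. -5.  6.]
print(np.trace(A), np.linalg.det(A))   # 5.0 and 6.0 (up to rounding)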
In mathematics and theoretical physics, a representation of a Lie group is a linear action of a Lie group on a vector space. Equivalently, a representation is a smooth homomorphism of the group into the group of invertible operators on the vector space. Representations play an important role in the study of continuous symmetry. A great deal is known about such representations, a basic tool in their study being the use of the corresponding 'infinitesimal' representations of Lie algebras.
In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
In linear algebra, two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix P such that B = P⁻¹AP.
In linear algebra, a square matrix A with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix A is skew-Hermitian if it satisfies the relation A* = −A.
In mathematics and in theoretical physics, the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group.
In mathematics and functional analysis a direct integral or Hilbert integral is a generalization of the concept of direct sum. The theory is most developed for direct integrals of Hilbert spaces and direct integrals of von Neumann algebras. The concept was introduced in 1949 by John von Neumann in one of the papers in the series On Rings of Operators. One of von Neumann's goals in this paper was to reduce the classification of von Neumann algebras on separable Hilbert spaces to the classification of so-called factors. Factors are analogous to full matrix algebras over a field, and von Neumann wanted to prove a continuous analogue of the Artin–Wedderburn theorem classifying semi-simple rings.