Polar decomposition

In mathematics, the polar decomposition of a square real or complex matrix $A$ is a factorization of the form $A = UP$, where $U$ is a unitary matrix and $P$ is a positive semi-definite Hermitian matrix ($U$ is an orthogonal matrix and $P$ is a positive semi-definite symmetric matrix in the real case), both square and of the same size. [1]

If a real $n \times n$ matrix $A$ is interpreted as a linear transformation of $n$-dimensional space $\mathbb{R}^n$, the polar decomposition separates it into a rotation or reflection $U$ of $\mathbb{R}^n$, and a scaling of the space along a set of $n$ orthogonal axes.

The polar decomposition of a square matrix $A$ always exists. If $A$ is invertible, the decomposition is unique, and the factor $P$ will be positive-definite. In that case, $A$ can be written uniquely in the form $A = U e^X$, where $U$ is unitary and $X$ is the unique self-adjoint logarithm of the matrix $P$. [2] This decomposition is useful in computing the fundamental group of (matrix) Lie groups. [3]

The polar decomposition can also be defined as $A = P'U$, where $P' = UPU^{-1}$ is a symmetric positive-definite matrix with the same eigenvalues as $P$ but different eigenvectors.

The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number $z$ as $z = ur$, where $r$ is its absolute value (a non-negative real number), and $u$ is a complex number with unit norm (an element of the circle group).

The definition may be extended to rectangular matrices $A \in \mathbb{C}^{m \times n}$ by requiring $U$ to be a semi-unitary matrix and $P$ to be a positive-semidefinite Hermitian matrix. The decomposition always exists and $P$ is always unique. The matrix $U$ is unique if and only if $A$ has full rank. [4]
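
For a concrete numerical illustration, SciPy exposes this factorization directly as scipy.linalg.polar; a minimal sketch (the rectangular example matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[1.0,  1.0],
              [2.0, -2.0],
              [0.0,  0.0]])               # arbitrary rectangular example

U, P = polar(A)                           # right decomposition: A = U P

print(np.allclose(A, U @ P))                      # True
print(np.allclose(U.T @ U, np.eye(2)))            # True: U is semi-unitary
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))    # True: P is positive semi-definite
```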

Geometric interpretation

A real square $m \times m$ matrix $A$ can be interpreted as the linear transformation of $\mathbb{R}^m$ that takes a column vector $x$ to $Ax$. Then, in the polar decomposition $A = UP$, the factor $U$ is an $m \times m$ real orthogonal matrix. The polar decomposition then can be seen as expressing the linear transformation defined by $A$ as a scaling of the space $\mathbb{R}^m$ along each eigenvector $e_i$ of $P$ by a scale factor $\sigma_i$ (the action of $P$), followed by a rotation of $\mathbb{R}^m$ (the action of $U$).

Alternatively, the decomposition $A = P'U$ expresses the transformation defined by $A$ as a rotation ($U$) followed by a scaling ($P'$) along certain orthogonal directions. The scale factors are the same, but the directions are different.
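
The following sketch makes the rotation-then-scaling picture concrete for an arbitrarily chosen invertible 2 × 2 real matrix with positive determinant (so that the orthogonal factor is a pure rotation):

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[2.0, 1.0],
              [0.5, 1.5]])               # arbitrary example, det(A) > 0

U, P = polar(A)                          # A = U P

theta = np.arctan2(U[1, 0], U[0, 0])     # rotation angle of U
scales, axes = np.linalg.eigh(P)         # stretch factors and their axes

print(f"rotation angle: {np.degrees(theta):.2f} degrees")
print(f"scale factors:  {scales}")       # the singular values of A (ascending)
```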

Properties

The polar decomposition of the complex conjugate of $A$ is given by $\overline{A} = \overline{U}\,\overline{P}$. Note that

$$\det A = \det U \cdot \det P = e^{i\theta} r$$

gives the corresponding polar decomposition of the determinant of $A$, since $\det U = e^{i\theta}$ and $\det P = r = |\det A|$. In particular, if $A$ has determinant 1 then both $U$ and $P$ have determinant 1.

The positive-semidefinite matrix $P$ is always unique, even if $A$ is singular, and is denoted as

$$P = \left(A^* A\right)^{1/2},$$

where $A^*$ denotes the conjugate transpose of $A$. The uniqueness of $P$ ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that $A^* A$ is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. [5] If $A$ is invertible, then $P$ is positive-definite, thus also invertible, and the matrix $U$ is uniquely determined by

$$U = A P^{-1}.$$
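
These two formulas can be transcribed numerically as written, using scipy.linalg.sqrtm for the principal matrix square root; a sketch for an arbitrary invertible example:

```python
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[1.0 + 1.0j, 2.0],
              [0.0, 1.0 - 1.0j]])        # arbitrary invertible example

P = sqrtm(A.conj().T @ A)                # P = (A* A)^{1/2}
U = A @ np.linalg.inv(P)                 # U = A P^{-1}

print(np.allclose(A, U @ P))                   # True
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```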

Relation to the SVD

In terms of the singular value decomposition (SVD) of $A$, $A = W \Sigma V^*$, one has

$$P = V \Sigma V^*, \qquad U = W V^*,$$

where $U$, $V$, and $W$ are unitary matrices (called orthogonal matrices if the field is the reals $\mathbb{R}$). This confirms that $P$ is positive semi-definite and $U$ is unitary. Thus, the existence of the SVD is equivalent to the existence of polar decomposition.

One can also decompose $A$ in the form

$$A = P' U.$$

Here $U$ is the same as before and $P'$ is given by

$$P' = U P U^{-1} = \left(A A^*\right)^{1/2} = W \Sigma W^*.$$

This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. Left polar decomposition is also known as reverse polar decomposition.

The polar decomposition of a square invertible real matrix $A$ is of the form

$$A = |A| R,$$

where $|A| = \left(A A^{\mathsf{T}}\right)^{1/2}$ is a positive-definite matrix and $R = |A|^{-1} A$ is an orthogonal matrix.
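
The SVD-based construction translates line by line into NumPy; a sketch with a randomly generated square example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # arbitrary square example

W, s, Vh = np.linalg.svd(A)              # A = W @ diag(s) @ Vh
V = Vh.conj().T

U = W @ Vh                               # unitary factor (shared by both forms)
P = V @ np.diag(s) @ Vh                  # right factor:  A = U P
P_left = W @ np.diag(s) @ W.conj().T     # left factor:   A = P' U

print(np.allclose(A, U @ P))                      # True
print(np.allclose(A, P_left @ U))                 # True
print(np.allclose(P_left, U @ P @ U.conj().T))    # True: P' = U P U^{-1}
```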

Relation to normal matrices

The matrix $A$ with polar decomposition $A = UP$ is normal if and only if $U$ and $P$ commute: $UP = PU$, or equivalently, they are simultaneously diagonalizable.
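
A quick numerical check of this equivalence, on an arbitrarily chosen normal matrix:

```python
import numpy as np
from scipy.linalg import polar

# A normal matrix (a scaled rotation): it commutes with its transpose.
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])
assert np.allclose(A @ A.T, A.T @ A)   # A is normal

U, P = polar(A)
print(np.allclose(U @ P, P @ U))       # True: the polar factors commute
```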

Construction and proofs of existence

The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.

Derivation for normal matrices

If $A$ is normal, then it is unitarily equivalent to a diagonal matrix: $A = V \Lambda V^*$ for some unitary matrix $V$ and some diagonal matrix $\Lambda$. This makes the derivation of its polar decomposition particularly straightforward, as we can then write

$$A = V \Phi_\Lambda |\Lambda| V^* = \left(V \Phi_\Lambda V^*\right)\left(V |\Lambda| V^*\right),$$

where $\Phi_\Lambda$ is a diagonal matrix containing the phases of the elements of $\Lambda$, that is, $(\Phi_\Lambda)_{ii} = \lambda_i / |\lambda_i|$ when $\lambda_i \neq 0$, and $(\Phi_\Lambda)_{ii} = 0$ when $\lambda_i = 0$.

The polar decomposition is thus $A = UP$, with $U = V \Phi_\Lambda V^*$ and $P = V |\Lambda| V^*$ diagonal in the eigenbasis of $A$ and having eigenvalues equal to the phases and absolute values of those of $A$, respectively.
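
A sketch of this derivation, building both factors from the eigendecomposition of an arbitrary normal example matrix (all eigenvalues chosen non-zero, so no special-casing of zero phases is needed):

```python
import numpy as np

# Arbitrary normal example: a unitary conjugation of a diagonal matrix.
rng = np.random.default_rng(1)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(Z)                     # Q is unitary
lam = np.array([2.0 + 1.0j, -3.0, 0.5j])   # arbitrary non-zero eigenvalues
A = Q @ np.diag(lam) @ Q.conj().T

phases = lam / np.abs(lam)                 # phases of the eigenvalues
U = Q @ np.diag(phases) @ Q.conj().T       # unitary factor
P = Q @ np.diag(np.abs(lam)) @ Q.conj().T  # positive semi-definite factor

print(np.allclose(A, U @ P))               # True
```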

Derivation for invertible matrices

From the singular-value decomposition, it can be shown that a matrix $A$ is invertible if and only if $A^* A$ (equivalently, $A A^*$) is. Moreover, this is true if and only if the eigenvalues of $A^* A$ are all non-zero. [6]

In this case, the polar decomposition is directly obtained by writing

$$A = A \left(A^* A\right)^{-1/2} \left(A^* A\right)^{1/2},$$

and observing that $A \left(A^* A\right)^{-1/2}$ is unitary. To see this, we can exploit the spectral decomposition of $A^* A$ to write $A \left(A^* A\right)^{-1/2} = A V D^{-1/2} V^*$.

In this expression, $V^*$ is unitary because $V$ is. To show that $A V D^{-1/2}$ is also unitary, we can use the SVD to write $A = W D^{1/2} V^*$, so that

$$A V D^{-1/2} = W D^{1/2} V^* V D^{-1/2} = W,$$

where $W$ is unitary by construction.

Yet another way to directly show the unitarity of $A \left(A^* A\right)^{-1/2}$ is to note that, writing the SVD of $A$ in terms of rank-1 matrices as $A = \sum_k \lambda_k v_k w_k^*$, where $\lambda_k > 0$ are the singular values of $A$, we have

$$A \left(A^* A\right)^{-1/2} = \left(\sum_j \lambda_j v_j w_j^*\right)\left(\sum_k \lambda_k^{-1} w_k w_k^*\right) = \sum_k v_k w_k^*,$$

which directly implies the unitarity of $A \left(A^* A\right)^{-1/2}$, because it maps the orthonormal basis $\{w_k\}$ onto the orthonormal basis $\{v_k\}$.

Note how, from the above construction, it follows that the unitary matrix $U$ in the polar decomposition of an invertible matrix is uniquely defined.
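
This route, $U = A(A^*A)^{-1/2}$, can also be followed numerically via scipy.linalg.fractional_matrix_power, valid only when $A$ is invertible; the example matrix is arbitrary:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # arbitrary invertible example

# U = A (A* A)^{-1/2}; the inverse square root exists since A is invertible.
U = A @ fractional_matrix_power(A.T @ A, -0.5)
P = fractional_matrix_power(A.T @ A, 0.5)

print(np.allclose(A, U @ P))               # True
print(np.allclose(U.T @ U, np.eye(2)))     # True: U is orthogonal
```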

General derivation

The SVD of a square matrix $A$ reads $A = W D^{1/2} V^*$, with $W$, $V$ unitary matrices, and $D$ a diagonal, positive semi-definite matrix. By simply inserting an additional pair of $W$s or $V$s, we obtain the two forms of the polar decomposition of $A$:

$$A = W D^{1/2} V^* = \underbrace{\left(W D^{1/2} W^*\right)}_{P'} \underbrace{\left(W V^*\right)}_{U} = \underbrace{\left(W V^*\right)}_{U} \underbrace{\left(V D^{1/2} V^*\right)}_{P}.$$

More generally, if $A$ is some $n \times m$ rectangular matrix, its SVD can be written as $A = W D^{1/2} V^*$, where now $W$ and $V$ are isometries with dimensions $n \times r$ and $m \times r$, respectively, where $r \equiv \operatorname{rank}(A)$, and $D$ is again a diagonal positive semi-definite square matrix with dimensions $r \times r$. We can now apply the same reasoning used in the above equation to write $A = P'U = UP$, but now $U \equiv W V^*$ is not in general unitary. Nonetheless, $U$ has the same support and range as $A$, and it satisfies $U^* U = V V^*$ and $U U^* = W W^*$. This makes $U$ into an isometry when its action is restricted onto the support of $A$; that is, $U$ is a partial isometry. As an explicit example of this more general case, consider the SVD of the following matrix:

$$A \equiv \begin{pmatrix} 1 & 1 \\ 2 & -2 \\ 0 & 0 \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}}_{W} \underbrace{\begin{pmatrix} \sqrt{8} & 0 \\ 0 & \sqrt{2} \end{pmatrix}}_{D^{1/2}} \underbrace{\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}}_{V^*}.$$

We then have

$$U = W V^* = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \\ 0 & 0 \end{pmatrix},$$

which is an isometry, but not unitary. On the other hand, if we consider the decomposition of

$$A \equiv \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix},$$

we find

$$U = W V^* = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},$$

which is a partial isometry (but not an isometry).
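
Both explicit examples can be verified numerically with scipy.linalg.polar; a sketch:

```python
import numpy as np
from scipy.linalg import polar

# First example: a 3x2 matrix; the polar factor U is an isometry.
A1 = np.array([[1.0,  1.0],
               [2.0, -2.0],
               [0.0,  0.0]])
U1, P1 = polar(A1)
print(np.allclose(U1.T @ U1, np.eye(2)))  # True:  U1* U1 = I (isometry)
print(np.allclose(U1 @ U1.T, np.eye(3)))  # False: U1 is not unitary

# Second example: a 2x3 matrix; U is a partial isometry, not an isometry.
A2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 2.0, 0.0]])
U2, P2 = polar(A2)
print(np.allclose(U2 @ U2.T, np.eye(2)))  # True:  U2 U2* = I
print(np.allclose(U2.T @ U2, np.eye(3)))  # False: U2 has a non-trivial kernel
```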

Bounded operators on Hilbert space

The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.

The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.

The operator U must be weakened to a partial isometry, rather than unitary, because of the following issue. If A is the one-sided shift on ℓ²(ℕ), then |A| = (A*A)^{1/2} = I. So if A = U|A|, U must be A, which is not unitary.

The existence of a polar decomposition is a consequence of Douglas' lemma:

Lemma: If A, B are bounded operators on a Hilbert space H, and A*A ≤ B*B, then there exists a contraction C such that A = CB. Furthermore, C is unique if ker(B*) ⊂ ker(C).

The operator C can be defined by C(Bh) := Ah for all h in H, extended by continuity to the closure of Ran(B), and by zero on the orthogonal complement of Ran(B), so that C is defined on all of H. The lemma then follows since A*A ≤ B*B implies ker(B) ⊂ ker(A).

In particular, if A*A = B*B, then C is a partial isometry, which is unique if ker(B*) ⊂ ker(C). In general, for any bounded operator A,

A*A = (A*A)^{1/2} (A*A)^{1/2},

where (A*A)^{1/2} is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have

A = U (A*A)^{1/2}

for some partial isometry U, which is unique if ker(A) ⊂ ker(U). Take P to be (A*A)^{1/2} and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P′U′, where P′ is positive and U′ a partial isometry.

When H is finite-dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.

By a property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: U is in the von Neumann algebra generated by A. If A is invertible, the polar part U will be in the C*-algebra as well.

Unbounded operators

If A is a closed, densely defined unbounded operator between complex Hilbert spaces, then it still has a (unique) polar decomposition

A = U |A|,

where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as A, and U is a partial isometry vanishing on the orthogonal complement of the range Ran(|A|).

The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B) and A*Ah = B*Bh for all h ∈ dom(A*A), then there exists a partial isometry U such that A = UB. U is unique if Ran(B)^⊥ ⊂ ker(U). The operator A being closed and densely defined ensures that the operator A*A is self-adjoint (with dense domain) and therefore allows one to define (A*A)^{1/2}. Applying the lemma then gives the polar decomposition.

If an unbounded operator A is affiliated to a von Neumann algebra M, and A = UP is its polar decomposition, then U is in M and so is the spectral projection of P, 1_B(P), for any Borel set B in [0, ∞).

Quaternion polar decomposition

The polar decomposition of quaternions $\mathbb{H}$ depends on the unit 2-dimensional sphere $\{xi + yj + zk \in \mathbb{H} : x^2 + y^2 + z^2 = 1\}$ of square roots of minus one, known as right versors. Given any $r$ on this sphere, and an angle $-\pi < a \leq \pi$, the versor $e^{ar} = \cos a + r \sin a$ is on the unit 3-sphere of $\mathbb{H}$. For $a = 0$ and $a = \pi$, the versor is 1 or −1 regardless of which $r$ is selected. The norm $t$ of a quaternion $q$ is the Euclidean distance from the origin to $q$. When a quaternion is not just a real number, then there is a unique polar decomposition

$$q = t e^{ar}.$$

Here $r$, $a$, $t$ are all uniquely determined such that $r$ is a right versor ($r^2 = -1$), $a$ satisfies $0 < a < \pi$, and $t > 0$.
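
A small sketch of this decomposition for a quaternion given as a (w, x, y, z) tuple; the helper function is ad hoc, not taken from any quaternion library:

```python
import math

def quaternion_polar(w, x, y, z):
    """Return (t, a, r) with q = t * (cos a + r sin a), r a right versor.

    Assumes q is not purely real, so the decomposition is unique.
    """
    t = math.sqrt(w * w + x * x + y * y + z * z)   # norm of q
    v = math.sqrt(x * x + y * y + z * z)           # norm of the vector part
    if v == 0.0:
        raise ValueError("q is real: the polar decomposition is not unique")
    a = math.atan2(v, w)                           # angle in (0, pi)
    r = (x / v, y / v, z / v)                      # unit right versor: r^2 = -1
    return t, a, r

# Example: q = 1 + i + j + k  ->  t = 2, a = pi/3, r = (i + j + k)/sqrt(3)
t, a, r = quaternion_polar(1.0, 1.0, 1.0, 1.0)
print(t, a, r)
```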

Alternative planar decompositions

In the Cartesian plane, alternative planar ring decompositions arise as follows:

- If $x \neq 0$, then $z = x\left(1 + \varepsilon \frac{y}{x}\right)$ is a polar decomposition of a dual number $z = x + y\varepsilon$, where $\varepsilon^2 = 0$. Here the unit circle is replaced by the line $x = 1$, the polar angle by the slope $y/x$, and the radius $x$ is negative in the left half-plane.
- If $x^2 \neq y^2$, then the split-complex number $z = x + yj$ (where $j^2 = +1$) has a polar decomposition involving the unit hyperbola $x^2 - y^2 = 1$ and its conjugate: a point in the quadrant containing $(1, 0)$ can be written as $z = \rho e^{aj} = \rho(\cosh a + j \sinh a)$ with $\rho > 0$ and hyperbolic angle $a$, and points in the other quadrants carry an extra factor of $-1$, $j$, or $-j$. [7]

Numerical determination of the matrix polar decomposition

To compute an approximation of the polar decomposition $A = UP$, usually the unitary factor $U$ is approximated. [8] [9] The iteration is based on Heron's method for the square root of 1 and computes, starting from $U_0 = A$, the sequence

$$U_{k+1} = \tfrac{1}{2}\left(U_k + \left(U_k^*\right)^{-1}\right), \qquad k = 0, 1, 2, \ldots$$

The combination of inversion and Hermitian conjugation is chosen so that, in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.

This basic iteration may be refined to speed up the process: at every step, or at regular intervals, the iterate can be rescaled to $\gamma_k U_k$ so that its singular values are centered around 1, with the scaling factor $\gamma_k$ estimated from norms of $U_k$ and its inverse; this reduces the number of iterations needed. [8] [9]
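
A sketch of this scaled Newton iteration, using the Frobenius-norm scaling factor as one common choice (the tolerance and example matrix are arbitrary):

```python
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=100):
    """Approximate the unitary polar factor U of a square invertible A
    by the scaled Newton iteration U <- (g U + (g U)^{-*}) / 2."""
    U = A.astype(complex)
    for _ in range(max_iter):
        U_inv = np.linalg.inv(U)
        # Frobenius-norm scaling centers the singular values around 1.
        gamma = np.sqrt(np.linalg.norm(U_inv, "fro") / np.linalg.norm(U, "fro"))
        U_next = 0.5 * (gamma * U + U_inv.conj().T / gamma)
        converged = (np.linalg.norm(U_next - U, "fro")
                     <= tol * np.linalg.norm(U_next, "fro"))
        U = U_next
        if converged:
            break
    P = U.conj().T @ A                    # P = U* A is Hermitian PSD at convergence
    return U, 0.5 * (P + P.conj().T)      # symmetrize against round-off

# Example usage with an arbitrary invertible matrix.
A = np.array([[4.0, 1.0], [2.0, 3.0]])
U, P = polar_newton(A)
print(np.allclose(A, U @ P), np.allclose(U.conj().T @ U, np.eye(2)))
```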

References

  1. Hall 2015, Section 2.5.
  2. Hall 2015, Theorem 2.17.
  3. Hall 2015, Section 13.3.
  4. Higham, Nicholas J.; Schreiber, Robert S. (1990). "Fast polar decomposition of an arbitrary matrix". SIAM J. Sci. Stat. Comput. 11 (4): 648–655. doi:10.1137/0911038.
  5. Hall 2015, Lemma 2.18.
  6. Note how this implies, by the positivity of A*A, that the eigenvalues are all real and strictly positive.
  7. Sobczyk, G. (1995). "Hyperbolic Number Plane". College Mathematics Journal 26: 268–280.
  8. Higham, Nicholas J. (1986). "Computing the polar decomposition with applications". SIAM J. Sci. Stat. Comput. 7 (4): 1160–1174. doi:10.1137/0907079.
  9. Byers, Ralph; Xu, Hongguo (2008). "A New Scaling for Newton's Iteration for the Polar Decomposition and its Backward Stability". SIAM J. Matrix Anal. Appl. 30 (2): 822–843. doi:10.1137/070699895.