Weyl's inequality

In linear algebra, Weyl's inequality is a theorem about how the eigenvalues of a Hermitian matrix change when the matrix is perturbed; it can be used to estimate the eigenvalues of the perturbed matrix.

Weyl's inequality about perturbation

Let $A$ and $B$ be Hermitian matrices on an inner product space of dimension $n$, with spectra ordered in descending order: $\lambda_1(\cdot) \geq \lambda_2(\cdot) \geq \cdots \geq \lambda_n(\cdot)$. Note that these eigenvalues can be so ordered, because they are real (as eigenvalues of Hermitian matrices). [1]

Weyl inequality. For Hermitian $A$ and $B$ as above,

$$\lambda_{i+j-1}(A+B) \leq \lambda_i(A) + \lambda_j(B) \qquad \text{for } i+j-1 \leq n,$$

$$\lambda_{i+j-n}(A+B) \geq \lambda_i(A) + \lambda_j(B) \qquad \text{for } i+j-n \geq 1.$$
Proof

By the min-max theorem, it suffices to show that for any subspace $V$ of dimension $i+j-1$, there exists a unit vector $v \in V$ such that $v^*(A+B)v \leq \lambda_i(A) + \lambda_j(B)$.

By the min-max principle, there exists a subspace $V_A$ of codimension $i-1$ such that $v^* A v \leq \lambda_i(A)$ for all unit vectors $v \in V_A$. Similarly, there exists a subspace $V_B$ of codimension $j-1$ such that $v^* B v \leq \lambda_j(B)$ for all unit vectors $v \in V_B$. Now $V_A \cap V_B$ has codimension at most $i+j-2$, which is strictly less than the dimension $i+j-1$ of $V$, so it has nontrivial intersection with $V$. Let $v$ be a unit vector in $V \cap V_A \cap V_B$; then $v^*(A+B)v = v^*Av + v^*Bv \leq \lambda_i(A) + \lambda_j(B)$, so $v$ is the desired vector.

The second inequality is a corollary of the first, obtained by taking negatives; the computation below spells this out.
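For completeness, here is the negation step written out. The identity $\lambda_k(-M) = -\lambda_{n-k+1}(M)$ holds for any Hermitian $M$, since negating a matrix negates its eigenvalues and reverses their order. Applying the first inequality to $-A$ and $-B$ with indices $i' = n-i+1$ and $j' = n-j+1$ (the condition $i+j-n \geq 1$ guarantees $i'+j'-1 \leq n$) gives

$$\lambda_{i'+j'-1}(-A-B) \leq \lambda_{i'}(-A) + \lambda_{j'}(-B),$$

that is, $-\lambda_{i+j-n}(A+B) \leq -\lambda_i(A) - \lambda_j(B)$, which is exactly the second inequality.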

Weyl's inequality states that the spectrum of Hermitian matrices is stable under perturbation. Specifically, we have: [1]

Corollary (Spectral stability). For Hermitian $A$ and $B$ and every $1 \leq k \leq n$,

$$|\lambda_k(A+B) - \lambda_k(A)| \leq \|B\|_{\mathrm{op}},$$

where $\|\cdot\|_{\mathrm{op}}$ is the operator norm. This follows by taking $j = 1$ and $j = n$ (with $i = k$) in Weyl's inequality, since $\lambda_1(B) \leq \|B\|_{\mathrm{op}}$ and $\lambda_n(B) \geq -\|B\|_{\mathrm{op}}$.

In jargon, it says that the map $A \mapsto \lambda_k(A)$ is 1-Lipschitz on the space of Hermitian matrices equipped with the operator norm.
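The corollary is easy to test numerically. The following is a minimal NumPy sketch; the helper `random_hermitian`, the dimension, and the seed are illustrative choices, not part of the article:

```python
# Sanity check of spectral stability:
# |lambda_k(A + B) - lambda_k(A)| <= ||B||_op for Hermitian A, B.
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_hermitian(n):
    # Symmetrize a random complex matrix to obtain a Hermitian one.
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

A = random_hermitian(n)
B = random_hermitian(n)

# eigvalsh returns real eigenvalues in ascending order; flip to descending.
lam_A = np.linalg.eigvalsh(A)[::-1]
lam_AB = np.linalg.eigvalsh(A + B)[::-1]
op_norm_B = np.linalg.norm(B, 2)  # operator norm = largest singular value

# Weyl: every eigenvalue moves by at most ||B||_op.
assert np.all(np.abs(lam_AB - lam_A) <= op_norm_B + 1e-12)
print("max eigenvalue shift:", np.abs(lam_AB - lam_A).max(), "<=", op_norm_B)
```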

Weyl's inequality between eigenvalues and singular values

Let $A$ be an $n \times n$ matrix with singular values $\sigma_1(A) \geq \cdots \geq \sigma_n(A) \geq 0$ and eigenvalues ordered so that $|\lambda_1(A)| \geq \cdots \geq |\lambda_n(A)|$. Then

$$|\lambda_1(A) \lambda_2(A) \cdots \lambda_k(A)| \leq \sigma_1(A) \sigma_2(A) \cdots \sigma_k(A)$$

for $k = 1, \ldots, n$, with equality for $k = n$ (both sides then equal $|\det A|$). [2]
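As a quick illustration, the product inequality can be checked numerically on a random (generally non-normal) complex matrix; the sketch below uses an arbitrary size and seed, and the last line exhibits the equality at $k = n$:

```python
# Check Weyl's product inequality |lam_1 ... lam_k| <= sigma_1 ... sigma_k.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Moduli of eigenvalues in descending order; SVD already sorts descending.
lam = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
sigma = np.linalg.svd(A, compute_uv=False)

# Partial products of |eigenvalues| are dominated by those of singular values.
for k in range(1, n + 1):
    assert np.prod(lam[:k]) <= np.prod(sigma[:k]) * (1 + 1e-10)

# At k = n all three quantities agree: they equal |det(A)|.
print(np.prod(lam), np.prod(sigma), abs(np.linalg.det(A)))
```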

Applications

Estimating perturbations of the spectrum

Write $N = M + R$, where $M$ and $N$ are Hermitian matrices with eigenvalues $\mu_1 \geq \cdots \geq \mu_n$ and $\nu_1 \geq \cdots \geq \nu_n$ respectively, and $R$ is the perturbation. Assume that $R$ is small in the sense that its spectral norm satisfies $\|R\|_2 \leq \epsilon$ for some small $\epsilon > 0$. Then it follows that all the eigenvalues of $R$ are bounded in absolute value by $\epsilon$. Applying Weyl's inequality, it follows that the spectra of the Hermitian matrices $M$ and $N$ are close in the sense that [3]

$$|\mu_i - \nu_i| \leq \epsilon \qquad \text{for all } i = 1, \ldots, n.$$

Note, however, that this eigenvalue perturbation bound is generally false for non-Hermitian matrices (or more accurately, for non-normal matrices). For a counterexample, let $\epsilon > 0$ be arbitrarily small, and consider

$$M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad N = M + R = \begin{pmatrix} 0 & 1 \\ \epsilon & 0 \end{pmatrix},$$

whose eigenvalues $\mu_1 = \mu_2 = 0$ and $\nu_{1,2} = \pm\sqrt{\epsilon}$ do not satisfy $|\mu_i - \nu_i| \leq \epsilon$, since $\sqrt{\epsilon} > \epsilon$ for $0 < \epsilon < 1$.
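A few lines of NumPy make the failure concrete; the value of $\epsilon$ below is an arbitrary illustrative choice:

```python
# Non-normal counterexample: a perturbation of size eps moves the
# eigenvalues by sqrt(eps), far more than eps itself.
import numpy as np

eps = 1e-8
M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
N = np.array([[0.0, 1.0],
              [eps, 0.0]])

print(np.linalg.eigvals(M))      # [0. 0.]
print(np.linalg.eigvals(N))      # [ 1e-04 -1e-04], i.e. +/- sqrt(eps)
print(np.linalg.norm(N - M, 2))  # perturbation size: 1e-08
```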

Weyl's inequality for singular values

Let $M$ be a $p \times n$ matrix with $p \leq n$. Its singular values are the positive eigenvalues of the $(p+n) \times (p+n)$ Hermitian augmented matrix

$$\begin{pmatrix} 0 & M \\ M^* & 0 \end{pmatrix}.$$

Therefore, Weyl's eigenvalue perturbation inequality for Hermitian matrices extends naturally to perturbation of singular values. [1] This result gives the bound for the perturbation in the singular values of a matrix $M$ due to an additive perturbation $\Delta$:

$$|\sigma_k(M + \Delta) - \sigma_k(M)| \leq \sigma_1(\Delta),$$

where we note that the largest singular value $\sigma_1(\Delta)$ coincides with the spectral norm $\|\Delta\|_2$.
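This bound is again easy to check numerically; the following sketch uses arbitrary dimensions and a small random perturbation, chosen purely for illustration:

```python
# Check the singular-value perturbation bound
# |sigma_k(M + Delta) - sigma_k(M)| <= ||Delta||_2.
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 7
M = rng.standard_normal((p, n))
Delta = 0.01 * rng.standard_normal((p, n))

s_M = np.linalg.svd(M, compute_uv=False)
s_pert = np.linalg.svd(M + Delta, compute_uv=False)
bound = np.linalg.norm(Delta, 2)  # sigma_1(Delta), the spectral norm

# Every singular value moves by at most the spectral norm of the perturbation.
assert np.all(np.abs(s_pert - s_M) <= bound + 1e-12)
print("max singular-value shift:", np.abs(s_pert - s_M).max(), "<=", bound)
```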

Notes

  1. Tao, Terence (2010-01-13). "254A, Notes 3a: Eigenvalues and sums of Hermitian matrices". Terence Tao's blog. Retrieved 25 May 2015.
  2. Horn, Roger A.; Johnson, Charles R. (1991). Topics in Matrix Analysis (1st ed.). Cambridge University Press. p. 171.
  3. Weyl, Hermann (1912). "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung)". Mathematische Annalen. 71 (4): 441–479.
