Eigenvalue algorithm

In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.

Eigenvalues and eigenvectors

Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation [1]

(A − λI)ᵏv = 0,

where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real. When k = 1, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, Av = λv. Any eigenvalue λ of A has ordinary [note 1] eigenvectors associated to it, for if k is the smallest integer such that (A − λI)ᵏv = 0 for a generalized eigenvector v, then (A − λI)ᵏ⁻¹v is an ordinary eigenvector. The value k can always be taken as less than or equal to n. In particular, (A − λI)ⁿv = 0 for all generalized eigenvectors v associated with λ.

For each eigenvalue λ of A, the kernel ker(A − λI) consists of all eigenvectors associated with λ (along with 0), called the eigenspace of λ, while the vector space ker((A − λI)ⁿ) consists of all generalized eigenvectors, and is called the generalized eigenspace. The geometric multiplicity of λ is the dimension of its eigenspace. The algebraic multiplicity of λ is the dimension of its generalized eigenspace. The latter terminology is justified by the equation

pA(z) = det(zI − A) = (z − λ1)^α1 (z − λ2)^α2 ⋯ (z − λk)^αk,

where det is the determinant function, the λi are all the distinct eigenvalues of A and the αi are the corresponding algebraic multiplicities. The function pA(z) is the characteristic polynomial of A. So the algebraic multiplicity is the multiplicity of the eigenvalue as a zero of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to n, the degree of the characteristic polynomial. The equation pA(z) = 0 is called the characteristic equation, as its roots are exactly the eigenvalues of A. By the Cayley–Hamilton theorem, A itself obeys the same equation: pA(A) = 0. [note 2] As a consequence, the columns of the matrix ∏i≠j (A − λiI)^αi must be either 0 or generalized eigenvectors of the eigenvalue λj, since they are annihilated by (A − λjI)^αj. In fact, the column space is the generalized eigenspace of λj.
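
These relationships are easy to check numerically. The following is a minimal MATLAB/Octave sketch; the matrix A is an arbitrarily chosen illustrative example, not one taken from the article:

% Verify that the roots of the characteristic polynomial are the
% eigenvalues, and that A satisfies its own characteristic equation.
A = [2 1 0; 0 2 0; 0 0 3];     % example matrix with eigenvalues 2, 2, 3
c = poly(A);                   % coefficients of det(zI - A)
disp(sort(roots(c)))           % roots of p_A: 2, 2, 3
disp(sort(eig(A)))             % eigenvalues of A: the same values
disp(norm(polyvalm(c, A)))     % p_A(A) is the zero matrix, up to round-off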

Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of Cⁿ can be chosen consisting of generalized eigenvectors. More particularly, this basis v1, …, vn can be chosen and organized so that vectors associated with the same eigenvalue are grouped together, and each vi is either an ordinary eigenvector or satisfies (A − λiI)vi = vi−1, where λi is the eigenvalue associated with vi.

If these basis vectors are placed as the column vectors of a matrix V = [v1 v2 ⋯ vn], then V can be used to convert A to its Jordan normal form:

V⁻¹AV = [ λ1  β1   0    ⋯    0   ]
        [ 0   λ2   β2   ⋯    0   ]
        [ ⋮         ⋱    ⋱    ⋮   ]
        [ 0   ⋯    0   λn−1  βn−1 ]
        [ 0   ⋯    ⋯    0    λn  ]

where the λi are the eigenvalues, βi = 1 if (A − λi+1I)vi+1 = vi and βi = 0 otherwise.

More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then (W⁻¹AW − λI)ᵏ W⁻¹v = 0. Thus λ is an eigenvalue of W⁻¹AW with generalized eigenvector W⁻¹v. That is, similar matrices have the same eigenvalues.
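
The invariance of eigenvalues under similarity is easy to observe numerically. A minimal MATLAB/Octave sketch, with an arbitrary example matrix and an arbitrary invertible W:

% Similar matrices have the same eigenvalues.
A = [4 3; -2 -3];
W = [1 2; 0 1];                % any invertible matrix
disp(sort(eig(A)))             % eigenvalues of A
disp(sort(eig(W \ A * W)))     % eigenvalues of W^-1 * A * W: identical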

Normal, Hermitian, and real-symmetric matrices

The adjoint M* of a complex matrix M is the transpose of the conjugate of M: M* = M̄ᵀ. A square matrix A is called normal if it commutes with its adjoint: A*A = AA*. It is called Hermitian if it is equal to its adjoint: A* = A. All Hermitian matrices are normal. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric. When applied to column vectors, the adjoint can be used to define the canonical inner product on Cⁿ: w ⋅ v = w*v. [note 3] Normal, Hermitian, and real-symmetric matrices have several useful properties:

- Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
- Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
- Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.
- The null space and the column space of a normal matrix are orthogonal to each other.
- For any normal matrix A, Cⁿ has an orthonormal basis consisting of eigenvectors of A; the corresponding matrix of eigenvectors is unitary.
- The eigenvalues of a Hermitian matrix are real.
- If A is real, there is an orthonormal basis for Rⁿ consisting of eigenvectors of A if and only if A is symmetric.

It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.
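
Both the properties above and this triangular example can be checked numerically. A minimal MATLAB/Octave sketch, with arbitrarily chosen example matrices:

% A Hermitian matrix has real eigenvalues and an orthonormal eigenvector basis.
A = [2 1+1i; 1-1i 3];          % Hermitian: equal to its conjugate transpose
disp(norm(A - A'))             % 0, so A is Hermitian (and hence normal)
[V, D] = eig(A);
disp(diag(D))                  % eigenvalues are real
disp(norm(V'*V - eye(2)))      % eigenvector matrix is unitary, up to round-off

% A non-symmetric triangular matrix can still have all real eigenvalues.
T = [1 5; 0 2];
disp(eig(T))                   % 1 and 2, read off the diagonal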

Condition number

Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.

For the problem of solving the linear equation Av = b where A is invertible, the condition number κ(A⁻¹, b) is given by ||A||op ||A⁻¹||op, where || · ||op is the operator norm subordinate to the normal Euclidean norm on Cⁿ. Since this number is independent of b and is the same for A and A⁻¹, it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the ratio of the largest singular value of A to its smallest. If A is unitary, then ||A||op = ||A⁻¹||op = 1, so κ(A) = 1. For general matrices, the operator norm is often difficult to calculate. For this reason, other matrix norms are commonly used to estimate the condition number.
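
A minimal MATLAB/Octave sketch of this quantity, using an arbitrary example matrix; the built-in cond and svd functions are shown only for comparison:

% Condition number of a matrix with respect to the Euclidean norm.
A = [1 2; 3 4];
kappa = norm(A) * norm(inv(A));    % operator 2-norm of A times that of A^-1
disp(kappa)
disp(cond(A))                      % built-in condition number: same value
disp(max(svd(A)) / min(svd(A)))    % ratio of largest to smallest singular value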

For the eigenvalue problem, Bauer and Fike proved that if λ is an eigenvalue for a diagonalizable n × n matrix A with eigenvector matrix V, then the absolute error in calculating λ is bounded by the product of κ(V) and the absolute error in A. [2] As a result, the condition number for finding λ is κ(λ, A) = κ(V) = ||V||op ||V⁻¹||op. If A is normal, then V is unitary, and κ(λ, A) = 1. Thus the eigenvalue problem for all normal matrices is well-conditioned.

The condition number for the problem of finding the eigenspace of a normal matrix A corresponding to an eigenvalue λ has been shown to be inversely proportional to the minimum distance between λ and the other distinct eigenvalues of A. [3] In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.

Algorithms

The most reliable and most widely used algorithm for computing eigenvalues is John G. F. Francis' QR algorithm, considered one of the top ten algorithms of the 20th century. [4]

Any monic polynomial is the characteristic polynomial of its companion matrix. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The Abel–Ruffini theorem shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are iterative, producing better approximate solutions with each iteration.

Some algorithms produce every eigenvalue, others will produce a few, or only one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue λ of a matrix A has been identified, it can be used to either direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has λ as a solution.

Redirection is usually accomplished by shifting: replacing A with A − μI for some constant μ. The eigenvalue found for A − μI must have μ added back in to get an eigenvalue for A. For example, for power iteration, μ = λ. Power iteration finds the largest eigenvalue in absolute value, so even when λ is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, inverse iteration based methods find the eigenvalue closest to the shift, so μ is chosen well away from λ and hopefully closer to some other eigenvalue.
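
As an illustration, a minimal MATLAB/Octave sketch of power iteration with an explicit shift μ; the matrix, the shift, and the iteration count are arbitrary choices for this example:

% Power iteration on the shifted matrix B = A - mu*I finds the eigenvalue of B
% of largest magnitude; adding mu back gives an eigenvalue of A.
A = [4 1 0; 1 3 1; 0 1 2];       % symmetric example matrix
mu = 0;                          % shift (nonzero to steer away from a known eigenvalue)
B = A - mu * eye(3);
v = randn(3, 1);                 % arbitrary starting vector
for k = 1:200
    w = B * v;
    v = w / norm(w);             % apply the matrix and renormalize
end
lambda = v' * B * v + mu;        % Rayleigh quotient of B plus the shift
disp(lambda)
disp(eig(A))                     % reference: all eigenvalues of A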

Reduction can be accomplished by restricting A to the column space of the matrix A − λI, which A carries to itself. Since A − λI is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.

If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with μ set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to μ. For small matrices, an alternative is to look at the column space of the product of A − λ′I for each of the other eigenvalues λ′.
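
A minimal MATLAB/Octave sketch of this practice; the matrix and the shift are arbitrary, with μ = 2.9 playing the role of the approximate eigenvalue:

% Inverse iteration: with mu close to an eigenvalue, iterating with
% (A - mu*I)^-1 converges quickly to the corresponding eigenvector.
A = [4 1 0; 1 3 1; 0 1 2];
mu = 2.9;                        % approximation to the eigenvalue 3 of A
v = randn(3, 1);
for k = 1:20
    v = (A - mu * eye(3)) \ v;   % solve rather than forming the inverse
    v = v / norm(v);
end
disp(norm(A*v - (v'*A*v)*v))     % small residual: v is (nearly) an eigenvector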

A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others. [5] [6] [7] [8] [9] If A is an n × n normal matrix with eigenvalues λi(A) and corresponding unit eigenvectors vi whose component entries are vi,j, let Aj be the (n − 1) × (n − 1) matrix obtained by removing the j-th row and column from A, and let λk(Aj) be its k-th eigenvalue. Then

|vi,j|² ∏k≠i (λi(A) − λk(A)) = ∏k=1,…,n−1 (λi(A) − λk(Aj)).

If pA and pAj are the characteristic polynomials of A and Aj, the formula can be re-written as

|vi,j|² = pAj(λi(A)) / pA′(λi(A)),

assuming the derivative pA′ is not zero at λi(A).

Hessenberg and tridiagonal matrices

Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like Gaussian elimination to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An upper Hessenberg matrix is a square matrix for which all entries below the subdiagonal are zero. A lower Hessenberg matrix is one for which all entries above the superdiagonal are zero. Matrices that are both upper and lower Hessenberg are tridiagonal. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or Hermitian, then the resulting matrix will be tridiagonal.
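
The reduction is rarely coded by hand; a minimal MATLAB/Octave sketch using the built-in hess function (an arbitrary random example matrix) illustrates the zero pattern and the preservation of eigenvalues:

% Reduce a matrix to upper Hessenberg form with the same eigenvalues.
A = randn(5);                    % arbitrary example matrix
[P, H] = hess(A);                % A = P*H*P', with P unitary and H upper Hessenberg
disp(norm(tril(H, -2)))          % entries below the subdiagonal are zero
disp(norm(sort(abs(eig(A))) - sort(abs(eig(H)))))   % eigenvalues preserved, up to round-off

S = A + A';                      % a symmetric matrix reduces to tridiagonal form
T = hess(S);
disp(norm(tril(T, -2)) + norm(triu(T, 2)))          % T is tridiagonal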

When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix.

| Method | Applies to | Produces | Cost without similarity matrix | Cost with similarity matrix | Description |
| --- | --- | --- | --- | --- | --- |
| Householder transformations | General | Hessenberg | 2n³/3 + O(n²) [10]:474 | 4n³/3 + O(n²) [10]:474 | Reflect each column through a subspace to zero out its lower entries. |
| Givens rotations | General | Hessenberg | 4n³/3 + O(n²) [10]:470 | | Apply planar rotations to zero out individual entries. Rotations are ordered so that later ones do not cause zero entries to become non-zero again. |
| Arnoldi iteration | General | Hessenberg | | | Perform Gram–Schmidt orthogonalization on Krylov subspaces. |
| Lanczos algorithm | Hermitian | Tridiagonal | | | Arnoldi iteration for Hermitian matrices, with shortcuts. |

For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial. [11]
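
A minimal MATLAB/Octave sketch of this idea, combining a Sturm-style count of eigenvalues below a shift with bisection; the tridiagonal matrix, the bounds, and the target index k are arbitrary illustrative choices:

% Bisection for a symmetric tridiagonal matrix: the number of negative pivots
% in the LDL^T factorization of T - x*I equals the number of eigenvalues < x.
d = [4 3 2 1];  e = [1 1 1];     % diagonal and off-diagonal entries of T
k = 2;                           % look for the k-th smallest eigenvalue
lo = min(d) - 2*max(abs(e));     % rough Gershgorin-style bounds on the spectrum
hi = max(d) + 2*max(abs(e));
for iter = 1:60
    x = (lo + hi) / 2;
    % Sturm count: negative pivots of T - x*I
    % (a robust implementation also guards against zero pivots)
    c = 0;  q = d(1) - x;
    if q < 0, c = 1; end
    for i = 2:numel(d)
        q = d(i) - x - e(i-1)^2 / q;
        if q < 0, c = c + 1; end
    end
    if c >= k, hi = x; else lo = x; end
end
disp(hi)                                              % k-th smallest eigenvalue
disp(sort(eig(diag(d) + diag(e,1) + diag(e,-1))))     % reference values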

Iterative algorithms

Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.

| Method | Applies to | Produces | Cost per step | Convergence | Description |
| --- | --- | --- | --- | --- | --- |
| Lanczos algorithm | Hermitian | m largest/smallest eigenpairs | | | |
| Power iteration | general | eigenpair with largest value | O(n²) | linear | Repeatedly applies the matrix to an arbitrary starting vector and renormalizes. |
| Inverse iteration | general | eigenpair with value closest to μ | | linear | Power iteration for (A − μI)⁻¹ |
| Rayleigh quotient iteration | Hermitian | any eigenpair | | cubic | Power iteration for (A − μiI)⁻¹, where μi for each iteration is the Rayleigh quotient of the previous iteration. |
| Preconditioned inverse iteration [12] or LOBPCG algorithm | positive-definite real symmetric | eigenpair with value closest to μ | | | Inverse iteration using a preconditioner (an approximate inverse to A). |
| Bisection method | real symmetric tridiagonal | any eigenvalue | | linear | Uses the bisection method to find roots of the characteristic polynomial, supported by the Sturm sequence. |
| Laguerre iteration | real symmetric tridiagonal | any eigenvalue | | cubic [13] | Uses Laguerre's method to find roots of the characteristic polynomial, supported by the Sturm sequence. |
| QR algorithm | Hessenberg | all eigenvalues | O(n²) | cubic | Factors A = QR, where Q is orthogonal and R is triangular, then applies the next iteration to RQ (see the sketch after this table). |
| | | all eigenpairs | 6n³ + O(n²) | | |
| Jacobi eigenvalue algorithm | real symmetric | all eigenvalues | O(n³) | quadratic | Uses Givens rotations to attempt clearing all off-diagonal entries. This fails, but strengthens the diagonal. |
| Divide-and-conquer | Hermitian tridiagonal | all eigenvalues | O(n²) | | Divides the matrix into submatrices that are diagonalized then recombined. |
| | | all eigenpairs | (4/3)n³ + O(n²) | | |
| Homotopy method | real symmetric tridiagonal | all eigenpairs | O(n²) [14] | | Constructs a computable homotopy path from a diagonal eigenvalue problem. |
| Folded spectrum method | real symmetric | eigenpair with value closest to μ | | | Preconditioned inverse iteration applied to (A − μI)² |
| MRRR algorithm [15] | real symmetric tridiagonal | some or all eigenpairs | O(n²) | | "Multiple relatively robust representations" – performs inverse iteration on an LDLᵀ decomposition of the shifted matrix. |
| Gram iteration [16] | general | eigenpair with largest eigenvalue | | super-linear | Repeatedly computes the Gram product and rescales, deterministically. |
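
As an illustration of the QR algorithm row above, here is a minimal MATLAB/Octave sketch of unshifted QR iteration on an arbitrary symmetric example matrix; practical implementations add shifts and deflation, which this sketch omits:

% Unshifted QR iteration: A_{k+1} = R_k*Q_k is similar to A_k and, for this
% well-separated symmetric spectrum, converges toward diagonal form.
A = [4 1 0; 1 3 1; 0 1 2];
Ak = hess(A);                    % start from a similar Hessenberg (here tridiagonal) matrix
for k = 1:200
    [Q, R] = qr(Ak);
    Ak = R * Q;                  % similarity transform: Q'*Ak*Q
end
disp(diag(Ak))                   % approximate eigenvalues on the diagonal
disp(sort(eig(A), 'descend'))    % reference values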

Direct calculation

While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:

Triangular matrices

Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then det(λI − T) = ∏i (λ − Tii). Thus the eigenvalues of T are its diagonal entries.

Factorable polynomial equations

If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots.

For example, a projection is a square matrix P satisfying P² = P. The roots of the corresponding scalar polynomial equation, λ² = λ, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.

Another example is a matrix A that satisfies A² = α²I for some scalar α. The eigenvalues must be ±α. The projection operators

P+ = ½(I + A/α),   P− = ½(I − A/α)

satisfy

AP+ = αP+,   AP− = −αP−

and

P+ + P− = I,   P+P− = P−P+ = 0.

The column spaces of P+ and P− are the eigenspaces of A corresponding to +α and −α, respectively.
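
A minimal MATLAB/Octave check of these projection operators, using an arbitrary example matrix with A² = α²I (here a reflection with α = 1):

% A reflection-like matrix satisfying A^2 = alpha^2 * I, with alpha = 1.
A = [0 1; 1 0];  alpha = 1;
Pp = (eye(2) + A/alpha) / 2;     % P+
Pm = (eye(2) - A/alpha) / 2;     % P-
disp(norm(A*Pp - alpha*Pp))      % columns of P+ are eigenvectors for +alpha
disp(norm(A*Pm + alpha*Pm))      % columns of P- are eigenvectors for -alpha
disp(norm(Pp*Pm))                % the two projections annihilate each other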

2×2 matrices

For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While this is common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive.

For the 2×2 matrix

A = [ a  b ]
    [ c  d ],

the characteristic polynomial is

det(λI − A) = λ² − (a + d)λ + (ad − bc) = λ² − λ tr(A) + det(A).

Thus the eigenvalues can be found by using the quadratic formula:

λ = ( tr(A) ± √(tr(A)² − 4 det(A)) ) / 2.

Defining gap(A) = √(tr(A)² − 4 det(A)) to be the distance between the two eigenvalues, it is straightforward to calculate

∂λ/∂a = ½(1 ± (a − d)/gap(A)),   ∂λ/∂b = ±c/gap(A),

with similar formulas for c and d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.

Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If λ1, λ2 are the eigenvalues, then (A − λ1I)(A − λ2I) = (A − λ2I)(A − λ1I) = 0, so the columns of (A − λ2I) are annihilated by (A − λ1I) and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)

For example, suppose

A = [  4   3 ]
    [ −2  −3 ],

then tr(A) = 4 − 3 = 1 and det(A) = 4(−3) − 3(−2) = −6, so the characteristic equation is

0 = λ² − λ − 6 = (λ − 3)(λ + 2),

and the eigenvalues are 3 and −2. Now,

A − 3I = [  1   3 ]        A + 2I = [  6   3 ]
         [ −2  −6 ],                [ −2  −1 ].

In both matrices, the columns are multiples of each other, so either column can be used. Thus, (1, −2) can be taken as an eigenvector associated with the eigenvalue −2, and (3, −1) as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.
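
The worked example above can be reproduced in a few lines of MATLAB/Octave; variable names here are ad hoc:

% Eigenvalues of a 2x2 matrix from the quadratic formula, and eigenvectors
% from the columns of A - lambda*I for the *other* eigenvalue.
A = [4 3; -2 -3];
t = trace(A);  d = det(A);
gap = sqrt(t^2 - 4*d);
l1 = (t + gap)/2;  l2 = (t - gap)/2;       % eigenvalues 3 and -2
E2 = A - l1*eye(2);                        % its columns are eigenvectors for l2
E1 = A - l2*eye(2);                        % its columns are eigenvectors for l1
disp(norm(A*E2(:,1) - l2*E2(:,1)))         % ~0
disp(norm(A*E1(:,1) - l1*E1(:,1)))         % ~0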

Symmetric 3×3 matrices

The characteristic equation of a symmetric 3×3 matrix A is:

det(αI − A) = α³ − α² tr(A) + α (tr(A)² − tr(A²))/2 − det(A) = 0.

This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and β is an eigenvalue of B if and only if α = pβ + q is an eigenvalue of A. Letting q = tr(A)/3 and p = (tr((A − qI)²)/6)^(1/2) gives

det(βI − B) = β³ − 3β − det(B) = 0.

The substitution β = 2cos θ and some simplification using the identity cos 3θ = 4cos³θ − 3cos θ reduces the equation to cos 3θ = det(B)/2. Thus the eigenvalues of B are

β = 2cos( ⅓ arccos( det(B)/2 ) + 2kπ/3 ),   k = 0, 1, 2,

and the eigenvalues of A are α = pβ + q.
If det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm: [17]

% Given a real symmetric 3x3 matrix A, compute the eigenvalues
% Note that acos and cos operate on angles in radians

p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2
if (p1 == 0)
   % A is diagonal.
   eig1 = A(1,1)
   eig2 = A(2,2)
   eig3 = A(3,3)
else
   q = trace(A)/3               % trace(A) is the sum of all diagonal values
   p2 = (A(1,1) - q)^2 + (A(2,2) - q)^2 + (A(3,3) - q)^2 + 2*p1
   p = sqrt(p2 / 6)
   B = (1 / p) * (A - q * I)    % I is the identity matrix
   r = det(B) / 2

   % In exact arithmetic for a symmetric matrix  -1 <= r <= 1
   % but computation error can leave it slightly outside this range.
   if (r <= -1)
      phi = pi / 3
   elseif (r >= 1)
      phi = 0
   else
      phi = acos(r) / 3
   end

   % the eigenvalues satisfy eig3 <= eig2 <= eig1
   eig1 = q + 2*p*cos(phi)
   eig3 = q + 2*p*cos(phi + (2*pi/3))
   eig2 = 3*q - eig1 - eig3     % since trace(A) = eig1 + eig2 + eig3
end

Once again, the eigenvectors of A can be obtained by recourse to the Cayley–Hamilton theorem. If α1, α2, α3 are distinct eigenvalues of A, then (A − α1I)(A − α2I)(A − α3I) = 0. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if α3 = α1, then (A − α1I)²(A − α2I) = 0 and (A − α2I)(A − α1I)² = 0. Thus the generalized eigenspace of α1 is spanned by the columns of A − α2I while the ordinary eigenspace is spanned by the columns of (A − α1I)(A − α2I). The ordinary eigenspace of α2 is spanned by the columns of (A − α1I)².

For example, let

A = [  3   2   6 ]
    [  2   2   5 ]
    [ −2  −1  −4 ].

The characteristic equation is

0 = λ³ − λ² − λ + 1 = (λ − 1)²(λ + 1),

with eigenvalues 1 (of multiplicity 2) and −1. Calculating,

A + I = [  4   2   6 ]
        [  2   3   5 ]
        [ −2  −1  −3 ]

and

(A − I)² = [ −4   0  −8 ]
           [ −4   0  −8 ]
           [  4   0   8 ].

Thus (−4, −4, 4) is an eigenvector for −1, and (4, 2, −2) is an eigenvector for 1. (2, 3, −1) and (6, 5, −3) are both generalized eigenvectors associated with 1, either one of which could be combined with (−4, −4, 4) and (4, 2, −2) to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.
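
The same computation can be sketched in MATLAB/Octave:

% Eigenvectors of a 3x3 matrix with a repeated eigenvalue, via Cayley-Hamilton.
A = [3 2 6; 2 2 5; -2 -1 -4];            % eigenvalues 1, 1, -1
GE1 = A + eye(3);                        % columns span the generalized eigenspace of 1
E1  = (A - eye(3)) * (A + eye(3));       % columns span the ordinary eigenspace of 1
Em1 = (A - eye(3))^2;                    % columns span the eigenspace of -1
disp(norm(A*E1(:,2)  - E1(:,2)))         % ~0: an eigenvector for 1
disp(norm(A*Em1(:,1) + Em1(:,1)))        % ~0: an eigenvector for -1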

Eigenvectors of normal 3×3 matrices

If a 3×3 matrix A is normal, then the cross-product can be used to find eigenvectors. If λ is an eigenvalue of A, then the null space of A − λI is perpendicular to its column space. The cross product of two independent columns of A − λI will be in the null space. That is, it will be an eigenvector associated with λ. Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it.

If A − λI does not contain two independent columns but is not 0, the cross-product can still be used. In this case λ is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose v is a non-zero column of A − λI. Choose an arbitrary vector u not parallel to v. Then v × u and (v × u) × v will be perpendicular to v and thus will be eigenvectors associated with λ.

This does not work when A is not normal, as the null space and column space do not need to be perpendicular for such matrices.
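
A minimal MATLAB/Octave sketch of the cross-product construction, using an arbitrary symmetric (hence normal) example matrix:

% For a normal 3x3 matrix, the cross product of two independent columns of
% A - lambda*I is an eigenvector for lambda.
A = [2 1 0; 1 3 1; 0 1 4];       % symmetric, hence normal
lambda = max(eig(A));            % pick one eigenvalue
M = A - lambda*eye(3);
v = cross(M(:,1), M(:,2));       % perpendicular to the column space of M
disp(norm(A*v - lambda*v) / norm(v))   % ~0: v is an eigenvector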

See also

Notes

  1. The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".
  2. where the constant term is multiplied by the identity matrix I.
  3. This ordering of the inner product (with the conjugate-linear position on the left) is preferred by physicists. Algebraists often place the conjugate-linear position on the right: w ⋅ v = v*w.

References

  1. Axler, Sheldon (1995), "Down with Determinants!" (PDF), American Mathematical Monthly, 102 (2): 139–154, doi:10.2307/2975348, JSTOR   2975348, archived from the original (PDF) on 2012-09-13, retrieved 2012-07-31
  2. F. L. Bauer; C. T. Fike (1960), "Norms and exclusion theorems", Numer. Math., 2: 137–141, doi:10.1007/bf01386217, S2CID   121278235
  3. S.C. Eisenstat; I.C.F. Ipsen (1998), "Relative Perturbation Results for Eigenvalues and Eigenvectors of Diagonalisable Matrices", BIT, 38 (3): 502–9, doi:10.1007/bf02510256, S2CID   119886389
  4. J. Dongarra and F. Sullivan (2000). "Top ten algorithms of the century". Computing in Science and Engineering. 2: 22-23. doi:10.1109/MCISE.2000.814652.
  5. Thompson, R. C. (June 1966). "Principal submatrices of normal and Hermitian matrices". Illinois Journal of Mathematics. 10 (2): 296–308. doi: 10.1215/ijm/1256055111 .
  6. Peter Nylen; Tin-Yau Tam; Frank Uhlig (1993). "On the eigenvalues of principal submatrices of normal, hermitian and symmetric matrices". Linear and Multilinear Algebra. 36 (1): 69–78. doi:10.1080/03081089308818276.
  7. Bebiano N, Furtado S, da Providência J (2011). "On the eigenvalues of principal submatrices of J-normal matrices". Linear Algebra and Its Applications. 435 (12): 3101–3114. doi: 10.1016/j.laa.2011.05.033 .
  8. Forrester PJ, Zhang J (2021). "Corank-1 projections and the randomised Horn problem". Tunisian Journal of Mathematics. 3: 55–73. arXiv: 1905.05314 . doi:10.2140/tunis.2021.3.55. S2CID   153312446.
  9. Denton PB, Parke SJ, Tao T, Zhang X (2021). "Eigenvectors from eigenvalues: A survey of a basic identity in linear algebra". Bulletin of the American Mathematical Society. 59: 1. arXiv: 1908.03795 . doi:10.1090/bull/1722. S2CID   213918682.
  10. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1992). Numerical Recipes in C (2nd ed.). Cambridge University Press. ISBN 978-0-521-43108-8.
  11. Coakley, Ed S. (May 2013), "A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices.", Applied and Computational Harmonic Analysis , 34 (3): 379–414, doi:10.1016/j.acha.2012.06.003
  12. Neymeyr, K. (2006), "A geometric theory for preconditioned inverse iteration IV: On the fastest convergence cases.", Linear Algebra Appl., 415 (1): 114–139, doi: 10.1016/j.laa.2005.06.022
  13. Li, T. Y.; Zeng, Zhonggang (1992), "Laguerre's Iteration In Solving The Symmetric Tridiagonal Eigenproblem - Revisited", SIAM Journal on Scientific Computing
  14. Chu, Moody T. (1988), "A Note on the Homotopy Method for Linear Algebraic Eigenvalue Problems", Linear Algebra Appl., 105: 225–236, doi: 10.1016/0024-3795(88)90015-8
  15. Dhillon, Inderjit S.; Parlett, Beresford N.; Vömel, Christof (2006), "The Design and Implementation of the MRRR Algorithm" (PDF), ACM Transactions on Mathematical Software , 32 (4): 533–560, doi:10.1145/1186785.1186788, S2CID   2410736
  16. Delattre, B.; Barthélemy, Q.; Araujo, A.; Allauzen, A. (2023), "Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration", Proceedings of the 40th International Conference on Machine Learning
  17. Smith, Oliver K. (April 1961), "Eigenvalues of a symmetric 3 × 3 matrix.", Communications of the ACM , 4 (4): 168, doi: 10.1145/355578.366316 , S2CID   37815415

Further reading