Bendixson's inequality


In mathematics, Bendixson's inequality is a quantitative result in matrix theory derived by Ivar Bendixson in 1902. [1][2] The inequality puts limits on the imaginary and real parts of the characteristic roots (eigenvalues) of real matrices. [3] A special case of this inequality leads to the result that the characteristic roots of a real symmetric matrix are always real.

The inequality relating to the imaginary parts of the characteristic roots of real matrices (Theorem I in [1]) is stated as follows:

Let $A = (a_{ij})$ be a real $n \times n$ matrix and let

$$\alpha = \max_{1 \le i, j \le n} \frac{1}{2} \left| a_{ij} - a_{ji} \right|.$$

If $\lambda$ is any characteristic root of $A$, then

$$\left| \operatorname{Im}(\lambda) \right| \le \alpha \sqrt{\frac{n(n-1)}{2}}. \quad [4]$$

If $A$ is symmetric then $\alpha = 0$, and consequently the inequality implies that $\lambda$ must be real.
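
The bound is easy to check numerically. The following sketch (assuming NumPy is available; the matrix `A` is an arbitrary illustration, not taken from the source) computes $\alpha$ from the skew-symmetric part of the matrix and compares the imaginary parts of the eigenvalues against Bendixson's bound:

```python
import numpy as np

# Arbitrary real 3x3 matrix used purely for illustration.
A = np.array([[1.0, 2.0, 0.0],
              [-3.0, 1.0, 4.0],
              [0.0, -1.0, 2.0]])
n = A.shape[0]

# alpha = max over i,j of (1/2)|a_ij - a_ji|: the largest entry
# (in absolute value) of the skew-symmetric part of A.
alpha = np.max(np.abs(A - A.T)) / 2

bound = alpha * np.sqrt(n * (n - 1) / 2)
eigenvalues = np.linalg.eigvals(A)

# Theorem I: |Im(lambda)| <= alpha * sqrt(n(n-1)/2) for every root.
assert all(abs(lam.imag) <= bound + 1e-12 for lam in eigenvalues)
print(f"Bendixson bound on |Im(lambda)|: {bound:.4f}")
print("eigenvalues:", np.round(eigenvalues, 4))
```

For a symmetric input the skew-symmetric part vanishes, so `alpha` is zero and the assertion forces every eigenvalue to have zero imaginary part, matching the remark above.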

The inequality relating to the real parts of the characteristic roots of real matrices (Theorem II in [1]) is stated as follows:

Let $m$ and $M$ be the smallest and largest characteristic roots of $\tfrac{1}{2}\left(A + A^{T}\right)$; then

$$m \le \operatorname{Re}(\lambda) \le M.$$
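
A similar numerical sketch (again assuming NumPy, with the same arbitrary example matrix as above) shows how the extreme eigenvalues of the symmetric part $\tfrac{1}{2}(A + A^{T})$ bracket the real parts of the eigenvalues of $A$:

```python
import numpy as np

# The same arbitrary illustrative matrix as above.
A = np.array([[1.0, 2.0, 0.0],
              [-3.0, 1.0, 4.0],
              [0.0, -1.0, 2.0]])

# m and M: smallest and largest characteristic roots of (A + A^T)/2.
sym_part = (A + A.T) / 2
roots = np.linalg.eigvalsh(sym_part)  # real, sorted ascending
m, M = roots[0], roots[-1]

# Theorem II: m <= Re(lambda) <= M for every eigenvalue of A.
for lam in np.linalg.eigvals(A):
    assert m - 1e-12 <= lam.real <= M + 1e-12
print(f"m = {m:.4f}, M = {M:.4f}")
print("Re(eigenvalues):", np.round(np.linalg.eigvals(A).real, 4))
```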


References

  1. Bendixson, Ivar (1902). "Sur les racines d'une équation fondamentale". Acta Mathematica. 25: 359–365. doi:10.1007/bf02419030. ISSN 0001-5962. S2CID 121330188.
  2. Mirsky, L. (3 December 2012). An Introduction to Linear Algebra. Courier Corporation. p. 210. ISBN 9780486166445. Retrieved 14 October 2018.
  3. Farnell, A. B. (1944). "Limits for the characteristic roots of a matrix". Bulletin of the American Mathematical Society. 50 (10): 789–794. doi:10.1090/s0002-9904-1944-08239-6. ISSN 0273-0979.
  4. Axelsson, Owe (29 March 1996). Iterative Solution Methods. Cambridge University Press. p. 633. ISBN 9780521555692. Retrieved 14 October 2018.