Hautus lemma

In control theory, and in particular when studying the properties of a linear time-invariant system in state-space form, the Hautus lemma (named after Malo L. J. Hautus), also commonly known as the Popov–Belevitch–Hautus test or PBH test,[1][2] is a powerful tool.

A special case of this result appeared first in 1963 in a paper by Elmer G. Gilbert,[1] and was later expanded to the current PBH test with contributions by Vasile M. Popov in 1966,[3][4] Vitold Belevitch in 1968,[5] and Malo Hautus in 1969,[5] who emphasized its applicability in proving results for linear time-invariant systems.

Statement

There exist multiple forms of the lemma:

Hautus Lemma for controllability

The Hautus lemma for controllability says that given a square matrix $\mathbf{A} \in M_n(\Re)$ and a $\mathbf{B} \in M_{n \times m}(\Re)$, where $\Re \in \{\mathbb{R}, \mathbb{C}\}$, the following are equivalent (a numerical sketch of the eigenvalue test follows the list):

  1. The pair $(\mathbf{A}, \mathbf{B})$ is controllable
  2. For all $\lambda \in \mathbb{C}$ it holds that $\operatorname{rank}[\lambda \mathbf{I} - \mathbf{A}, \mathbf{B}] = n$
  3. For all $\lambda \in \mathbb{C}$ that are eigenvalues of $\mathbf{A}$ it holds that $\operatorname{rank}[\lambda \mathbf{I} - \mathbf{A}, \mathbf{B}] = n$
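
The third condition gives a finite test, since only the eigenvalues of $\mathbf{A}$ need to be checked. Below is a minimal numerical sketch assuming NumPy; the function name pbh_controllable, the example matrices, and the rank tolerance are illustrative choices, not part of the lemma.

```python
import numpy as np

def pbh_controllable(A, B, tol=1e-9):
    """PBH test: (A, B) is controllable iff rank [lambda*I - A, B] = n
    for every eigenvalue lambda of A (condition 3 above)."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False  # lambda is an uncontrollable mode
    return True

# Example: the mode at eigenvalue 2 receives no input through B.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
print(pbh_controllable(A, B))  # False
```

In floating-point arithmetic the verdict hinges on the rank tolerance, so a check like this is best treated as a diagnostic rather than a certificate.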

Hautus Lemma for stabilizability

The Hautus lemma for stabilizability says that given a square matrix $\mathbf{A} \in M_n(\Re)$ and a $\mathbf{B} \in M_{n \times m}(\Re)$, where $\Re \in \{\mathbb{R}, \mathbb{C}\}$, the following are equivalent (see the sketch after the list):

  1. The pair $(\mathbf{A}, \mathbf{B})$ is stabilizable
  2. For all $\lambda \in \mathbb{C}$ that are eigenvalues of $\mathbf{A}$ and for which $\operatorname{Re}(\lambda) \ge 0$ it holds that $\operatorname{rank}[\lambda \mathbf{I} - \mathbf{A}, \mathbf{B}] = n$
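
The only change from the controllability sketch is that eigenvalues in the open left half-plane are skipped, since stable modes need not be controllable (same illustrative NumPy assumptions as above):

```python
import numpy as np

def pbh_stabilizable(A, B, tol=1e-9):
    """PBH test: the rank condition is required only for eigenvalues
    of A with Re(lambda) >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:  # stable modes are exempt
            M = np.hstack([lam * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M, tol=tol) < n:
                return False
    return True
```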

Hautus Lemma for observability

The Hautus lemma for observability says that given a square matrix $\mathbf{A} \in M_n(\Re)$ and a $\mathbf{C} \in M_{m \times n}(\Re)$, where $\Re \in \{\mathbb{R}, \mathbb{C}\}$, the following are equivalent (the dual test is sketched after the list):

  1. The pair $(\mathbf{A}, \mathbf{C})$ is observable.
  2. For all $\lambda \in \mathbb{C}$ it holds that $\operatorname{rank}\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A} \\ \mathbf{C} \end{bmatrix} = n$
  3. For all $\lambda \in \mathbb{C}$ that are eigenvalues of $\mathbf{A}$ it holds that $\operatorname{rank}\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A} \\ \mathbf{C} \end{bmatrix} = n$
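
Observability is the dual statement: the PBH matrix is stacked vertically, and the same check could equally be phrased as controllability of $(\mathbf{A}^{T}, \mathbf{C}^{T})$. A minimal sketch under the same NumPy assumptions, with an illustrative function name:

```python
import numpy as np

def pbh_observable(A, C, tol=1e-9):
    """PBH test: (A, C) is observable iff rank [lambda*I - A; C] = n
    for every eigenvalue lambda of A (condition 3 above)."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.vstack([lam * np.eye(n) - A, C])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False  # lambda is an unobservable mode
    return True
```

By duality, pbh_observable(A, C) returns the same verdict as pbh_controllable(A.T, C.T) from the controllability sketch.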

Hautus Lemma for detectability

The Hautus lemma for detectability says that given a square matrix $\mathbf{A} \in M_n(\Re)$ and a $\mathbf{C} \in M_{m \times n}(\Re)$, where $\Re \in \{\mathbb{R}, \mathbb{C}\}$, the following are equivalent (a sketch follows the list):

  1. The pair $(\mathbf{A}, \mathbf{C})$ is detectable
  2. For all $\lambda \in \mathbb{C}$ that are eigenvalues of $\mathbf{A}$ and for which $\operatorname{Re}(\lambda) \ge 0$ it holds that $\operatorname{rank}\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A} \\ \mathbf{C} \end{bmatrix} = n$
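
Detectability combines the two modifications above: the stacked PBH matrix is tested, but only at eigenvalues with $\operatorname{Re}(\lambda) \ge 0$ (same illustrative assumptions as before):

```python
import numpy as np

def pbh_detectable(A, C, tol=1e-9):
    """PBH test: every eigenvalue of A with Re(lambda) >= 0 must pass
    the rank condition; equivalently, (A^T, C^T) is stabilizable."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:  # stable modes are exempt
            M = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M, tol=tol) < n:
                return False
    return True
```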

References

Notes

  1. Hespanha, Joao (2018). Linear Systems Theory (Second ed.). Princeton University Press. ISBN 9780691179575.
  2. Bernstein, Dennis S. (2018). Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas (Revised and expanded ed.). Princeton University Press. ISBN 9780691151205.
  3. Popov, Vasile Mihai (1966). Hiperstabilitatea sistemelor automate [Hyperstability of Control Systems]. Editura Academiei Republicii Socialiste România.
  4. Popov, V.M. (1973). Hyperstability of Control Systems. Berlin: Springer-Verlag.
  5. Belevitch, V. (1968). Classical Network Theory. San Francisco: Holden–Day.