Stable polynomial

In the context of the characteristic polynomial of a differential equation or difference equation, a polynomial is said to be stable if either:

- all of its roots lie in the open left half-plane of the complex plane, or
- all of its roots lie in the open unit disk.

The first condition characterizes stability for continuous-time linear systems, and the second for discrete-time linear systems. A polynomial with the first property is sometimes called a Hurwitz polynomial, and one with the second property a Schur polynomial. Stable polynomials arise in control theory and in the mathematical theory of differential and difference equations. A linear, time-invariant system (see LTI system theory) is said to be BIBO stable if every bounded input produces a bounded output. A linear system is BIBO stable if its characteristic polynomial is stable: the denominator of its transfer function must be Hurwitz stable if the system operates in continuous time and Schur stable if it operates in discrete time. In practice, stability is determined by applying any one of several stability criteria.
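
As a minimal illustration of the two notions of stability themselves (not of any particular stability criterion), one can check root locations numerically. The sketch below assumes numpy; the function names are illustrative, and a small tolerance guards against round-off:

    import numpy as np

    def is_hurwitz_stable(coeffs, tol=1e-12):
        """All roots strictly in the open left half-plane?"""
        return bool(np.all(np.roots(coeffs).real < -tol))

    def is_schur_stable(coeffs, tol=1e-12):
        """All roots strictly inside the open unit disk?"""
        return bool(np.all(np.abs(np.roots(coeffs)) < 1 - tol))

    print(is_hurwitz_stable([1, 3, 2]))   # s^2 + 3s + 2 = (s+1)(s+2), roots -1, -2: True
    print(is_schur_stable([1, -1, -2]))   # z^2 - z - 2 = (z-2)(z+1) has the root 2: False

Root-finding of this kind is numerically delicate for high-degree polynomials, which is one reason the coefficient-based criteria discussed below are used in practice.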

Properties

The Routh–Hurwitz theorem provides an algorithm for determining whether a given polynomial is Hurwitz stable; it is implemented in the Routh–Hurwitz and Liénard–Chipart tests. To test whether a given polynomial P of degree d is Schur stable, it suffices to apply this theorem to the transformed polynomial

    Q(z) = (z - 1)^d P((z + 1)/(z - 1)),

obtained after the Möbius transformation z ↦ (z + 1)/(z - 1), which maps the left half-plane to the open unit disc: P is Schur stable if and only if Q is Hurwitz stable and P(1) ≠ 0. For higher-degree polynomials, the extra computation involved in this mapping can be avoided by testing Schur stability directly with the Schur–Cohn test, the Jury test or the Bistritz test.

A necessary condition: a Hurwitz stable polynomial (with real coefficients) has coefficients of the same sign, either all positive or all negative.

A sufficient condition: a polynomial f(z) = a_n z^n + ... + a_1 z + a_0 with real coefficients satisfying a_n > a_{n-1} > ... > a_0 > 0 is Schur stable.

Product rule: two polynomials f and g are stable (of the same type) if and only if the product fg is stable. Moreover, the Hadamard (coefficient-wise) product of two Hurwitz stable polynomials is again Hurwitz stable (Garloff & Wagner 1996).
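
The Möbius-transformation test can be sketched in a few lines. The following is a minimal illustration, assuming numpy and using illustrative function names; it builds Q by expanding each term c·(z + 1)^k (z - 1)^(d-k) of the transformed polynomial, and then checks Q for Hurwitz stability by computing roots directly (rather than by a determinant-based criterion):

    import numpy as np

    def mobius_transform(p):
        """Coefficients (highest degree first) of
        Q(z) = (z - 1)^d * P((z + 1)/(z - 1)) for P given by p."""
        d = len(p) - 1
        q = np.zeros(1)
        for i, c in enumerate(p):
            k = d - i                   # c is the coefficient of w^k in P(w)
            term = np.array([float(c)])
            for _ in range(k):
                term = np.polymul(term, [1.0, 1.0])    # factor (z + 1)
            for _ in range(d - k):
                term = np.polymul(term, [1.0, -1.0])   # factor (z - 1)
            q = np.polyadd(q, term)
        return q

    def is_schur_stable_via_mobius(p, tol=1e-12):
        """P is Schur stable iff Q is Hurwitz stable and P(1) != 0."""
        if abs(np.polyval(p, 1.0)) <= tol:
            return False                # root at z = 1, or degree drop in Q
        q = mobius_transform(p)
        return bool(np.all(np.roots(q).real < -tol))

    print(is_schur_stable_via_mobius([4.0, 3.0, 2.0, 1.0]))       # True
    print(is_schur_stable_via_mobius([1.0, 1.0, 1.0, 1.0, 1.0]))  # False (boundary case)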

Examples

The polynomial 4z^3 + 3z^2 + 2z + 1 is Schur stable, since its coefficients satisfy the sufficient condition 4 > 3 > 2 > 1 > 0.

The polynomial z^4 + z^3 + z^2 + z + 1, all of whose coefficients are positive, is neither Hurwitz stable nor Schur stable. Note here that

    z^4 + z^3 + z^2 + z + 1 = (z^5 - 1)/(z - 1),

so its roots are the four primitive fifth roots of unity. It is a "boundary case" for Schur stability because its roots lie on the unit circle; two of them (e^{±2πi/5}) also have positive real part, so the polynomial is not Hurwitz stable. The example also shows that the necessary (positivity) conditions stated above for Hurwitz stability are not sufficient.

Stable matrices

Just as stable polynomials are crucial for assessing the stability of systems described by polynomials, stable matrices play a vital role in evaluating the stability of systems represented by matrices.

Hurwitz matrix

A square matrix A is called a Hurwitz matrix if every eigenvalue of A has strictly negative real part.
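
A direct numerical check of this definition might look as follows (a sketch assuming numpy; the function name is illustrative):

    import numpy as np

    def is_hurwitz_matrix(A, tol=1e-12):
        """True if every eigenvalue of A has strictly negative real part."""
        return bool(np.all(np.linalg.eigvals(A).real < -tol))

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # companion matrix of s^2 + 3s + 2
    print(is_hurwitz_matrix(A))    # True: eigenvalues are -1 and -2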

Schur matrix

Schur matrices are the discrete-time analogue of Hurwitz matrices. A matrix A is a Schur (stable) matrix if all of its eigenvalues are located in the open unit disk of the complex plane.
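
The corresponding check for the discrete-time case compares the spectral radius with 1 (again a numpy-based sketch with an illustrative function name):

    import numpy as np

    def is_schur_matrix(A, tol=1e-12):
        """True if every eigenvalue of A lies strictly inside the unit disk,
        i.e. the spectral radius of A is less than 1."""
        return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1 - tol)

    A = np.array([[0.5, 0.1],
                  [0.0, -0.4]])    # triangular: eigenvalues 0.5 and -0.4
    print(is_schur_matrix(A))      # True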

See also

- Hurwitz polynomial
- Hurwitz matrix
- Routh–Hurwitz stability criterion
- Routh–Hurwitz theorem
- Jury stability criterion
- Bistritz criterion
- Liénard–Chipart criterion
- Stability theory

References

  1. Garloff, Jürgen; Wagner, David G. (1996). "Hadamard Products of Stable Polynomials Are Stable". Journal of Mathematical Analysis and Applications. 202 (3): 797–809. doi:10.1006/jmaa.1996.0348.