Stable polynomial

In the context of the characteristic polynomial of a differential equation or difference equation, a polynomial is said to be stable if either:

- all of its roots lie in the open left half of the complex plane, or
- all of its roots lie in the open unit disk.

The first condition provides stability for continuous-time linear systems, and the second relates to stability of discrete-time linear systems. A polynomial with the first property is sometimes called a Hurwitz polynomial and one with the second property a Schur polynomial. Stable polynomials arise in control theory and in the mathematical theory of differential and difference equations. A linear, time-invariant system (see LTI system theory) is said to be BIBO stable if every bounded input produces a bounded output. A linear system is BIBO stable if its characteristic polynomial is stable. The denominator of its transfer function is required to be Hurwitz stable if the system is in continuous time and Schur stable if it is in discrete time. In practice, stability is determined by applying any one of several stability criteria.
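
Either condition can also be checked numerically from the roots themselves. The following sketch (assuming NumPy is available; is_hurwitz_stable and is_schur_stable are illustrative names, not library functions, and numerical root finding is unreliable for polynomials whose roots approach the stability boundary, which is why the algebraic criteria below are preferred in practice) tests both properties:

    import numpy as np

    def is_hurwitz_stable(coeffs):
        # All roots strictly in the open left half-plane (continuous time).
        return bool(np.all(np.real(np.roots(coeffs)) < 0))

    def is_schur_stable(coeffs):
        # All roots strictly inside the open unit disk (discrete time).
        return bool(np.all(np.abs(np.roots(coeffs)) < 1))

    # s^2 + 3s + 2 = (s + 1)(s + 2), roots -1 and -2: Hurwitz stable
    print(is_hurwitz_stable([1, 3, 2]))       # True
    # z^2 - 0.25, roots +0.5 and -0.5: Schur stable but not Hurwitz stable
    print(is_schur_stable([1, 0, -0.25]))     # True
    print(is_hurwitz_stable([1, 0, -0.25]))   # False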

Properties

The Routh–Hurwitz theorem provides an algorithm for determining whether a given polynomial is Hurwitz stable, which is implemented in the Routh–Hurwitz and Liénard–Chipart tests.

To test whether a given polynomial P of degree d is Schur stable, it suffices to apply this theorem to the transformed polynomial

    Q(z) = (z − 1)^d P((z + 1)/(z − 1))

obtained after the Möbius transformation z ↦ (z + 1)/(z − 1), which maps the left half-plane to the open unit disc: P is Schur stable if and only if Q is Hurwitz stable and P(1) ≠ 0. For higher degree polynomials the extra computation involved in this mapping can be avoided by testing Schur stability directly by the Schur–Cohn test, the Jury test or the Bistritz test.

Necessary condition: a Hurwitz stable polynomial (with real coefficients) has coefficients of the same sign, either all positive or all negative.

Sufficient condition: a polynomial f(z) = a_0 + a_1 z + ... + a_n z^n with real coefficients such that a_n > a_{n-1} > ... > a_0 > 0 is Schur stable.

Product rule: two polynomials f and g are stable (of the same type) if and only if their product fg is stable.

Hadamard product: the Hadamard (coefficient-wise) product of two Hurwitz stable polynomials is again Hurwitz stable. [1]
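
As an illustration of the Möbius transformation property, Q can be expanded term by term from the coefficients of P. The sketch below assumes NumPy; mobius_transform and poly_pow are illustrative helper names, not standard functions:

    import numpy as np

    def poly_pow(p, n):
        # n-fold product of the polynomial p with itself (coefficients highest first).
        out = np.array([1.0])
        for _ in range(n):
            out = np.polymul(out, p)
        return out

    def mobius_transform(p):
        # Coefficients of Q(z) = (z - 1)^d P((z + 1)/(z - 1)), p given highest first.
        d = len(p) - 1
        q = np.zeros(d + 1)
        for k, c in enumerate(p):   # c multiplies z^(d - k) in P
            e = d - k
            q += c * np.polymul(poly_pow([1.0, 1.0], e),       # (z + 1)^e
                                poly_pow([1.0, -1.0], d - e))  # (z - 1)^(d - e)
        return q

    p = [1.0, -0.5]           # P(z) = z - 0.5, root 0.5: Schur stable
    q = mobius_transform(p)   # Q(z) = 0.5z + 1.5, root -3: Hurwitz stable
    print(np.roots(q), np.polyval(p, 1.0))   # [-3.]  0.5  (P(1) != 0)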

Examples

The polynomial 4z^3 + 3z^2 + 2z + 1 is Schur stable because it satisfies the sufficient condition above. The polynomial z^10 is Schur stable (all of its roots equal 0) even though it does not satisfy the sufficient condition. By contrast, the polynomial z^4 + z^3 + z^2 + z + 1, all of whose coefficients are positive, is neither Hurwitz nor Schur stable. Note here that

    z^4 + z^3 + z^2 + z + 1 = (z^5 − 1)/(z − 1),

so its roots are the four fifth roots of unity other than 1. It is a "boundary case" for Schur stability because its roots lie on the unit circle. The example also shows that the necessary (positivity) conditions stated above for Hurwitz stability are not sufficient.
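
A quick numerical confirmation of this boundary case (a small sketch assuming NumPy):

    import numpy as np

    # P(z) = z^4 + z^3 + z^2 + z + 1 = (z^5 - 1)/(z - 1)
    roots = np.roots([1, 1, 1, 1, 1])
    print(np.round(np.abs(roots), 10))   # all 1: on the unit circle, not Schur stable
    print(np.real(roots) > 0)            # two roots in the right half-plane: not Hurwitz stable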

Stable matrices

Just as stable polynomials are crucial for assessing the stability of systems described by polynomials, stable matrices play a vital role in evaluating the stability of systems represented by matrices.

Hurwitz matrix

A square matrix A is called a Hurwitz matrix if every eigenvalue of A has strictly negative real part.

Schur matrix

Schur matrices are the discrete-time analogue of Hurwitz matrices. A matrix A is a Schur (stable) matrix if all of its eigenvalues are located in the open unit disk of the complex plane.
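
Both matrix conditions reduce to an eigenvalue computation. A minimal NumPy sketch (the function names are illustrative):

    import numpy as np

    def is_hurwitz_matrix(A):
        # All eigenvalues in the open left half-plane (continuous time).
        return bool(np.all(np.real(np.linalg.eigvals(A)) < 0))

    def is_schur_matrix(A):
        # Spectral radius strictly less than 1 (discrete time).
        return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1)

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # eigenvalues -1 and -2
    print(is_hurwitz_matrix(A), is_schur_matrix(A))  # True False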

Related Research Articles

Complex number: Number with a real and an imaginary part

In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = −1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world.

Discrete Fourier transform: Type of Fourier transform in discrete mathematics

In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
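
A minimal NumPy illustration of the DFT and its inverse round trip (np.fft implements the transform described above):

    import numpy as np

    x = np.array([1.0, 2.0, 1.0, 0.0])   # four equally spaced samples
    X = np.fft.fft(x)                    # DFT: samples of the DTFT of x
    x_back = np.fft.ifft(X)              # inverse DFT recovers the sequence
    print(np.allclose(x, x_back))        # True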

Gaussian elimination: Algorithm for solving systems of linear equations

In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero number, and adding a multiple of one row to another row.
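
As a small worked example (assuming SymPy is available), row reduction of an augmented matrix solves a 2×2 system:

    from sympy import Matrix

    # Solve  x + 2y = 5,  3x + 4y = 6  by row-reducing the augmented matrix.
    aug = Matrix([[1, 2, 5],
                  [3, 4, 6]])
    rref, pivots = aug.rref()   # applies elementary row operations
    print(rref)                 # Matrix([[1, 0, -4], [0, 1, 9/2]]), so x = -4, y = 9/2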

Imaginary unit: Principal square root of −1

The imaginary unit or unit imaginary number is a solution to the quadratic equation x^2 + 1 = 0. Although there is no real number with this property, i can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of i in a complex number is 2 + 3i.

Factorization: (Mathematical) decomposition into a product

In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 × 5 is an integer factorization of 15, and (x − 2)(x + 2) is a polynomial factorization of x^2 − 4.
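
For instance, with SymPy the polynomial factorization above can be reproduced directly:

    from sympy import factor, symbols

    x = symbols('x')
    print(factor(x**2 - 4))   # (x - 2)*(x + 2)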

In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix:

    [ 1 2 3 4 ]
    [ 5 1 2 3 ]
    [ 6 5 1 2 ]
    [ 7 6 5 1 ]
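
SciPy provides a constructor for such matrices; the first column and first row determine all entries:

    from scipy.linalg import toeplitz

    T = toeplitz(c=[1, 5, 6, 7], r=[1, 2, 3, 4])
    print(T)
    # [[1 2 3 4]
    #  [5 1 2 3]
    #  [6 5 1 2]
    #  [7 6 5 1]]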

Root of unity: Number that has an integer power equal to 1

In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform.
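
The n-th roots of unity are easy to generate numerically (a NumPy sketch):

    import numpy as np

    n = 5
    roots = np.exp(2j * np.pi * np.arange(n) / n)   # the n-th roots of unity
    print(np.allclose(roots ** n, 1.0))             # True: each satisfies z^n = 1
    # Dropping the root 1 leaves exactly the roots of z^4 + z^3 + z^2 + z + 1
    # from the Examples section above.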

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
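
NumPy can produce the characteristic polynomial of a small matrix directly (np.poly returns its coefficients, highest degree first):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    coeffs = np.poly(A)       # [ 1. -4.  3.], i.e. x^2 - 4x + 3
    print(np.roots(coeffs))   # [3. 1.], the eigenvalues of A
    # The coefficients involve trace(A) = 4 and det(A) = 3, as stated above.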

In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.

In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
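
SciPy's circulant constructor builds such a matrix from its first column:

    from scipy.linalg import circulant

    C = circulant([1, 2, 3])
    print(C)
    # [[1 3 2]
    #  [2 1 3]
    #  [3 2 1]]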

In mathematics, a Hurwitz matrix, or Routh–Hurwitz matrix, in engineering stability matrix, is a structured real square matrix constructed with coefficients of a real polynomial.

In control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the determinants of its leading principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial.
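
A compact sketch of the Hurwitz-matrix form of the criterion (assuming NumPy; hurwitz_matrix and routh_hurwitz_stable are illustrative names, and computing minors by determinants is less efficient than the Routh table):

    import numpy as np

    def hurwitz_matrix(a):
        # Hurwitz matrix of p(s) = a[0] s^n + a[1] s^(n-1) + ... + a[n].
        n = len(a) - 1
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                k = 2 * (j + 1) - (i + 1)   # entry H[i, j] holds a_k (when it exists)
                if 0 <= k <= n:
                    H[i, j] = a[k]
        return H

    def routh_hurwitz_stable(a):
        # All leading principal minors positive (assumes a[0] > 0).
        H = hurwitz_matrix(a)
        return all(np.linalg.det(H[:m, :m]) > 0 for m in range(1, len(H) + 1))

    # s^3 + 6s^2 + 11s + 6 = (s + 1)(s + 2)(s + 3): Hurwitz stable
    print(routh_hurwitz_stable([1.0, 6.0, 11.0, 6.0]))   # True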

In mathematics, the Routh–Hurwitz theorem gives a test to determine whether all roots of a given polynomial lie in the left half-plane. Polynomials with this property are called Hurwitz stable polynomials. The Routh–Hurwitz theorem is important in dynamical systems and control theory, because the characteristic polynomial of the differential equations of a stable linear system has roots limited to the left half plane. Thus the theorem provides a mathematical test, the Routh–Hurwitz stability criterion, to determine whether a linear dynamical system is stable without solving the system. The Routh–Hurwitz theorem was proved in 1895, and it was named after Edward John Routh and Adolf Hurwitz.

In mathematics, a Bézout matrix is a special square matrix associated with two polynomials, introduced by James Joseph Sylvester in 1853 and Arthur Cayley in 1857 and named after Étienne Bézout. Bézoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials. Bézout matrices are sometimes used to test the stability of a given polynomial.

Stability theory: Part of mathematics that addresses the stability of solutions

In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.

In signal processing and control theory, the Jury stability criterion is a method of determining the stability of a linear discrete-time system by analysis of the coefficients of its characteristic polynomial. It is the discrete-time analogue of the Routh–Hurwitz stability criterion. The Jury stability criterion requires that the system poles are located inside the unit circle centered at the origin, while the Routh–Hurwitz stability criterion requires that the poles are in the left half of the complex plane. The Jury criterion is named after Eliahu Ibrahim Jury.
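
The opening rows of the Jury table give quick necessary conditions that are easy to check by hand. The sketch below (assuming NumPy; the function name is illustrative, and passing these preliminary checks does not by itself prove stability) implements only those conditions:

    import numpy as np

    def jury_necessary_conditions(a):
        # Preliminary checks for P(z) = a[0] z^n + ... + a[n], assuming a[0] > 0:
        # P(1) > 0, (-1)^n P(-1) > 0, and |constant term| < leading coefficient.
        n = len(a) - 1
        return (np.polyval(a, 1.0) > 0
                and (-1) ** n * np.polyval(a, -1.0) > 0
                and abs(a[-1]) < a[0])

    # z^2 - 0.2z - 0.08 = (z - 0.4)(z + 0.2): Schur stable
    print(jury_necessary_conditions([1.0, -0.2, -0.08]))   # True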

Hadamard product (matrices): Elementwise product of two matrices

In mathematics, the Hadamard product is a binary operation that takes in two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or German mathematician Issai Schur.
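
Applied to coefficient vectors of polynomials, this is the Hadamard product appearing in the Properties section: by the Garloff–Wagner theorem [1], the coefficient-wise product of two Hurwitz stable polynomials is again Hurwitz stable. A small numerical check (assuming NumPy):

    import numpy as np

    p = np.array([1.0, 3.0, 2.0])   # (s + 1)(s + 2): Hurwitz stable
    q = np.array([1.0, 5.0, 6.0])   # (s + 2)(s + 3): Hurwitz stable
    h = p * q                       # Hadamard product: s^2 + 15s + 12
    print(np.all(np.real(np.roots(h)) < 0))   # True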

In signal processing and control theory, the Bistritz criterion, proposed by Yuval Bistritz, is a simple method to determine whether a discrete, linear, time-invariant (LTI) system is stable. Stability of a discrete LTI system requires that its characteristic polynomial be a Schur polynomial, that is, that all of its roots lie inside the unit circle.

In control system theory, the Liénard–Chipart criterion is a stability criterion modified from the Routh–Hurwitz stability criterion, proposed by A. Liénard and M. H. Chipart. This criterion has a computational advantage over the Routh–Hurwitz criterion because it involves only about half the number of determinant computations.

In mathematics, a linear recurrence with constant coefficients sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as t − 1, one period later as t + 1, etc.
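
When the characteristic polynomial of such a recurrence is Schur stable, the iterates converge to zero. A short sketch (assuming NumPy for the root check):

    import numpy as np

    # x_t = 0.5 x_{t-1} + 0.3 x_{t-2}: characteristic polynomial z^2 - 0.5z - 0.3
    print(np.abs(np.roots([1.0, -0.5, -0.3])))   # both moduli < 1: Schur stable

    x = [1.0, 1.0]
    for _ in range(50):
        x.append(0.5 * x[-1] + 0.3 * x[-2])
    print(abs(x[-1]))   # tiny: the iterates decay toward 0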

References

  1. Garloff, Jürgen; Wagner, David G. (1996). "Hadamard Products of Stable Polynomials Are Stable". Journal of Mathematical Analysis and Applications. 202 (3): 797–809. doi:10.1006/jmaa.1996.0348.