Bistritz stability criterion

In signal processing and control theory, the Bistritz criterion is a simple method, proposed by Yuval Bistritz, to determine whether a discrete linear time-invariant (LTI) system is stable. [1] [2] Stability of a discrete LTI system requires that its characteristic polynomial

D(z) = d_n z^n + d_{n-1} z^{n-1} + ... + d_1 z + d_0,   d_n ≠ 0,

(obtained from its difference equation, its dynamic matrix, or appearing as the denominator of its transfer function) is a stable polynomial, where D(z) is said to be stable if all its roots (zeros) are inside the unit circle, viz.

|z_k| < 1,   k = 1, ..., n,

where z_1, ..., z_n denote the zeros of D(z). The test determines whether D(z) is stable algebraically (i.e. without numerical determination of the zeros). The method also solves the full zero location (ZL) problem. Namely, it can count the number of inside-the-unit-circle (IUC) zeros (|z_k| < 1), on-the-unit-circle (UC) zeros (|z_k| = 1) and outside-the-unit-circle (OUC) zeros (|z_k| > 1) for any real or complex polynomial. [1] [2] The Bistritz test is the discrete equivalent of the Routh criterion used to test stability of continuous LTI systems. This title was introduced soon after its presentation. [3] It has also been recognized to be more efficient than previously available stability tests for discrete systems, such as the Schur–Cohn and the Jury tests. [4]
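
For orientation, the IUC/UC/OUC counts that the test returns can always be cross-checked by brute-force root finding. The following Python sketch is only an illustration, not part of the criterion itself; it assumes NumPy is available and represents D(z) by its coefficient list in ascending powers of z.

```python
import numpy as np

def zero_location_by_roots(d, tol=1e-9):
    """Brute-force IUC/UC/OUC zero counts of D(z) = d[0] + d[1] z + ... + d[n] z^n,
    given as a list of coefficients in ascending powers of z."""
    radii = np.abs(np.roots(d[::-1]))            # np.roots expects descending coefficients
    iuc = int(np.sum(radii < 1 - tol))           # strictly inside the unit circle
    uc = int(np.sum(np.abs(radii - 1) <= tol))   # on the unit circle (within tol)
    ouc = int(np.sum(radii > 1 + tol))           # strictly outside the unit circle
    return iuc, uc, ouc

# D(z) is stable exactly when all n zeros are IUC: iuc == n and uc == ouc == 0.
```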

In the following, the focus is only on how to test the stability of a real polynomial. However, as long as the basic recursion needed to test stability remains valid, the ZL rules are also stated.

Algorithm

Consider D(z) as above and assume D(1) ≠ 0. (If D(1) = 0, the polynomial has a zero on the unit circle and is not stable.) Consider its reciprocal polynomial

D♯(z) = z^n D(1/z) = d_0 z^n + d_1 z^{n-1} + ... + d_{n-1} z + d_n .

The algorithm assigns to D(z) a sequence of symmetric polynomials

T_m(z),   m = n, n-1, ..., 0,   deg T_m(z) = m,

created by a three-term polynomial recursion. Write out the polynomials by their coefficients,

T_m(z) = t_{m,0} + t_{m,1} z + ... + t_{m,m} z^m ,

symmetry means that

t_{m,k} = t_{m,m-k},   k = 0, 1, ..., m,

so that it is enough to calculate for each polynomial only about half of the coefficients. The recursion begins with two initial polynomials derived from the sum and difference of the tested polynomial and its reciprocal; then each subsequent polynomial of reduced degree is produced from the last two known polynomials.

Initiation:

T_n(z) = D(z) + D♯(z)
T_{n-1}(z) = ( D(z) − D♯(z) ) / (z − 1)

Recursion: For m = n−1, n−2, ..., 1 do:

δ_{m+1} = T_{m+1}(0) / T_m(0)
T_{m-1}(z) = [ δ_{m+1} (z + 1) T_m(z) − T_{m+1}(z) ] / z
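
A minimal Python sketch of this initiation and recursion, with each T_m(z) stored as a list of coefficients in ascending powers of z, might look as follows; the function names and the coefficient convention are choices made for this illustration only.

```python
def _div_z_minus_1(p):
    """Exact division of p(z) by (z - 1), assuming p(1) = 0.
    Polynomials are lists of coefficients in ascending powers of z."""
    n = len(p) - 1
    q = [0.0] * n
    q[n - 1] = p[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = p[k] + q[k]
    return q

def bistritz_sequence(d):
    """Bistritz symmetric polynomials T_n, ..., T_0 for the real polynomial
    D(z) = d[0] + d[1] z + ... + d[n] z^n.  Returns [T_n, T_{n-1}, ..., T_0];
    raises ArithmeticError when D(1) = 0 or a normal condition T_m(0) != 0
    fails, in which case D(z) is not stable."""
    n = len(d) - 1
    drev = list(reversed(d))                       # reciprocal polynomial D#(z)
    if sum(d) == 0:                                # D(1) = 0: zero on the unit circle
        raise ArithmeticError("D(1) = 0, the polynomial is not stable")
    T = [[a + b for a, b in zip(d, drev)]]         # T_n(z) = D(z) + D#(z)
    T.append(_div_z_minus_1([a - b for a, b in zip(d, drev)]))  # T_{n-1}(z)
    for m in range(n - 1, 0, -1):                  # produce T_{m-1} from T_{m+1}, T_m
        Tprev, Tcur = T[-2], T[-1]                 # T_{m+1}(z), T_m(z)
        if Tcur[0] == 0:
            raise ArithmeticError("normal condition violated: T_%d(0) = 0" % m)
        delta = Tprev[0] / Tcur[0]                 # delta_{m+1} = T_{m+1}(0) / T_m(0)
        num = [0.0] * (m + 2)                      # delta*(z + 1)*T_m(z) - T_{m+1}(z)
        for k, c in enumerate(Tcur):
            num[k] += delta * c
            num[k + 1] += delta * c
        for k, c in enumerate(Tprev):
            num[k] -= c
        T.append(num[1:m + 1])                     # constant term cancels; divide by z
    return T
```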

Stability condition

The successful completion of the sequence with the above recursion requires T_m(0) ≠ 0, m = n−1, ..., 1. The expansion of these conditions into T_m(0) ≠ 0, m = n, ..., 0 are called normal conditions.

Normal conditions are necessary for stability. This means that the tested polynomial can be declared not stable as soon as a T_m(0) = 0 is observed. It also follows that the above recursion is broad enough for testing stability, because the polynomial can be declared not stable before a division by zero is encountered.

Theorem. If the sequence is not normal then D(z) is not stable. If normal conditions hold then the complete sequence of symmetric polynomials is well defined. Let

Var{ T_n(1), T_{n-1}(1), ..., T_0(1) }

denote the count of the number of sign variations in the indicated sequence. Then D(z) is stable if and only if Var{ T_n(1), ..., T_0(1) } = 0. More generally, if normal conditions hold then D(z) has no UC zeros, p = Var{ T_n(1), ..., T_0(1) } OUC zeros and n − p IUC zeros.

Violation of various necessary conditions for stability may be used advantageously as early indications that the polynomial is not stable (has at least one UC or OUC zero). The polynomial can be declared not stable as soon as a T_m(0) = 0, or a T_m(1) = 0, or a change of sign in the sequence of T_m(1)'s is observed.
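
Continuing the sketch above, the stability decision and the zero-location counts follow by evaluating each T_m at z = 1 and counting sign variations; again, the function names and the early-exit handling are illustrative choices.

```python
def count_sign_variations(seq):
    """Number of sign changes between consecutive nonzero entries of seq."""
    signs = [1 if x > 0 else -1 for x in seq if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def bistritz_test(d):
    """Return (stable, ouc, iuc) for D(z) given by ascending coefficients d.
    Relies on bistritz_sequence above; the counts are valid when the normal
    conditions hold (otherwise bistritz_sequence raises and D(z) is not stable)."""
    T = bistritz_sequence(d)
    vals = [sum(t) for t in T]             # T_m(1) is the sum of the coefficients
    if any(v == 0 for v in vals):          # a zero T_m(1) already rules out stability
        return False, None, None
    p = count_sign_variations(vals)        # p = Var{T_n(1), ..., T_0(1)}
    n = len(d) - 1
    return p == 0, p, n - p                # stable iff p == 0; p OUC, n - p IUC zeros
```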

Example

Consider a polynomial D(z) of degree n = 3 whose coefficients depend on a real parameter K.

Q1: For what values of K is the polynomial stable?

Construct the sequence T_3(z), T_2(z), T_1(z), T_0(z) as above and use their values at z = 1 to form

Var{ T_3(1), T_2(1), T_1(1), T_0(1) }.

All the entries in the sequence are positive for −4 < K < 22 (and for no K are they all negative). Therefore D(z) is stable for −4 < K < 22.

Q2: Find the ZL for K = 33. Var{ 71, 11, −48, 11 } = 2 ⇒ 2 OUC zeros, 1 IUC zero.

Q3: Find the ZL for K = −11. Var{ −14, 55, 144, 33 } = 1 ⇒ 1 OUC zero, 2 IUC zeros.
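
As a usage note for the sketches above, a hypothetical cubic whose zeros are known to lie inside the unit circle (not the polynomial of this example) should be reported stable by both the algebraic test and the brute-force root check.

```python
# Hypothetical illustration: D(z) = (z - 0.5)(z + 0.3)(z - 0.2), all zeros IUC.
d = [0.03, -0.11, -0.4, 1.0]          # ascending coefficients of D(z)
print(bistritz_test(d))               # -> (True, 0, 3): stable, 0 OUC, 3 IUC zeros
print(zero_location_by_roots(d))      # -> (3, 0, 0): 3 IUC, 0 UC, 0 OUC zeros
```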

Comments

(1) The test bears a remarkable similarity to the Routh test. This is best observed when the Routh test is arranged appropriately into a corresponding three-term polynomial recursion.

(2) The Bistritz test uses a three-term polynomial recursion that propagates polynomials with symmetry, as opposed to previously available classical tests for discrete systems, which propagate polynomials with no particular structure using a two-term recursion. It stimulated the discovery of further algorithms in digital signal processing (e.g. solving the linear prediction problem) and discrete systems (e.g. testing stability of higher-dimensional systems), collectively called "immittance" or "split" algorithms, that adapt this technique to obtain more efficient counterparts of other classical so-called "scattering" algorithms. [5] [6] [7] The Bistritz test forms the "immittance" counterpart of the "scattering"-type classical tests of Schur–Cohn and Jury.

References

  1. Y. Bistritz (1984) Zero location with respect to the unit circle of discrete-time linear system polynomials, Proc. IEEE, 72(9): 1131–1142.
  2. Y. Bistritz (2002) Zero location of polynomials with respect to the unit circle unhampered by nonessential singularities, IEEE Trans. CAS I, 49(3): 305–314.
  3. E. I. Jury and M. Mansour (1985) On the terminology relationship between continuous and discrete systems criteria, Proc. IEEE, 73(4): 884.
  4. K. Premaratne and E. I. Jury (1993) On the Bistritz tabular form and its relationship with the Schur–Cohn minors and inner determinants, Journal of the Franklin Institute, 330(1): 165–182.
  5. P. Delsarte and E. Genin (1986) The split Levinson algorithm, IEEE Trans. ASSP, 34(3): 470–478.
  6. Y. Bistritz, H. Lev-Ari and T. Kailath (1989) Immittance-domain Levinson algorithms, IEEE Trans. IT, 35(3): 675–682.
  7. S. J. Orfanidis (1988) Optimum Signal Processing: An Introduction, 2nd ed., Macmillan.