Minimum phase

In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable. [1] [2]

The most general causal LTI transfer function can be uniquely factored into a series of an all-pass and a minimum-phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the two part responses. The difference between a minimum-phase and a general transfer function is that a minimum-phase system has all of the poles and zeros of its transfer function in the left half of the s-plane representation (in discrete time, respectively, inside the unit circle of the z-plane). Since inverting a system function turns poles into zeros and vice versa, and since poles to the right of the imaginary axis of the s-plane (or outside the unit circle of the z-plane) lead to unstable systems, only the class of minimum-phase systems is closed under inversion. Intuitively, the minimum-phase part of a general causal system implements its amplitude response with minimal group delay, while its all-pass part corrects its phase response alone to correspond with the original system function.
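
The reflection idea behind this closure property can be checked numerically. A minimal sketch using NumPy (the first-order factor and the value a = 2 are illustrative choices, not from the text): reflecting a zero across the unit circle, with a compensating rescaling, leaves the magnitude response unchanged.

```python
import numpy as np

w = np.linspace(0.0, np.pi, 512)   # frequencies on the upper unit circle
z = np.exp(1j * w)

# First-order FIR factor with a zero at z = a = 2, outside the unit circle.
a = 2.0
H_out = 1 - a / z      # 1 - a z^{-1}, zero at z = a
# Reflecting the zero to z = 1/a (inside) and rescaling by a preserves |H|.
H_in = a - 1 / z       # a - z^{-1}, zero at z = 1/a

mag_match = bool(np.allclose(np.abs(H_out), np.abs(H_in)))
```

Both factors have the same gain at every frequency; only the phase (and hence the group delay) differs between them.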

The analysis in terms of poles and zeros is exact only in the case of transfer functions which can be expressed as ratios of polynomials. In the continuous-time case, such systems translate into networks of conventional, idealized LCR components. In discrete time, they conveniently translate into approximations thereof, using addition, multiplication, and unit delay. It can be shown that in both cases, system functions of rational form with increasing order can be used to efficiently approximate any other system function; thus even system functions lacking a rational form, and so possessing an infinitude of poles and/or zeros, can in practice be implemented as efficiently as any other.

In the context of causal, stable systems, we would in theory be free to choose whether the zeros of the system function lie outside the stable range (to the right of the imaginary axis, or outside the unit circle) if the closure condition were not an issue. However, inversion is of great practical importance, just as theoretically perfect factorizations are in their own right. (Cf. the spectral symmetric/antisymmetric decomposition as another important example, leading e.g. to Hilbert transform techniques.) Many physical systems also naturally tend towards a minimum-phase response, and sometimes have to be inverted using other physical systems obeying the same constraint.

Insight is given below as to why this system is called minimum-phase, and why the basic idea applies even when the system function cannot be cast into a rational form that could be implemented.

Inverse system

A system H is invertible if we can uniquely determine its input from its output, i.e., if we can find a system H_inv such that applying H followed by H_inv yields the identity system I. (See Invertible matrix for a finite-dimensional analog.) That is,

    H_inv H = I.

Suppose that x is input to system H and gives output y:

    H x = y.

Applying the inverse system H_inv to y gives

    H_inv y = H_inv H x = I x = x.

So we see that the inverse system H_inv allows us to determine uniquely the input x from the output y.

Discrete-time example

Suppose that the system H is a discrete-time, linear, time-invariant (LTI) system described by the impulse response h(n) for n in Z. Additionally, suppose H_inv has impulse response h_inv(n). The cascade of two LTI systems is a convolution. In this case, the above relation is the following:

    (h_inv * h)(n) = Σ_k h_inv(k) h(n - k) = δ(n),

where δ(n) is the Kronecker delta, or the identity system in the discrete-time case. (Changing the order of h_inv and h is allowed because of the commutativity of the convolution operation.) Note that this inverse system H_inv need not be unique.
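
The cascade relation can be sketched numerically. In this minimal example (the coefficients are illustrative choices), a first-order minimum-phase FIR filter is convolved with a truncated version of its causal, stable inverse, and the result approximates the Kronecker delta:

```python
import numpy as np

# Minimum-phase FIR system h(n) = δ(n) + 0.5 δ(n-1), i.e. H(z) = 1 + 0.5 z^{-1},
# with its single zero at z = -0.5, inside the unit circle.
h = np.array([1.0, 0.5])

# Its causal, stable inverse has impulse response h_inv(n) = (-0.5)^n for n >= 0,
# truncated here to N terms (the tail decays geometrically, so truncation is benign).
N = 60
h_inv = (-0.5) ** np.arange(N)

# The cascade of the two systems is their convolution, which should
# reproduce the Kronecker delta.
cascade = np.convolve(h, h_inv)
delta = np.zeros_like(cascade)
delta[0] = 1.0
err_delta = float(np.max(np.abs(cascade[:N] - delta[:N])))
```

Here the first N samples of the cascade match the delta exactly; only the single truncation term beyond them is nonzero.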

Minimum-phase system

When we impose the constraints of causality and stability, the inverse system is unique; and the system and its inverse are called minimum-phase. The causality and stability constraints in the discrete-time case are the following (for time-invariant systems, where h is the system's impulse response and ||·||_1 is the ℓ1 norm):

Causality

    h(n) = 0 for all n < 0

and

    h_inv(n) = 0 for all n < 0.

Stability

    ||h||_1 = Σ_n |h(n)| < ∞

and

    ||h_inv||_1 = Σ_n |h_inv(n)| < ∞.

See the article on stability for the analogous conditions for the continuous-time case.

Frequency analysis

Discrete-time frequency analysis

Performing frequency analysis for the discrete-time case will provide some insight. The time-domain equation is

    (h * h_inv)(n) = δ(n).

Applying the Z-transform gives the following relation in the z-domain:

    H(z) H_inv(z) = 1.

From this relation, we realize that

    H_inv(z) = 1 / H(z).

For simplicity, we consider only the case of a rational transfer function H(z). Causality and stability imply that all poles of H(z) must be strictly inside the unit circle (see stability). Suppose

    H(z) = A(z) / D(z),

where A(z) and D(z) are polynomials in z. Causality and stability imply that the poles, the roots of D(z), must be strictly inside the unit circle. We also know that

    H_inv(z) = D(z) / A(z),

so causality and stability for H_inv(z) imply that its poles, the roots of A(z), must be strictly inside the unit circle. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the unit circle.
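
The pole/zero constraint can be checked directly on a small example (the first-order polynomials below are illustrative choices): inverting a rational H(z) swaps its numerator and denominator, so the inverse's poles are the original zeros.

```python
import numpy as np

# Hypothetical rational minimum-phase system H(z) = A(z)/D(z) with
# A(z) = 1 - 0.4 z^{-1} and D(z) = 1 - 0.5 z^{-1} (coefficients illustrative).
A = [1.0, -0.4]   # zero of H at z = 0.4
D = [1.0, -0.5]   # pole of H at z = 0.5

# Poles of H (roots of D) strictly inside the unit circle: H is causal and stable.
poles_ok = bool(np.all(np.abs(np.roots(D)) < 1))
# H_inv(z) = D(z)/A(z): its poles are the roots of A, so a causal, stable
# inverse requires the zeros of H to be strictly inside the unit circle too.
inv_poles_ok = bool(np.all(np.abs(np.roots(A)) < 1))
```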

Continuous-time frequency analysis

Analysis for the continuous-time case proceeds in a similar manner, except that we use the Laplace transform for frequency analysis. The time-domain equation is

    (h * h_inv)(t) = δ(t),

where δ(t) is the Dirac delta function, the identity operator in the continuous-time case because of the sifting property with any signal x(t):

    (δ * x)(t) = ∫ δ(t - τ) x(τ) dτ = x(t).

Applying the Laplace transform gives the following relation in the s-plane:

    H(s) H_inv(s) = 1,

from which we realize that

    H_inv(s) = 1 / H(s).

Again, for simplicity, we consider only the case of a rational transfer function H(s). Causality and stability imply that all poles of H(s) must be strictly inside the left half of the s-plane (see stability). Suppose

    H(s) = A(s) / D(s),

where A(s) and D(s) are polynomials in s. Causality and stability imply that the poles, the roots of D(s), must be strictly inside the left half-plane. We also know that

    H_inv(s) = D(s) / A(s),

so causality and stability for H_inv(s) imply that its poles, the roots of A(s), must be strictly inside the left half-plane. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the left half of the s-plane.
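
The continuous-time version of the same check (the first-order polynomials below are illustrative choices): the inverse's poles are the original zeros, so both root sets must lie in the open left half-plane.

```python
import numpy as np

# Hypothetical rational minimum-phase system H(s) = A(s)/D(s)
# with A(s) = s + 2 and D(s) = s + 5 (coefficients chosen for illustration).
A = [1.0, 2.0]   # zero of H at s = -2
D = [1.0, 5.0]   # pole of H at s = -5

# Poles of H (roots of D) in the open left half-plane: H is causal and stable.
h_poles_lhp = bool(np.all(np.roots(D).real < 0))
# Poles of H_inv(s) = D(s)/A(s) are the roots of A, so a stable, causal inverse
# requires the zeros of H to lie in the left half-plane as well.
h_zeros_lhp = bool(np.all(np.roots(A).real < 0))
```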

Relationship of magnitude response to phase response

A minimum-phase system, whether discrete-time or continuous-time, has an additional useful property: the natural logarithm of the magnitude of the frequency response (the "gain" measured in nepers, which is proportional to dB) is related to the phase angle of the frequency response (measured in radians) by the Hilbert transform. That is, in the continuous-time case, let

    H(jω) = |H(jω)| e^{jφ(ω)}

be the complex frequency response of the system H(s). Then, only for a minimum-phase system, the phase response of H(s) is related to the gain by

    φ(ω) = -ℋ{ ln |H(jω)| },

where ℋ denotes the Hilbert transform, and, inversely,

    ln |H(jω)| = ln |H(j∞)| + ℋ{φ(ω)}.

Stated more compactly, let

    H(jω) = |H(jω)| e^{jφ(ω)} = e^{α(ω) + jφ(ω)},

where α(ω) and φ(ω) are real functions of a real variable. Then

    φ(ω) = -ℋ{α(ω)}

and

    α(ω) = α(∞) + ℋ{φ(ω)}.

The Hilbert transform operator is defined to be

    ℋ{x(t)} = (1/π) p.v. ∫_{-∞}^{∞} x(τ) / (t - τ) dτ.

An equivalent corresponding relationship is also true for discrete-time minimum-phase systems.
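
The discrete-time version of this relationship can be sketched with the real-cepstrum (homomorphic) construction: for a minimum-phase filter, folding the real cepstrum of the log-magnitude into a causal sequence recovers the phase. The filter coefficients and FFT size below are arbitrary illustrative choices.

```python
import numpy as np

# Minimum-phase FIR: zeros at 0.5 and -0.25, both inside the unit circle.
h = np.convolve([1.0, -0.5], [1.0, 0.25])

N = 4096
H = np.fft.fft(h, N)
log_mag = np.log(np.abs(H))

# Real cepstrum of the log-magnitude; folding it into a causal sequence
# yields the complex cepstrum of the minimum-phase system, whose spectrum
# is log|H| + j*phase.
cep = np.fft.ifft(log_mag).real
folded = np.concatenate(([cep[0]],
                         2 * cep[1:N // 2],
                         [cep[N // 2]],
                         np.zeros(N // 2 - 1)))
phase_min = np.imag(np.fft.fft(folded))

# The phase recovered from the magnitude alone matches the directly
# computed (unwrapped) phase of H.
phase_direct = np.unwrap(np.angle(H))
err = float(np.max(np.abs(phase_min - phase_direct)))
```

The agreement is exact up to cepstral truncation/aliasing, which is negligible here because the zeros are well inside the unit circle.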

Minimum phase in the time domain

For all causal and stable systems that have the same magnitude response, the minimum-phase system has its energy concentrated near the start of the impulse response, i.e., it minimizes the following function, which we can think of as the delay of energy in the impulse response:

    Σ_{n=N}^{∞} |h(n)|²  for all N ≥ 0.
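
This energy-concentration property is easy to see numerically: reflect one zero of a short FIR filter outside the unit circle (rescaling so the magnitude response is unchanged) and compare running energies. The coefficients below are illustrative.

```python
import numpy as np

# Two FIRs with identical magnitude responses: the zero at 0.5 is reflected
# outside the unit circle (to 2), with rescaling, to get a non-minimum-phase
# counterpart.
h_min = np.convolve([1.0, -0.5], [1.0, 0.25])   # zeros at 0.5 and -0.25
h_nmp = np.convolve([-0.5, 1.0], [1.0, 0.25])   # zero at 0.5 reflected to 2

E_min = np.cumsum(h_min ** 2)   # running (partial) energy of each impulse response
E_nmp = np.cumsum(h_nmp ** 2)

same_total = bool(np.isclose(E_min[-1], E_nmp[-1]))    # equal total energy
front_loaded = bool(np.all(E_min >= E_nmp - 1e-12))    # min-phase energy arrives first
```

By Parseval's theorem the totals must agree (same magnitude response); the minimum-phase partial energy dominates at every index, i.e., the tail sum above is minimized.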

Minimum phase as minimum group delay

For all causal and stable systems that have the same magnitude response, the minimum phase system has the minimum group delay. The following proof illustrates this idea of minimum group delay.

Suppose we consider one zero a of the transfer function H(z). Let's place this zero a inside the unit circle (|a| < 1) and see how the group delay is affected.

Since the zero a contributes the factor 1 - a z⁻¹ to the transfer function, the phase contributed by this term is the following, where a = |a| e^{jθ_a}:

    φ_a(ω) = Arg(1 - a e^{-jω}) = arctan( |a| sin(ω - θ_a) / (1 - |a| cos(ω - θ_a)) ).

φ_a(ω) contributes the following to the group delay:

    -dφ_a(ω)/dω = (|a|² - |a| cos(ω - θ_a)) / (|a|² + 1 - 2 |a| cos(ω - θ_a)).

The denominator and θ_a are invariant to reflecting the zero a outside of the unit circle, i.e., replacing a with (a⁻¹)*. However, by reflecting a outside of the unit circle, we increase the magnitude of |a| in the numerator. Thus, having a inside the unit circle minimizes the group delay contributed by the factor 1 - a z⁻¹. We can extend this result to the general case of more than one zero, since the phase of multiplicative factors of the form 1 - a_i z⁻¹ is additive. I.e., for a transfer function with N zeros,

    Arg( Π_{i=1}^{N} (1 - a_i z⁻¹) ) = Σ_{i=1}^{N} Arg(1 - a_i z⁻¹).

So, a minimum phase system with all zeros inside the unit circle minimizes the group delay since the group delay of each individual zero is minimized.
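
The single-zero argument can be verified numerically. A minimal sketch (the value a = 0.8 is an illustrative choice): compare the group delay of a zero inside the unit circle with that of its reflection outside, computed as the negative derivative of the unwrapped phase.

```python
import numpy as np

w = np.linspace(0.0, np.pi, 4096)
z = np.exp(1j * w)

# One zero at a = 0.8 inside the unit circle, versus the same zero reflected
# to 1/a = 1.25 outside (the factor is rescaled so the magnitudes match).
a = 0.8
F_in = 1 - a / z       # 1 - 0.8 z^{-1}: minimum-phase factor
F_out = -a + 1 / z     # -0.8 + z^{-1}: same |F| on the unit circle

def num_group_delay(F, w):
    # Numerical group delay: negative derivative of the unwrapped phase.
    return -np.gradient(np.unwrap(np.angle(F)), w)

gd_in = num_group_delay(F_in, w)
gd_out = num_group_delay(F_out, w)
# The zero inside the unit circle contributes less group delay at every frequency.
less_delay = bool(np.all(gd_in <= gd_out + 1e-6))
```

Analytically, the two group-delay curves share the same denominator and differ by (1 - |a|²)/denominator > 0, which is what the pointwise comparison confirms.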

[Figure: Minimum and maximum phase responses. Top and bottom are filters with the same gain response (left: Nyquist diagrams; right: phase responses), but the top filter, with a = 0.8 < 1, has the smallest amplitude in its phase response.]

Non-minimum phase

Causal and stable systems whose inverses are causal but unstable are known as non-minimum-phase systems. A given non-minimum-phase system will have a greater phase contribution than the minimum-phase system with the equivalent magnitude response.

Maximum phase

A maximum-phase system is the opposite of a minimum-phase system. A causal and stable LTI system is a maximum-phase system if its inverse is causal and unstable. That is, the zeros of the discrete-time system are outside the unit circle, and the zeros of the continuous-time system are in the right half of the complex plane.

Such a system is called a maximum-phase system because it has the maximum group delay of the set of systems that have the same magnitude response. In this set of equal-magnitude-response systems, the maximum phase system will have maximum energy delay.

For example, the two continuous-time LTI systems described by the transfer functions

    H_1(s) = (s + 10) / (s + 5)   and   H_2(s) = (s - 10) / (s + 5)

have equivalent magnitude responses; however, the second system has a much larger contribution to the phase shift. Hence, in this set, the second system is the maximum-phase system and the first system is the minimum-phase system. These systems are also widely known as non-minimum-phase systems, and they raise many stability concerns in control. One recent solution to these systems is moving the RHP zeros to the LHP using the PFCD method. [3]
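
For a hypothetical mirrored-zero pair of this kind (the coefficients below are illustrative choices), the equal magnitudes and the larger phase contribution of the right-half-plane zero can be checked numerically on the jω-axis:

```python
import numpy as np

w = np.linspace(0.01, 100.0, 2000)
s = 1j * w
H1 = (s + 10) / (s + 5)   # zero at s = -10: minimum phase
H2 = (s - 10) / (s + 5)   # zero at s = +10: maximum phase

mags_equal = bool(np.allclose(np.abs(H1), np.abs(H2)))

# |phase| of the maximum-phase system exceeds that of the minimum-phase one
# at every frequency (analytically, the gap is pi - 2*arctan(w/5) > 0).
ph1 = np.abs(np.unwrap(np.angle(H1)))
ph2 = np.abs(np.unwrap(np.angle(H2)))
more_phase = bool(np.all(ph2 >= ph1))
```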

Mixed phase

A mixed-phase system has some of its zeros inside the unit circle and others outside the unit circle. Thus, its group delay is neither minimum nor maximum but lies somewhere between the group delays of the minimum- and maximum-phase equivalent systems.

For example, the continuous-time LTI system described by the transfer function

    H(s) = (s + 1)(s - 1) / ((s + 2)(s + 4))

is stable and causal; however, it has zeros on both the left- and right-hand sides of the complex plane. Hence, it is a mixed-phase system. To control the transfer functions that include these systems, methods such as the internal model controller (IMC), [4] the generalized Smith predictor (GSP), [5] and parallel feedforward control with derivative (PFCD) [6] have been proposed.
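
The mixed-phase classification of a system like this can be confirmed from its root locations (the transfer function below is a hypothetical example with zeros straddling the imaginary axis):

```python
import numpy as np

# Hypothetical mixed-phase system H(s) = (s + 1)(s - 1) / ((s + 2)(s + 4)).
num = np.polymul([1.0, 1.0], [1.0, -1.0])   # zeros at s = -1 and s = +1
den = np.polymul([1.0, 2.0], [1.0, 4.0])    # poles at s = -2 and s = -4

zeros = np.roots(num)
poles = np.roots(den)
stable_causal = bool(np.all(poles.real < 0))                # all poles in the LHP
zeros_both_sides = bool(np.any(zeros.real < 0) and np.any(zeros.real > 0))
```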

Linear phase

A linear-phase system has constant group delay. Non-trivial linear phase or nearly linear phase systems are also mixed phase.

References

  1. Hassibi, Babak; Kailath, Thomas; Sayed, Ali H. (2000). Linear Estimation. Englewood Cliffs, NJ: Prentice Hall. p. 193. ISBN 0-13-022464-2.
  2. J. O. Smith III, Introduction to Digital Filters with Audio Applications (September 2007 edition).
  3. Noury, K. (2019). "Analytical Statistical Study of Linear Parallel Feedforward Compensators for Nonminimum-Phase Systems". doi:10.1115/DSCC2019-9126. ISBN 978-0-7918-5914-8. S2CID 214446227.
  4. Morari, Manfred (2002). Robust Process Control. PTR Prentice Hall. ISBN 0137821530. OCLC 263718708.
  5. Ramanathan, S.; Curl, R. L.; Kravaris, C. (1989). "Dynamics and control of quasirational systems". AIChE Journal. 35 (6): 1017–1028. doi:10.1002/aic.690350615. hdl:2027.42/37408. ISSN 1547-5905. S2CID 20116797.
  6. Noury, K. (2019). "Class of Stabilizing Parallel Feedforward Compensators for Nonminimum-Phase Systems". doi:10.1115/DSCC2019-9240. ISBN 978-0-7918-5914-8. S2CID 214440404.
