In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable. [1] [2]
The most general causal LTI transfer function can be uniquely factored into a series of an all-pass and a minimum-phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the responses of the two parts. The difference between a minimum-phase and a general transfer function is that a minimum-phase system has all of the poles and zeros of its transfer function in the left half of the s-plane (in discrete time, respectively, inside the unit circle of the z-plane). Since inverting a system function turns poles into zeros and vice versa, and since poles to the right of the imaginary axis of the s-plane (in discrete time, outside the unit circle of the z-plane) lead to unstable systems, only the class of minimum-phase systems is closed under inversion. Intuitively, the minimum-phase part of a general causal system implements its amplitude response with minimal group delay, while its all-pass part corrects its phase response alone to correspond with the original system function.
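As a small numerical illustration of this factorization (a sketch using NumPy; the coefficients are chosen here for illustration and are not taken from the text), consider the discrete-time system H(z) = 1 − 2z⁻¹, whose zero lies outside the unit circle. Reflecting the zero inside the unit circle yields the minimum-phase part, and dividing it out leaves an all-pass factor:

```python
import numpy as np

# Illustrative example: H(z) = 1 - 2 z^{-1} has a zero at z = 2,
# outside the unit circle, so H is not minimum phase.
w = np.linspace(0, np.pi, 512)
z = np.exp(1j * w)

H     = 1 - 2 * z**-1      # original system
H_min = 2 - 1 * z**-1      # zero reflected to z = 0.5, gain matched
A     = H / H_min          # all-pass remainder, so that H = H_min * A

# The minimum-phase part carries the full magnitude response ...
assert np.allclose(np.abs(H), np.abs(H_min))
# ... and the all-pass part has unit magnitude at every frequency.
assert np.allclose(np.abs(A), 1.0)
print("H = H_min * A verified on the frequency grid")
```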
The analysis in terms of poles and zeros is exact only in the case of transfer functions which can be expressed as ratios of polynomials. In the continuous-time case, such systems translate into networks of conventional, idealized LCR elements. In discrete time, they conveniently translate into approximations thereof, using addition, multiplication, and unit delay. It can be shown that in both cases, system functions of rational form with increasing order can be used to efficiently approximate any other system function; thus even system functions lacking a rational form, and so possessing an infinitude of poles and/or zeros, can in practice be implemented as efficiently as any other.
In the context of causal, stable systems, we would in theory be free to choose whether the zeros of the system function are outside of the stable range (to the right of the imaginary axis, or outside the unit circle) if closure under inversion were not an issue. However, inversion is of great practical importance, just as theoretically perfect factorizations are in their own right. (Cf. the spectral symmetric/antisymmetric decomposition as another important example, leading e.g. to Hilbert transform techniques.) Many physical systems also naturally tend towards minimum-phase response, and sometimes have to be inverted using other physical systems obeying the same constraint.
Insight is given below as to why this system is called minimum-phase, and why the basic idea applies even when the system function cannot be cast into a rational form that could be implemented.
A system $\mathbb{H}$ is invertible if we can uniquely determine its input from its output. I.e., we can find a system $\mathbb{H}_{\text{inv}}$ such that if we apply $\mathbb{H}$ followed by $\mathbb{H}_{\text{inv}}$, we obtain the identity system $\mathbb{I}$. (See Inverse matrix for a finite-dimensional analog.) That is,
$$\mathbb{H}_{\text{inv}}\,\mathbb{H} = \mathbb{I}.$$
Suppose that $\tilde{x}$ is input to system $\mathbb{H}$ and gives output $\tilde{y}$:
$$\mathbb{H}\,\tilde{x} = \tilde{y}.$$
Applying the inverse system $\mathbb{H}_{\text{inv}}$ to $\tilde{y}$ gives
$$\mathbb{H}_{\text{inv}}\,\tilde{y} = \mathbb{H}_{\text{inv}}\,\mathbb{H}\,\tilde{x} = \tilde{x}.$$
So we see that the inverse system $\mathbb{H}_{\text{inv}}$ allows us to determine uniquely the input $\tilde{x}$ from the output $\tilde{y}$.
Suppose that the system $\mathbb{H}$ is a discrete-time, linear, time-invariant (LTI) system described by the impulse response $h(n)$ for $n \in \mathbb{Z}$. Additionally, suppose $\mathbb{H}_{\text{inv}}$ has impulse response $g(n)$. The cascade of two LTI systems is a convolution. In this case, the above relation is the following:
$$(h * g)(n) = \sum_{k=-\infty}^{\infty} h(k)\,g(n-k) = \delta(n),$$
where $\delta(n)$ is the Kronecker delta, or the identity system in the discrete-time case. (Changing the order of $\mathbb{H}_{\text{inv}}$ and $\mathbb{H}$ is allowed because of the commutativity of the convolution operation.) Note that this inverse system $\mathbb{H}_{\text{inv}}$ need not be unique.
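For instance (a minimal sketch, assuming the illustrative filter h(n) = δ(n) − 0.5 δ(n−1), which is not from the text), the causal, stable inverse has impulse response g(n) = 0.5ⁿ for n ≥ 0, and convolving a truncated g with h gives approximately the Kronecker delta:

```python
import numpy as np

# Illustrative filter: h(n) = delta(n) - 0.5 delta(n-1).
h = np.array([1.0, -0.5])

# Its causal, stable inverse has impulse response g(n) = 0.5**n for n >= 0;
# we truncate it for the numerical check.
n = np.arange(50)
g = 0.5 ** n

# Cascading the two LTI systems is a convolution; the result should be
# (approximately) the Kronecker delta.
delta_approx = np.convolve(h, g)
print(delta_approx[:5])                 # ~ [1, 0, 0, 0, 0]
print(np.abs(delta_approx[1:]).max())   # truncation error only, ~ 0.5**50
```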
When we impose the constraints of causality and stability, the inverse system is unique; and the system $\mathbb{H}$ and its inverse $\mathbb{H}_{\text{inv}}$ are called minimum-phase. The causality and stability constraints in the discrete-time case are the following (for time-invariant systems, where $h$ is the system's impulse response and $\|\cdot\|_1$ is the $\ell_1$ norm):

Causality:
$$h(n) = 0 \quad \forall\, n < 0$$
and
$$g(n) = 0 \quad \forall\, n < 0.$$

Stability:
$$\sum_{n=-\infty}^{\infty} |h(n)| = \|h\|_1 < \infty$$
and
$$\sum_{n=-\infty}^{\infty} |g(n)| = \|g\|_1 < \infty.$$
See the article on stability for the analogous conditions for the continuous-time case.
Performing frequency analysis for the discrete-time case will provide some insight. The time-domain equation is
$$(h * g)(n) = \delta(n).$$
Applying the Z-transform gives the following relation in the z-domain:
$$H(z)\,G(z) = 1.$$
From this relation, we realize that
$$G(z) = \frac{1}{H(z)}.$$
For simplicity, we consider only the case of a rational transfer function $H(z)$. Causality and stability imply that all poles of $H(z)$ must be strictly inside the unit circle (see stability). Suppose
$$H(z) = \frac{A(z)}{D(z)},$$
where $A(z)$ and $D(z)$ are polynomials in $z$. Causality and stability imply that the poles – the roots of $D(z)$ – must be strictly inside the unit circle. We also know that
$$G(z) = \frac{D(z)}{A(z)},$$
so causality and stability for $G(z)$ imply that its poles – the roots of $A(z)$ – must be strictly inside the unit circle. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the unit circle.
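The two constraints can be checked numerically by examining the roots of the numerator and denominator polynomials. The sketch below (NumPy is assumed, and the helper name is made up for illustration) tests whether a rational discrete-time H(z) is minimum phase:

```python
import numpy as np

def is_minimum_phase_z(b, a):
    """Return True if H(z) = B(z)/A(z), given as polynomial coefficients in
    z^{-1}, has all zeros and poles strictly inside the unit circle
    (illustrative sketch, not a library routine)."""
    zeros = np.roots(b)
    poles = np.roots(a)
    return bool(np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1))

# H(z) = (1 - 0.5 z^{-1}) / (1 - 0.9 z^{-1}): zero at 0.5, pole at 0.9 -> minimum phase
print(is_minimum_phase_z([1, -0.5], [1, -0.9]))   # True
# Moving the zero to z = 2 keeps the system causal and stable, but not minimum phase
print(is_minimum_phase_z([1, -2.0], [1, -0.9]))   # False
```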
Analysis for the continuous-time case proceeds in a similar manner, except that we use the Laplace transform for frequency analysis. The time-domain equation is
$$(h * g)(t) = \delta(t),$$
where $\delta(t)$ is the Dirac delta function – the identity operator in the continuous-time case because of the sifting property with any signal $x(t)$:
$$\int_{-\infty}^{\infty} \delta(t-\tau)\,x(\tau)\,d\tau = x(t).$$
Applying the Laplace transform gives the following relation in the s-plane:
$$H(s)\,G(s) = 1,$$
from which we realize that
$$G(s) = \frac{1}{H(s)}.$$
Again, for simplicity, we consider only the case of a rational transfer function $H(s)$. Causality and stability imply that all poles of $H(s)$ must be strictly inside the left-half s-plane (see stability). Suppose
$$H(s) = \frac{A(s)}{D(s)},$$
where $A(s)$ and $D(s)$ are polynomials in $s$. Causality and stability imply that the poles – the roots of $D(s)$ – must be strictly inside the left-half s-plane. We also know that
$$G(s) = \frac{D(s)}{A(s)},$$
so causality and stability for $G(s)$ imply that its poles – the roots of $A(s)$ – must be strictly inside the left-half s-plane. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the left-half s-plane.
A minimum-phase system, whether discrete-time or continuous-time, has an additional useful property: the natural logarithm of the magnitude of the frequency response (the "gain" measured in nepers, which is proportional to dB) is related to the phase angle of the frequency response (measured in radians) by the Hilbert transform. That is, in the continuous-time case, let
$$H(j\omega) = \left|H(j\omega)\right| e^{j\arg[H(j\omega)]}$$
be the complex frequency response of system $H(s)$. Then, only for a minimum-phase system, the phase response of $H(s)$ is related to the gain by
$$\arg[H(j\omega)] = -\mathcal{H}\left\{\ln\left(\left|H(j\omega)\right|\right)\right\},$$
where $\mathcal{H}$ denotes the Hilbert transform, and, inversely,
$$\ln\left(\left|H(j\omega)\right|\right) = \ln\left(\left|H(j\infty)\right|\right) + \mathcal{H}\left\{\arg[H(j\omega)]\right\}.$$
Stated more compactly, let
$$H(j\omega) = \left|H(j\omega)\right| e^{j\alpha(\omega)} = e^{\beta(\omega)} e^{j\alpha(\omega)},$$
where $\alpha(\omega)$ and $\beta(\omega)$ are real functions of a real variable. Then
$$\alpha(\omega) = -\mathcal{H}\{\beta(\omega)\}$$
and
$$\beta(\omega) = \beta(\infty) + \mathcal{H}\{\alpha(\omega)\}.$$
The Hilbert transform operator is defined to be
$$\mathcal{H}\{x(t)\} \triangleq \hat{x}(t) = \frac{1}{\pi}\,\operatorname{p.v.}\!\int_{-\infty}^{\infty} \frac{x(\tau)}{t-\tau}\,d\tau.$$
An equivalent corresponding relationship is also true for discrete-time minimum-phase systems.
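The discrete-time version of this relationship can be demonstrated numerically via the real cepstrum (a sketch assuming NumPy; the filter and FFT length are illustrative choices): starting from the log-magnitude of a minimum-phase filter alone, folding the cepstrum recovers the phase response, which is the discrete-time counterpart of the Hilbert-transform relation above.

```python
import numpy as np

# Illustrative minimum-phase FIR filter (zero at z = 0.5, inside the unit circle).
h = np.array([1.0, -0.5])
N = 1024

H = np.fft.fft(h, N)
log_mag = np.log(np.abs(H))        # "gain" in nepers

# Real cepstrum of the magnitude response.
c = np.fft.ifft(log_mag).real

# Folding window: keeps the causal part of the cepstrum, which for a
# minimum-phase system carries all the information (this acts as a
# periodic Hilbert transform of the log-magnitude).
fold = np.zeros(N)
fold[0] = 1.0
fold[1:N // 2] = 2.0
fold[N // 2] = 1.0

log_H_min = np.fft.fft(c * fold)
phase_from_mag = log_H_min.imag    # phase predicted from the gain alone

# Compare with the actual phase of the filter.
true_phase = np.unwrap(np.angle(H))
print(np.max(np.abs(phase_from_mag - true_phase)))   # small numerical error
```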
For all causal and stable systems that have the same magnitude response, the minimum-phase system has its energy concentrated near the start of the impulse response; i.e., it minimizes the following function, which we can think of as the delay of energy in the impulse response:
$$\sum_{n=K}^{\infty} \left|h(n)\right|^2 \quad \forall\, K \in \mathbb{Z}^{+}.$$
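A quick numerical check (illustrative coefficients, not from the text): the two FIR filters below have identical magnitude responses, but the minimum-phase one accumulates its energy earliest, so its tail energy is smaller for every K.

```python
import numpy as np

# Two filters with identical magnitude responses:
h_min = np.array([2.0, -1.0])   # zero at 0.5 (inside the unit circle) -> minimum phase
h_max = np.array([-1.0, 2.0])   # zero at 2.0 (outside)                -> maximum phase

# Partial energy sum_{n=0}^{k} |h(n)|^2: the minimum-phase filter reaches
# the total energy earliest, i.e. its tail energy is smallest for every K.
print(np.cumsum(h_min ** 2))    # [4., 5.]
print(np.cumsum(h_max ** 2))    # [1., 5.]
```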
For all causal and stable systems that have the same magnitude response, the minimum phase system has the minimum group delay. The following proof illustrates this idea of minimum group delay.
Suppose we consider one zero $a$ of the transfer function $H(z)$. Let's place this zero $a$ inside the unit circle ($|a| < 1$) and see how the group delay is affected:
$$a = |a| e^{i\theta_a}, \quad \text{where } \theta_a = \operatorname{Arg}(a).$$
Since the zero $a$ contributes the factor $1 - a z^{-1}$ to the transfer function, the phase contributed by this term is the following:
$$\phi_a(\omega) \triangleq \operatorname{Arg}\left(1 - a e^{-i\omega}\right) = \operatorname{Arg}\left(1 - |a| e^{-i(\omega - \theta_a)}\right) = \arctan\left(\frac{|a| \sin(\omega - \theta_a)}{1 - |a| \cos(\omega - \theta_a)}\right).$$
$\phi_a(\omega)$ contributes the following to the group delay:
$$-\frac{d\phi_a(\omega)}{d\omega} = \frac{|a|^2 - |a|\cos(\omega - \theta_a)}{1 + |a|^2 - 2|a|\cos(\omega - \theta_a)} = \frac{|a| - \cos(\omega - \theta_a)}{|a| + \frac{1}{|a|} - 2\cos(\omega - \theta_a)}.$$
The denominator and $\theta_a$ are invariant to reflecting the zero $a$ outside of the unit circle, i.e., replacing $a$ with $\left(a^{-1}\right)^{*}$. However, by reflecting $a$ outside of the unit circle, we increase the magnitude of $|a|$ in the numerator. Thus, having $a$ inside the unit circle minimizes the group delay contributed by the factor $1 - a z^{-1}$. We can extend this result to the general case of more than one zero, since the phase of the multiplicative factors of the form $1 - a_i z^{-1}$ is additive; i.e., for a transfer function with $N$ zeros,
$$\operatorname{Arg}\left(\prod_{i=1}^{N} \left(1 - a_i e^{-i\omega}\right)\right) = \sum_{i=1}^{N} \operatorname{Arg}\left(1 - a_i e^{-i\omega}\right).$$
So, a minimum phase system with all zeros inside the unit circle minimizes the group delay since the group delay of each individual zero is minimized.
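This can be observed numerically with SciPy's group_delay (using the same illustrative filter pair as in the energy-delay example above; the comparison itself is a sketch, not taken from the text):

```python
import numpy as np
from scipy.signal import group_delay

# Identical magnitude responses; the first filter has its zero inside the
# unit circle, the second has it reflected outside.
b_min = [2.0, -1.0]   # zero at 0.5
b_max = [-1.0, 2.0]   # zero at 2.0

w, gd_min = group_delay((b_min, [1.0]))
_, gd_max = group_delay((b_max, [1.0]))

# The minimum-phase filter has the smaller group delay at every frequency.
print(np.all(gd_min <= gd_max))   # True
```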
Systems that are causal and stable but whose inverses are causal and unstable are known as non-minimum-phase systems. A given non-minimum-phase system will have a greater phase contribution than the minimum-phase system with the equivalent magnitude response.
A maximum-phase system is the opposite of a minimum-phase system. A causal and stable LTI system is a maximum-phase system if its inverse is causal and unstable. That is, the zeros of the discrete-time system are outside the unit circle, and the zeros of the continuous-time system are in the right-hand side of the complex plane.
Such a system is called a maximum-phase system because it has the maximum group delay of the set of systems that have the same magnitude response. In this set of equal-magnitude-response systems, the maximum phase system will have maximum energy delay.
For example, the two continuous-time LTI systems described by the transfer functions
$$\frac{s + 10}{s + 5} \quad \text{and} \quad \frac{s - 10}{s + 5}$$
have equivalent magnitude responses; however, the second system has a much larger contribution to the phase shift. Hence, in this set, the second system is the maximum-phase system and the first system is the minimum-phase system. Such systems are also commonly referred to as non-minimum-phase systems, and they raise many stability concerns in control. One recent solution to these systems is moving the RHP zeros to the LHP using the PFCD method. [3]
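A numerical check of this example (a sketch assuming SciPy; the frequency grid is an arbitrary choice) confirms the equal magnitudes and the much larger phase excursion of the second system:

```python
import numpy as np
from scipy.signal import freqs

# The two systems from the example: H1(s) = (s + 10)/(s + 5) (minimum phase)
# and H2(s) = (s - 10)/(s + 5) (maximum phase).
w = np.logspace(-1, 3, 500)
_, H1 = freqs([1, 10], [1, 5], worN=w)
_, H2 = freqs([1, -10], [1, 5], worN=w)

# Identical magnitude responses ...
print(np.allclose(np.abs(H1), np.abs(H2)))   # True
# ... but the maximum-phase system contributes far more phase shift.
print(np.ptp(np.unwrap(np.angle(H1))))       # small total phase excursion
print(np.ptp(np.unwrap(np.angle(H2))))       # ~ pi, much larger
```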
A mixed-phase system has some of its zeros inside the unit circle and others outside the unit circle. Thus, its group delay is neither minimum nor maximum but lies somewhere between the group delays of the equivalent minimum-phase and maximum-phase systems.
For example, a continuous-time LTI system that is stable and causal but whose transfer function has zeros on both the left- and right-hand sides of the complex plane is a mixed-phase system. To control transfer functions that include such systems, methods such as the internal model controller (IMC), [4] the generalized Smith's predictor (GSP), [5] and parallel feedforward control with derivative (PFCD) [6] have been proposed.
A linear-phase system has constant group delay. Non-trivial linear phase or nearly linear phase systems are also mixed phase.