Bode's sensitivity integral

[Figure: Block diagram of feedback control of a dynamical process.]

Bode's sensitivity integral, discovered by Hendrik Wade Bode, is a formula that quantifies some of the limitations in feedback control of linear time-invariant systems. Let L(s) be the loop transfer function and S(s) = 1/(1 + L(s)) be the sensitivity function.


In the diagram, P is a dynamical process that has a transfer function P(s). The controller, C, has the transfer function C(s). The controller attempts to cause the process output, y, to track the reference input, r. Disturbances, d, and measurement noise, n, may cause undesired deviations of the output. Loop gain is defined by L(s) = P(s)C(s).
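
As a concrete illustration of these quantities, the short Python sketch below evaluates the loop gain and the sensitivity function on a frequency grid. The plant P(s) = 1/(s + 1)^2 and the proportional controller C(s) = 2 are assumed examples chosen for illustration, not taken from the article.

```python
# Minimal sketch: loop gain and sensitivity function on a frequency grid.
# The plant and controller below are assumed examples for illustration only.
import numpy as np

def P(s):
    """Assumed plant transfer function P(s) = 1/(s + 1)^2."""
    return 1.0 / (s + 1.0)**2

def C(s):
    """Assumed proportional controller C(s) = 2."""
    return 2.0

omega = np.logspace(-2, 2, 1000)   # frequencies in rad/s
s = 1j * omega
L = P(s) * C(s)                    # loop gain L(s) = P(s) C(s)
S = 1.0 / (1.0 + L)                # sensitivity function S(s) = 1/(1 + L(s))

# |S| < 1 (disturbance attenuation) at low frequency, but |S| > 1 somewhere
# at higher frequency -- the trade-off made precise by the integral below.
print(np.abs(S).min(), np.abs(S).max())
```

Even in this simple example, |S(jω)| drops below one at low frequencies but exceeds one near the closed-loop bandwidth, which is exactly the trade-off the integral below makes precise.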

The following holds:

$$\int_0^\infty \ln |S(j\omega)| \, d\omega = \int_0^\infty \ln \left| \frac{1}{1 + L(j\omega)} \right| \, d\omega = \pi \sum_k \operatorname{Re}(p_k) - \frac{\pi}{2} \lim_{s \to \infty} s L(s)$$

where the $p_k$ are the poles of L in the right half plane (unstable poles).

If L has at least two more poles than zeros (so that $\lim_{s \to \infty} s L(s) = 0$), and has no poles in the right half plane (i.e., is stable), the equation simplifies to:

$$\int_0^\infty \ln |S(j\omega)| \, d\omega = 0$$

This equality shows that if sensitivity to disturbances is suppressed over some frequency range, it is necessarily increased over some other range: the area of sensitivity reduction (where ln|S(jω)| < 0) must be balanced by an equal area of sensitivity increase (where ln|S(jω)| > 0). This trade-off has been called the "waterbed effect." [1]
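
The trade-off can be checked numerically. The rough Python sketch below (the loop gains are assumed examples, not from the article) approximates the integral of ln|S(jω)| by a trapezoidal sum over a truncated frequency grid: for a stable loop with two more poles than zeros the result is close to 0, while adding a single right-half-plane pole at s = 1 shifts it to approximately π.

```python
# Rough numerical check of Bode's sensitivity integral (assumed example loops).
import numpy as np

omega = np.logspace(-4, 4, 200_000)   # dense, truncated frequency grid (rad/s)
s = 1j * omega

def sensitivity_integral(L):
    """Trapezoidal approximation of the integral of ln|1/(1 + L(jw))| dw."""
    f = np.log(np.abs(1.0 / (1.0 + L)))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))

# Stable loop with two more poles than zeros: L(s) = 2/(s + 1)^2 -> integral ~ 0.
print(sensitivity_integral(2.0 / (s + 1.0)**2))

# One right-half-plane pole at s = 1 (closed loop still stable):
# L(s) = 10/((s - 1)(s + 3)) -> integral ~ pi * Re(1) = pi.
print(sensitivity_integral(10.0 / ((s - 1.0) * (s + 3.0))), np.pi)
```

Because the grid is truncated at ω = 10^4 rad/s and ln|S(jω)| decays only like 1/ω², the computed values match 0 and π only to about 10⁻³; extending the grid shrinks that gap.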


References

Further reading

See also