Step response

A typical step response for a second-order system, illustrating overshoot, followed by ringing, all subsiding within a settling time.

The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions. In electronic engineering and control theory, step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter.


From a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another.

Formal mathematical description

Figure 4: Black box representation of a dynamical system, its input and its step response.

This section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system: all notations and assumptions required for the following description are listed here.

Nonlinear dynamical system

For a general dynamical system, the step response is defined as follows:

It is the evolution function when the control inputs (or source term, or forcing inputs) are Heaviside step functions: the notation emphasizes this concept by showing H(t) as a subscript.

Linear dynamical system

For a linear time-invariant (LTI) black box, taking the initial time t0 = 0 for notational convenience, the step response can be obtained by convolution of the Heaviside step function control and the impulse response h(t) of the system itself:

a(t) = (h ∗ H)(t) = ∫−∞^t h(τ) dτ,

which for an LTI system is equivalent to just integrating the latter. Conversely, for an LTI system, the derivative of the step response yields the impulse response:

h(t) = (d/dt) a(t).

However, these simple relations are not true for a non-linear or time-variant system. [1]
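For an LTI system these relations can be checked numerically. The sketch below uses a one-pole system with an assumed time constant τ = 1, integrates its impulse response h(t) = (1/τ)exp(−t/τ), and compares the result with the known step response 1 − exp(−t/τ):

```python
import numpy as np

# Minimal numerical check of the LTI relation "step response = integral of
# impulse response" for a one-pole system; tau is an illustrative assumption.
tau = 1.0
t = np.linspace(0.0, 10.0, 100_001)
dt = t[1] - t[0]

h = np.exp(-t / tau) / tau                      # impulse response h(t)

# Running trapezoidal integral of h(t) approximates the step response a(t).
a_numeric = np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) * dt / 2)))
a_exact = 1.0 - np.exp(-t / tau)                # known step response

print(np.max(np.abs(a_numeric - a_exact)))      # small discretization error
```

Differentiating `a_numeric` with `np.gradient` recovers `h` to within the same discretization error, illustrating the converse relation.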

Time domain versus frequency domain

Instead of frequency response, system performance may be specified in terms of parameters describing time-dependence of response. The step response can be described by the following quantities related to its time behavior: rise time, overshoot, settling time, and ringing.

In the case of linear dynamic systems, much can be inferred about the system from these characteristics. Below the step response of a simple two-pole amplifier is presented, and some of these terms are illustrated.

In LTI systems, the function that has the steepest slew rate that doesn't create overshoot or ringing is the Gaussian function. This is because it is the only function whose Fourier transform has the same shape.

Feedback amplifiers

Figure 1: Ideal negative feedback model; open-loop gain is AOL and feedback factor is β.

This section describes the step response of a simple negative feedback amplifier shown in Figure 1. The feedback amplifier consists of a main open-loop amplifier of gain AOL and a feedback loop governed by a feedback factor β. This feedback amplifier is analyzed to determine how its step response depends upon the time constants governing the response of the main amplifier, and upon the amount of feedback used.

A negative-feedback amplifier has gain given by (see negative feedback amplifier):

AFB = AOL / (1 + βAOL),

where AOL = open-loop gain, AFB = closed-loop gain (the gain with negative feedback present) and β = feedback factor.

With one dominant pole

In many cases, the forward amplifier can be sufficiently well modeled in terms of a single dominant pole of time constant τ, that is, as an open-loop gain given by:

AOL(jω) = A0 / (1 + jωτ),

with zero-frequency gain A0 and angular frequency ω = 2πf. This forward amplifier has unit step response

SOL(t) = A0 (1 − exp(−t/τ)),

an exponential approach from 0 toward the new equilibrium value of A0.

The one-pole amplifier's transfer function leads to the closed-loop gain:

AFB(jω) = [A0/(1 + βA0)] / [1 + jωτ/(1 + βA0)].

This closed-loop gain is of the same form as the open-loop gain: a one-pole filter. Its step response is of the same form: an exponential decay toward the new equilibrium value. But the time constant of the closed-loop step response is τ/(1 + βA0), so it is faster than the forward amplifier's response by a factor of 1 + βA0:

SFB(t) = [A0/(1 + βA0)] (1 − exp(−t(1 + βA0)/τ)).

As the feedback factor β is increased, the step response will get faster, until the original assumption of one dominant pole is no longer accurate. If there is a second pole, then as the closed-loop time constant approaches the time constant of the second pole, a two-pole analysis is needed.
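The trade of gain for speed can be sketched numerically. The component values below (A0, β, τ) are illustrative assumptions:

```python
import numpy as np

# One-pole feedback amplifier: closing the loop shrinks the time constant
# from tau to tau/(1 + beta*A0) and the gain from A0 to A0/(1 + beta*A0).
# A0, beta and tau are illustrative assumptions.
A0, beta, tau = 1000.0, 0.1, 1e-3

A_fb = A0 / (1.0 + beta * A0)            # closed-loop gain
tau_cl = tau / (1.0 + beta * A0)         # closed-loop time constant

t = np.linspace(0.0, 5.0 * tau, 1001)
step_open = A0 * (1.0 - np.exp(-t / tau))          # open-loop step response
step_closed = A_fb * (1.0 - np.exp(-t / tau_cl))   # closed-loop step response

print(tau / tau_cl)   # speed-up factor, equal to 1 + beta*A0 = 101
```

With these values the closed-loop response settles roughly a hundred times faster than the open-loop response, at the cost of a gain reduced by the same factor.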

Two-pole amplifiers

In the case that the open-loop gain has two poles (two time constants, τ1, τ2), the step response is a bit more complicated. The open-loop gain is given by:

AOL(jω) = A0 / [(1 + jωτ1)(1 + jωτ2)],

with zero-frequency gain A0 and angular frequency ω = 2πf.

Analysis

The two-pole amplifier's transfer function leads to the closed-loop gain:

AFB(jω) = A0 / [(1 + jωτ1)(1 + jωτ2) + βA0].

Figure 2: Conjugate pole locations for a two-pole feedback amplifier; Re(s) is the real axis and Im(s) is the imaginary axis.

The time dependence of the amplifier is easy to discover by switching variables to s = jω, whereupon the gain becomes:

AFB(s) = A0 / [τ1τ2 s² + (τ1 + τ2) s + 1 + βA0].

The poles of this expression (that is, the zeros of the denominator) occur at:

2s = −(1/τ1 + 1/τ2) ± √[(1/τ1 − 1/τ2)² − 4βA0/(τ1τ2)],

which shows for large enough values of βA0 the square root becomes the square root of a negative number, that is, the pole positions are complex-conjugate numbers, either s+ or s−; see Figure 2:

s± = −ρ ± jμ,

with

ρ = (1/2)(1/τ1 + 1/τ2)

and

μ = (1/2) √[4βA0/(τ1τ2) − (1/τ1 − 1/τ2)²].
Using polar coordinates, with the magnitude of the radius to the roots given by |s| (Figure 2):

|s| = |s±| = √(ρ² + μ²),

and the angular coordinate φ is given by:

cos φ = ρ/|s|,  sin φ = μ/|s|.

Tables of Laplace transforms show that the time response of such a system is composed of combinations of the two functions:

exp(−ρt) sin(μt) and exp(−ρt) cos(μt),

which is to say, the solutions are damped oscillations in time. In particular, the unit step response of the system is: [2]

S(t) = [βA0/(1 + βA0)] [1 − exp(−ρt) sin(μt + φ)/sin(φ)],

which simplifies to

S(t) ≈ 1 − exp(−ρt) sin(μt + φ)/sin(φ)

when A0 tends to infinity and the feedback factor β is one.

Notice that the damping of the response is set by ρ, that is, by the time constants of the open-loop amplifier. In contrast, the frequency of oscillation is set by μ, that is, by the feedback parameter through βA0. Because ρ is a sum of reciprocals of time constants, it is interesting to notice that ρ is dominated by the shorter of the two.
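The pole formulas above can be cross-checked against the roots of the closed-loop denominator τ1τ2 s² + (τ1 + τ2)s + (1 + βA0). The parameter values below are illustrative assumptions:

```python
import numpy as np

# Closed-loop denominator of the two-pole amplifier:
#   tau1*tau2*s^2 + (tau1 + tau2)*s + (1 + beta*A0) = 0.
# Its roots should equal -rho +/- j*mu. Parameter values are assumptions.
beta_A0 = 1e4
tau1, tau2 = 1e-3, 1e-6

poles = np.roots([tau1 * tau2, tau1 + tau2, 1.0 + beta_A0])

rho = 0.5 * (1.0 / tau1 + 1.0 / tau2)
mu = 0.5 * np.sqrt(4.0 * beta_A0 / (tau1 * tau2)
                   - (1.0 / tau1 - 1.0 / tau2) ** 2)

print(poles)          # a complex-conjugate pair, -rho +/- j*mu
print(-rho, mu)
```

For this βA0 the discriminant is negative, so the poles form a conjugate pair and the step response rings, as described above.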

Results

Figure 3: Step response of a linear two-pole feedback amplifier; time is in units of 1/ρ, that is, in terms of the time constants of AOL; curves are plotted for three values of μ, which is controlled by β.

Figure 3 shows the time response to a unit step input for three values of the parameter μ. It can be seen that the frequency of oscillation increases with μ, but the oscillations are contained between the two asymptotes set by the exponentials [1 − exp(−ρt)] and [1 + exp(−ρt)]. These asymptotes are determined by ρ and therefore by the time constants of the open-loop amplifier, independent of feedback.

The phenomenon of oscillation about the final value is called ringing. The overshoot is the maximum swing above final value, and clearly increases with μ. Likewise, the undershoot is the minimum swing below final value, again increasing with μ. The settling time is the time for departures from final value to sink below some specified level, say 10% of final value.

The dependence of settling time upon μ is not obvious, and the approximation of a two-pole system probably is not accurate enough to make any real-world conclusions about feedback dependence of settling time. However, the asymptotes [1 − exp(−ρt)] and [1 + exp(−ρt)] clearly impact settling time, and they are controlled by the time constants of the open-loop amplifier, particularly the shorter of the two time constants. That suggests that a specification on settling time must be met by appropriate design of the open-loop amplifier.

The two major conclusions from this analysis are:

  1. Feedback controls the amplitude of oscillation about final value for a given open-loop amplifier and given values of open-loop time constants, τ1 and τ2.
  2. The open-loop amplifier decides settling time. It sets the time scale of Figure 3, and the faster the open-loop amplifier, the faster this time scale.

As an aside, it may be noted that real-world departures from this linear two-pole model occur due to two major complications: first, real amplifiers have more than two poles, as well as zeros; and second, real amplifiers are nonlinear, so their step response changes with signal amplitude.

Figure 4: Step response for three values of α. Top: α = 4; Center: α = 2; Bottom: α = 0.5. As α is reduced the pole separation reduces, and the overshoot increases.

Control of overshoot

How overshoot may be controlled by appropriate parameter choices is discussed next.

Using the equations above, the amount of overshoot can be found by differentiating the step response and finding its maximum value. The result for the maximum step response Smax is: [3]

Smax = 1 + exp(−πρ/μ).

The final value of the step response is 1, so the exponential is the actual overshoot itself. It is clear the overshoot is zero if μ = 0, which is the condition:

4βA0/(τ1τ2) = (1/τ1 − 1/τ2)².

This quadratic is solved for the ratio of time constants by setting x = (τ1/τ2)^(1/2), with the result

x = √(βA0) + √(βA0 + 1).

Because βA0 ≫ 1, the 1 in the square root can be dropped, and the result is

x = 2√(βA0), that is, τ1/τ2 = 4βA0.

In words, the first time constant must be much larger than the second. To be more adventurous than a design allowing for no overshoot, we can introduce a factor α in the above relation:

τ1/τ2 = αβA0,

and let α be set by the amount of overshoot that is acceptable.

Figure 4 illustrates the procedure. Comparing the top panel (α = 4) with the lower panel (α = 0.5) shows that lower values for α increase the rate of response, but also increase the overshoot. The case α = 2 (center panel) is the maximally flat design that shows no peaking in the Bode gain-versus-frequency plot. That design has a rule-of-thumb built-in safety margin to deal with non-ideal realities like multiple poles (or zeros), nonlinearity (signal-amplitude dependence) and manufacturing variations, any of which can lead to too much overshoot. The adjustment of the pole separation (that is, setting α) is the subject of frequency compensation, and one such method is pole splitting.
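The trend in Figure 4 can be reproduced from the overshoot expression exp(−πρ/μ). A minimal sketch, with βA0 and τ2 as illustrative assumptions and τ1 set by the pole-separation factor α:

```python
import numpy as np

def overshoot(alpha, beta_A0=1e4, tau2=1e-6):
    """Fractional overshoot exp(-pi*rho/mu) for tau1 = alpha*beta_A0*tau2."""
    tau1 = alpha * beta_A0 * tau2
    rho = 0.5 * (1.0 / tau1 + 1.0 / tau2)
    mu_sq = beta_A0 / (tau1 * tau2) - 0.25 * (1.0 / tau1 - 1.0 / tau2) ** 2
    if mu_sq <= 0.0:                 # real poles: no ringing, no overshoot
        return 0.0
    return np.exp(-np.pi * rho / np.sqrt(mu_sq))

# Smaller alpha (less pole separation) -> larger overshoot, as in Figure 4.
for alpha in (0.5, 2.0, 4.0):
    print(alpha, overshoot(alpha))
```

For α = 2, the maximally flat case, this gives an overshoot close to exp(−π) ≈ 4.3%, the familiar value for a two-pole maximally flat step response.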

Control of settling time

The amplitude of ringing in the step response in Figure 3 is governed by the damping factor exp(−ρt). That is, if we specify some acceptable step-response deviation from final value, say Δ, that is:

exp(−ρt) ≤ Δ,

this condition is satisfied regardless of the value of βAOL provided the time is longer than the settling time, say tS, given by: [4]

tS = (1/ρ) ln(1/Δ) ≈ 2τ2 ln(1/Δ),

where τ1 ≫ τ2 is applicable because of the overshoot control condition, which makes τ1 = αβAOLτ2. Often the settling time condition is referred to by saying the settling period is inversely proportional to the unity gain bandwidth, because 1/(2πτ2) is close to this bandwidth for an amplifier with typical dominant pole compensation. However, this result is more precise than this rule of thumb. As an example of this formula, if Δ = 1/e⁴ ≈ 1.8%, the settling time condition is tS = 8τ2.

In general, control of overshoot sets the time constant ratio, and settling time tS sets τ2. [5] [6] [7]
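The settling-time estimate and the worked example Δ = 1/e⁴ can be sketched directly; τ2 is an illustrative assumption:

```python
import math

# Settling time from exp(-rho*t_S) = Delta, with rho ~ 1/(2*tau2)
# (valid when tau1 >> tau2); tau2 is an illustrative assumption.
tau2 = 1e-6
Delta = math.exp(-4)                  # about 1.8 % tolerance band

t_S = 2.0 * tau2 * math.log(1.0 / Delta)
print(t_S / tau2)                     # 8.0, reproducing t_S = 8*tau2
```

Tightening the tolerance band only lengthens tS logarithmically, while tS scales linearly with τ2, consistent with the conclusion that settling time is set by the open-loop amplifier.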

System identification using the step response: system with two real poles

Step response of the system with input x(t) = 1. Measure the significant points k, t25 and t75.

This method uses significant points of the step response. There is no need to guess tangents to the measured signal. The equations are derived using numerical simulations, determining some significant ratios and fitting parameters of nonlinear equations; see [8].

Here are the steps:

  • Measure the step response of the system to an input step signal.
  • Determine the time spans t25 and t75 where the step response reaches 25% and 75% of the steady-state output value.
  • Determine the system steady-state gain k.
  • Calculate the intermediate ratios and fitting parameters given in [8].
  • Determine the two time constants from them.
  • Calculate the transfer function of the identified system in the Laplace domain.
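The measurement of the significant points can be sketched as follows. The PT2 parameters k, T1, T2 are illustrative assumptions, and the back-calculation of the time constants from t25 and t75 (the fitted equations of [8]) is omitted:

```python
import numpy as np

# Step response of a system with two real poles, G(s) = k/((1+T1*s)(1+T2*s)):
#   y(t) = k * (1 - (T1*exp(-t/T1) - T2*exp(-t/T2)) / (T1 - T2)).
# k, T1, T2 are illustrative assumptions (T1 != T2).
k, T1, T2 = 2.0, 1.0, 0.3
t = np.linspace(0.0, 20.0, 200_001)
y = k * (1.0 - (T1 * np.exp(-t / T1) - T2 * np.exp(-t / T2)) / (T1 - T2))

# Significant points: steady-state gain and the 25 % / 75 % crossing times.
# y is monotonically increasing for two real poles, so searchsorted applies.
k_measured = y[-1]
t25 = t[np.searchsorted(y, 0.25 * k_measured)]
t75 = t[np.searchsorted(y, 0.75 * k_measured)]

print(k_measured, t25, t75)
```

No tangent construction is needed: the two crossing times and the steady-state value are the only measurements the method requires.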

Phase margin

Figure 5: Bode gain plot to find phase margin; scales are logarithmic, so labeled separations are multiplicative factors. For example, f0 dB = βA0 × f1.

Next, the choice of pole ratio τ1/τ2 is related to the phase margin of the feedback amplifier. [9] The procedure outlined in the Bode plot article is followed. Figure 5 is the Bode gain plot for the two-pole amplifier in the range of frequencies up to the second pole position. The assumption behind Figure 5 is that the frequency f0 dB lies between the lowest pole at f1 = 1/(2πτ1) and the second pole at f2 = 1/(2πτ2). As indicated in Figure 5, this condition is satisfied for values of α ≥ 1.

Using Figure 5, the frequency (denoted by f0 dB) is found where the loop gain βA0 satisfies the unity gain or 0 dB condition, as defined by:

|βAOL(f0 dB)| = 1.

The slope of the downward leg of the gain plot is −20 dB/decade; for every factor of ten increase in frequency, the gain drops by the same factor:

|βAOL(f)| ≈ βA0 (f1/f) for f1 ≪ f ≪ f2, so f0 dB = βA0 f1.

The phase margin is the departure of the phase at f0 dB from −180°. Thus, the margin is:

φm = 180° − tan⁻¹(f0 dB/f1) − tan⁻¹(f0 dB/f2).

Because f0 dB / f1 = βA0 ≫ 1, the term in f1 is 90°. That makes the phase margin:

φm = 90° − tan⁻¹(f0 dB/f2) = 90° − tan⁻¹(1/α),

where f0 dB/f2 = 1/α follows from the overshoot control condition τ1 = αβA0τ2.

In particular, for case α = 1, φm = 45°, and for α = 2, φm = 63.4°. Sansen [10] recommends α = 3, φm = 71.6° as a "good safety position to start with".
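The two-pole phase-margin estimate can be cross-checked numerically against the actual loop-gain phase at the unity-gain frequency. βA0, τ2 and α below are illustrative assumptions:

```python
import numpy as np

# Loop gain beta_A0 / ((1 + j*w*tau1)(1 + j*w*tau2)), tau1 = alpha*beta_A0*tau2.
# Find the unity-gain frequency numerically and read off the phase margin,
# then compare with the estimate phi_m = 90 deg - arctan(1/alpha).
beta_A0, tau2, alpha = 1e4, 1e-6, 2.0
tau1 = alpha * beta_A0 * tau2

w = np.logspace(2, 8, 200_001)                    # rad/s sweep
L = beta_A0 / ((1 + 1j * w * tau1) * (1 + 1j * w * tau2))

i0 = np.argmin(np.abs(np.abs(L) - 1.0))           # index where |loop gain| = 1
phi_m = 180.0 + np.degrees(np.angle(L[i0]))       # phase margin in degrees

estimate = 90.0 - np.degrees(np.arctan(1.0 / alpha))
print(phi_m, estimate)
```

The numerical margin agrees with the estimate to within a few degrees; the small discrepancy arises because the actual unity-gain frequency lies slightly below βA0 f1 once the second pole begins to reduce the gain.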

If α is increased by shortening τ2, the settling time tS also is shortened. If α is increased by lengthening τ1, the settling time tS is little altered. More commonly, both τ1 and τ2 change, for example if the technique of pole splitting is used.

As an aside, for an amplifier with more than two poles, the diagram of Figure 5 still may be made to fit the Bode plots by making f2 a fitting parameter, referred to as an "equivalent second pole" position. [11]


References and notes

  1. Yuriy Shmaliy (2007). Continuous-Time Systems. Springer Science & Business Media. p. 46. ISBN 978-1-4020-6272-8.
  2. Benjamin C. Kuo & Farid Golnaraghi (2003). Automatic Control Systems (Eighth ed.). New York: Wiley. p. 253. ISBN 0-471-13476-7.
  3. Kuo & Golnaraghi (2003), op. cit., p. 259.
  4. This estimate is a bit conservative (long) because the factor 1/sin(φ) in the overshoot contribution to S(t) has been replaced by 1/sin(φ) ≈ 1.
  5. David A. Johns & Ken Martin (1997). Analog Integrated Circuit Design. New York: Wiley. pp. 234–235. ISBN 0-471-14448-7.
  6. Willy M. C. Sansen (2006). Analog Design Essentials. Dordrecht, The Netherlands: Springer. §0528, p. 163. ISBN 0-387-25746-2.
  7. According to Johns and Martin, op. cit., settling time is significant in switched-capacitor circuits, for example, where an op amp settling time must be less than half a clock period for sufficiently rapid charge transfer.
  8. "Identification of a damped PT2 system | Hackaday.io". hackaday.io. Retrieved 2018-08-06.
  9. The gain margin of the amplifier cannot be found using a two-pole model, because gain margin requires determination of the frequency f180 where the gain flips sign, and this never happens in a two-pole system. If we know f180 for the amplifier at hand, the gain margin can be found approximately, but f180 then depends on the third and higher pole positions, as does the gain margin, unlike the estimate of phase margin, which is a two-pole estimate.
  10. Sansen (2006), op. cit., §0526, p. 162.
  11. Gaetano Palumbo & Salvatore Pennisi (2002). Feedback Amplifiers: Theory and Design. Boston/Dordrecht/London: Kluwer Academic Press. §4.4, pp. 97–98. ISBN 0-7923-7643-9.

