Backstepping

In control theory, backstepping is a technique developed circa 1990 by Myroslav Sparavalo, Petar V. Kokotovic, and others [1] [2] [3] for designing stabilizing controls for a special class of nonlinear dynamical systems. These systems are built from subsystems that radiate out from an irreducible subsystem that can be stabilized using some other method. Because of this recursive structure, the designer can start the design process at the known-stable system and "back out" new controllers that progressively stabilize each outer subsystem. The process terminates when the final external control is reached. Hence, this process is known as backstepping. [4]

Backstepping approach

The backstepping approach provides a recursive method for stabilizing the origin of a system in strict-feedback form. That is, consider a system of the form [4]

$$\begin{aligned}
\dot{\mathbf{x}} &= f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 &= f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\
\dot{z}_2 &= f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3\\
&\;\;\vdots\\
\dot{z}_{k-1} &= f_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) z_k\\
\dot{z}_k &= f_k(\mathbf{x}, z_1, \ldots, z_k) + g_k(\mathbf{x}, z_1, \ldots, z_k) u
\end{aligned}$$

where

  • $\mathbf{x} \in \mathbb{R}^n$ with $n \geq 1$,
  • $z_1, z_2, \ldots, z_{k-1}, z_k$ are scalars,
  • $u$ is a scalar input,
  • $f_1, f_2, \ldots, f_k$ vanish at the origin (i.e., $f_i(\mathbf{0}, 0, \ldots, 0) = 0$), and
  • $g_1, g_2, \ldots, g_k$ are nonzero over the domain of interest (i.e., $g_i \neq 0$ for $1 \leq i \leq k$).

Also assume that the subsystem

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

is stabilized to the origin (i.e., $\mathbf{x} \to \mathbf{0}$ as $t \to \infty$) by some known control $u_x(\mathbf{x})$ such that $u_x(\mathbf{0}) = 0$. It is also assumed that a Lyapunov function $V_x$ for this stable subsystem is known. That is, this $\mathbf{x}$ subsystem is stabilized by some other method, and backstepping extends its stability to the shell around it.

In systems of this strict-feedback form around a stable $\mathbf{x}$ subsystem, the backstepping approach determines how to stabilize the $\mathbf{x}$ subsystem using $z_1$, and then proceeds with determining how to make the next state $z_2$ drive $z_1$ to the control required to stabilize $\mathbf{x}$. Hence, the process "steps backward" from $\mathbf{x}$ out of the strict-feedback form system until the ultimate control $u$ is designed.

Recursive Control Design Overview

  1. It is given that the smaller (i.e., lower-order) subsystem
    $$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$
    is already stabilized to the origin by some control $u_x(\mathbf{x})$ where $u_x(\mathbf{0}) = 0$. That is, choice of $u_x$ to stabilize this system must occur using some other method. It is also assumed that a Lyapunov function $V_x$ for this stable subsystem is known. Backstepping provides a way to extend the controlled stability of this subsystem to the larger system.
  2. A control $u_1$ is designed so that the system
    $$\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1$$
    is stabilized so that $z_1$ follows the desired control $u_x(\mathbf{x})$. The control design is based on the augmented Lyapunov function candidate
    $$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}\big(z_1 - u_x(\mathbf{x})\big)^2$$
    The control $u_1$ can be picked to bound $\dot{V}_1$ away from zero.
  3. A control $u_2$ is designed so that the system
    $$\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) u_2$$
    is stabilized so that $z_2$ follows the desired control $u_1(\mathbf{x}, z_1)$. The control design is based on the augmented Lyapunov function candidate
    $$V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}, z_1) + \frac{1}{2}\big(z_2 - u_1(\mathbf{x}, z_1)\big)^2$$
    The control $u_2$ can be picked to bound $\dot{V}_2$ away from zero.
  4. This process continues until the actual $u$ is known, and
    • The real control $u$ stabilizes $z_k$ to fictitious control $u_{k-1}$.
    • The fictitious control $u_{k-1}$ stabilizes $z_{k-1}$ to fictitious control $u_{k-2}$.
    • The fictitious control $u_{k-2}$ stabilizes $z_{k-2}$ to fictitious control $u_{k-3}$.
    • ...
    • The fictitious control $u_2$ stabilizes $z_2$ to fictitious control $u_1$.
    • The fictitious control $u_1$ stabilizes $z_1$ to fictitious control $u_x$.
    • The fictitious control $u_x$ stabilizes $\mathbf{x}$ to the origin.

This process is known as backstepping because it starts with the requirements on some internal subsystem for stability and progressively steps back out of the system, maintaining stability at each step. Because

$$f_i(\mathbf{0}, 0, \ldots, 0) = 0 \quad \text{for } 1 \leq i \leq k, \qquad u_x(\mathbf{0}) = 0, \qquad u_1(\mathbf{0}, 0) = 0, \qquad \ldots, \qquad u_{k-1}(\mathbf{0}, 0, \ldots, 0) = 0,$$

then the resulting system has an equilibrium at the origin (i.e., where $\mathbf{x} = \mathbf{0}$, $z_1 = 0$, $z_2 = 0$, ..., $z_{k-1} = 0$, and $z_k = 0$) that is globally asymptotically stable.

Integrator Backstepping

Before describing the backstepping procedure for general strict-feedback form dynamical systems, it is convenient to discuss the approach for a smaller class of strict-feedback form systems. These systems connect a series of integrators to the input of a system with a known feedback-stabilizing control law, and so the stabilizing approach is known as integrator backstepping. With a small modification, the integrator backstepping approach can be extended to handle all strict-feedback form systems.

Single-integrator Equilibrium

Consider the dynamical system

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = u_1 \tag{1}$$

where $\mathbf{x} \in \mathbb{R}^n$ and $z_1$ is a scalar. This system is a cascade connection of an integrator with the $\mathbf{x}$ subsystem (i.e., the input $u_1$ enters an integrator, and the integral $z_1$ enters the $\mathbf{x}$ subsystem).

We assume that $f_x(\mathbf{0}) = \mathbf{0}$, and so if $u_1 = 0$, $\mathbf{x} = \mathbf{0}$, and $z_1 = 0$, then

$$\dot{\mathbf{x}} = f_x(\mathbf{0}) + g_x(\mathbf{0}) \cdot 0 = \mathbf{0} \qquad \text{and} \qquad \dot{z}_1 = 0$$

So the origin $(\mathbf{x}, z_1) = (\mathbf{0}, 0)$ is an equilibrium (i.e., a stationary point) of the system. If the system ever reaches the origin, it will remain there forever after.

Single-integrator Backstepping

In this example, backstepping is used to stabilize the single-integrator system in Equation (1) around its equilibrium at the origin. That is, we wish to design a control law $u_1(\mathbf{x}, z_1)$ that ensures that the states $(\mathbf{x}, z_1)$ return to $(\mathbf{0}, 0)$ after the system is started from some arbitrary initial condition.

First, by assumption, the subsystem

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

with $u_x(\mathbf{0}) = 0$ has a Lyapunov function $V_x(\mathbf{x}) > 0$ such that

$$\dot{V}_x = \frac{\partial V_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\big) \leq -W(\mathbf{x})$$

where $W(\mathbf{x})$ is a positive-definite function. That is, we assume that we have already shown that this existing simpler $\mathbf{x}$ subsystem is stable (in the sense of Lyapunov). Roughly speaking, this notion of stability means that $V_x$ acts like a generalized energy of the $\mathbf{x}$ subsystem: as the states move, that energy decays at a rate of at least $W(\mathbf{x})$, and so the states must eventually settle at the origin, where the energy is minimum.

Our task is to find a control $u_1$ that makes our cascaded system also stable. So we must find a new Lyapunov function candidate for this new system. That candidate will depend upon the control $u_1$, and by choosing the control properly, we can ensure that it is decaying everywhere as well.

By adding and subtracting $g_x(\mathbf{x}) u_x(\mathbf{x})$, the $\dot{\mathbf{x}}$ equation of the cascaded system is

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 - g_x(\mathbf{x}) u_x(\mathbf{x})$$

which we can re-group to get

$$\dot{\mathbf{x}} = \big(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\big) + g_x(\mathbf{x})\big(z_1 - u_x(\mathbf{x})\big)$$

So our cascaded supersystem encapsulates the known-stable subsystem plus some error perturbation generated by the integrator.

Introduce the error variable $e_1 \triangleq z_1 - u_x(\mathbf{x})$, so that

$$\dot{e}_1 = \dot{z}_1 - \dot{u}_x = u_1 - \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big)$$

Additionally, we let $v_1 \triangleq \dot{e}_1$ so that $u_1 = v_1 + \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big)$ and

$$\dot{\mathbf{x}} = \big(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\big) + g_x(\mathbf{x}) e_1, \qquad \dot{e}_1 = v_1$$

We seek to stabilize this error system by feedback through the new control $v_1$. By stabilizing the system at $e_1 = 0$, the state $z_1$ will track the desired control $u_x(\mathbf{x})$, which will result in stabilizing the inner $\mathbf{x}$ subsystem.

For the candidate Lyapunov function $V_1(\mathbf{x}, e_1) \triangleq V_x(\mathbf{x}) + \frac{1}{2} e_1^2$,

$$\dot{V}_1 = \frac{\partial V_x}{\partial \mathbf{x}}\Big(\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\big) + g_x(\mathbf{x}) e_1\Big) + e_1 v_1$$

By distributing $\frac{\partial V_x}{\partial \mathbf{x}}$, we see that

$$\dot{V}_1 = \frac{\partial V_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\big) + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 v_1 \leq -W(\mathbf{x}) + e_1\left(\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) + v_1\right)$$

To ensure that $\dot{V}_1 < 0$ (i.e., to ensure stability of the supersystem), we pick the control law

$$v_1 = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 e_1$$

with $k_1 > 0$, and so

$$\dot{V}_1 \leq -W(\mathbf{x}) + e_1\left(\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 e_1\right)$$

After distributing the $e_1$ through,

$$\dot{V}_1 \leq -W(\mathbf{x}) - k_1 e_1^2 < 0$$
So our candidate Lyapunov function $V_1$ is a true Lyapunov function, and our system is stable under this control law $v_1$ (which corresponds to the control law $u_1$ because $u_1 = v_1 + \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big)$). Using the variables from the original coordinate system, the equivalent Lyapunov function is

$$V_1(\mathbf{x}, z_1) \triangleq V_x(\mathbf{x}) + \frac{1}{2}\big(z_1 - u_x(\mathbf{x})\big)^2 \tag{2}$$

As discussed below, this Lyapunov function will be used again when this procedure is applied iteratively to the multiple-integrator problem. Likewise, expressing $v_1$ and $e_1$ in the original coordinates yields the feedback-stabilizing control law

$$u_1(\mathbf{x}, z_1) \triangleq \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big) - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1\big(z_1 - u_x(\mathbf{x})\big) \tag{3}$$

The states $\mathbf{x}$ and $z_1$ and the functions $f_x$ and $g_x$ come from the system. The function $u_x$ comes from our known-stable $\mathbf{x}$ subsystem. The gain parameter $k_1 > 0$ affects the convergence rate of our system. Under this control law, our system is stable at the origin $(\mathbf{x}, z_1) = (\mathbf{0}, 0)$.

Recall that $u_1$ in Equation (3) drives the input of an integrator that is connected to a subsystem that is feedback-stabilized by the control law $u_x(\mathbf{x})$. Not surprisingly, the control $u_1$ has a $\dot{u}_x$ term that will be integrated to follow the stabilizing control law plus some offset. The other terms provide damping to remove that offset and any other perturbation effects that would be magnified by the integrator.

So because this system is feedback stabilized by $u_1(\mathbf{x}, z_1)$ and has Lyapunov function $V_1(\mathbf{x}, z_1)$ with $\dot{V}_1 \leq -W(\mathbf{x}) - k_1\big(z_1 - u_x(\mathbf{x})\big)^2$, it can be used as the upper subsystem in another single-integrator cascade system.
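As a concrete numeric sketch of Equation (3): the particular dynamics $f_x(x) = x^2$, $g_x(x) = 1$, the inner stabilizer $u_x(x) = -x^2 - x$ (which gives $\dot{x} = -x$, $V_x = x^2/2$, $W(x) = x^2$), and the gain $k_1 = 2$ are illustrative assumptions, not from the article.

```python
# Assumed scalar example: f_x(x) = x^2, g_x(x) = 1.
# Inner loop u_x(x) = -x^2 - x gives xdot = -x, with V_x = x^2/2 and W(x) = x^2.
k1 = 2.0

def u_x(x):
    return -x*x - x

def u1(x, z1):
    # Equation (3) pattern: (du_x/dx)(f_x + g_x z1) - (dV_x/dx) g_x - k1 (z1 - u_x)
    return (-2.0*x - 1.0)*(x*x + z1) - x - k1*(z1 - u_x(x))

# Forward-Euler simulation of xdot = x^2 + z1, z1dot = u1(x, z1)
x, z1, dt = 1.0, 0.0, 1e-3
for _ in range(20000):  # 20 seconds
    x, z1 = x + dt*(x*x + z1), z1 + dt*u1(x, z1)
# Both states settle near the origin.
```

Here the $(-2x - 1)$ factor is $\partial u_x / \partial x$ and the lone $-x$ is $\frac{\partial V_x}{\partial x} g_x$; swapping in a different $f_x$, $u_x$, or $V_x$ changes only those partial-derivative terms.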

Motivating Example: Two-integrator Backstepping

Before discussing the recursive procedure for the general multiple-integrator case, it is instructive to study the recursion present in the two-integrator case. That is, consider the dynamical system

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = z_2, \qquad \dot{z}_2 = u_2 \tag{4}$$

where $\mathbf{x} \in \mathbb{R}^n$ and $z_1$, $z_2$, and $u_2$ are scalars. This system is a cascade connection of the single-integrator system in Equation (1) with another integrator (i.e., the input $u_2$ enters through an integrator, and the output of that integrator enters the system in Equation (1) by its input).

By letting

$$\mathbf{x}_1 \triangleq \begin{bmatrix} \mathbf{x} \\ z_1 \end{bmatrix}, \qquad f_1(\mathbf{x}_1) \triangleq \begin{bmatrix} f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \\ 0 \end{bmatrix}, \qquad g_1(\mathbf{x}_1) \triangleq \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}$$

then the two-integrator system in Equation (4) becomes the single-integrator system

$$\dot{\mathbf{x}}_1 = f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2, \qquad \dot{z}_2 = u_2 \tag{5}$$

By the single-integrator procedure, the control law $u_1(\mathbf{x}, z_1)$ stabilizes the upper $z_1$-to-$\mathbf{x}$ subsystem using the Lyapunov function $V_1(\mathbf{x}, z_1)$, and so Equation (5) is a new single-integrator system that is structurally equivalent to the single-integrator system in Equation (1). So a stabilizing control $u_2$ can be found using the same single-integrator procedure that was used to find $u_1$.
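Continuing the illustrative example from the single-integrator sketch above (the dynamics $f_x(x) = x^2$, $g_x = 1$, $u_x(x) = -x^2 - x$, and gains $k_1 = k_2 = 2$ are assumptions for demonstration, not from the article), one more application of the procedure hand-derives $u_2$ for the two-integrator chain of Equation (4):

```python
# Assumed example chain: xdot = x^2 + z1, z1dot = z2, z2dot = u2
k1, k2 = 2.0, 2.0

def u_x(x):
    return -x*x - x          # inner stabilizer: xdot = -x, V_x = x^2/2

def u1(x, z1):
    # single-integrator backstepping law (Equation 3 pattern)
    return (-2.0*x - 1.0)*(x*x + z1) - x - k1*(z1 - u_x(x))

def u2(x, z1, z2):
    # one more backstep; partials of u1 taken analytically
    du1_dx  = -2.0*(x*x + z1) + (-2.0*x - 1.0)*2.0*x - 1.0 - k1*(2.0*x + 1.0)
    du1_dz1 = (-2.0*x - 1.0) - k1
    # dV1/dx1 . g1 = dV1/dz1 = z1 - u_x(x); drift of x1 = (x, z1) is (x^2 + z1, z2)
    return du1_dx*(x*x + z1) + du1_dz1*z2 - (z1 - u_x(x)) - k2*(z2 - u1(x, z1))

# Forward-Euler simulation from an arbitrary initial condition
x, z1, z2, dt = 0.5, 0.0, 0.0, 1e-3
for _ in range(30000):  # 30 seconds
    x, z1, z2 = x + dt*(x*x + z1), z1 + dt*z2, z2 + dt*u2(x, z1, z2)
```

The structure of `u2` mirrors `u1`: differentiate the previous fictitious control along the trajectory, subtract the Lyapunov cross term, and damp the new error $z_2 - u_1$.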

Many-integrator Backstepping

In the two-integrator case, the upper single-integrator subsystem was stabilized yielding a new single-integrator system that can be similarly stabilized. This recursive procedure can be extended to handle any finite number of integrators. This claim can be formally proved with mathematical induction. Here, a stabilized multiple-integrator system is built up from subsystems of already-stabilized multiple-integrator subsystems.

First, consider the subsystem

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_1$$

that has scalar input $u_1$ and output states $\mathbf{x} = [x_1, x_2, \ldots, x_n]^{\mathsf{T}} \in \mathbb{R}^n$. Assume that

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

is asymptotically stable at the origin with a known Lyapunov function $V_x(\mathbf{x})$. That is, if the output states $\mathbf{x}$ are fed back to the input $u_1$ by the control law $u_1 = u_x(\mathbf{x})$, then the output states (and the Lyapunov function) return to the origin after a single perturbation (e.g., after a nonzero initial condition or a sharp disturbance). This subsystem is stabilized by the feedback control law $u_x(\mathbf{x})$. It is also assumed that a Lyapunov function for this stable subsystem is known. Backstepping provides a way to extend the controlled stability of this subsystem to the larger system.
Next, consider the system augmented by a single integrator:

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = u_1$$

This "cascade" system matches the form in Equation (1), and so the single-integrator backstepping procedure leads to the stabilizing control law in Equation (3). That is, if we feed back states $z_1$ and $\mathbf{x}$ to input $u_1$ according to the control law

$$u_1(\mathbf{x}, z_1) = \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big) - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1\big(z_1 - u_x(\mathbf{x})\big)$$

with gain $k_1 > 0$, then the states $z_1$ and $\mathbf{x}$ will return to $z_1 = 0$ and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_1(\mathbf{x}, z_1)$, and the corresponding Lyapunov function from Equation (2) is

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}\big(z_1 - u_x(\mathbf{x})\big)^2$$

That is, under the feedback control law $u_1$, the Lyapunov function $V_1$ decays to zero as the states return to the origin.
Next, consider the system augmented by another integrator:

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = z_2, \qquad \dot{z}_2 = u_2$$

which is equivalent to the single-integrator system

$$\begin{bmatrix} \dot{\mathbf{x}} \\ \dot{z}_1 \end{bmatrix} = \begin{bmatrix} f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \\ 0 \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix} z_2, \qquad \dot{z}_2 = u_2$$

Using the definitions of $\mathbf{x}_1$, $f_1$, and $g_1$ from the two-integrator case, this system can also be expressed as

$$\dot{\mathbf{x}}_1 = f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2, \qquad \dot{z}_2 = u_2$$

This system matches the single-integrator structure of Equation (1), and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states $z_2$, $z_1$, and $\mathbf{x}$ to input $u_2$ according to the control law

$$u_2(\mathbf{x}, z_1, z_2) = \frac{\partial u_1}{\partial \mathbf{x}_1}\big(f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2\big) - \frac{\partial V_1}{\partial \mathbf{x}_1} g_1(\mathbf{x}_1) - k_2\big(z_2 - u_1(\mathbf{x}, z_1)\big)$$

with gain $k_2 > 0$, then the states $z_2$, $z_1$, and $\mathbf{x}$ will return to $z_2 = 0$, $z_1 = 0$, and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_2(\mathbf{x}, z_1, z_2)$, and the corresponding Lyapunov function is

$$V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}, z_1) + \frac{1}{2}\big(z_2 - u_1(\mathbf{x}, z_1)\big)^2$$

That is, under the feedback control law $u_2$, the Lyapunov function $V_2$ decays to zero as the states return to the origin.
Next, consider the system augmented by a third integrator:

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = z_2, \qquad \dot{z}_2 = z_3, \qquad \dot{z}_3 = u_3$$

which can be re-grouped as the single-integrator system

$$\begin{bmatrix} \dot{\mathbf{x}} \\ \dot{z}_1 \\ \dot{z}_2 \end{bmatrix} = \begin{bmatrix} f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \\ z_2 \\ 0 \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ 0 \\ 1 \end{bmatrix} z_3, \qquad \dot{z}_3 = u_3$$

By the definitions of $\mathbf{x}_1$, $f_1$, and $g_1$ from the previous step, this system is also represented by

$$\begin{bmatrix} \dot{\mathbf{x}}_1 \\ \dot{z}_2 \end{bmatrix} = \begin{bmatrix} f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 \\ 0 \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix} z_3, \qquad \dot{z}_3 = u_3$$

Further, letting

$$\mathbf{x}_2 \triangleq \begin{bmatrix} \mathbf{x}_1 \\ z_2 \end{bmatrix}, \qquad f_2(\mathbf{x}_2) \triangleq \begin{bmatrix} f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 \\ 0 \end{bmatrix}, \qquad g_2(\mathbf{x}_2) \triangleq \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}$$

this system can also be expressed as

$$\dot{\mathbf{x}}_2 = f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3, \qquad \dot{z}_3 = u_3$$

So the re-grouped system has the single-integrator structure of Equation (1), and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states $z_3$, $z_2$, $z_1$, and $\mathbf{x}$ to input $u_3$ according to the control law

$$u_3(\mathbf{x}, z_1, z_2, z_3) = \frac{\partial u_2}{\partial \mathbf{x}_2}\big(f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3\big) - \frac{\partial V_2}{\partial \mathbf{x}_2} g_2(\mathbf{x}_2) - k_3\big(z_3 - u_2(\mathbf{x}, z_1, z_2)\big)$$

with gain $k_3 > 0$, then the states $z_3$, $z_2$, $z_1$, and $\mathbf{x}$ will return to $z_3 = 0$, $z_2 = 0$, $z_1 = 0$, and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_3(\mathbf{x}, z_1, z_2, z_3)$, and the corresponding Lyapunov function is

$$V_3(\mathbf{x}, z_1, z_2, z_3) = V_2(\mathbf{x}, z_1, z_2) + \frac{1}{2}\big(z_3 - u_2(\mathbf{x}, z_1, z_2)\big)^2$$

That is, under the feedback control law $u_3$, the Lyapunov function $V_3$ decays to zero as the states return to the origin.
More generally, any finite chain of integrators

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = z_2, \qquad \ldots, \qquad \dot{z}_{k-1} = z_k, \qquad \dot{z}_k = u$$

has the recursive structure of a single integrator wrapped around an already-stabilized subsystem, and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-integrator subsystem (i.e., with input $u_1$ and output $\mathbf{x}$) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control $u$ is known. At iteration $i$, the equivalent system is

$$\dot{\mathbf{x}}_{i-1} = f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i, \qquad \dot{z}_i = u_i$$

where $\mathbf{x}_{i-1} \triangleq [\mathbf{x}, z_1, \ldots, z_{i-1}]^{\mathsf{T}}$ (with $\mathbf{x}_0 \triangleq \mathbf{x}$). The corresponding feedback-stabilizing control law is

$$u_i(\mathbf{x}_i) = \frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}}\big(f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i\big) - \frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i\big(z_i - u_{i-1}(\mathbf{x}_{i-1})\big)$$

with gain $k_i > 0$. The corresponding Lyapunov function is

$$V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2}\big(z_i - u_{i-1}(\mathbf{x}_{i-1})\big)^2$$

with $u_0 \triangleq u_x$ and $V_0 \triangleq V_x$. By this construction, the ultimate control is $u = u_k(\mathbf{x}, z_1, \ldots, z_k)$ (i.e., the ultimate control is found at the final iteration $i = k$).

Hence, any system in this special many-integrator strict-feedback form can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).

Generic Backstepping

Systems in the special strict-feedback form have a recursive structure similar to the many-integrator system structure. Likewise, they are stabilized by stabilizing the smallest cascaded system and then backstepping to the next cascaded system and repeating the procedure. So it is critical to develop a single-step procedure; that procedure can be recursively applied to cover the many-step case. Fortunately, due to the requirements on the functions in the strict-feedback form, each single-step system can be rendered by feedback to a single-integrator system, and that single-integrator system can be stabilized using methods discussed above.

Single-step Procedure

Consider the simple strict-feedback system

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1 \tag{6}$$

where

  • $\mathbf{x} \in \mathbb{R}^n$,
  • $z_1$ and $u_1$ are scalars,
  • $f_1(\mathbf{0}, 0) = 0$, and
  • $g_1(\mathbf{x}, z_1) \neq 0$ over the domain of interest.

Rather than designing the feedback-stabilizing control $u_1$ directly, introduce a new control $u_{a1}$ (to be designed later) and use the control law

$$u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)}\big(u_{a1} - f_1(\mathbf{x}, z_1)\big)$$

which is possible because $g_1(\mathbf{x}, z_1) \neq 0$. So the system in Equation (6) is

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1)\,\frac{1}{g_1(\mathbf{x}, z_1)}\big(u_{a1} - f_1(\mathbf{x}, z_1)\big)$$

which simplifies to

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1, \qquad \dot{z}_1 = u_{a1}$$

This new $u_{a1}$-to-$\mathbf{x}$ system matches the single-integrator cascade system in Equation (1). Assuming that a feedback-stabilizing control law $u_x(\mathbf{x})$ and Lyapunov function $V_x(\mathbf{x})$ for the upper subsystem are known, the feedback-stabilizing control law from Equation (3) is

$$u_{a1}(\mathbf{x}, z_1) = \frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big) - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1\big(z_1 - u_x(\mathbf{x})\big)$$

with gain $k_1 > 0$. So the final feedback-stabilizing control law is

$$u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)}\left(\frac{\partial u_x}{\partial \mathbf{x}}\big(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\big) - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1\big(z_1 - u_x(\mathbf{x})\big) - f_1(\mathbf{x}, z_1)\right) \tag{7}$$

with gain $k_1 > 0$. The corresponding Lyapunov function from Equation (2) is

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}\big(z_1 - u_x(\mathbf{x})\big)^2 \tag{8}$$

Because this strict-feedback system has a feedback-stabilizing control and a corresponding Lyapunov function, it can be cascaded as part of a larger strict-feedback system, and this procedure can be repeated to find the surrounding feedback-stabilizing control.
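As a numeric sketch of the single-step procedure (the dynamics $f_x(x) = x^2$, $g_x = 1$, $f_1(x, z_1) = x z_1$, $g_1(x, z_1) = 1 + z_1^2$, the inner stabilizer $u_x(x) = -x^2 - x$, and the gain $k_1 = 2$ are assumed for illustration, not from the article), Equation (7) cancels $f_1$ and divides out the nonunity $g_1$:

```python
# Assumed strict-feedback example:
#   xdot  = x^2 + z1                  (f_x = x^2, g_x = 1, u_x = -x^2 - x, V_x = x^2/2)
#   z1dot = x*z1 + (1 + z1^2) * u1    (f_1 = x*z1, g_1 = 1 + z1^2 != 0 everywhere)
k1 = 2.0

def u_x(x):
    return -x*x - x

def u1(x, z1):
    # Equation (7) pattern: (1/g_1) * (single-integrator law u_a1 - f_1)
    v = (-2.0*x - 1.0)*(x*x + z1) - x - k1*(z1 - u_x(x))  # Equation (3) law u_a1
    return (v - x*z1) / (1.0 + z1*z1)

# Forward-Euler simulation from an arbitrary initial condition
x, z1, dt = 0.8, -0.5, 1e-3
for _ in range(20000):  # 20 seconds
    xdot  = x*x + z1
    z1dot = x*z1 + (1.0 + z1*z1)*u1(x, z1)
    x, z1 = x + dt*xdot, z1 + dt*z1dot
```

The division by $g_1$ turns the closed-loop $z_1$ equation into the pure integrator $\dot{z}_1 = u_{a1}$, so the resulting trajectories match the single-integrator case exactly.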

Many-step Procedure

As in many-integrator backstepping, the single-step procedure can be completed iteratively to stabilize an entire strict-feedback system. In each step,

  1. The smallest "unstabilized" single-step strict-feedback system is isolated.
  2. Feedback is used to convert the system into a single-integrator system.
  3. The resulting single-integrator system is stabilized.
  4. The stabilized system is used as the upper system in the next step.

That is, any strict-feedback system

$$\begin{aligned}
\dot{\mathbf{x}} &= f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 &= f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\
&\;\;\vdots\\
\dot{z}_{k-1} &= f_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) z_k\\
\dot{z}_k &= f_k(\mathbf{x}, z_1, \ldots, z_k) + g_k(\mathbf{x}, z_1, \ldots, z_k) u
\end{aligned}$$

has the recursive structure of a single-step strict-feedback system wrapped around an already-stabilized subsystem, and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the innermost single-step subsystem (i.e., with input $u_1$ and output $\mathbf{x}$) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control $u$ is known. At iteration $i$, the equivalent system is

$$\dot{\mathbf{x}}_{i-1} = f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i, \qquad \dot{z}_i = f_i(\mathbf{x}, z_1, \ldots, z_i) + g_i(\mathbf{x}, z_1, \ldots, z_i) u_i$$

where $\mathbf{x}_{i-1} \triangleq [\mathbf{x}, z_1, \ldots, z_{i-1}]^{\mathsf{T}}$ collects the already-stabilized states and $f_{i-1}$, $g_{i-1}$ denote their aggregated dynamics. By Equation (7), the corresponding feedback-stabilizing control law is

$$u_i(\mathbf{x}_i) = \frac{1}{g_i(\mathbf{x}_i)}\left(\frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}}\big(f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i\big) - \frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i\big(z_i - u_{i-1}(\mathbf{x}_{i-1})\big) - f_i(\mathbf{x}_i)\right)$$

with gain $k_i > 0$. By Equation (8), the corresponding Lyapunov function is

$$V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2}\big(z_i - u_{i-1}(\mathbf{x}_{i-1})\big)^2$$

with $u_0 \triangleq u_x$ and $V_0 \triangleq V_x$. By this construction, the ultimate control is $u = u_k(\mathbf{x}, z_1, \ldots, z_k)$ (i.e., the ultimate control is found at the final iteration $i = k$). Hence, any strict-feedback system can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).
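The claim that the procedure can be automated can be sketched numerically. In the following sketch, the helper names (`backstep`, `grad`) and the two-layer example dynamics are assumptions for illustration: each iteration wraps the previously stabilized subsystem in closures, with the partial derivatives in Equations (7) and (8) approximated by central differences rather than computed symbolically.

```python
def grad(F, X, h=1e-5):
    """Central-difference gradient of the scalar function F at the state list X."""
    out = []
    for j in range(len(X)):
        Xp, Xm = list(X), list(X)
        Xp[j] += h
        Xm[j] -= h
        out.append((F(Xp) - F(Xm)) / (2.0*h))
    return out

def backstep(u_prev, V_prev, dyn_prev, f_i, g_i, k=2.0):
    """One backstepping iteration.  The upper state X is driven by the layer state z
    through dyn_prev(X, z); the new layer obeys zdot = f_i(X, z) + g_i(X, z)*u.
    Returns the control, Lyapunov function, and aggregated dynamics for X + [z]."""
    def u_new(S):
        X, z = S[:-1], S[-1]
        Xdot = dyn_prev(X, z)
        # Input direction g of the upper subsystem (its dynamics are affine in z):
        G = [a - b for a, b in zip(dyn_prev(X, 1.0), dyn_prev(X, 0.0))]
        v = (sum(d*xd for d, xd in zip(grad(u_prev, X), Xdot))    # d(u_prev)/dt
             - sum(dV*gc for dV, gc in zip(grad(V_prev, X), G))   # -(dV/dX) g
             - k*(z - u_prev(X)))                                 # -k (z - u_prev)
        return (v - f_i(X, z)) / g_i(X, z)                        # divide out g_i != 0
    def V_new(S):
        X, z = S[:-1], S[-1]
        return V_prev(X) + 0.5*(z - u_prev(X))**2
    def dyn_new(S, z_next):
        X, z = S[:-1], S[-1]
        return dyn_prev(X, z) + [f_i(X, z) + g_i(X, z)*z_next]
    return u_new, V_new, dyn_new

# Innermost subsystem (assumed example): xdot = x^2 + z1, u_x = -x^2 - x, V_x = x^2/2.
u0   = lambda X: -X[0]**2 - X[0]
V0   = lambda X: 0.5*X[0]**2
dyn0 = lambda X, z: [X[0]**2 + z]

# Two strict-feedback layers: z1dot = x*z1 + (1 + z1^2)*z2 and z2dot = z2^2 + 2*u.
u1, V1, dyn1 = backstep(u0, V0, dyn0, lambda X, z: X[0]*z, lambda X, z: 1.0 + z*z)
u2, V2, dyn2 = backstep(u1, V1, dyn1, lambda X, z: z*z,    lambda X, z: 2.0)

# Forward-Euler simulation of the full closed loop under the ultimate control u2.
S, dt = [0.4, 0.2, 0.0], 5e-4
for _ in range(40000):  # 20 seconds
    S = [s + dt*d for s, d in zip(S, dyn2(S, u2(S)))]
```

Each `backstep` call is one pass of the many-step procedure; a symbolic-differentiation tool could replace `grad` to produce exact control laws instead of finite-difference approximations.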

See also

  • Feedback linearization
  • Control-Lyapunov function
  • Strict-feedback form
  • Artstein's theorem
  • Sliding mode control
  • Input-to-state stability
  • H-infinity methods
  • Full state feedback (pole placement)
  • State observer
  • Controllability
  • State-space representation

References

  1. Sparavalo, M. K. (1992). "A method of goal-oriented formation of the local topological structure of co-dimension one foliations for dynamic systems with control". Journal of Automation and Information Sciences. 25 (5): 1. ISSN 1064-2315.
  2. Kokotovic, P. V. (1992). "The joy of feedback: nonlinear and adaptive". IEEE Control Systems Magazine. 12 (3): 7–17. doi:10.1109/37.165507. S2CID 27196262.
  3. Lozano, R.; Brogliato, B. (1992). "Adaptive control of robot manipulators with flexible joints". IEEE Transactions on Automatic Control. 37 (2): 174–181. doi:10.1109/9.121619.
  4. Khalil, H. K. (2002). Nonlinear Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-067389-3.