In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal (or more rigorously, a set-valued control signal) that forces the system to "slide" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode [1] and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system, as the system both flows through a continuous state space and moves through different discrete control modes.
Figure 1 shows an example trajectory of a system under sliding mode control. The sliding surface is described by σ = 0, and the sliding mode along the surface commences after the finite time when system trajectories have reached the surface. In the theoretical description of sliding modes, the system stays confined to the sliding surface and need only be viewed as sliding along the surface. However, real implementations of sliding mode control approximate this theoretical behavior with a high-frequency and generally non-deterministic switching control signal that causes the system to "chatter" [nb 1] in a tight neighborhood of the sliding surface. Chattering can be reduced through the use of deadbands or boundary layers around the sliding surface, or other compensatory methods. Although the system is nonlinear in general, the idealized (i.e., non-chattering) behavior of the system in Figure 1 when confined to the σ = 0 surface is an LTI system with an exponentially stable origin. One of the compensatory methods is the adaptive sliding mode control method proposed in [2] [3], which uses estimated uncertainty to construct a continuous control law. In this method, chattering is eliminated while preserving accuracy (for more details, see references [2] and [3]). The three distinguishing features of the proposed adaptive sliding mode controller are as follows: (i) The structured (or parametric) uncertainties and unstructured uncertainties (unmodeled dynamics, unknown external disturbances) are synthesized into a single uncertainty term called the lumped uncertainty. Therefore, a linearly parameterized dynamic model of the system is not required, and the simple structure and computational efficiency of this approach make it suitable for real-time control applications. (ii) The adaptive sliding mode control scheme is designed using the online estimated uncertainty vector rather than relying on the worst-case scenario (i.e., bounds of uncertainties).
Therefore, a priori knowledge of the bounds of uncertainties is not required, and at each time instant, the control input compensates for the uncertainty that exists. (iii) The continuous control law, developed using fundamentals of sliding mode control theory, eliminates the chattering phenomenon without the trade-off between performance and robustness that is prevalent in the boundary-layer approach.
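The boundary-layer idea mentioned above can be sketched in a few lines. This is an illustrative example, not code from the references: the gain K and layer half-width phi are assumed values, and the saturation function replaces the relay's sign function inside a band of width 2·phi around σ = 0, trading a small residual band for chatter-free control.

```python
import numpy as np

def u_relay(sigma, K=2.0):
    """Discontinuous relay law: switches instantaneously at sigma = 0."""
    return -K * np.sign(sigma)

def u_boundary_layer(sigma, K=2.0, phi=0.1):
    """Continuous approximation: linear inside the layer |sigma| < phi."""
    return -K * np.clip(sigma / phi, -1.0, 1.0)

# Inside the boundary layer the two laws differ; outside, they agree.
assert u_relay(0.05) == -2.0
assert abs(u_boundary_layer(0.05) - (-1.0)) < 1e-12  # -2.0 * (0.05 / 0.1)
assert u_boundary_layer(1.0) == u_relay(1.0) == -2.0
```

Shrinking phi recovers the relay law (and its chattering); enlarging phi widens the band in which the sliding constraint holds only approximately.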
Intuitively, sliding mode control uses practically infinite gain to force the trajectories of a dynamic system to slide along the restricted sliding mode subspace. Trajectories from this reduced-order sliding mode have desirable properties (e.g., the system naturally slides along it until it comes to rest at a desired equilibrium). The main strength of sliding mode control is its robustness. Because the control can be as simple as a switching between two states (e.g., "on"/"off" or "forward"/"reverse"), it need not be precise and will not be sensitive to parameter variations that enter into the control channel. Additionally, because the control law is not a continuous function, the sliding mode can be reached in finite time (i.e., better than asymptotic behavior). Under certain common conditions, optimality requires the use of bang–bang control; hence, sliding mode control describes the optimal controller for a broad set of dynamic systems.
One application of sliding mode control is the control of electric drives operated by switching power converters. [4] : "Introduction" Because of the discontinuous operating mode of those converters, a discontinuous sliding mode controller is a natural implementation choice over continuous controllers that would have to be applied by means of pulse-width modulation or a similar technique [nb 2] of applying a continuous signal to an output that can only take discrete states. Sliding mode control has many applications in robotics. In particular, this control algorithm has been used for tracking control of unmanned surface vessels in simulated rough seas with a high degree of success. [5] [6]
Sliding mode control must be applied with more care than other forms of nonlinear control that have more moderate control action. In particular, because actuators have delays and other imperfections, the hard sliding-mode-control action can lead to chatter, energy loss, plant damage, and excitation of unmodeled dynamics. [7] : 554–556 Continuous control design methods are not as susceptible to these problems and can be made to mimic sliding-mode controllers. [7] : 556–563
Consider a nonlinear dynamical system described by

ẋ(t) = f(x, t) + B(x, t) u(t)        (1)
where x(t) ∈ ℝⁿ is an n-dimensional state vector and u(t) ∈ ℝᵐ is an m-dimensional input vector that will be used for state feedback. The functions f(x, t) and B(x, t) are assumed to be continuous and sufficiently smooth so that the Picard–Lindelöf theorem can be used to guarantee that the solution to Equation ( 1 ) exists and is unique.
A common task is to design a state-feedback control law (i.e., a mapping from the current state x(t) at time t to the input u(t)) to stabilize the dynamical system in Equation ( 1 ) around the origin x = 0. That is, under the control law, whenever the system is started away from the origin, it will return to it. For example, a component of the state vector may represent the difference between some output and a known signal (e.g., a desirable sinusoidal signal); if the control can ensure that this component quickly returns to zero, then the output will track the desired sinusoid. In sliding-mode control, the designer knows that the system behaves desirably (e.g., it has a stable equilibrium) provided that it is constrained to a subspace of its configuration space. Sliding mode control forces the system trajectories into this subspace and then holds them there so that they slide along it. This reduced-order subspace is referred to as a sliding (hyper)surface, and when closed-loop feedback forces trajectories to slide along it, it is referred to as a sliding mode of the closed-loop system. Trajectories along this subspace can be likened to trajectories along eigenvectors (i.e., modes) of LTI systems; however, the sliding mode is enforced by creasing the vector field with high-gain feedback. Like a marble rolling along a crack, trajectories are confined to the sliding mode.
The sliding-mode control scheme involves:
1. Selection of a hypersurface or a manifold (i.e., the sliding surface) such that the system trajectory exhibits desirable behavior when confined to this manifold.
2. Finding feedback gains so that the system trajectory intersects and stays on the manifold.
Because sliding mode control laws are not continuous, they have the ability to drive trajectories to the sliding mode in finite time (i.e., stability of the sliding surface is better than asymptotic). However, once the trajectories reach the sliding surface, the system takes on the character of the sliding mode (e.g., the origin x = 0 may only have asymptotic stability on this surface).
The sliding-mode designer picks a switching function σ : ℝⁿ → ℝᵐ that represents a kind of "distance" that the states x are away from a sliding surface.
The sliding-mode-control law switches from one state to another based on the sign of this distance. So the sliding-mode control acts like a stiff pressure always pushing in the direction of the sliding mode where σ(x) = 0. Desirable trajectories will approach the sliding surface, and because the control law is not continuous (i.e., it switches from one state to another as trajectories move across this surface), the surface is reached in finite time. Once a trajectory reaches the surface, it will slide along it and may, for example, move toward the x = 0 origin. So the switching function is like a topographic map with a contour of constant height along which trajectories are forced to move.
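The finite-time reaching behavior described above can be checked numerically. The toy dynamics below are an assumption for illustration: the switching variable is driven directly by σ̇ = −η·sgn(σ), so the surface σ = 0 is reached at the predicted time t* = |σ₀| / η rather than only asymptotically.

```python
import numpy as np

eta, dt = 0.5, 1e-3
sigma = 1.0              # initial "distance" from the sliding surface
t, t_reach = 0.0, None
while t < 5.0:
    sigma += dt * (-eta * np.sign(sigma))   # sigma_dot = -eta * sgn(sigma)
    t += dt
    if abs(sigma) < eta * dt:               # within one switching step of 0
        t_reach = t
        break

# Predicted reaching time: |sigma_0| / eta = 1.0 / 0.5 = 2.0 seconds.
assert t_reach is not None
assert abs(t_reach - 1.0 / eta) < 0.01
```

By contrast, a continuous law such as σ̇ = −η·σ would only give exponential (asymptotic) decay, never reaching σ = 0 exactly.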
The sliding (hyper)surface/manifold is typically of dimension n − m, where n is the number of states in x and m is the number of input signals (i.e., control signals) in u. For each control index 1 ≤ k ≤ m, there is an (n − 1)-dimensional sliding surface given by

{ x ∈ ℝⁿ : σₖ(x) = 0 }        (2)
The vital part of SMC design is to choose a control law so that the sliding mode (i.e., the surface given by σ(x) = 0) exists and is reachable along system trajectories. The principle of sliding mode control is to forcibly constrain the system, by a suitable control strategy, to stay on the sliding surface, on which the system will exhibit desirable features. When the system is constrained by the sliding control to stay on the sliding surface, the system dynamics are governed by a reduced-order system obtained from Equation ( 2 ).
To force the system states to satisfy σ(x) = 0, one must:
1. Ensure the system is capable of reaching σ(x) = 0 from any initial condition.
2. Having reached σ(x) = 0, ensure the control action is capable of maintaining the system at σ(x) = 0.
Note that because the control law is not continuous, it is certainly not locally Lipschitz continuous, and so existence and uniqueness of solutions to the closed-loop system is not guaranteed by the Picard–Lindelöf theorem. Thus the solutions are to be understood in the Filippov sense. [1] [8] Roughly speaking, the resulting closed-loop system moving along σ(x) = 0 is approximated by the smooth dynamics σ̇(x) = 0; however, this smooth behavior may not be truly realizable. Similarly, high-speed pulse-width modulation or delta-sigma modulation produces outputs that only assume two states, but the effective output swings through a continuous range of motion. These complications can be avoided by using a different nonlinear control design method that produces a continuous controller. In some cases, sliding-mode control designs can be approximated by other continuous control designs. [7]
The following theorems form the foundation of variable structure control.
Consider a Lyapunov function candidate

V(σ(x)) = ½ σᵀ(x) σ(x) = ½ ‖σ(x)‖₂²        (3)
where ‖·‖₂ is the Euclidean norm (i.e., √(2V(σ(x))) is the distance away from the sliding manifold where σ(x) = 0). For the system given by Equation ( 1 ) and the sliding surface given by Equation ( 2 ), a sufficient condition for the existence of a sliding mode is that

σᵀ σ̇ < 0        (i.e., dV/dt < 0)

in a neighborhood of the surface given by σ(x) = 0.
Roughly speaking (i.e., for the scalar control case when m = 1), to achieve σ σ̇ < 0, the feedback control law u(x) is picked so that σ and σ̇ have opposite signs. That is,
- u(x) makes σ̇(x) negative when σ(x) is positive, and
- u(x) makes σ̇(x) positive when σ(x) is negative.
Note that

σ̇ = (∂σ/∂x) ẋ = (∂σ/∂x) (f(x, t) + B(x, t) u)

and so the feedback control law u(x) has a direct impact on σ̇.
To ensure that the sliding mode is attained in finite time, dV/dt must be more strongly bounded away from zero. That is, if it vanishes too quickly, the attraction to the sliding mode will only be asymptotic. To ensure that the sliding mode is entered in finite time, [9]

dV/dt ≤ −μ (√V)^α

where μ > 0 and 0 < α ≤ 1 are constants.
This condition ensures that for the neighborhood of the sliding mode V ∈ [0, 1],

dV/dt ≤ −μ (√V)^α ≤ −μ √V.

So, for V ∈ (0, 1],

(1/√V) (dV/dt) ≤ −μ,

which, by the chain rule (i.e., dW/dt with W ≜ 2√V), means

D⁺W = D⁺(2√V) ≤ −μ

where D⁺ is the upper right-hand derivative of 2√V. So, by comparison to the curve W(t) = W₀ − μt, which is represented by the differential equation Ẇ = −μ with initial condition W(0) = W₀, it must be the case that W(t) = 2√(V(t)) ≤ W₀ − μt for all t. Moreover, because √V ≥ 0, √V must reach zero in finite time, which means that V must reach V = 0 (i.e., the system enters the sliding mode) in finite time. [7] Because √V is proportional to the Euclidean norm of the switching function σ, this result implies that the rate of approach to the sliding mode must be firmly bounded away from zero.
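The comparison argument above can be verified numerically. As an illustrative sketch (the constants μ and V₀ are assumed values), integrating dV/dt = −μ√V shows that V hits zero at the predicted finite time t* = 2√V₀ / μ, matching the closed form V(t) = (√V₀ − μt/2)² up to that time.

```python
import numpy as np

mu, V0, dt = 1.0, 4.0, 1e-4
V, t = V0, 0.0
while V > 0.0 and t < 10.0:
    V = max(V - dt * mu * np.sqrt(V), 0.0)   # Euler step of dV/dt = -mu*sqrt(V)
    t += dt

t_star = 2.0 * np.sqrt(V0) / mu              # predicted finite reaching time = 4.0
assert abs(t - t_star) < 0.05
```

With the weaker bound dV/dt ≤ −μV instead, V would decay exponentially and the loop would only terminate at the time cap, illustrating why the √V bound is needed for finite-time reaching.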
In the context of sliding mode control, this condition means that

σᵀ σ̇ ≤ −μ (‖σ‖₂ / √2)^α

where ‖·‖₂ is the Euclidean norm. For the case when the switching function σ is scalar valued, the sufficient condition becomes

σ σ̇ ≤ −(μ / (√2)^α) |σ|^α.

Taking α = 1, the scalar sufficient condition becomes

σ σ̇ ≤ −(μ/√2) |σ|,

which is equivalent to the condition that

sgn(σ) σ̇ ≤ −μ/√2.

That is, the system should always be moving toward the switching surface σ = 0, and its speed |σ̇| toward the switching surface should have a non-zero lower bound. So, even though σ may become vanishingly small as x approaches the σ(x) = 0 surface, σ̇ must always be bounded firmly away from zero. To ensure this condition, sliding mode controllers are discontinuous across the σ(x) = 0 manifold; they switch from one non-zero value to another as trajectories cross the manifold.
For the system given by Equation ( 1 ) and sliding surface given by Equation ( 2 ), the subspace for which the { x : σ(x) = 0 } surface is reachable is given by

{ x ∈ ℝⁿ : σᵀ(x) σ̇(x) < 0 }.

That is, when initial conditions come entirely from this space, the Lyapunov function candidate V(σ) is a Lyapunov function and x trajectories are sure to move toward the sliding mode surface where σ(x) = 0. Moreover, if the reachability conditions from Theorem 1 are satisfied, the sliding mode will enter the region where dV/dt is more strongly bounded away from zero in finite time. Hence, the sliding mode σ(x) = 0 will be attained in finite time.
Let

(∂σ/∂x) B(x, t)

be nonsingular. That is, the system has a kind of controllability that ensures that there is always a control that can move a trajectory closer to the sliding mode. Then, once the sliding mode where σ(x) = 0 is achieved, the system will stay on that sliding mode. Along sliding mode trajectories, σ(x) is constant, and so sliding mode trajectories are described by the differential equation

σ̇(x) = 0.
If an equilibrium is stable with respect to this differential equation, then the system will slide along the sliding mode surface toward the equilibrium.
The equivalent control law on the sliding mode can be found by solving

σ̇(x(t)) = 0

for the equivalent control law u(x). That is,

(∂σ/∂x) (f(x, t) + B(x, t) u) = 0,

and so the equivalent control is

u = −((∂σ/∂x) B(x, t))⁻¹ (∂σ/∂x) f(x, t).
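The equivalent-control formula can be exercised on a small linear example. Everything here is an illustrative assumption: the plant ẋ = Ax + Bu with linear switching function σ(x) = Sx, so ∂σ/∂x = S and u_eq = −(SB)⁻¹SAx. On the surface, applying u_eq makes σ̇ vanish exactly.

```python
import numpy as np

# Illustrative plant and surface (all numbers assumed).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
S = np.array([[1.0, 1.0]])        # sigma(x) = x1 + x2, so surface is x1 + x2 = 0

x = np.array([[0.5], [-0.5]])     # a point on the surface: sigma(x) = 0
u_eq = -np.linalg.inv(S @ B) @ (S @ A) @ x

# With u = u_eq, sigma_dot = S(Ax + B u_eq) must vanish on the surface.
sigma_dot = S @ (A @ x + B @ u_eq)
assert abs(sigma_dot[0, 0]) < 1e-12
```

Nonsingularity of SB (here a 1×1 matrix) is exactly the Theorem 3 condition; if SB were singular, the inverse in u_eq would not exist.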
That is, even though the actual control u is not continuous, the rapid switching across the sliding mode where σ(x) = 0 forces the system to act as if it were driven by this continuous control.
Likewise, the system trajectories on the sliding mode behave as if

ẋ = (I − B(x, t) ((∂σ/∂x) B(x, t))⁻¹ (∂σ/∂x)) f(x, t).

The resulting system matches the sliding mode differential equation

σ̇(x) = 0

and the sliding mode surface σ(x) = 0, and the trajectory conditions from the reaching phase now reduce to the simpler condition derived above. Hence, the system can be assumed to follow the simpler condition after some initial transient during the period while the system finds the sliding mode. The same motion is approximately maintained when the equality σ(x) = 0 only approximately holds.
It follows from these theorems that the sliding motion is invariant (i.e., insensitive) to sufficiently small disturbances entering the system through the control channel. That is, as long as the control is large enough to ensure that σᵀ σ̇ < 0 and σ̇ is uniformly bounded away from zero, the sliding mode will be maintained as if there were no disturbance. The invariance property of sliding mode control with respect to certain disturbances and model uncertainties is its most attractive feature; it is strongly robust.
As discussed in an example below, a sliding mode control law can keep the constraint

ẋ + x = 0

in order to asymptotically stabilize any system of the form

ẍ = a(t, x, ẋ) + u

when the unknown disturbance a(t, x, ẋ) has a finite upper bound. In this case, the sliding mode is where

ẋ = −x

(i.e., where ẋ + x = 0). That is, when the system is constrained this way, it behaves like a simple stable linear system, and so it has a globally exponentially stable equilibrium at the origin.
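This example can be simulated directly. The details below are illustrative assumptions: the disturbance a(t) = sin(3t) is unknown to the controller but bounded by a_max = 1, and the law u = −ẋ − K·sgn(σ) with K > a_max gives σ̇ = a − K·sgn(σ), which satisfies the reaching condition; after σ = ẋ + x hits zero in finite time, the motion slides along ẋ = −x toward the origin.

```python
import numpy as np

a_max, K, dt = 1.0, 2.0, 1e-4
x, xd, t = 2.0, 0.0, 0.0           # start away from the origin
while t < 10.0:
    a = a_max * np.sin(3.0 * t)    # "unknown" bounded disturbance
    sigma = xd + x
    u = -xd - K * np.sign(sigma)   # sigma_dot = a - K*sgn(sigma) < 0 for sigma > 0
    xd += dt * (a + u)             # x_ddot = a + u
    x += dt * xd
    t += dt

# After the reaching phase, x decays like exp(-t) along the surface.
assert abs(x) < 0.05 and abs(xd) < 0.1
```

Note that the simulation chatters in a tight band around σ = 0 whose width scales with the step size dt, which is exactly the discretization effect the chattering discussion above describes.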
Although various theories exist for sliding mode control system design, there is a lack of a highly effective design methodology due to practical difficulties encountered in analytical and numerical methods. A reusable computing paradigm such as a genetic algorithm can, however, be utilized to transform an 'unsolvable problem' of optimal design into a practically solvable 'non-deterministic polynomial problem'. This results in computer-automated designs for sliding mode control. [10]
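The computer-automated design idea can be sketched with a toy evolutionary search. Everything here is an assumed, illustrative setup (not the method of [10]): a tiny elitist genetic algorithm tunes the surface slope lam (σ = ẋ + lam·x) and switching gain K for the plant ẍ = u, scored by the integrated squared tracking error of a short simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(lam, K, dt=1e-3, T=4.0):
    """Integrated squared error of x_ddot = u under u = -K*sgn(x_dot + lam*x)."""
    x, xd, J = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        u = -K * np.sign(xd + lam * x)
        xd += dt * u
        x += dt * xd
        J += dt * x * x
    return J

# Population of (lam, K) candidates; elitist selection plus Gaussian mutation.
pop = [(rng.uniform(0.2, 5.0), rng.uniform(0.2, 5.0)) for _ in range(8)]
init_best = min(cost(l, k) for l, k in pop)
for _ in range(10):                                   # generations
    pop.sort(key=lambda p: cost(*p))
    parents = pop[:4]                                 # keep the best half
    children = [(max(0.1, l + rng.normal(0, 0.3)),
                 max(0.1, k + rng.normal(0, 0.3))) for l, k in parents]
    pop = parents + children                          # elitist replacement

final_best = min(cost(l, k) for l, k in pop)
assert final_best <= init_best + 1e-9                 # elitism never regresses
```

Because the parents survive each generation, the best cost is monotonically non-increasing, which is the minimal correctness property such a search should satisfy.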
Sliding mode control can be used in the design of state observers. These non-linear high-gain observers have the ability to bring the coordinates of the estimator error dynamics to zero in finite time. Additionally, switched-mode observers have attractive measurement noise resilience that is similar to a Kalman filter. [11] [12] For simplicity, the example here uses a traditional sliding mode modification of a Luenberger observer for an LTI system. In these sliding mode observers, the order of the observer dynamics is reduced by one when the system enters the sliding mode. In this particular example, the estimator error for a single estimated state is brought to zero in finite time, and after that time the other estimator errors decay exponentially to zero. However, as first described by Drakunov, [13] a sliding mode observer for non-linear systems can be built that brings the estimation error for all estimated states to zero in a finite (and arbitrarily small) time.
Here, consider the LTI system

ẋ = A x + B u
y = x₁

where the state vector x ≜ (x₁, x₂, …, xₙ)ᵀ, u is a vector of inputs, and the output y is a scalar equal to the first state of the state vector x. Let

A = [ a₁₁  A₁₂
      A₂₁  A₂₂ ]

where a₁₁ is a scalar, A₁₂ ∈ ℝ^(1×(n−1)), A₂₁ ∈ ℝ^((n−1)×1), and A₂₂ ∈ ℝ^((n−1)×(n−1)).
The goal is to design a high-gain state observer that estimates the state vector x using only information from the measurement y = x₁. Hence, let the vector x̂ = (x̂₁, x̂₂, …, x̂ₙ)ᵀ be the estimates of the n states. The observer takes the form

dx̂/dt = A x̂ + B u + L v(x̂₁ − y)
where v(·) is a nonlinear function of the error between the estimated state x̂₁ and the output y = x₁, and L ∈ ℝⁿ is an observer gain vector that serves a similar purpose as in the typical linear Luenberger observer. Likewise, let

L = [ −1, L₂ ]ᵀ

where L₂ ∈ ℝ^(n−1) is a column vector. Additionally, let e ≜ x̂ − x be the state estimator error; that is, e = (e₁, e₂, …, eₙ)ᵀ. The error dynamics are then

ė = dx̂/dt − dx/dt = A e + L v(e₁)

where e₁ = x̂₁ − x₁ is the estimator error for the first state estimate. The nonlinear control law v can be designed to enforce the sliding manifold

0 = x̂₁ − x₁

so that the estimate x̂₁ tracks the real state x₁ after some finite time (i.e., x̂₁ = x₁). Hence, the sliding mode control switching function is

σ(x̂₁, x₁) ≜ e₁ = x̂₁ − x₁.
To attain the sliding manifold, σ̇ and σ must always have opposite signs (i.e., σ σ̇ < 0 for essentially all x). However,

σ̇ = ė₁ = a₁₁ e₁ + A₁₂ e₂ − v(e₁)

where e₂ ≜ (e₂, e₃, …, eₙ)ᵀ is the collection of the estimator errors for all of the unmeasured states. To ensure that σ σ̇ < 0, let

v(e₁) = M sgn(e₁)

where

M > max |a₁₁ e₁ + A₁₂ e₂|.

That is, the positive constant M must be greater than a scaled version of the maximum possible estimator errors for the system (i.e., the initial errors, which are assumed to be bounded so that M can be picked large enough). If M is sufficiently large, it can be assumed that the system achieves e₁ = 0 (i.e., x̂₁ = x₁). Because e₁ is constant (i.e., 0) along this manifold, ė₁ = 0 as well. Hence, the discontinuous control v(e₁) may be replaced with the equivalent continuous control v_eq, where

0 = ė₁ = a₁₁ e₁ + A₁₂ e₂ − v_eq = A₁₂ e₂ − v_eq.
So

v_eq = A₁₂ e₂.

This equivalent control v_eq represents the contribution from the other (n − 1) states to the trajectory of the output state x₁. In particular, the row A₁₂ acts like an output vector for the error subsystem

ė₂ = A₂₁ e₁ + A₂₂ e₂ + L₂ v = (A₂₂ + L₂ A₁₂) e₂        (on the manifold e₁ = 0, v = v_eq).

So, to ensure the estimator error e₂ for the unmeasured states converges to zero, the vector L₂ must be chosen so that the matrix A₂₂ + L₂ A₁₂ is Hurwitz (i.e., the real part of each of its eigenvalues must be negative). Hence, provided that the pair (A₂₂, A₁₂) is observable, this system can be stabilized in exactly the same way as a typical linear state observer when A₁₂ is viewed as the output matrix (i.e., "C"). That is, the equivalent control v_eq provides measurement information about the unmeasured states that can continually move their estimates asymptotically closer to them. Meanwhile, the discontinuous control v forces the estimate of the measured state to have zero error in finite time. Additionally, white zero-mean symmetric measurement noise (e.g., Gaussian noise) only affects the switching frequency of the control v, and hence the noise will have little effect on the equivalent sliding mode control v_eq. Hence, the sliding mode observer has Kalman filter–like features. [12]
The final version of the observer is thus

dx̂/dt = A x̂ + B u + L v = A x̂ + [ B  L ] [ u ; v ]

where

v = M sgn(x̂₁ − y).

That is, by augmenting the control vector u with the switching function v, the sliding mode observer can be implemented as an LTI system. That is, the discontinuous signal v is viewed as a control input to the 2-input LTI system.
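The observer above can be sketched on a small example. All numbers are illustrative assumptions: an autonomous (u = 0) 2-state plant with a₁₁ = 0, A₁₂ = 1, A₂₁ = −2, A₂₂ = −1, gain L₂ = −2 (so A₂₂ + L₂A₁₂ = −3 is Hurwitz), and M = 5 chosen larger than the error terms encountered. The measured-state error e₁ is forced to near zero in finite time; e₂ then decays through the Hurwitz error subsystem.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])      # a11 = 0, A12 = 1, A21 = -2, A22 = -1
L = np.array([-1.0, -2.0])        # L = [-1, L2] with L2 = -2
M, dt = 5.0, 1e-4

x = np.array([1.0, -1.0])         # true state (plant run with u = 0)
xh = np.array([0.0, 0.0])         # observer estimate
for _ in range(int(5.0 / dt)):
    y = x[0]                                  # only the first state is measured
    v = M * np.sign(xh[0] - y)                # discontinuous injection
    x = x + dt * (A @ x)                      # plant
    xh = xh + dt * (A @ xh + L * v)           # sliding mode observer

e = xh - x
assert abs(e[0]) < 1e-2           # measured-state error held near the manifold
assert abs(e[1]) < 1e-2           # unmeasured-state error decayed via A22 + L2*A12
```

The residual in e₁ is the discrete-time chattering band (of order dt·M), which shrinks with the integration step, consistent with the theoretical e₁ = 0 sliding motion.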
For simplicity, this example assumes that the sliding mode observer has access to a measurement of a single state (i.e., the output y = x₁). However, a similar procedure can be used to design a sliding mode observer for a vector of weighted combinations of states (i.e., when the output y = C x uses a generic matrix C). In each case, the sliding mode will be the manifold where the estimated output ŷ follows the measured output y with zero error (i.e., the manifold where ŷ − y = 0).