Adaptive control

Adaptive control is the control method used by a controller that must adapt to a controlled system whose parameters vary or are initially uncertain. [1] [2] For example, as an aircraft flies, its mass slowly decreases as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control differs from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters: robust control guarantees that the control law need not be changed if the changes stay within given bounds, whereas adaptive control is concerned with a control law that changes itself.

Parameter estimation

The foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real-time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in concurrent learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
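As an illustration of such an update law, the following is a minimal recursive least squares sketch; the "true" parameters and noise level are illustrative stand-ins for an unknown plant, not from any particular application:

```python
import numpy as np

# Recursive least squares (RLS): online estimate of theta in y = phi . theta.
# The regressors here are random, which makes them persistently exciting.

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])      # unknown plant parameters (illustrative)
theta_hat = np.zeros(2)                 # current estimate
P = np.eye(2) * 100.0                   # estimate covariance (large = uncertain)

for k in range(200):
    phi = rng.standard_normal(2)        # regressor vector at step k
    y = phi @ theta_true + 0.01 * rng.standard_normal()  # noisy measurement
    # RLS update: gain, then estimate, then covariance
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = P - np.outer(K, phi @ P)        # P <- (I - K phi^T) P

print(theta_hat)  # close to theta_true
```

Because the update runs one measurement at a time, the same loop can be embedded in a running controller, which is what distinguishes it from batch least squares.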

Classification of adaptive control techniques

In general, one should distinguish between:

  1. Feedforward adaptive control
  2. Feedback adaptive control

as well as between

  1. Direct methods
  2. Indirect methods
  3. Hybrid methods

Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller parameters. [3] Hybrid methods rely on both estimation of parameters and direct modification of the control law.
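The indirect (certainty-equivalence) idea can be sketched as follows: estimate the parameters of a scalar plant online, then recompute a pole-placement gain from the estimates at every step. The plant numbers, step sizes, and dither below are illustrative assumptions, not a prescribed design:

```python
import numpy as np

# Indirect adaptive control sketch: estimate plant parameters, then compute
# the controller gain from the estimates (certainty equivalence).
# Plant: x[k+1] = a*x[k] + b*u[k] with a, b unknown to the controller.
# Goal: place the closed-loop pole at 0.5. All numbers are illustrative.

rng = np.random.default_rng(1)
a, b = 1.5, 1.0                 # true (unknown) plant parameters
a_hat, b_hat = 1.0, 1.0         # initial estimates
x, gamma = 1.0, 0.5             # state, normalized-gradient step size

for _ in range(300):
    # certainty-equivalence gain from current estimates, plus a small dither
    u = (0.5 - a_hat) / b_hat * x + 0.1 * rng.standard_normal()
    x_next = a * x + b * u
    # normalized gradient update of the model x_next ~ a_hat*x + b_hat*u
    err = x_next - (a_hat * x + b_hat * u)
    denom = 1.0 + x * x + u * u
    a_hat += gamma * err * x / denom
    b_hat += gamma * err * u / denom
    b_hat = max(b_hat, 0.1)     # projection keeps the gain computation safe
    x = x_next

print(a_hat, b_hat, x)          # estimates improve; the state stays regulated
```

A direct method would instead adapt the feedback gain itself from the tracking error, skipping the intermediate plant model.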

[Figure: MRAC (model reference adaptive control) block diagram]
[Figure: MIAC (model identification adaptive control) block diagram]

There are several broad categories of feedback adaptive control (classification can vary), including model reference adaptive control (MRAC), model identification adaptive control (MIAC), and adaptive control using multiple models. [4]

[Figure: adaptive control with multiple models]
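A minimal MRAC sketch, using the Lyapunov adaptation rule for a first-order plant; the plant, reference model, and gains are illustrative assumptions:

```python
import numpy as np

# Direct MRAC sketch (Lyapunov rule) for a first-order plant
#   dy/dt = -a*y + b*u, with a and b unknown and sign(b) known positive.
# Reference model: dym/dt = -am*ym + am*r. Control u = th1*r + th2*y,
# adaptation th1' = -g*e*r, th2' = -g*e*y with e = y - ym.

a, b = 1.0, 0.5                 # true plant (unknown to the controller)
am, g, dt = 2.0, 5.0, 0.001     # model pole, adaptation gain, Euler step
y = ym = th1 = th2 = 0.0
errs = []

for k in range(int(100 / dt)):  # simulate 100 s
    t = k * dt
    r = 1.0 if np.sin(0.5 * t) >= 0 else -1.0   # square wave for excitation
    u = th1 * r + th2 * y
    e = y - ym
    dy, dym = -a * y + b * u, -am * ym + am * r
    dth1, dth2 = -g * e * r, -g * e * y
    y, ym = y + dt * dy, ym + dt * dym          # Euler integration
    th1, th2 = th1 + dt * dth1, th2 + dt * dth2
    errs.append(abs(e))

print(np.mean(errs[-10000:]))   # mean |e| over the last 10 s shrinks toward 0
```

The Lyapunov argument mentioned in the applications section guarantees that the tracking error e goes to zero; the parameters themselves converge only with sufficient excitation, which the square-wave reference provides here.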

Some special topics in adaptive control can be introduced as well:

  1. Adaptive control based on discrete-time process identification
  2. Adaptive control based on the model reference control technique [5]
  3. Adaptive control based on continuous-time process models
  4. Adaptive control of multivariable processes [6]
  5. Adaptive control of nonlinear processes
  6. Concurrent learning adaptive control, which relaxes the condition on persistent excitation for parameter convergence for a class of systems [7] [8]

More recently, adaptive control has been combined with intelligent techniques such as fuzzy logic and neural networks, giving rise to concepts such as fuzzy adaptive control.

Applications

When designing adaptive control systems, special consideration of convergence and robustness issues is necessary. Lyapunov stability is typically used to derive control adaptation laws and show convergence.

Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.
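The extremum value control mentioned above can be sketched as a simple search over a hypothetical static characteristic curve; the quadratic map and its optimum are illustrative assumptions, standing in for an unknown steady-state relation:

```python
# Extremum value control sketch: drive the input of a static plant toward the
# input that maximizes its steady-state output.

def steady_state_output(u):
    return 5.0 - (u - 3.0) ** 2   # characteristic curve, unknown to the controller

u = 0.0          # current operating point
step = 0.5       # probe/step size
for _ in range(100):
    # probe both directions and move toward the higher steady-state output
    if steady_state_output(u + step) > steady_state_output(u - step):
        u += step
    else:
        u -= step
    step *= 0.95  # shrink the step to settle at the extremum

print(round(u, 2))  # close to the optimum 3.0
```

Perturbation-based extremum seeking works on the same principle, but estimates the local slope continuously from a small sinusoidal dither instead of discrete probes.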

A particularly successful application of adaptive control has been adaptive flight control. [9] [10] This body of work has focused on guaranteeing stability of a model reference adaptive control scheme using Lyapunov arguments. Several successful flight-test demonstrations have been conducted, including fault tolerant adaptive control. [11]

Related Research Articles

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.

H-infinity (H∞) methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques that they are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers, such as settling time, energy expended, etc. Also, nonlinear constraints such as saturation are generally not well handled. These methods were introduced into control theory in the late 1970s and early 1980s by George Zames, J. William Helton, and Allen Tannenbaum.

Lyapunov stability: property of a dynamical system where solutions near an equilibrium point remain so

Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable. More strongly, if x_e is Lyapunov stable and all solutions that start out near x_e converge to x_e, then x_e is said to be asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
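These definitions can be checked concretely on the scalar system dx/dt = -x with equilibrium at the origin, whose solutions are known in closed form:

```python
import math

# Scalar system dx/dt = -x, equilibrium x_e = 0: solutions x(t) = x0 * exp(-t)
# decay to 0, so the origin is asymptotically (indeed exponentially) stable.
# The Lyapunov function V(x) = x^2 decreases along every solution.

x0 = 2.0
xs = [x0 * math.exp(-t / 10) for t in range(0, 51)]  # sample x(t) on t = 0..5
vs = [x * x for x in xs]                             # V along the trajectory

assert all(v2 < v1 for v1, v2 in zip(vs, vs[1:]))    # V strictly decreases
print(xs[-1])  # x(5) = 2 * e^-5, about 0.0135
```

Finding a function V that decreases along solutions is exactly what the adaptive update laws in the main article are engineered to guarantee.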

The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
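The receding-horizon idea can be sketched for a hypothetical scalar unstable plant; all numbers are illustrative, and the sketch omits the constraints that are MPC's main point in practice:

```python
import numpy as np

# Receding-horizon sketch for a scalar unstable plant x[k+1] = a*x[k] + b*u[k].
# At each step, minimize sum(x_i^2) + r*sum(u_i^2) over an N-step horizon,
# apply only the first input, then re-optimize (the "receding horizon").

a, b, r, N = 1.2, 1.0, 0.1, 5

# Prediction matrices: the N future states are F*x0 + G @ u
F = np.array([a ** i for i in range(1, N + 1)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

x = 5.0
for _ in range(30):
    # least-squares form of: minimize ||G u + F x||^2 + r ||u||^2
    A = np.vstack([G, np.sqrt(r) * np.eye(N)])
    y = np.concatenate([-F * x, np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    x = a * x + b * u[0]   # apply only the first planned input

print(abs(x))  # driven close to 0 despite the unstable open loop
```

Re-solving the whole horizon at every step is what lets a real MPC react to disturbances and updated constraints, rather than executing a fixed plan.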

In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz, is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.

In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.

In control theory, advanced process control (APC) refers to a broad range of techniques and technologies implemented within industrial process control systems. Advanced process controls are usually deployed optionally and in addition to basic process controls. Basic process controls are designed and built with the process itself, to facilitate basic operation, control and automation requirements. Advanced process controls are typically added subsequently, often over the course of many years, to address particular performance or economic improvement opportunities in the process.

Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.

H-infinity loop-shaping is a design methodology in modern control theory. It combines the traditional intuition of classical control methods, such as Bode's sensitivity integral, with H-infinity optimization techniques to achieve controllers whose stability and performance properties hold despite bounded differences between the nominal plant assumed in design and the true plant encountered in practice. Essentially, the control system designer describes the desired responsiveness and noise-suppression properties by weighting the plant transfer function in the frequency domain; the resulting 'loop-shape' is then 'robustified' through optimization. Robustification usually has little effect at high and low frequencies, but the response around unity-gain crossover is adjusted to maximise the system's stability margins. H-infinity loop-shaping can be applied to multiple-input multiple-output (MIMO) systems.

In recent years, biologically inspired methods such as evolutionary algorithms have been increasingly employed to solve and analyze complex computational problems. BELBIC (brain emotional learning based intelligent controller) is one such controller, proposed by Caro Lucas, Danial Shahmirzadi and Nima Sheikholeslami; it adopts the network model developed by Moren and Balkenius to mimic those parts of the brain which are known to produce emotion.

Variable structure control (VSC) is a form of discontinuous nonlinear control. The method alters the dynamics of a nonlinear system by applying a high-frequency switching control. The state-feedback control law is not a continuous function of time: the structure of the control law varies based on the position of the state trajectory, switching from one smooth control law to another, possibly very quickly. VSC and the associated sliding mode behaviour were first investigated in the early 1950s in the Soviet Union by Emelyanov and several co-researchers.
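A minimal numerical sketch of this switching behaviour for a double-integrator plant; the sliding surface and gain are illustrative choices:

```python
import numpy as np

# Sliding mode (VSC) sketch on a double integrator x1' = x2, x2' = u.
# Sliding surface s = x1 + x2; the discontinuous law u = -k*sign(s) drives
# the state onto s = 0 and then slides along it (x1' = -x1 on the surface),
# switching at high frequency in the simulation (chattering).

k, dt = 2.0, 0.001
x1, x2 = 1.0, 0.0
for _ in range(int(20 / dt)):       # simulate 20 s with Euler steps
    s = x1 + x2
    u = -k * np.sign(s)             # switching control law
    x1, x2 = x1 + dt * x2, x2 + dt * u

print(x1, x2)  # both near 0, up to chattering of order k*dt
```

The visible chattering around the surface is the practical price of the discontinuous law, and is why boundary-layer or higher-order sliding mode variants exist.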

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.

The following outline is provided as an overview of and topical guide to control engineering:

Wassim Michael Haddad is a Lebanese-Greek-American applied mathematician, scientist, and engineer, with research specialization in the areas of dynamical systems and control. His research has led to fundamental breakthroughs in applied mathematics, thermodynamics, stability theory, robust control, dynamical system theory, and neuroscience. Professor Haddad is a member of the faculty of the School of Aerospace Engineering at Georgia Institute of Technology, where he holds the rank of Professor and Chair of the Flight Mechanics and Control Discipline. Dr. Haddad is a member of the Academy of Nonlinear Sciences for recognition of paramount contributions to the fields of nonlinear stability theory, nonlinear dynamical systems, and nonlinear control and an IEEE Fellow for contributions to robust, nonlinear, and hybrid control systems.

Linear parameter-varying control deals with the control of linear parameter-varying systems, a class of nonlinear systems which can be modelled as parametrized linear systems whose parameters change with their state.

System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.

Machine learning control (MLC) is a subfield of machine learning, intelligent control and control theory which solves optimal control problems with methods of machine learning. Key applications are complex nonlinear systems for which linear control theory methods are not applicable.

Petros A. Ioannou is a Cypriot American Electrical Engineer who made important contributions in Robust Adaptive Control, Vehicle and Traffic Flow Control, and Intelligent Transportation Systems.

Frank L. Lewis is an American electrical engineer, academic and researcher. He is a professor of electrical engineering, Moncrief-O’Donnell Endowed Chair, and head of Advanced Controls and Sensors Group at The University of Texas at Arlington (UTA). He is a member of UTA Academy of Distinguished Teachers and a charter member of UTA Academy of Distinguished Scholars.

References

  1. Annaswamy, Anuradha M. (2023). "Adaptive Control and Intersections with Reinforcement Learning". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93. doi:10.1146/annurev-control-062922-090153.
  2. Cao, Chengyu; Ma, Lili; Xu, Yunjun (2012). "Adaptive Control Theory and Applications". Journal of Control Science and Engineering. 2012 (1): 1–2. doi:10.1155/2012/827353.
  3. Åström, Karl (2008). Adaptive Control. Dover. pp. 25–26.
  4. Narendra, Kumpati S.; Han, Zhuo (2011). "Adaptive Control Using Collective Information Obtained from Multiple Models". IFAC Proceedings Volumes. 18 (1): 362–367. doi:10.3182/20110828-6-IT-1002.02237.
  5. Lavretsky, Eugene; Wise, Kevin (2013). Robust Adaptive Control. Springer London. pp. 317–353. ISBN 9781447143963.
  6. Tao, Gang (2014). "Multivariable adaptive control: A survey". Automatica. 50 (11): 2737–2764. doi:10.1016/j.automatica.2014.10.015.
  7. Chowdhary, Girish; Johnson, Eric (2011). "Theory and flight-test validation of a concurrent learning adaptive controller". Journal of Guidance, Control, and Dynamics. 34 (2): 592–607. doi:10.2514/1.46866.
  8. Chowdhary, Girish; Muehlegg, Maximillian; Johnson, Eric (2014). "Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation". International Journal of Control. 87 (8): 1583–1603.
  9. Lavretsky, Eugene (2015). "Robust and Adaptive Control Methods for Aerial Vehicles". Handbook of Unmanned Aerial Vehicles. pp. 675–710. doi:10.1007/978-90-481-9707-1_50. ISBN 978-90-481-9706-4.
  10. Kannan, Suresh K.; Chowdhary, Girish Vinayak; Johnson, Eric N. (2015). "Adaptive Control of Unmanned Aerial Vehicles: Theory and Flight Tests". Handbook of Unmanned Aerial Vehicles. pp. 613–673. doi:10.1007/978-90-481-9707-1_61. ISBN 978-90-481-9706-4.
  11. Chowdhary, Girish; Johnson, Eric N.; Chandramohan, Rajeev; Kimbrell, Scott M.; Calise, Anthony (2013). "Guidance and control of airplanes under actuator failures and severe structural damage". Journal of Guidance, Control, and Dynamics. 36 (4): 1093–1104. doi:10.2514/1.58028.

Further reading