Robust control


In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.


The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, [1] prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today.

In contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work on the assumption that certain variables will be unknown but bounded. [2] [3]

Criteria for robustness

Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control.
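As a minimal numerical illustration of this point (a hypothetical static plant with an uncertain gain, not an example drawn from the literature), the following sketch shows that the closed-loop gain of a high-gain feedback loop is nearly insensitive to large variations in the plant gain:

    # Illustrative sketch: a static plant with uncertain gain k under unity negative
    # feedback and controller gain Kc has closed-loop gain T = Kc*k / (1 + Kc*k).
    # As Kc grows, T approaches 1 regardless of k, so the effect of the parameter
    # uncertainty on the closed loop becomes negligible.
    def closed_loop_gain(Kc, k):
        return Kc * k / (1.0 + Kc * k)

    for Kc in (1.0, 10.0, 100.0, 1000.0):
        gains = [closed_loop_gain(Kc, k) for k in (0.5, 1.0, 2.0)]  # wide gain uncertainty
        spread = max(gains) - min(gains)
        print(f"Kc={Kc:7.1f}  closed-loop gain spread over k in {{0.5, 1, 2}}: {spread:.4f}")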

The major obstacle to achieving high loop gains is the need to maintain closed-loop stability; shaping the loop so that stable closed-loop operation is preserved can be a technical challenge.

Robust control systems often incorporate advanced topologies that include multiple feedback loops and feed-forward paths. The control laws may be represented by high-order transfer functions in order to achieve the desired disturbance rejection performance together with robust closed-loop operation.

High-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings. This idea was already well understood by Bode and Black in 1927.
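A rough sketch of the same effect in the op-amp setting (the resistor values below are hypothetical): the closed-loop gain of a non-inverting stage approaches 1 + R2/R1 once the open-loop gain A is large, which is what justifies the simplified models mentioned above.

    # A non-inverting op-amp stage with feedback fraction beta = R1/(R1 + R2) has
    # closed-loop gain A/(1 + A*beta).  For large open-loop gain A this approaches
    # 1/beta = 1 + R2/R1, independent of the exact value of A.
    R1, R2 = 1e3, 9e3                    # hypothetical resistor values -> ideal gain 10
    beta = R1 / (R1 + R2)

    for A in (1e3, 1e5, 1e7):            # open-loop gain varying over four decades
        print(f"A={A:8.0e}  closed-loop gain = {A / (1 + A * beta):.4f}")
    print(f"ideal 1/beta = {1 / beta:.4f}")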

The modern theory of robust control

The theory of robust control systems began in the late 1970s and early 1980s and soon developed a number of techniques for dealing with bounded system uncertainty. [4] [5]

Probably the most important example of a robust control technique is H-infinity loop-shaping, developed by Duncan McFarlane and Keith Glover of Cambridge University; this method minimizes the sensitivity of a system over its frequency spectrum, which guarantees that the system will not deviate greatly from expected trajectories when disturbances enter the system.
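The quantity being minimized can be illustrated with a short sketch (hypothetical plant, controller, and weight; actual H-infinity synthesis requires a dedicated solver such as hinfsyn in MATLAB or python-control): the peak of the weighted sensitivity over a frequency grid approximates the H-infinity norm that loop-shaping designs keep small.

    import numpy as np

    # Evaluate ||W1*S||_inf on a frequency grid for a hypothetical plant and controller.
    w = np.logspace(-3, 3, 2000)
    s = 1j * w

    P = 1.0 / (s * (s + 1.0))            # hypothetical plant  P(s) = 1/(s(s+1))
    C = 10.0 * (s + 1.0) / (s + 10.0)    # hypothetical lead-type controller
    W1 = (s / 2.0 + 1.0) / (s + 0.01)    # low-frequency performance weight

    S = 1.0 / (1.0 + P * C)              # sensitivity function
    print("||W1*S||_inf  ~=", np.max(np.abs(W1 * S)))   # grid approximation of the H-infinity norm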

From an application point of view, an emerging area of robust control is sliding mode control (SMC), a variation of variable structure control (VSC). The robustness of SMC with respect to matched uncertainty, as well as its simplicity of design, has attracted a variety of applications.
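A minimal simulation sketch of the idea (illustrative gains and disturbance, not a production design): a double integrator with a matched, bounded disturbance is driven to the origin by a switching control law whose gain exceeds the disturbance bound.

    import numpy as np

    # Double integrator  x'' = u + d  with a matched disturbance |d| <= 0.5.
    # Sliding variable sigma = v + lam*x; the control u = -lam*v - K*sign(sigma),
    # with K larger than the disturbance bound, drives sigma (and then x) to zero
    # despite the unknown d.
    lam, K, dt = 1.0, 1.0, 1e-3
    x, v = 1.0, 0.0                      # initial position and velocity

    for step in range(int(20.0 / dt)):
        d = 0.5 * np.sin(3.0 * step * dt)          # unknown but bounded disturbance
        sigma = v + lam * x                        # sliding variable
        u = -lam * v - K * np.sign(sigma)          # equivalent + switching terms
        v += (u + d) * dt                          # Euler integration of x'' = u + d
        x += v * dt

    print(f"after 20 s: x = {x:.4f}, v = {v:.4f}")  # both near zero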

While robust control has traditionally been treated with deterministic approaches, in the last two decades this approach has been criticized as too rigid to describe real uncertainty and as often leading to overly conservative solutions. Probabilistic robust control has been introduced as an alternative; see e.g. [6], which interprets robust control within so-called scenario optimization theory.
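A toy illustration of the scenario idea (a sketch only, solved here by brute-force search rather than the convex optimization used in [6]): the design constraint is required to hold on a finite set of randomly sampled plants instead of on the entire uncertainty set.

    import numpy as np

    # Scalar plant x' = a*x + u with uncertain parameter a.  Draw N random samples of a
    # and look for the smallest feedback gain kc (u = -kc*x) that places the closed-loop
    # pole a - kc to the left of -1 for every sampled plant.
    rng = np.random.default_rng(0)
    N = 500
    a_samples = rng.uniform(-0.5, 2.0, size=N)      # sampled uncertain parameter

    candidate_gains = np.linspace(0.0, 10.0, 1001)
    feasible = [kc for kc in candidate_gains
                if np.all(a_samples - kc <= -1.0)]  # spec must hold for all scenarios
    print("smallest scenario-feasible gain:", min(feasible) if feasible else None)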

Another example is loop transfer recovery (LQG/LTR), [7] which was developed to overcome the robustness problems of linear-quadratic-Gaussian (LQG) control.

Other robust techniques include quantitative feedback theory (QFT), passivity-based control, and Lyapunov-based control.

When system behavior varies considerably in normal operation, multiple control laws may have to be devised. Each distinct control law addresses a specific system behavior mode. An example is a computer hard disk drive. Separate robust control system modes are designed in order to address the rapid magnetic head traversal operation, known as the seek, a transitional settle operation as the magnetic head approaches its destination, and a track following mode during which the disk drive performs its data access operation.

One of the challenges is to design a control system that addresses these diverse system operating modes and enables smooth transition from one mode to the next as quickly as possible.

Such a state-machine-driven composite control system is an extension of the gain scheduling idea, where the entire control strategy changes based upon changes in system behavior.
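A schematic sketch of such a supervisory scheme (all mode names, thresholds, and gains below are hypothetical) might look as follows:

    # A supervisory state machine selects among three control laws -- a fast "seek"
    # law, a damped "settle" law and a precise "track" law -- based on the remaining
    # position error, mirroring the hard-disk example above.
    def supervisory_control(error, error_rate, mode):
        if mode == "seek" and abs(error) < 0.1:
            mode = "settle"
        elif mode == "settle" and abs(error) < 0.01 and abs(error_rate) < 0.05:
            mode = "track"

        if mode == "seek":                       # aggressive traversal law
            u = 50.0 * error - 5.0 * error_rate
        elif mode == "settle":                   # heavily damped transition law
            u = 10.0 * error - 20.0 * error_rate
        else:                                    # "track": high-precision regulation
            u = 5.0 * error - 2.0 * error_rate
        return u, mode

    u, mode = supervisory_control(error=0.5, error_rate=0.0, mode="seek")
    print(mode, u)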


Related Research Articles

<span class="mw-page-title-main">Control engineering</span> Engineering discipline that deals with control systems

Control engineering or control systems engineering is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering and mechanical engineering at many institutions around the world.

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.

A proportional–integral–derivative controller is a control loop mechanism employing feedback that is widely used in industrial control systems and a variety of other applications requiring continuously modulated control. A PID controller continuously calculates an error value as the difference between a desired setpoint (SP) and a measured process variable (PV) and applies a correction based on proportional, integral, and derivative terms, hence the name.
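A minimal discrete-time sketch of this law (illustrative only): the correction is a weighted sum of the error, its accumulated integral, and its rate of change.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement           # SP - PV
            self.integral += error * self.dt         # accumulated integral term
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative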

<span class="mw-page-title-main">Control system</span> System that manages the behavior of other systems

A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process.

H∞ (H-infinity) methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that they are readily applicable to problems involving multivariable systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers, such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well handled. These methods were introduced into control theory in the late 1970s and early 1980s by George Zames, J. William Helton, and Allen Tannenbaum.

In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal that forces the system to "slide" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system, as the system both flows through a continuous state space and moves through different discrete control modes.

An industrial process control or simply process control in continuous production processes is a discipline that uses industrial control systems and control theory to achieve a production level of consistency, economy and safety which could not be achieved purely by human manual control. It is implemented widely in industries such as automotive, mining, dredging, oil refining, pulp and paper manufacturing, chemical processing and power generating plants.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
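A bare-bones receding-horizon sketch (hypothetical double-integrator model, no constraints, so the inner optimization reduces to a least-squares problem): at each step the cost over an N-step horizon is minimized, but only the first input is applied before the optimization is repeated.

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])       # discrete double integrator, dt = 0.1
    B = np.array([[0.005], [0.1]])
    N = 20                                        # prediction horizon
    Q, R = np.diag([10.0, 1.0]), 0.1

    def mpc_input(x):
        # Build prediction matrices so that the stacked state trajectory is X = F x + G U.
        F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
        G = np.zeros((2 * N, N))
        for i in range(N):
            for j in range(i + 1):
                G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
        Qbar = np.kron(np.eye(N), Q)
        H = G.T @ Qbar @ G + R * np.eye(N)        # quadratic cost in the input sequence U
        f = G.T @ Qbar @ (F @ x)
        U = np.linalg.solve(H, -f)                # unconstrained minimizer
        return U[0]                               # apply only the first input

    x = np.array([[1.0], [0.0]])                  # start 1 unit away from the origin
    for _ in range(100):
        u = mpc_input(x)
        x = A @ x + B * u
    print("state after 10 s:", x.ravel())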

Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with control law changing itself.
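A textbook-style sketch of the contrast (illustrative scalar example): the plant parameter a is unknown and no bound on it is assumed; instead the feedback gain is adapted online until the state is regulated, rather than being fixed at a worst-case value as in a robust design.

    # Scalar plant x' = a*x + u with unknown, possibly unstable parameter a.
    # The adaptation law k' = gamma*x**2 with u = -k*x regulates x to zero for any
    # constant a; the value of a is used only by the simulation, not the controller.
    a = 2.0
    gamma, dt = 5.0, 1e-3
    x, k = 1.0, 0.0

    for _ in range(int(10.0 / dt)):
        u = -k * x
        x += (a * x + u) * dt
        k += gamma * x * x * dt          # gain keeps growing while the error persists

    print(f"x = {x:.5f}, adapted gain k = {k:.3f} (exceeds a = {a})")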

In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz, is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.

In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space, where a special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information, and as the criterion of good design. The performance of the closed loop system is monitored by the coefficient diagram.

Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.

H-infinity loop-shaping is a design methodology in modern control theory. It combines the traditional intuition of classical control methods, such as Bode's sensitivity integral, with H-infinity optimization techniques to achieve controllers whose stability and performance properties hold despite bounded differences between the nominal plant assumed in design and the true plant encountered in practice. Essentially, the control system designer describes the desired responsiveness and noise-suppression properties by weighting the plant transfer function in the frequency domain; the resulting 'loop-shape' is then 'robustified' through optimization. Robustification usually has little effect at high and low frequencies, but the response around unity-gain crossover is adjusted to maximise the system's stability margins. H-infinity loop-shaping can be applied to multiple-input multiple-output (MIMO) systems.

Variable structure control (VSC) is a form of discontinuous nonlinear control. The method alters the dynamics of a nonlinear system by application of a high-frequency switching control. The state-feedback control law is not a continuous function of time; it switches from one smooth condition to another. So the structure of the control law varies based on the position of the state trajectory, and the method switches from one smooth control law to another, possibly at very fast speeds. VSC and the associated sliding mode behaviour were first investigated in the early 1950s in the Soviet Union by Emelyanov and several co-researchers.

<span class="mw-page-title-main">Jakob Stoustrup</span> Danish engineer

Jakob Stoustrup is a Danish researcher employed at Aalborg University, where he serves as professor of control theory at the Department of Electronic Systems.

In control engineering, the sensitivity of a control system measures how variations in the plant parameters affect the closed-loop transfer function. Since the controller parameters are typically matched to the process characteristics and the process may change, it is important that the controller parameters are chosen in such a way that the closed-loop system is not sensitive to variations in process dynamics. Moreover, the sensitivity function is also important for analysing how disturbances affect the system.

Active disturbance rejection control is a model-free control technique used for designing controllers for systems with unknown dynamics and external disturbances. This approach only necessitates an estimated representation of the system's behavior to design controllers that effectively counteract disturbances without causing any overshooting.
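A compact sketch of a linear active disturbance rejection loop (hypothetical plant and tuning values): an extended state observer estimates the output, its derivative, and a lumped "total disturbance", which the control law then cancels using only a rough estimate b0 of the input gain.

    import numpy as np

    dt = 1e-3
    wc, wo, b0 = 5.0, 25.0, 1.0                 # controller/observer bandwidths, gain guess
    kp, kd = wc**2, 2.0 * wc                    # ideal loop becomes (s + wc)^2
    l1, l2, l3 = 3.0 * wo, 3.0 * wo**2, wo**3   # observer gains via bandwidth parameterization

    y, ydot = 0.0, 0.0                          # true plant state (unknown to the controller)
    z1, z2, z3 = 0.0, 0.0, 0.0                  # estimates of y, y', total disturbance
    r, u = 1.0, 0.0                             # setpoint and control input

    for step in range(int(5.0 / dt)):
        t = step * dt
        # True plant: y'' = unknown internal dynamics + external disturbance + 1.2*u
        d = 0.3 * np.sin(2.0 * t)
        yddot = -y - 0.5 * ydot + d + 1.2 * u
        y += ydot * dt
        ydot += yddot * dt

        # Extended state observer driven by the measured output y
        e = y - z1
        z1 += (z2 + l1 * e) * dt
        z2 += (z3 + b0 * u + l2 * e) * dt
        z3 += (l3 * e) * dt

        # Disturbance-cancelling control law
        u = (kp * (r - z1) - kd * z2 - z3) / b0

    print(f"y = {y:.4f} (setpoint {r}), estimated total disturbance = {z3:.4f}")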

The following outline is provided as an overview of and topical guide to control engineering:

<span class="mw-page-title-main">Wassim Michael Haddad</span>

Wassim Michael Haddad is a Lebanese-Greek-American applied mathematician, scientist, and engineer, with research specialization in the areas of dynamical systems and control. His research has led to fundamental breakthroughs in applied mathematics, thermodynamics, stability theory, robust control, dynamical system theory, and neuroscience. Professor Haddad is a member of the faculty of the School of Aerospace Engineering at Georgia Institute of Technology, where he holds the rank of Professor and Chair of the Flight Mechanics and Control Discipline. Dr. Haddad is a member of the Academy of Nonlinear Sciences in recognition of paramount contributions to the fields of nonlinear stability theory, nonlinear dynamical systems, and nonlinear control, and an IEEE Fellow for contributions to robust, nonlinear, and hybrid control systems.

Classical control theory is a branch of control theory that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback, using the Laplace transform as a basic tool to model such systems.

References

  1. M. Athans, "Editorial on the LQG problem," IEEE Transactions on Automatic Control, vol. 16, no. 6, p. 528, 1971.
  2. J. Ackermann, Robuste Regelung (in German), Springer-Verlag, 1993 (Section 1.5); an English version is also available.
  3. Manfred Morari: Homepage
  4. Safonov: editorial
  5. Kemin Zhou, Essentials of Robust Control, Prentice Hall, 1998.
  6. G. Calafiore and M. C. Campi, "The scenario approach to robust control design," IEEE Transactions on Automatic Control, vol. 51, no. 5, pp. 742–753, 2006.
  7. S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design (2nd ed.). http://www.nt.ntnu.no/users/skoge/book.html

Further reading