Linear parameter-varying control

Linear parameter-varying control (LPV control) deals with the control of linear parameter-varying systems, a class of nonlinear systems which can be modelled as parametrized linear systems whose parameters change with their state.

Gain scheduling

In designing feedback controllers for dynamical systems, a variety of modern multivariable controllers are used. In general, these controllers are designed at various operating points using linearized models of the system dynamics and are scheduled as a function of one or more parameters for operation at intermediate conditions. Gain scheduling is thus an approach to the control of nonlinear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system. One or more observable variables, called the scheduling variables, are used to determine the current operating region of the system and to enable the appropriate linear controller. For example, in aircraft flight control, a set of controllers is designed at gridded points of parameters such as angle of attack (AoA), Mach number, dynamic pressure, and center of gravity (CG). In brief, gain scheduling is a control design approach that constructs a nonlinear controller for a nonlinear plant by patching together a collection of linear controllers. These linear controllers are blended in real time via switching or interpolation.
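As a concrete illustration of blending via interpolation, the following minimal sketch interpolates PI gains between gridded design points, with dynamic pressure as the scheduling variable. All names and numerical values here are illustrative assumptions, not taken from the sources.

```python
import numpy as np

# Hypothetical PI gains designed offline at three trim points,
# scheduled on dynamic pressure q_bar [kPa] (illustrative values only).
q_bar_grid = np.array([5.0, 20.0, 40.0])   # gridded operating points
kp_grid    = np.array([2.5, 1.2, 0.7])     # proportional gain at each point
ki_grid    = np.array([0.8, 0.4, 0.2])     # integral gain at each point

def scheduled_gains(q_bar):
    """Blend the point designs by linear interpolation on the scheduling
    variable; outside the grid the nearest point design is held."""
    kp = np.interp(q_bar, q_bar_grid, kp_grid)
    ki = np.interp(q_bar, q_bar_grid, ki_grid)
    return kp, ki

# At an intermediate condition the controller is a blend of its neighbours:
kp, ki = scheduled_gains(12.0)
print(f"kp={kp:.3f}, ki={ki:.3f}")
```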

Scheduling multivariable controllers can be a tedious and time-consuming task. A newer paradigm is linear parameter-varying (LPV) techniques, which synthesize an automatically scheduled multivariable controller.

Drawbacks of classical gain scheduling

Though the approach is simple and the computational burden of linearization-based scheduling is often much lower than that of other nonlinear design approaches, its inherent drawbacks sometimes outweigh its advantages and necessitate a new paradigm for the control of dynamical systems. [1] Newer methodologies such as adaptive control based on artificial neural networks (ANNs) and fuzzy logic try to address such problems, but the lack of proof of stability and performance of these approaches over the entire operating-parameter regime requires the design of a parameter-dependent controller with guaranteed properties, for which a linear parameter-varying controller is an ideal candidate.

Linear parameter-varying systems

LPV systems are a very special class of nonlinear systems which appear well suited for the control of dynamical systems with parameter variations. In general, LPV techniques provide a systematic design procedure for gain-scheduled multivariable controllers. This methodology allows performance, robustness and bandwidth limitations to be incorporated into a unified framework. [2] [3] A brief introduction to LPV systems and an explanation of the terminology are given below.

Parameter dependent systems

In control engineering, a state-space representation is a mathematical model of a physical system as a set of input, output, and state variables, related by first-order differential equations. The dynamic evolution of a nonlinear, autonomous system is represented by

$$\dot{x}(t) = f(x(t)), \qquad x(0) = x_0$$

If the system is time-variant,

$$\dot{x}(t) = f(x(t), t)$$

The state variables describe the mathematical "state" of a dynamical system. In modeling large, complex nonlinear systems, if the state variables are chosen to be compact for the sake of practicality and simplicity, then parts of the dynamic evolution of the system are missing. The state-space description will then involve other variables, called exogenous variables, whose evolution is not understood or is too complicated to be modeled, but which affect the evolution of the state variables in a known manner and are measurable in real time using sensors. When a large number of sensors are used, some of these sensors measure outputs in the system-theoretic sense, as known, explicit nonlinear functions of the modeled states and time, while other sensors provide accurate estimates of the exogenous variables. Hence, the model is a time-varying, nonlinear system, with the future time variation unknown but measured by the sensors in real time. In this case, if $\theta(t)$ denotes the exogenous parameter vector and $x(t)$ denotes the modeled state, then the state equations are written as

$$\dot{x}(t) = f(x(t), \theta(t))$$

The parameter $\theta(t)$ is not known in advance, but its evolution is measured in real time and used for control. If the above parameter-dependent system is linear in the state and input, it is called a linear parameter-dependent system. Such systems are written in a form similar to linear time-invariant systems, albeit with the inclusion of the time-varying parameter:

$$\dot{x}(t) = A(\theta(t))\,x(t) + B(\theta(t))\,u(t)$$
$$y(t) = C(\theta(t))\,x(t) + D(\theta(t))\,u(t)$$
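To make the dependence concrete, the following minimal simulation sketch integrates an illustrative second-order LPV plant whose state matrix depends affinely on a measured scalar parameter. The matrices and the parameter trajectory are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative affine LPV plant: x' = A(theta(t)) x, with
# A(theta) = A0 + theta * A1 and theta measured in real time.
A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A1 = np.array([[0.0, 0.0], [-1.0, -0.5]])

def theta(t):
    # Stand-in for the real-time sensor measurement of the exogenous parameter.
    return 0.5 + 0.5 * np.sin(0.3 * t)

def lpv_dynamics(t, x):
    # Evaluate the parameter-dependent state matrix at the current time.
    A = A0 + theta(t) * A1
    return A @ x

sol = solve_ivp(lpv_dynamics, (0.0, 20.0), [1.0, 0.0], dense_output=True)
print("state at t=20:", sol.y[:, -1])
```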

Parameter-dependent systems are linear systems whose state-space descriptions are known functions of time-varying parameters. The time variation of each parameter is not known in advance, but is assumed to be measurable in real time. The controller is restricted to be a linear system whose state-space entries depend causally on the parameter's history. There exist three different methodologies to design an LPV controller, namely:

  1. Linear fractional transformations (LFTs), which rely on the small gain theorem for bounds on performance and robustness.
  2. Single quadratic Lyapunov function (SQLF).
  3. Parameter-dependent quadratic Lyapunov function (PDQLF), to bound the achievable level of performance.

These problems are solved by reformulating the control design into finite-dimensional convex feasibility problems, which can be solved exactly, and infinite-dimensional convex feasibility problems, which can be solved approximately. This formulation constitutes a type of gain scheduling problem; in contrast to classical gain scheduling, this approach addresses the effect of parameter variations with assured stability and performance, as sketched below.
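As an indication of how such a convex feasibility problem looks in practice, the following sketch checks quadratic stability with a single quadratic Lyapunov function (the SQLF approach from the list above) for an illustrative affine LPV system, evaluating the linear matrix inequalities only at the vertices of the parameter range. The plant matrices are assumptions, and the cvxpy package is used as the convex-solver front end.

```python
import numpy as np
import cvxpy as cp

# Vertex matrices of an affine LPV system A(theta) = A0 + theta * A1,
# with theta in [0, 1] (illustrative values only).
A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A1 = np.array([[0.0, 0.0], [-1.0, -0.5]])
vertices = [A0, A0 + A1]          # A(theta) at theta = 0 and theta = 1

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-4

# Single quadratic Lyapunov function: one P must satisfy the LMI at every
# vertex; by convexity this certifies stability for all theta in [0, 1].
constraints = [P >> eps * np.eye(n)]
for A in vertices:
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("quadratically stable:", prob.status == cp.OPTIMAL)
```

Here the infinite-dimensional condition, one Lyapunov inequality per parameter value, collapses to a finite set of LMIs at the vertices, which is the sense in which the design becomes a finite-dimensional convex feasibility problem.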

Related Research Articles

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.

A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences and engineering disciplines, as well as in non-physical systems such as the social sciences. It can also be taught as a subject in its own right.

A proportional–integral–derivative controller (PID controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value with the actual value of the system. The difference between these two values is called the error value, denoted as $e(t)$.

H∞ methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that H∞ techniques are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s to early 1980s by George Zames, J. William Helton, and Allen Tannenbaum.

The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing a finite time horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as digital control, although there is research into achieving faster response times with specially designed analog circuitry.

Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with control law changing itself.

In control theory, advanced process control (APC) refers to a broad range of techniques and technologies implemented within industrial process control systems. Advanced process controls are usually deployed optionally and in addition to basic process controls. Basic process controls are designed and built with the process itself, to facilitate basic operation, control and automation requirements. Advanced process controls are typically added subsequently, often over the course of many years, to address particular performance or economic improvement opportunities in the process.

The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller.

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.

In control theory, gain scheduling is an approach to control of nonlinear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system.

Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.

Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise. The context may be either discrete time or continuous time.

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.

Wassim Michael Haddad is a Lebanese-Greek-American applied mathematician, scientist, and engineer, with research specialization in the areas of dynamical systems and control. His research has led to fundamental breakthroughs in applied mathematics, thermodynamics, stability theory, robust control, dynamical system theory, and neuroscience. Professor Haddad is a member of the faculty of the School of Aerospace Engineering at Georgia Institute of Technology, where he holds the rank of Professor and Chair of the Flight Mechanics and Control Discipline. Dr. Haddad is a member of the Academy of Nonlinear Sciences, in recognition of paramount contributions to the fields of nonlinear stability theory, nonlinear dynamical systems, and nonlinear control, and an IEEE Fellow for contributions to robust, nonlinear, and hybrid control systems.

Baranyi and Yam proposed the TP model transformation as a new concept in quasi-LPV (qLPV) based control, which plays a central role in the highly desirable bridging between identification and polytopic systems theories. It is also used as a TS (Takagi-Sugeno) fuzzy model transformation. It is uniquely effective in manipulating the convex hull of polytopic forms, and, hence, has revealed and proved the fact that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern linear matrix inequality based control theory. Thus, although it is a transformation in a mathematical sense, it has established a conceptually new direction in control theory and has laid the ground for further new approaches towards optimality.

System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.

Classical control theory is a branch of control theory that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback, using the Laplace transform as a basic tool to model such systems.

The GEKKO Python package solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers. Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer programming (MIP), and mixed-integer linear programming (MILP) problems. GEKKO is available in Python and installed with pip from PyPI of the Python Software Foundation.

Chance Constrained Programming (CCP) is a mathematical optimization approach used to handle problems under uncertainty. It was first introduced by Charnes and Cooper in 1959 and further developed by Miller and Wagner in 1965. CCP is widely used in various fields, including finance, engineering, and operations research, to optimize decision-making processes where certain constraints need to be satisfied with a specified probability.

References

  1. Shamma, Jeff S. (1992). "Gain Scheduling: Potential Hazards and Possible Remedies". IEEE Control Systems Magazine. 12 (3), June 1992.
  2. Balas, Gary J. (2002). "Linear Parameter-Varying Control and Its Application to Aerospace Systems" (PDF). ICAS. Retrieved 2013-01-29.
  3. Wu, Fen (1995). "Control of Linear Parameter Varying Systems". Univ. of California, Berkeley. Archived from the original on 2014-01-03. Retrieved 2024-12-16.
