Model predictive control

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries, such as chemical plants and oil refineries, since the 1980s. In recent years it has also been used in power system balancing models [1] and in power electronics. [2] Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but implementing only the current timeslot and then optimizing again, repeatedly; in this it differs from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry. [3]

Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC. [4]

Overview

Figure: 3-state and 3-actuator multi-input multi-output MPC simulation.

The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.

MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control elements (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are treated as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.

MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.

While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
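
As an illustration of this superposition-based prediction, the sketch below builds a dynamic-matrix (DMC-style) predictor from step-response coefficients: the predicted response is the sum of each future input move's shifted step response. The coefficient values, horizons, and input moves are hypothetical choices made purely for illustration.

```python
import numpy as np

# Hypothetical step-response coefficients of a stable SISO process
# (shaped like a sampled first-order response; illustrative values only).
a = np.array([0.2, 0.5, 0.7, 0.85, 0.93, 0.97, 0.99, 1.0])
p = len(a)          # prediction horizon
m = 3               # control horizon (number of future input moves)

# Dynamic matrix: column j holds the step response delayed by j samples,
# so y_pred = G @ du superimposes the effect of each future input move.
G = np.zeros((p, m))
for j in range(m):
    G[j:, j] = a[:p - j]

du = np.array([1.0, -0.5, 0.0])   # hypothetical future input moves
y_free = np.zeros(p)              # free response (no further moves); zero here
y_pred = y_free + G @ du          # predicted trajectory of the controlled variable
print(y_pred)
```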

When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.

An algorithmic study by Al-Gherwi, Budman, and Elkamel shows that utilizing a dual-mode approach can provide a significant reduction in online computations while maintaining performance comparable to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on an exchange of information among controllers. [5]

Theory behind MPC

Figure: a discrete MPC scheme.

MPC is based on iterative, finite-horizon optimization of a plant model. At time $t$ the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: $[t, t+T]$. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time $t+T$. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. [6] [7]
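
The receding-horizon loop can be made concrete in a few lines. The following is a minimal sketch, assuming a discretized double-integrator model and the CVXPY modeling package; it solves a finite-horizon quadratic program at each step and implements only the first input, as described above. The horizon, weights, and limits are arbitrary illustrative choices, not values from any particular application.

```python
import numpy as np
import cvxpy as cp

# Illustrative discretized double-integrator plant.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                      # prediction horizon
Q = np.diag([10.0, 1.0])    # state weight
R = np.array([[0.1]])       # input weight

x_now = np.array([2.0, 0.0])
for step in range(50):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constr = [x[:, 0] == x_now]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0]   # hard input constraint
    cp.Problem(cp.Minimize(cost), constr).solve()
    u0 = u.value[:, 0]                # implement only the first move ...
    x_now = A @ x_now + B @ u0        # ... then re-sample the state and repeat
```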

Principles of MPC

Model predictive control is a multivariable control algorithm that uses:

- an internal dynamic model of the process
- a cost function J over the receding horizon
- an optimization algorithm minimizing the cost function J using the control input u

An example of a quadratic cost function for optimization is given by:

$$ J = \sum_{i=1}^{N} w_{x_i} (r_i - x_i)^2 + \sum_{i=1}^{M} w_{u_i} \, \Delta u_i^2 $$

without violating constraints (low/high limits), with:

- $x_i$: $i$-th controlled variable (e.g. measured temperature)
- $r_i$: $i$-th reference variable (e.g. required temperature)
- $u_i$: $i$-th manipulated variable (e.g. control valve)
- $w_{x_i}$: weighting coefficient reflecting the relative importance of $x_i$
- $w_{u_i}$: weighting coefficient penalizing relatively big changes in $u_i$

etc.
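
A minimal numerical sketch of this cost function follows, with hypothetical variable values and weights; the helper mpc_cost is illustrative, not a library routine.

```python
import numpy as np

def mpc_cost(x, r, du, w_x, w_u):
    """Quadratic MPC cost of the form above: weighted tracking
    errors plus weighted move-suppression terms."""
    x, r, du, w_x, w_u = map(np.asarray, (x, r, du, w_x, w_u))
    return np.sum(w_x * (r - x) ** 2) + np.sum(w_u * du ** 2)

# Hypothetical values: two controlled variables, one manipulated variable.
J = mpc_cost(x=[21.0, 1.2], r=[20.0, 1.0], du=[0.3],
             w_x=[1.0, 5.0], w_u=[0.5])
print(J)   # 1.0*1.0 + 5.0*0.04 + 0.5*0.09 = 1.245
```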

Nonlinear MPC

Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution. [8]

The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting, or direct collocation. [9] NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead take only a few iterations towards the solution of the most current NMPC problem before proceeding to the next one, which is suitably initialized. [10] Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method: optimal solutions are found by generating random samples that satisfy the constraints in the solution space and selecting the best one based on the cost function. [11]
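
A minimal single-shooting sketch of these ideas follows. It assumes a hypothetical scalar nonlinear plant x⁺ = x + Δt(−x³ + u) and uses SciPy's general-purpose minimize solver rather than a dedicated NMPC code; the last line mimics the shift-initialization (warm start) described above.

```python
import numpy as np
from scipy.optimize import minimize

# Single-shooting NMPC sketch: decision variables are the N future inputs.
dt, N = 0.1, 15

def rollout(x0, u_seq):
    """Simulate the nonlinear model forward and accumulate the cost."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += x**2 + 0.1 * u**2        # stage cost
        x = x + dt * (-x**3 + u)         # nonlinear model step
    return cost + 10.0 * x**2            # terminal cost

x_now = 1.5
u_guess = np.zeros(N)
for step in range(30):
    res = minimize(lambda u: rollout(x_now, u), u_guess, method="SLSQP",
                   bounds=[(-2.0, 2.0)] * N)
    u0 = res.x[0]                                 # apply only the first input
    x_now = x_now + dt * (-x_now**3 + u0)
    # Warm start: shift the previous solution one step forward, as real-time
    # iteration schemes do, so only a few solver iterations are needed.
    u_guess = np.append(res.x[1:], res.x[-1])
```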

While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, [12] to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). [13] As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time. [14]

Explicit MPC

Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem, formulated as an optimization problem, is pre-computed offline. [15] This offline solution, i.e., the control law, often takes the form of a piecewise affine (PWA) function, so the eMPC controller stores the PWA coefficients for each subset (control region) of the state space on which the control law is affine, as well as the coefficients of some parametric representation of all the regions. For linear MPC, every region turns out to be, geometrically, a convex polytope, commonly parameterized by the coefficients of its faces; this requires quantization accuracy analysis. [16] Obtaining the optimal control action then reduces to first determining the region containing the current state and second merely evaluating the PWA function using the coefficients stored for that region. If the total number of regions is small, implementing eMPC does not require significant computational resources (compared to online MPC), and it is uniquely suited to control systems with fast dynamics. [17] A serious drawback of eMPC is the exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, which dramatically increases controller memory requirements and makes the first step of the PWA evaluation, i.e., searching for the current control region, computationally expensive.
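
The two-step evaluation can be sketched as follows, assuming hypothetical region and gain coefficients; a real eMPC implementation would instead store the tables produced offline by a parametric-programming tool.

```python
import numpy as np

# Hypothetical explicit MPC law: each region i is a polytope {x : H_i @ x <= k_i},
# and inside it the control law is affine, u = F_i @ x + g_i.
regions = [
    {"H": np.array([[ 1.0, 0.0], [0.0,  1.0]]), "k": np.array([0.0, 0.0]),
     "F": np.array([[-0.5, -0.8]]),             "g": np.array([0.1])},
    {"H": np.array([[-1.0, 0.0], [0.0, -1.0]]), "k": np.array([0.0, 0.0]),
     "F": np.array([[-0.3, -0.6]]),             "g": np.array([-0.1])},
]

def empc_control(x):
    # Step 1: point location -- sequential search over the stored regions.
    for reg in regions:
        if np.all(reg["H"] @ x <= reg["k"] + 1e-9):
            # Step 2: cheap affine evaluation of the stored PWA law.
            return reg["F"] @ x + reg["g"]
    raise ValueError("state outside all stored control regions")

print(empc_control(np.array([-0.5, -0.2])))
```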

Robust MPC

Robust variants of model predictive control are able to account for set-bounded disturbances while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.

- Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. [18] [19]
- Constraint tightening MPC. Here the state constraints are tightened by a given margin so that a feasible trajectory can be guaranteed to exist under any evolution of the disturbance. [20]
- Tube MPC. This uses an independent nominal model of the system, together with a feedback controller that keeps the actual state within a bounded "tube" around the nominal trajectory. [21]
- Multi-stage MPC. This uses a scenario-tree formulation, approximating the uncertainty space with a set of samples. [22] [23]
- Tube-enhanced multi-stage MPC. This approach combines multi-stage MPC with tube-based ideas. [24] [25]
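
As a small numerical illustration of the constraint-tightening idea (see the list above), the sketch below shrinks a state bound along the horizon by the worst-case accumulated effect of a bounded disturbance; the scalar system, disturbance bound, and state limit are hypothetical.

```python
import numpy as np

# Scalar system x+ = a*x + u + w with |w| <= w_max; require |x| <= x_max.
a, w_max, x_max, N = 0.9, 0.1, 1.0, 10

# Worst-case disturbance effect after k steps: w_max * sum_{j<k} a**j,
# so the nominal prediction must satisfy a bound tightened by that margin.
margins = np.array([w_max * sum(a**j for j in range(k)) for k in range(N + 1)])
tightened = x_max - margins
print(tightened)   # nominal state bounds to impose at each predicted step
```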

Commercially available MPC software

Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.

A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.

MPC vs. LQR

Model predictive control and linear–quadratic regulators are both expressions of optimal control, with different schemes of setting up optimization costs.

While a model predictive controller looks at a fixed-length, often gradually weighted set of error functions over the receding horizon, the linear–quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.

Due to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal and complex performance.

The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, [4] and that with MPC a new solution is computed at each step whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as the migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks of LQR.
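
The contrast can be made concrete: for LQR a single gain is computed offline from the discrete-time algebraic Riccati equation and reused at every step, whereas MPC re-solves a constrained problem online (as in the receding-horizon sketch earlier). The sketch below reuses the same illustrative model as that earlier example.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Same illustrative double-integrator model and weights as above.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

P = solve_discrete_are(A, B, Q, R)                  # infinite-horizon Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain

x = np.array([2.0, 0.0])
u = -K @ x   # the same gain is reused at every step; constraints are ignored
```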

This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if little thought has been given to the convexity and complexity of the problem space.

See also

Control theory
Mathematical optimization
H-infinity methods in control theory
Optimal control
Nonlinear programming
Adaptive control
Trajectory optimization
Linear–quadratic regulator
Linear–quadratic–Gaussian control
Robust optimization
TOMLAB
APMonitor
Stochastic control
PROPT
Scenario optimization
Moving horizon estimation
Parametric programming
GEKKO
Interval predictor model

References

  1. Arnold, Michèle; Andersson, Göran. "Model Predictive Control of energy storage including uncertain forecasts". https://www.pscc-central.org/uploads/tx_ethpublications/fp292.pdf
  2. Geyer, Tobias (2016). Model Predictive Control of High Power Converters and Industrial Drives. London: Wiley. ISBN 978-1-119-01090-6.
  3. Vichik, Sergey; Borrelli, Francesco (2014). "Solving linear and quadratic programs with an analog circuit". Computers & Chemical Engineering. 70: 160–171. doi:10.1016/j.compchemeng.2014.01.011.
  4. Wang, Liuping (2009). Model Predictive Control System Design and Implementation Using MATLAB®. Springer Science & Business Media. pp. xii.
  5. Al-Gherwi, Walid; Budman, Hector; Elkamel, Ali (3 July 2012). "A robust distributed model predictive control based on a dual-mode approach". Computers and Chemical Engineering. 50 (2013): 130–138. doi:10.1016/j.compchemeng.2012.11.002.
  6. Nikolaou, Michael; "Model predictive controllers: A critical synthesis of theory and industrial needs", Advances in Chemical Engineering, volume 26, Academic Press, 2001, pages 131-204
  7. Berberich, Julian; Köhler, Johannes; Müller, Matthias A.; Allgöwer, Frank (2022). "Linear Tracking MPC for Nonlinear Systems—Part I: The Model-Based Case". IEEE Transactions on Automatic Control. 67 (9): 4390–4405. arXiv:2105.08560. doi:10.1109/TAC.2022.3166872. ISSN 0018-9286. S2CID 234763155.
  8. An excellent overview of the state of the art (in 2008) is given in the proceedings of the two large international workshops on NMPC, by Zheng and Allgöwer (2000) and by Findeisen, Allgöwer, and Biegler (2006).
  9. Hedengren, John D.; Asgharzadeh Shishavan, Reza; Powell, Kody M.; Edgar, Thomas F. (2014). "Nonlinear modeling, estimation and predictive control in APMonitor". Computers & Chemical Engineering. 70 (5): 133–148. doi:10.1016/j.compchemeng.2014.04.013. S2CID   5793446.
  10. Ohtsuka, Toshiyuki (2004). "A continuation/GMRES method for fast computation of nonlinear receding horizon control". Automatica. 40 (4): 563–574. doi:10.1016/j.automatica.2003.11.005.
  11. Muraleedharan, Arun (2022). "Real-Time Implementation of Randomized Model Predictive Control for Autonomous Driving". IEEE Transactions on Intelligent Vehicles. 7 (1): 11–20. doi: 10.1109/TIV.2021.3062730 . S2CID   233804176.
  12. Knyazev, Andrew; Malyshev, Alexander (2016). "Sparse preconditioning for model predictive control". 2016 American Control Conference (ACC). pp. 4494–4499. arXiv: 1512.00375 . doi:10.1109/ACC.2016.7526060. ISBN   978-1-4673-8682-1. S2CID   2077492.
  13. García, Míriam R.; Vilas, Carlos; Santos, Lino O.; Alonso, Antonio A. (2012). "A Robust Multi-Model Predictive Controller for Distributed Parameter Systems" (PDF). Journal of Process Control. 22 (1): 60–71. doi:10.1016/j.jprocont.2011.10.008.
  14. Kamyar, Reza; Taheri, Ehsan (2014). "Aircraft Optimal Terrain/Threat-Based Trajectory Planning and Control". Journal of Guidance, Control, and Dynamics. 37 (2): 466–483. Bibcode:2014JGCD...37..466K. doi:10.2514/1.61339.
  15. Bemporad, Alberto; Morari, Manfred; Dua, Vivek; Pistikopoulos, Efstratios N. (2002). "The explicit linear quadratic regulator for constrained systems". Automatica. 38 (1): 3–20. doi:10.1016/s0005-1098(01)00174-1.
  16. Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano (2015). "Explicit model predictive control accuracy analysis". 2015 54th IEEE Conference on Decision and Control (CDC). pp. 2389–2394. arXiv: 1509.02840 . Bibcode:2015arXiv150902840K. doi:10.1109/CDC.2015.7402565. ISBN   978-1-4799-7886-1. S2CID   6850073.
  17. Klaučo, Martin; Kalúz, Martin; Kvasnica, Michal (2017). "Real-time implementation of an explicit MPC-based reference governor for control of a magnetic levitation system". Control Engineering Practice. 60: 99–105. doi:10.1016/j.conengprac.2017.01.001.
  18. Scokaert, Pierre O. M.; Mayne, David Q. (1998). "Min-max feedback model predictive control for constrained linear systems". IEEE Transactions on Automatic Control. 43 (8): 1136–1142. doi:10.1109/9.704989.
  19. Nevistić, Vesna; Morari, Manfred (1996-06-01). "Robustness of MPC-Based Schemes for Constrained Control of Nonlinear Systems". IFAC Proceedings Volumes. 29 (1): 5823–5828. doi:10.1016/S1474-6670(17)58612-7. ISSN   1474-6670.
  20. Richards, Arthur G.; How, Jonathan P. (2006). "Robust stable model predictive control with constraint tightening". Proceedings of the American Control Conference.
  21. Langson, Wilbur; Chryssochoos, Ioannis; Raković, Saša V.; Mayne, David Q. (2004). "Robust model predictive control using tubes". Automatica. 40 (1): 125–133. doi:10.1016/j.automatica.2003.08.009.
  22. Lucia, Sergio; Finkler, Tiago; Engell, Sebastian (2013). "Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty". Journal of Process Control. 23 (9): 1306–1319. doi:10.1016/j.jprocont.2013.08.008.
  23. Lucia, Sergio; Subramanian, Sankaranarayanan; Limon, Daniel; Engell, Sebastian (2020). "Stability properties of multi-stage nonlinear model predictive control". Systems & Control Letters. 143 (9): 104743. doi:10.1016/j.sysconle.2020.104743. S2CID   225341650.
  24. Subramanian, Sankaranarayanan; Lucia, Sergio; Paulen, Radoslav; Engell, Sebastian (2021). "Tube-enhanced multi-stage model predictive control for flexible robust control of constrained linear systems". International Journal of Robust and Nonlinear Control. 31 (9): 4458–4487. arXiv: 2012.14848 . doi:10.1002/rnc.5486. S2CID   234354708.
  25. Subramanian, Sankaranarayanan; Abdelsalam, Yehia; Lucia, Sergio; Engell, Sebastian (2022). "Robust Tube-Enhanced Multi-Stage NMPC With Stability Guarantees". IEEE Control Systems Letters. 6: 1112–1117. doi:10.1109/LCSYS.2021.3089502. S2CID   235799791.

Further reading