Moving horizon estimation

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative solution computed with linear programming or nonlinear programming solvers. [1]

MHE reduces to the Kalman filter under certain simplifying conditions. [2] A critical evaluation of the extended Kalman filter and MHE found that MHE improved performance at the cost of increased computational expense. [3] Because of this expense, MHE has generally been applied to systems with greater computational resources and moderate to slow dynamics. However, the literature offers several techniques to accelerate MHE. [4] [5]

Overview

MHE is generally applied to estimate measured or unmeasured states of dynamical systems. Initial conditions and parameters within a model are adjusted by MHE to align measured and predicted values. MHE is based on a finite-horizon optimization of a process model and measurements. At time $t$ the current process state is sampled and a minimizing strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the past: $[t-T, t]$. Specifically, an online or on-the-fly calculation is used to explore state trajectories that find (via the solution of Euler–Lagrange equations) an objective-minimizing strategy until time $t$. Only the last step of the estimation strategy is used, then the process state is sampled again and the calculations are repeated starting from the time-shifted states, yielding a new state path and predicted parameters. The estimation horizon keeps being shifted forward, and for this reason the technique is called moving horizon estimation. Although this approach is not optimal, in practice it has given very good results when compared with the Kalman filter and other estimation strategies. A minimal sketch of this receding-window loop is given below.
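
The following is a minimal sketch of the moving-horizon loop, assuming a generic scalar process model `f`, a measurement history `y_meas`, and a window length `N`; all names are illustrative, and a real implementation would hand the constrained problem to a dedicated NLP solver.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, p):
    """Hypothetical discrete-time process model: x[k+1] = f(x[k], p)."""
    return x + 0.1 * (-p * x)  # Euler step of dx/dt = -p*x

def mhe_step(window, x0_guess, p_guess):
    """Estimate the window-initial state and the parameter over one horizon."""
    def cost(z):
        x, p = z[0], z[1]
        J = 0.0
        for y in window:          # simulate across the horizon
            J += (y - x) ** 2     # squared measurement residual
            x = f(x, p)           # propagate the model one step
        return J
    return minimize(cost, [x0_guess, p_guess]).x

# Slide the window one sample at a time over the measurement history.
y_meas = 2.0 * np.exp(-0.5 * np.arange(0, 3, 0.1)) + 0.05 * np.random.randn(30)
N = 10                            # horizon length (samples)
x_hat, p_hat = y_meas[0], 1.0     # initial guesses
for k in range(N, len(y_meas)):
    x_hat, p_hat = mhe_step(y_meas[k - N:k], x_hat, p_hat)
```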

Principles of MHE

Moving horizon estimation (MHE) is a multivariable estimation algorithm that uses:

- an internal dynamic model of the process,
- a history of past measurements, and
- an optimization cost function $J$ over the estimation horizon

to calculate the optimum states and parameters.

Figure: Moving horizon estimation scheme.

The optimization estimation function is given by:

$$\min_{x,\hat{p}} \; J = \sum_{i=1}^{N} \left[ w_m \left( x_i - x_{m_i} \right)^2 + w_p \left( x_i - \hat{x}_i \right)^2 + w_{\hat{p}} \, \Delta \hat{p}_i^{\,2} \right]$$

without violating state or parameter constraints (low/high limits)

With:

$x_i$ = $i$-th model predicted variable (e.g. predicted temperature)

$x_{m_i}$ = $i$-th measured variable (e.g. measured temperature)

$\hat{p}_i$ = $i$-th estimated parameter (e.g. heat transfer coefficient)

$w_m$ = weighting coefficient reflecting the relative importance of the measured values $x_{m_i}$

$w_p$ = weighting coefficient reflecting the relative importance of the prior model predictions $\hat{x}_i$

$w_{\hat{p}}$ = weighting coefficient penalizing relatively large changes in $\hat{p}_i$

Moving horizon estimation uses a sliding time window. At each sampling time the window moves one step forward. The states in the window are estimated from the measured output sequence, and the last estimated state of the previous window serves as prior knowledge for the next; a minimal sketch of the windowed objective follows.
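
As a concrete illustration, here is a minimal sketch of the weighted least-squares objective above for scalar variables, assuming hypothetical weights `w_m`, `w_p`, `w_phat` and a simple first-order model; a production implementation would instead pass the constrained problem to an NLP solver.

```python
import numpy as np
from scipy.optimize import minimize

def mhe_objective(z, y_meas, x_prior, p_prev, w_m=1.0, w_p=0.1, w_phat=0.01):
    """Weighted MHE cost over one horizon window (scalar example).

    z = [x0, p]: window-initial state and the estimated parameter.
    y_meas:  measured outputs over the horizon (x_{m_i})
    x_prior: prior model predictions over the horizon (x̂_i)
    p_prev:  parameter estimate from the previous window
    """
    x, p = z
    J = w_phat * (p - p_prev) ** 2        # penalize large parameter moves
    for y_i, xhat_i in zip(y_meas, x_prior):
        J += w_m * (x - y_i) ** 2         # measurement mismatch
        J += w_p * (x - xhat_i) ** 2      # deviation from prior prediction
        x = x + 0.1 * (-p * x)            # propagate model one step (Euler)
    return J

# Hypothetical data for one window of N = 5 samples.
y_meas = np.array([2.0, 1.8, 1.7, 1.5, 1.4])
x_prior = np.array([2.0, 1.9, 1.8, 1.7, 1.6])
res = minimize(mhe_objective, x0=[2.0, 1.0], args=(y_meas, x_prior, 1.0))
x0_est, p_est = res.x
```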

Applications

MHE has been applied, for example, to constrained estimation of fouling in industrial processes, [6] advanced process monitoring, [7] an industrial gas-phase polymerization reactor, [8] and trajectory generation and parameter estimation for aerially towed cable systems. [9] [10]

Related Research Articles

Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
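
In code, the linear predictor is just a weighted sum of the most recent samples; the coefficient below is an arbitrary illustrative value.

```python
import numpy as np

def predict_next(x, a):
    """Linear prediction: x̂[n] = sum_i a[i] * x[n-1-i]."""
    p = len(a)
    return np.dot(a, x[-p:][::-1])  # most recent sample weighted by a[0]

x = np.array([1.0, 0.9, 0.8, 0.72])
a = np.array([0.9])                 # hypothetical AR(1)-style coefficient
x_next = predict_next(x, a)         # ≈ 0.648
```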

Kalman filter

For statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.
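
For comparison with MHE's optimization over a window, a minimal predict/update cycle of a scalar Kalman filter might look like the following sketch; the model and noise variances are assumed values.

```python
def kalman_step(x, P, y, a=1.0, c=1.0, q=0.01, r=0.1):
    """One scalar Kalman filter cycle: predict, then correct with y."""
    # Predict: propagate the state estimate and its variance.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * c / (c * P_pred * c + r)
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for y in [0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, y)
```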

Time series

In mathematics, a time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.

In the statistical analysis of time series, autoregressive–moving-average (ARMA) models provide a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the autoregression (AR) and the second for the moving average (MA). The general ARMA model was described in the 1951 thesis of Peter Whittle, Hypothesis testing in time series analysis, and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon but implementing only the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
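
The receding-horizon idea can be sketched in a few lines, assuming a hypothetical linear plant model and a quadratic tracking cost; the names here are illustrative, not a specific MPC library.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(x0, r, N=5):
    """Optimize N future moves, apply only the first (receding horizon)."""
    def cost(u):
        x, J = x0, 0.0
        for uk in u:
            x = 0.9 * x + 0.5 * uk          # hypothetical plant model
            J += (r - x) ** 2 + 0.01 * uk ** 2
        return J
    return minimize(cost, np.zeros(N)).x[0]  # implement only the first move

x, r = 0.0, 1.0
for _ in range(3):
    u = mpc_step(x, r)
    x = 0.9 * x + 0.5 * u                    # plant response
```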

In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications.
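
A common realization is the Luenberger observer, sketched below for a scalar system; the model coefficients and observer gain are assumed values.

```python
def observer_step(x_hat, u, y, a=0.9, b=0.5, c=1.0, L=0.3):
    """Luenberger observer: model prediction corrected by output error."""
    return a * x_hat + b * u + L * (y - c * x_hat)

x_hat = 0.0
for u, y in [(1.0, 0.6), (1.0, 1.0), (0.0, 0.8)]:
    x_hat = observer_step(x_hat, u, y)
```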

In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic (imperfectly predictable) term; thus the model is in the form of a stochastic difference equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
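
For instance, an AR(1) process can be simulated in a few lines; the coefficient and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 200                 # AR(1) coefficient and sample count
x = np.zeros(n)
for t in range(1, n):
    # Each value depends linearly on the previous one plus noise.
    x[t] = phi * x[t - 1] + rng.normal(scale=0.1)
```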

Coefficient of determination

In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
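
In code, R² reduces to one line once the residual and total sums of squares are available:

```python
import numpy as np

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot
```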

Local regression

Local regression or local polynomial regression, also known as moving regression, is a generalization of the moving average and polynomial regression. Its most common methods, initially developed for scatterplot smoothing, are LOESS and LOWESS, both pronounced /ˈloʊɛs/. They are two strongly related non-parametric regression methods that combine multiple regression models in a k-nearest-neighbor-based meta-model. In some fields, LOESS is known and commonly referred to as the Savitzky–Golay filter.

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.

The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values predicted by a model or an estimator and the values observed. The RMSD represents the square root of the second sample moment of the differences between predicted values and observed values or the quadratic mean of these differences. These deviations are called residuals when the calculations are performed over the data sample that was used for estimation and are called errors when computed out-of-sample. The RMSD serves to aggregate the magnitudes of the errors in predictions for various data points into a single measure of predictive power. RMSD is a measure of accuracy, to compare forecasting errors of different models for a particular dataset and not between datasets, as it is scale-dependent.
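
Computed directly from its definition:

```python
import numpy as np

def rmse(y, y_pred):
    """Root-mean-square error between observations and predictions."""
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2))
```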

In the theory of stochastic processes, filtering describes the problem of determining the state of a system from an incomplete and potentially noisy set of observations. While originally motivated by problems in engineering, filtering found applications in many fields from signal processing to finance.

Advanced process monitor (APMonitor) is a modeling language for differential algebraic (DAE) equations. It is a free web-service or local server for solving representations of physical systems in the form of implicit DAE models. APMonitor is suited for large-scale problems and solves linear programming, integer programming, nonlinear programming, nonlinear mixed integer programming, dynamic simulation, moving horizon estimation, and nonlinear model predictive control. APMonitor does not solve the problems directly, but calls nonlinear programming solvers such as APOPT, BPOPT, IPOPT, MINOS, and SNOPT. The APMonitor API provides exact first and second derivatives of continuous functions to the solvers through automatic differentiation and in sparse matrix form.

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered the de facto standard in the theory of nonlinear state estimation, navigation systems and GPS.
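
A scalar sketch shows the linearization step: the model Jacobian, evaluated at the current estimate, plays the role of the constant coefficient in the linear filter. The model `f`, measurement `h`, and noise variances below are assumptions for illustration.

```python
def ekf_step(x, P, y, q=0.01, r=0.1):
    """One scalar EKF cycle for x[k+1] = f(x[k]), y = h(x)."""
    f = lambda x: x - 0.1 * x ** 3       # hypothetical nonlinear model
    h = lambda x: x                      # identity measurement
    F = 1.0 - 0.3 * x ** 2               # Jacobian df/dx at the estimate
    H = 1.0                              # Jacobian dh/dx
    x_pred = f(x)
    P_pred = F * P * F + q
    K = P_pred * H / (H * P_pred * H + r)
    return x_pred + K * (y - h(x_pred)), (1 - K * H) * P_pred
```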

An online model is a mathematical model which tracks and mirrors a plant or process in real-time, and which is implemented with some form of automatic adaptivity to compensate for model degradation over time.

APOPT is a software package for solving large-scale optimization problems in any of these forms: linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed integer linear programming (MILP), and mixed integer nonlinear programming (MINLP).

System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.
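
A least-squares fit of a simple linear regression takes one call in NumPy:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 7.1])
A = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
(intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
```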

The GEKKO Python package solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers. Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves Linear programming (LP), Quadratic programming (QP), Quadratically constrained quadratic program (QCQP), Nonlinear programming (NLP), Mixed integer programming (MIP), and Mixed integer linear programming (MILP). GEKKO is available in Python and installed with pip from PyPI of the Python Software Foundation.
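
As a sketch of how an MHE problem might be set up in GEKKO; the model, synthetic data, and tuning values below are illustrative, and the GEKKO documentation should be consulted for the exact estimation options.

```python
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
m.time = np.linspace(0, 1, 11)             # estimation horizon

# Synthetic noisy measurements over the horizon (for illustration).
y_meas = 2.0 * np.exp(-0.5 * m.time) + 0.02 * np.random.randn(11)

k = m.FV(value=1.0)                        # parameter to estimate
k.STATUS = 1                               # let the optimizer adjust k

x = m.CV(value=y_meas)                     # state with measurements attached
x.FSTATUS = 1                              # weight measurements in the objective

m.Equation(x.dt() == -k * x)               # hypothetical first-order model

m.options.IMODE = 5                        # dynamic estimation (MHE) mode
m.options.EV_TYPE = 2                      # squared-error objective
m.solve(disp=False)
print(k.value[0])                          # estimated parameter ≈ 0.5
```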

References

  1. Hedengren, J.D.; Asgharzadeh Shishavan, R.; Powell, K.M.; Edgar, T.F. (2014). "Nonlinear modeling, estimation and predictive control in APMonitor". Computers & Chemical Engineering. 70 (5): 133–148. doi:10.1016/j.compchemeng.2014.04.013. S2CID 5793446.
  2. Rao, C.V.; Rawlings, J.B.; Mayne, D.Q. (2003). "Constrained State Estimation for Nonlinear Discrete-Time Systems: Stability and Moving Horizon Approximations". IEEE Transactions on Automatic Control. 48 (2): 246–258. CiteSeerX 10.1.1.131.1613. doi:10.1109/tac.2002.808470.
  3. Haseltine, E.J.; Rawlings, J.B. (2005). "Critical Evaluation of Extended Kalman Filtering and Moving-Horizon Estimation". Ind. Eng. Chem. Res. 44 (8): 2451–2460. doi:10.1021/ie034308l.
  4. Hashemian, N.; Armaou, A. (2015). "Fast Moving Horizon Estimation of nonlinear processes via Carleman linearization". 2015 American Control Conference (ACC). pp. 3379–3385. doi:10.1109/ACC.2015.7171854. ISBN 978-1-4799-8684-2. S2CID 13251259.
  5. Hashemian, N.; Armaou, A. (2016). "Simulation, model-reduction and state estimation of a two-component coagulation process". AIChE Journal. 62 (5): 1557–1567. doi:10.1002/aic.15146.
  6. Spivey, B.; Hedengren, J.D.; Edgar, T.F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research. 49 (17): 7824–7831. doi:10.1021/ie9018116.
  7. Hedengren, J.D. (2012). Kevin C. Furman; Jin-Hwa Song; Amr El-Bakry (eds.). Advanced Process Monitoring (PDF). Springer's International Series in Operations Research and Management Science. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-09-18.
  8. Ramlal, J. (2007). "Moving Horizon Estimation for an Industrial Gas Phase Polymerization Reactor" (PDF). IFAC Symposium on Nonlinear Control Systems Design (NOLCOS). Archived from the original (PDF) on 2009-09-20.
  9. Sun, L. (2013). "Optimal Trajectory Generation using Model Predictive Control for Aerially Towed Cable Systems" (PDF). Journal of Guidance, Control, and Dynamics. 37 (2): 525. Bibcode:2014JGCD...37..525S. doi:10.2514/1.60820.
  10. Sun, L. (2015). "Parameter Estimation for Towed Cable Systems Using Moving Horizon Estimation" (PDF). IEEE Transactions on Aerospace and Electronic Systems. 51 (2): 1432–1446. Bibcode:2015ITAES..51.1432S. CiteSeerX 10.1.1.700.2174. doi:10.1109/TAES.2014.130642. S2CID 17039399.
