# System identification

The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. [1] System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.

## Overview

A dynamical mathematical model in this context is a mathematical description of the dynamic behavior of a system or process in either the time or the frequency domain.

One of the many possible applications of system identification is in control systems. For example, it is the basis for modern data-driven control systems, in which concepts of system identification are integrated into the controller design, and lay the foundations for formal controller optimality proofs.

### Input-output vs output-only

System identification techniques can utilize both input and output data (e.g. the eigensystem realization algorithm) or only output data (e.g. frequency domain decomposition). Typically an input-output technique is more accurate, but input data is not always available.
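As a concrete illustration, input-output identification can be as simple as a least-squares fit of an ARX model. The following sketch uses an invented first-order system with illustrative coefficients; it is not any particular algorithm from the literature beyond ordinary least squares:

```python
import numpy as np

# Simulate a "true" first-order system y[k] = a*y[k-1] + b*u[k-1] + noise.
# The coefficients and noise level are chosen arbitrarily for illustration.
rng = np.random.default_rng(42)
a_true, b_true = 0.9, 0.5
N = 500
u = rng.standard_normal(N)          # input: white-noise excitation
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

# Input-output identification: stack the regressors [y[k-1], u[k-1]]
# and solve the least-squares problem for the parameters [a, b].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")  # close to the true (0.9, 0.5)
```

With informative (persistently exciting) input data, the estimates converge to the true coefficients; an output-only method would instead have to exploit the statistics of y alone.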

### Optimal design of experiments

The quality of system identification depends on the quality of the inputs, which are under the control of the systems engineer. Therefore, systems engineers have long used the principles of the design of experiments. [2] In recent decades, engineers have increasingly used the theory of optimal experimental design to specify inputs that yield maximally precise estimators. [3] [4]
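A minimal sketch of why input design matters, assuming a noise-free first-order plant with illustrative coefficients: the trace of the inverse Gram matrix of the regressors is used as a proxy for the variance of the parameter estimates, and a PRBS-like random binary input yields a much smaller value than a poorly exciting constant input:

```python
import numpy as np

def regressor_conditioning(u, a=0.8, b=1.0, N=200):
    """Simulate y[k] = a*y[k-1] + b*u[k-1] (noise-free) and return
    trace((Phi^T Phi)^{-1}), a proxy for parameter-estimate variance."""
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a * y[k - 1] + b * u[k - 1]
    Phi = np.column_stack([y[:-1], u[:-1]])
    return np.trace(np.linalg.inv(Phi.T @ Phi))

rng = np.random.default_rng(0)
const_input = np.ones(200)                      # poorly exciting input
prbs_input = rng.choice([-1.0, 1.0], size=200)  # PRBS-like binary input

# The binary input keeps the regressors well conditioned, so the same
# number of samples yields far more precise estimates.
print(regressor_conditioning(prbs_input) < regressor_conditioning(const_input))
```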

## White- and black-box

One could build a so-called white-box model based on first principles, e.g. a model of a physical process derived from Newton's equations of motion, but in many cases such models are overly complex and may even be impossible to obtain in reasonable time due to the complex nature of many systems and processes.

A much more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. This approach is called system identification. Two types of models are common in the field of system identification:

• grey box model: although the peculiarities of what is going on inside the system are not entirely known, a model based on both insight into the system and experimental data is constructed. Such a model still contains a number of unknown free parameters which can be estimated using system identification. [5] [6] One example [7] uses the Monod saturation model for microbial growth: the model contains a simple hyperbolic relationship between substrate concentration and growth rate, which can be justified by molecules binding to a substrate without going into detail on the types of molecules or types of binding. Grey-box modeling is also known as semi-physical modeling. [8]
• black box model: no prior model is available. Most system identification algorithms are of this type.
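The Monod example above can be sketched as a grey-box fit: the model structure is assumed a priori and only the two free parameters are estimated. The data and parameter values below are invented for illustration, and the classical double-reciprocal (Lineweaver-Burk) linearization is used so that plain least squares suffices:

```python
import numpy as np

# Synthetic growth-rate measurements from a Monod law
# mu = mu_max * S / (Ks + S); the parameter values are illustrative.
mu_max_true, Ks_true = 1.0, 0.5
S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])   # substrate concentrations
mu = mu_max_true * S / (Ks_true + S)

# Grey-box estimation: the hyperbolic structure is assumed, and the
# double-reciprocal transform makes the model linear in 1/S:
#   1/mu = (Ks/mu_max) * (1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_hat = 1.0 / intercept
Ks_hat = slope * mu_max_hat
print(f"mu_max = {mu_max_hat:.3f}, Ks = {Ks_hat:.3f}")  # recovers (1.0, 0.5)
```

Because the structure is fixed in advance, only two numbers have to be identified from data, which is what distinguishes a grey-box model from a black-box one.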

In the context of nonlinear system identification, Jin et al. [9] describe grey-box modeling as assuming a model structure a priori and then estimating the model parameters. Parameter estimation is relatively easy if the model form is known, but this is rarely the case. Alternatively, the structure or model terms for both linear and highly complex nonlinear models can be identified using NARMAX methods. [10] This approach is completely flexible and can be used with grey box models, where the algorithms are primed with the known terms, or with completely black box models, where the model terms are selected as part of the identification procedure. A further advantage of this approach is that the algorithms will select only linear terms if the system under study is linear, and nonlinear terms if the system is nonlinear, which allows a great deal of flexibility in the identification.
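A toy sketch of structure selection in this spirit (not the actual NARMAX orthogonal-least-squares algorithm): model terms are greedily added from an invented candidate dictionary according to how much each reduces the residual, so the true linear and nonlinear terms of the simulated system are recovered:

```python
import numpy as np

# "True" nonlinear system: y[k] = 0.5*y[k-1] + u[k-1]^2 (invented).
rng = np.random.default_rng(1)
N = 300
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + u[k - 1] ** 2

# Candidate dictionary of linear and nonlinear terms.
y1, u1 = y[:-1], u[:-1]
candidates = {
    "y[k-1]": y1, "u[k-1]": u1,
    "y[k-1]^2": y1**2, "u[k-1]^2": u1**2, "y[k-1]*u[k-1]": y1 * u1,
}
target = y[1:]

def sse(cols):
    """Sum of squared residuals of a least-squares fit on these columns."""
    Phi = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return float(np.sum((target - Phi @ coef) ** 2))

# Greedy forward selection: add the term that most reduces the residual.
selected, columns, residual = [], [], float("inf")
while residual > 1e-10 and len(selected) < len(candidates):
    remaining = [n for n in candidates if n not in selected]
    best = min(remaining, key=lambda n: sse(columns + [candidates[n]]))
    selected.append(best)
    columns.append(candidates[best])
    residual = sse(columns)

print(selected)  # contains the true terms y[k-1] and u[k-1]^2
```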

## Identification for control

In control systems applications, the objective is good performance of the closed-loop system, which comprises the physical system, the feedback loop, and the controller. This performance is typically achieved by designing the control law on the basis of a model of the system, which must be identified from experimental data. If the model identification procedure is aimed at control purposes, what really matters is not to obtain the best possible model that fits the data, as in the classical system identification approach, but to obtain a model that is satisfactory for the closed-loop performance. This more recent approach is called identification for control, or I4C for short.

The idea behind I4C can be better understood by considering the following simple example. [11] Consider a system with true transfer function $G_0(s)$:

$$G_0(s) = \frac{1}{s+1}$$

and an identified model $\hat{G}(s)$:

$$\hat{G}(s) = \frac{1}{s}.$$

From a classical system identification perspective, $\hat{G}(s)$ is not, in general, a good model for $G_0(s)$: its magnitude and phase differ from those of $G_0(s)$ at low frequency and, while $G_0(s)$ is asymptotically stable, $\hat{G}(s)$ is only marginally stable. However, $\hat{G}(s)$ may still be a good enough model for control purposes. If one applies a purely proportional negative feedback controller with high gain $K$, the closed-loop transfer function from the reference to the output is, for $G_0(s)$,

$$\frac{K G_0(s)}{1 + K G_0(s)} = \frac{K}{s + 1 + K}$$

and for $\hat{G}(s)$,

$$\frac{K \hat{G}(s)}{1 + K \hat{G}(s)} = \frac{K}{s + K}.$$

For very large $K$, $1 + K \approx K$, so the two closed-loop transfer functions are practically indistinguishable. In conclusion, $\hat{G}(s)$ is a perfectly acceptable identified model for the true system if such a feedback control law is to be applied. Whether a model is appropriate for control design thus depends not only on the plant/model mismatch, but also on the controller that will be implemented. In the I4C framework, given a control performance objective, the control engineer has to design the identification phase in such a way that the performance achieved by the model-based controller on the true system is as high as possible.
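The high-gain argument can be checked numerically. A minimal sketch, assuming K = 100 and a unit step reference (both closed-loop systems are first order, so their step responses have a closed form):

```python
import numpy as np

# Closed-loop step responses for the example above with high gain K.
# True plant in the loop:        K/(s + 1 + K)
# Identified model in the loop:  K/(s + K)
K = 100.0
t = np.linspace(0.0, 0.1, 1000)
y_true = (K / (1 + K)) * (1 - np.exp(-(1 + K) * t))   # step response of K/(s+1+K)
y_model = 1 - np.exp(-K * t)                          # step response of K/(s+K)

# For large K the two responses differ by roughly 1/(1+K) at most,
# i.e. they are practically indistinguishable.
print(np.max(np.abs(y_true - y_model)))
```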

Sometimes, it is even convenient to design a controller without explicitly identifying a model of the system, but directly working on experimental data. This is the case of direct data-driven control systems.

## Forward model

A common understanding in artificial intelligence is that the controller has to generate the next move for a robot: for example, the robot starts in a maze and then decides to move forward. Model predictive control determines the next action indirectly. The term "model" refers to a forward model, which does not provide the correct action but simulates a scenario. [12] A forward model is analogous to a physics engine in game programming: it takes an input and calculates the future state of the system.

Dedicated forward models are constructed because they allow the overall control process to be divided. The first task is to predict future states of the system, that is, to simulate the plant over a timespan for different input values. The second task is to search for a sequence of input values that brings the plant into a goal state. This is called predictive control.

The forward model is the most important aspect of an MPC controller. It has to be created before the solver can be realized. If the behavior of a system is unclear, it is not possible to search for meaningful actions. The workflow for creating a forward model is called system identification; the idea is to formalize the system as a set of equations that behave like the original system. [13] The error between the real system and the forward model can then be measured.

Many techniques are available for creating a forward model. Ordinary differential equations are the classical one, used in physics engines such as Box2D. A more recent technique is to use a neural network as the forward model. [14]
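The division into a forward model and a search over input sequences can be sketched in a few lines. The plant (an Euler-integrated point mass), the candidate inputs, and all function names below are hypothetical illustrations, not any particular MPC implementation:

```python
import numpy as np

def forward_model(state, u, dt=0.1):
    """Hypothetical forward model: one Euler step of a point mass
    (position, velocity) driven by force u. It predicts the next state
    without executing any action on the real system."""
    pos, vel = state
    return (pos + dt * vel, vel + dt * u)

def rollout(state, inputs):
    """Task 1: simulate the plant over a timespan for a sequence of inputs."""
    for u in inputs:
        state = forward_model(state, u)
    return state

def predictive_control(state, goal, horizon=10):
    """Task 2: search for the (here: constant) input sequence whose
    predicted final position is closest to the goal state."""
    candidates = np.linspace(-1.0, 1.0, 21)
    return min(candidates,
               key=lambda u: abs(rollout(state, [u] * horizon)[0] - goal))

# Pick the best first action from rest; only this action would be applied
# before re-planning, in the receding-horizon spirit of MPC.
u_star = predictive_control(state=(0.0, 0.0), goal=0.5)
print(u_star)
```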


## References

1. Söderström, Torsten; Stoica, P. (1989). System identification. New York: Prentice Hall. ISBN 978-0138812362. OCLC 16983523.
2. Spall, J. C. (2010), “Factorial Design for Efficient Experimentation: Generating Informative Data for System Identification,” IEEE Control Systems Magazine, vol. 30(5), pp. 38–53. https://doi.org/10.1109/MCS.2010.937677
3. Goodwin, Graham C. & Payne, Robert L. (1977). Dynamic System Identification: Experiment Design and Data Analysis. Academic Press. ISBN   978-0-12-289750-4.
4. Walter, Éric & Pronzato, Luc (1997). Identification of Parametric Models from Experimental Data. Springer.
5. Nielsen, Henrik Aalborg; Madsen, Henrik (December 2000). "Predicting the Heat Consumption in District Heating Systems using Meteorological Forecasts" (PDF). Lyngby: Department of Mathematical Modelling, Technical University of Denmark. S2CID 134091581. Archived from the original (PDF) on 2017-04-21.
6. Nielsen, Henrik Aalborg; Madsen, Henrik (January 2006). "Modelling the heat consumption in district heating systems using a grey-box approach". Energy and Buildings. 38 (1): 63–71. doi:10.1016/j.enbuild.2005.05.002. ISSN   0378-7788.
7. Wimpenny, J.W.T. (April 1997). "The Validity of Models". Advances in Dental Research. 11 (1): 150–159. doi:10.1177/08959374970110010601. ISSN   0895-9374. PMID   9524451. S2CID   23008333.
8. Forssell, U.; Lindskog, P. (July 1997). "Combining Semi-Physical and Neural Network Modeling: An Example of Its Usefulness". IFAC Proceedings Volumes. 30 (11): 767–770. doi:10.1016/s1474-6670(17)42938-7. ISSN   1474-6670.
9. Gang Jin; Sain, M.K.; Pham, K.D.; Billie, F.S.; Ramallo, J.C. (2001). Modeling MR-dampers: a nonlinear blackbox approach. Proceedings of the 2001 American Control Conference. (Cat. No.01CH37148). IEEE. doi:10.1109/acc.2001.945582. ISBN   978-0780364950. S2CID   62730770.
10. Billings, Stephen A (2013-07-23). Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio–Temporal Domains. doi:10.1002/9781118535561. ISBN   9781118535561.
11. Gevers, Michel (January 2005). "Identification for Control: From the Early Achievements to the Revival of Experiment Design*". European Journal of Control. 11 (4–5): 335–352. doi:10.3166/ejc.11.335-352. ISSN   0947-3580. S2CID   13054338.
12. Nguyen-Tuong, Duy; Peters, Jan (2011). "Model learning for robot control: a survey". Cognitive Processing. Springer. 12 (4): 319–340. doi:10.1007/s10339-011-0404-1. PMID 21487784. S2CID 8660085.
13. Kopicki, Marek; Zurek, Sebastian; Stolkin, Rustam; Moerwald, Thomas; Wyatt, Jeremy L (2017). "Learning modular and transferable forward models of the motions of push manipulated objects". Autonomous Robots. Springer. 41 (5): 1061–1082.
14. Wan, Eric; Baptista, Antonio; Carlsson, Magnus; Kiebutz, Richard; Zhang, Yinglong; Bogdanov, Alexander (2001). Model predictive neural control of a high-fidelity helicopter model. AIAA. American Institute of Aeronautics and Astronautics. doi:10.2514/6.2001-4164.
• Daniel Graupe: Identification of Systems, Van Nostrand Reinhold, New York, 1972 (2nd ed., Krieger Publ. Co., Malabar, FL, 1976)
• Eykhoff, Pieter: System Identification – Parameter and System Estimation, John Wiley & Sons, New York, 1974. ISBN   0-471-24980-7
• Lennart Ljung: System Identification — Theory For the User, 2nd ed, PTR Prentice Hall, Upper Saddle River, N.J., 1999.
• Jer-Nan Juang: Applied System Identification, Prentice Hall, Upper Saddle River, N.J., 1994.
• Kushner, Harold J.; Yin, G. George (2003). Stochastic Approximation and Recursive Algorithms and Applications (Second ed.). Springer.
• Oliver Nelles: Nonlinear System Identification, Springer, 2001. ISBN   3-540-67369-5
• T. Söderström, P. Stoica, System Identification, Prentice Hall, Upper Saddle River, N.J., 1989. ISBN   0-13-881236-5
• R. Pintelon, J. Schoukens, System Identification: A Frequency Domain Approach, 2nd Edition, IEEE Press, Wiley, New York, 2012. ISBN   978-0-470-64037-1
• Spall, J. C. (2003), Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley, Hoboken, NJ.