In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications.
Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer.
Linear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections.
The state of a linear, time-invariant discrete-time system is assumed to satisfy

$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k)$$
where, at time $k$, $x(k)$ is the plant's state; $u(k)$ is its inputs; and $y(k)$ is its outputs. These equations simply say that the plant's current outputs and its future state are both determined solely by its current state and the current inputs. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous systems.) If this system is observable, then the output of the plant, $y(k)$, can be used to steer the state of the state observer.
The observer model of the physical system is then typically derived from the above equations. Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrix $L$; this is then added to the equations for the state of the observer to produce a so-called Luenberger observer, defined by the equations below. Note that the variables of a state observer are commonly denoted by a "hat": $\hat{x}(k)$ and $\hat{y}(k)$, to distinguish them from the variables of the equations satisfied by the physical system.

$$\hat{x}(k+1) = A \hat{x}(k) + L \left[ y(k) - \hat{y}(k) \right] + B u(k)$$
$$\hat{y}(k) = C \hat{x}(k) + D u(k)$$
The observer is called asymptotically stable if the observer error $e(k) = \hat{x}(k) - x(k)$ converges to zero when $k \to \infty$. For a Luenberger observer, the observer error satisfies $e(k+1) = (A - LC) e(k)$. The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrix $A - LC$ has all its eigenvalues inside the unit circle.
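A minimal numeric sketch of this design, assuming a hypothetical two-state plant and SciPy's pole-placement routine (the matrices and pole locations are illustrative, not part of the original presentation):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical discrete-time plant x(k+1) = A x(k) + B u(k), y(k) = C x(k).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Choose L so that the eigenvalues of A - L C lie inside the unit circle;
# pole placement on the dual pair (A^T, C^T) returns L^T.
L = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T
assert np.all(np.abs(np.linalg.eigvals(A - L @ C)) < 1)  # asymptotically stable

# The observer error e(k+1) = (A - L C) e(k) then decays to zero.
e = np.array([[1.0], [-1.0]])
for k in range(50):
    e = (A - L @ C) @ e
print(np.linalg.norm(e))  # close to zero
```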
For control purposes, the output of the observer system is fed back to the input of both the observer and the plant through the gains matrix $K$:

$$u(k) = -K \hat{x}(k)$$
The observer equations then become:

$$\hat{x}(k+1) = A \hat{x}(k) + L \left( y(k) - \hat{y}(k) \right) - B K \hat{x}(k)$$
$$\hat{y}(k) = C \hat{x}(k) - D K \hat{x}(k)$$
or, more simply,

$$\hat{x}(k+1) = \left( A - B K \right) \hat{x}(k) + L \left( y(k) - \hat{y}(k) \right)$$
$$\hat{y}(k) = \left( C - D K \right) \hat{x}(k)$$
Due to the separation principle, we know that we can choose $K$ and $L$ independently without harm to the overall stability of the system. As a rule of thumb, the poles of the observer $A - LC$ are usually chosen to converge 10 times faster than the poles of the system $A - BK$.
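Continuing the same hypothetical plant, a sketch of the separation principle: $K$ and $L$ are designed independently, with observer poles roughly ten times faster, and the feedback law uses only the estimate:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # same illustrative plant as above
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [0.90, 0.85]).gain_matrix                 # poles of A - B K
L = place_poles(A.T, C.T, [0.90**10, 0.85**10]).gain_matrix.T   # ~10x faster A - L C

x = np.array([[1.0], [0.0]])   # true plant state (unknown to the controller)
xh = np.zeros((2, 1))          # observer estimate
for k in range(300):
    u = -K @ xh                # feedback uses the estimate, not the true state
    y = C @ x                  # D = 0 here, so y(k) = C x(k)
    x = A @ x + B @ u
    xh = A @ xh + B @ u + L @ (y - C @ xh)   # Luenberger update
print(np.linalg.norm(x), np.linalg.norm(x - xh))  # both decay toward zero
```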
The previous example was for an observer implemented in a discrete-time LTI system. However, the process is similar for the continuous-time case; the observer gains $L$ are chosen to make the continuous-time error dynamics converge to zero asymptotically (i.e., when $A - LC$ is a Hurwitz matrix).
For a continuous-time linear system

$$\dot{x} = A x + B u,$$
$$y = C x,$$
where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^r$, the observer looks similar to the discrete-time case described above:

$$\dot{\hat{x}} = A \hat{x} + B u + L \left( y - C \hat{x} \right).$$
The observer error $e = x - \hat{x}$ satisfies the equation

$$\dot{e} = (A - LC) e.$$
The eigenvalues of the matrix $A - LC$ can be chosen arbitrarily by appropriate choice of the observer gain $L$ when the pair $(A, C)$ is observable, i.e. when the observability condition holds. In particular, $A - LC$ can be made Hurwitz, so the observer error $e(t) \to 0$ as $t \to \infty$.
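A sketch of the continuous-time case under the same caveats (the harmonic-oscillator pair $(A, C)$ and the pole locations are assumptions for illustration):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical observable pair (A, C): a harmonic oscillator with measured position.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])

# (A, C) is observable, so the eigenvalues of A - L C can be placed freely;
# putting them in the open left half-plane makes A - L C Hurwitz.
L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)

# Forward-Euler integration of the error dynamics e' = (A - L C) e.
dt = 1e-3
e = np.array([[1.0], [1.0]])
for _ in range(int(10 / dt)):
    e = e + dt * (A - L @ C) @ e
print(np.linalg.norm(e))  # ~0: the error decays asymptotically
```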
When the observer gain is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which the initial estimator error can be prohibitively large (i.e., impractical or unsafe to use). [1] As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example, sliding mode control can be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise-resilience properties similar to those of a Kalman filter. [2] [3] Another approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. A multi-observer can be adapted to every system where a high-gain observer is applicable. [4]
High-gain, sliding mode and extended observers are the most common observers for nonlinear systems. To illustrate the application of sliding mode observers to nonlinear systems, first consider the nonlinear system without input:

$$\dot{x} = f(x)$$
where $x \in \mathbb{R}^n$. Also assume that there is a measurable output $y \in \mathbb{R}$ given by

$$y = h(x).$$
There are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input. That is,

$$\dot{x} = f(x) + B(x) u,$$
$$y = h(x).$$
One suggestion by Krener and Isidori [5] and Krener and Respondek [6] can be applied in a situation when there exists a linearizing transformation $z = \Phi(x)$ (i.e., a diffeomorphism, like the one used in feedback linearization) such that in the new variables the system equations read

$$\dot{z} = A z + \phi(y),$$
$$y = C z.$$
The Luenberger observer is then designed as

$$\dot{\hat{z}} = A \hat{z} + \phi(y) - L \left( C \hat{z} - y \right).$$
The observer error for the transformed variable $e = \hat{z} - z$ satisfies the same equation as in the classical linear case:

$$\dot{e} = (A - LC) e.$$
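As a simple illustration of this form (our example, not one from the cited papers), consider a pendulum with measured angle, $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin x_1$, $y = x_1$. No transformation is needed, since the nonlinearity depends only on the output:

$$\dot{z} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} z + \begin{bmatrix} 0 \\ -\sin y \end{bmatrix}, \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} z,$$

so injecting $\phi(y) = (0, -\sin y)^T$ into the observer cancels the nonlinearity exactly and leaves linear error dynamics.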
As shown by Gauthier, Hammouri, and Othman [7] and Hammouri and Kinnaert, [8] if there exists a transformation $z = \Phi(x)$ such that the system can be transformed into the form

$$\dot{z} = A(u(t)) z + \phi(z, u(t)),$$
$$y = C z,$$

then the observer is designed as

$$\dot{\hat{z}} = A(u(t)) \hat{z} + \phi(\hat{z}, u(t)) - L(t) \left( C \hat{z} - y \right),$$

where $L(t)$ is a time-varying observer gain.
Ciccarella, Dalla Mora, and Germani [9] obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity.
As discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. Some common types of switched observers include the sliding mode observer, nonlinear extended state observer, [10] fixed time observer, [11] switched high gain observer [12] and uniting observer. [13] The sliding mode observer uses non-linear high-gain feedback to drive estimated states to a hypersurface where there is no difference between the estimated output and the measured output. The non-linear gain used in the observer is typically implemented with a scaled switching function, like the signum (i.e., sgn) of the estimated – measured output error. Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectories slide along a curve where the estimated output matches the measured output exactly. So, if the system is observable from its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to the Kalman filter but with simpler implementation. [2] [3]
As suggested by Drakunov, [14] a sliding mode observer can also be designed for a class of non-linear systems. Such an observer can be written in terms of the original variable estimate $\hat{x}$ and has the form

$$\dot{\hat{x}} = \left[ \frac{\partial H(\hat{x})}{\partial x} \right]^{-1} M(\hat{x}) \, \operatorname{sgn} \left( V(t) - H(\hat{x}) \right)$$
where:

- $\operatorname{sgn}(\cdot)$ denotes the signum function applied componentwise;
- $H(x) = \left[ h_1(x), h_2(x), \ldots, h_n(x) \right]^T = \left[ h(x), L_f h(x), \ldots, L_f^{n-1} h(x) \right]^T$ is the vector of the output and its repeated Lie derivatives along $f$;
- $V(t) = \left[ v_1(t), v_2(t), \ldots, v_n(t) \right]^T$ is a vector of auxiliary signals with $v_1(t) = y(t)$ and each subsequent $v_{i+1}(t)$ obtained by low-pass filtering (i.e., taking the equivalent value of) the corresponding switching term;
- $M(\hat{x}) = \operatorname{diag} \left( m_1(\hat{x}), \ldots, m_n(\hat{x}) \right)$ is a diagonal matrix of positive gains.
The idea can be briefly explained as follows. According to the theory of sliding modes, in order to describe the system behavior once a sliding mode starts, the function $\operatorname{sgn}(v_i(t) - h_i(\hat{x}))$ should be replaced by its equivalent value (see equivalent control in the theory of sliding modes). In practice, it switches (chatters) with high frequency, with its slow component being equal to the equivalent value. Applying an appropriate lowpass filter to get rid of the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system, ideally in finite time.
The modified observation error can be written in the transformed states $e = H(x) - H(\hat{x})$. In particular,

$$\dot{e} = \frac{d}{dt} H(x) - \frac{d}{dt} H(\hat{x}),$$

and so

$$\dot{e} = \begin{bmatrix} h_2(x) \\ \vdots \\ h_n(x) \\ L_f^n h(x) \end{bmatrix} - M(\hat{x}) \, \operatorname{sgn} \left( V(t) - H(\hat{x}) \right).$$

So: as long as $m_1(\hat{x}) \geq |h_2(x(t))|$, the first component $e_1$ reaches the surface $e_1 = 0$ in finite time; along that surface, the equivalent value of the switching term $m_1(\hat{x}) \operatorname{sgn}(v_1 - h_1(\hat{x}))$ equals $h_2(x)$, so low-pass filtering it yields $v_2$, and the same argument applies to each subsequent component.
So, for sufficiently large gains $m_i$, all observer estimated states reach the actual states in finite time. In fact, increasing $m_i$ allows for convergence in any desired finite time so long as each function $|h_{i+1}(x(t))|$ can be bounded with certainty. Hence, the requirement that the map $H$ is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.
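The following sketch illustrates the equivalent-value idea on a hypothetical pendulum (the gains, filter time constant, and step size are illustrative assumptions): the first stage drives the measured-state error to zero in finite time, and low-pass filtering its switching term recovers the unmeasured velocity.

```python
import numpy as np

# Hypothetical plant: x1' = x2, x2' = -sin(x1), y = x1 (pendulum, measured angle).
dt, tau = 1e-4, 0.02          # step size and low-pass filter time constant (assumed)
m1, m2 = 5.0, 5.0             # switching gains; must bound the unknown derivatives
x = np.array([1.0, 0.5])      # true state
xh1, xh2, v2 = 0.0, 0.0, 0.0  # observer states and filtered equivalent value

for _ in range(int(5 / dt)):
    y = x[0]
    s1 = m1 * np.sign(y - xh1)          # first-stage switching injection
    v2 += dt / tau * (s1 - v2)          # low-pass filter extracts the equivalent value ~ x2
    xh1 += dt * s1                      # drives xh1 -> y in finite time
    xh2 += dt * m2 * np.sign(v2 - xh2)  # second stage tracks the filtered value
    x += dt * np.array([x[1], -np.sin(x[0])])  # integrate the plant

print(abs(x[1] - xh2))  # small residual error from filter lag and chattering
```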
In the case of the sliding mode observer for the system with the input, additional conditions are needed for the observation error to be independent of the input. For example, that

$$\frac{\partial H(x)}{\partial x} B(x)$$

does not depend on time. The observer is then

$$\dot{\hat{x}} = \left[ \frac{\partial H(\hat{x})}{\partial x} \right]^{-1} M(\hat{x}) \, \operatorname{sgn} \left( V(t) - H(\hat{x}) \right) + B(\hat{x}) u.$$
The multi-observer extends the high-gain observer structure from a single model to many models working simultaneously. It has two layers: the first consists of multiple high-gain observers with different estimated states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation. [4] The idea of multiple models was previously applied to obtain information in adaptive control. [15]
Assuming that the number of high-gain observers equals $n + 1$, each of them takes the form

$$\dot{\hat{x}}_k = f(\hat{x}_k) + B(\hat{x}_k) u + L \left( y - h(\hat{x}_k) \right),$$
where $k = 1, \ldots, n + 1$ is the observer index. The first-layer observers share the same gain $L$, but they differ in the initial state $\hat{x}_k(0)$. In the second layer, all $\hat{x}_k(t)$ from the observers are combined into one to obtain a single state vector estimate

$$\hat{x}(t) = \sum_{k=1}^{n+1} \alpha_k(t) \, \hat{x}_k(t),$$
where $\alpha_k(t)$ are weight factors. These factors are adjusted in the second layer to provide the estimate and to improve the observation process.
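A rough sketch of the two-layer structure (the weighting law below is a simple output-error heuristic of ours, not the estimation law of the cited paper [4]): several observers with identical gain but different initial states are blended by weights $\alpha_k$.

```python
import numpy as np
from scipy.signal import place_poles

# First layer: N identical-gain observers for an illustrative linear plant,
# differing only in their initial state.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
C = np.array([[1.0, 0.0]])
L = place_poles(A.T, C.T, [-6.0, -7.0]).gain_matrix.T

dt, N = 1e-3, 3
x = np.array([[1.0], [0.0]])
xh = [np.random.randn(2, 1) for _ in range(N)]
for _ in range(int(5 / dt)):
    y = C @ x
    for k in range(N):
        xh[k] = xh[k] + dt * (A @ xh[k] + L @ (y - C @ xh[k]))
    x = x + dt * (A @ x)

# Second layer (heuristic stand-in): weights favour observers with small output error.
err = np.array([abs((y - C @ xh[k]).item()) for k in range(N)])
alpha = 1.0 / (err + 1e-9)
alpha /= alpha.sum()
x_hat = sum(a * z for a, z in zip(alpha, xh))   # blended estimate
print(np.linalg.norm(x - x_hat))
```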
Assume that

$$\sum_{k=1}^{n+1} \alpha_k(t) = 1$$

and

$$\hat{x}_k(t) = x(t) + e_k(t),$$

where $e_k(t)$ is some vector that depends on the $k$-th observer's error.
Some transformation then yields a linear regression problem for the unknown weights $\alpha_k(t)$.
This formula makes it possible to estimate $\alpha_k(t)$. To construct the manifold, we need a mapping $m$ between the estimates and an assurance that $m$ is calculable from measurable signals. The first step is to eliminate the peaking phenomenon from the observer error.
Calculating successive time derivatives to find the mapping $m$ leads to a filtered variable obtained through a low-pass filter with some time constant. Note that $m$ relies on both $y$ and its integrals, and hence it is easily available in the control system. The filtered variable is further specified by the estimation law, which establishes that the manifold is measurable. In the second layer, estimates $\hat{\alpha}_k(t)$ of the coefficients $\alpha_k(t)$ are introduced. The mapping error is specified as the difference between the mapping evaluated at the true coefficients and at their estimates; if the estimated coefficients $\hat{\alpha}_k(t)$ are equal to $\alpha_k(t)$, the mapping error is zero. It is then possible to calculate $x(t)$ from the above equation, and hence the peaking phenomenon is reduced thanks to the properties of the manifold. The created mapping gives a lot of flexibility in the estimation process: it is even possible to estimate the value of $\alpha_k(t)$ in the second layer and to calculate the state $x(t)$. [4]
Bounding [16] or interval observers [17] [18] constitute a class of observers that provide two estimates of the state simultaneously: one estimate provides an upper bound on the real value of the state, whereas the other provides a lower bound. The real value of the state is then known to always lie between these two estimates.
These bounds are very important in practical applications, [19] [20] as they make it possible to know at each time the precision of the estimation.
Mathematically, two Luenberger observers can be used if $L$ is properly selected, using, for example, positive systems properties: [21] one for the upper bound $\hat{x}^+(t)$ (which ensures that $e^+(t) = \hat{x}^+(t) - x(t)$ converges to zero from above as $t \to \infty$, in the absence of noise and uncertainty), and one for the lower bound $\hat{x}^-(t)$ (which ensures that $e^-(t) = \hat{x}^-(t) - x(t)$ converges to zero from below). That is, always

$$\hat{x}^-(t) \leq x(t) \leq \hat{x}^+(t).$$
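A minimal sketch of this mechanism (the system, gain, and disturbance bound are assumptions for illustration): with $L$ chosen so that $A - LC$ is Metzler, i.e., has non-negative off-diagonal entries, an upper and a lower copy of the observer enclose the true state.

```python
import numpy as np

# Assumed plant x' = A x + d(t) with unknown disturbance |d(t)| <= dbar
# (componentwise) and measurement y = C x.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [0.0]])       # A - L C = [[-2, 1], [0, -2]] is Metzler
dbar = np.array([[0.1], [0.1]])

dt = 1e-3
x = np.array([[0.5], [-0.5]])
xu = np.array([[1.0], [1.0]])      # upper estimate, x(0) <= xu(0)
xl = np.array([[-1.0], [-1.0]])    # lower estimate, xl(0) <= x(0)
rng = np.random.default_rng(0)
for _ in range(int(10 / dt)):
    d = dbar * rng.uniform(-1, 1, (2, 1))
    y = C @ x
    x = x + dt * (A @ x + d)
    xu = xu + dt * (A @ xu + L @ (y - C @ xu) + dbar)   # upper-bound observer
    xl = xl + dt * (A @ xl + L @ (y - C @ xl) - dbar)   # lower-bound observer
assert np.all(xl <= x) and np.all(x <= xu)              # enclosure holds
```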