Observability

Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. In control theory, the observability and controllability of a linear system are mathematical duals.

The concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kálmán for linear dynamic systems. [1] [2] A dynamical system designed to estimate the state of a system from measurements of its outputs is called a state observer for that system; Kalman filters are a well-known example.

Definition

Consider a physical system modeled in state-space representation. A system is said to be observable if, for every possible evolution of state and control vectors, the current state can be estimated using only the information from outputs (physically, this generally corresponds to information obtained by sensors). In other words, one can determine the behavior of the entire system from the system's outputs. On the other hand, if the system is not observable, there are state trajectories that are not distinguishable by only measuring the outputs.

Linear time-invariant systems

For time-invariant linear systems in the state space representation, there are convenient tests to check whether a system is observable. Consider a SISO system with $n$ state variables (see state space for details about MIMO systems) given by

$$\dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,\mathbf{u}(t)$$
$$\mathbf{y}(t) = C\,\mathbf{x}(t) + D\,\mathbf{u}(t).$$

Observability matrix

If and only if the column rank of the observability matrix, defined as

$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix},$$

is equal to $n$, then the system is observable. The rationale for this test is that if the $n$ columns are linearly independent, then each of the $n$ state variables is viewable through linear combinations of the output variables $\mathbf{y}(t)$.
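As a concrete illustration of this rank test, here is a minimal numerical sketch using NumPy; the matrices $A$ and $C$ below are arbitrary illustrative choices, not taken from the text above.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) into the observability matrix."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(blocks)

# Example system (chosen for illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])   # only the first state is measured

O = observability_matrix(A, C)
rank = np.linalg.matrix_rank(O)
print("rank(O) =", rank,
      "-> observable" if rank == A.shape[0] else "-> not observable")
```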

Observability index

The observability index $v$ of a linear time-invariant discrete system is the smallest natural number for which the following is satisfied: $\operatorname{rank}(\mathcal{O}_{v}) = \operatorname{rank}(\mathcal{O}_{v+1})$, where

$$\mathcal{O}_{v} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{v-1} \end{bmatrix}.$$
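A sketch of how this index could be computed numerically, under the same illustrative setup as in the previous example (the stopping rule simply compares successive ranks):

```python
import numpy as np

def observability_index(A, C):
    """Smallest v with rank(O_v) == rank(O_{v+1})."""
    n = A.shape[0]

    def O(v):
        return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(v)])

    for v in range(1, n + 1):
        if np.linalg.matrix_rank(O(v)) == np.linalg.matrix_rank(O(v + 1)):
            return v
    return n  # the rank can no longer grow beyond v = n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
print("observability index:", observability_index(A, C))  # 2 for this example
```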

Unobservable subspace

The unobservable subspace $N$ of the linear system is the kernel of the linear map $G$ given by [3]

$$G \colon \mathbb{R}^{n} \to \mathcal{C}(\mathbb{R}; \mathbb{R}^{m}), \qquad x_{0} \mapsto C e^{A t} x_{0},$$

where $\mathcal{C}(\mathbb{R}; \mathbb{R}^{m})$ is the set of continuous functions from $\mathbb{R}$ to $\mathbb{R}^{m}$. $N$ can also be written as [3]

$$N = \bigcap_{k=0}^{n-1} \ker(C A^{k}) = \ker \mathcal{O}.$$

Since the system is observable if and only if $\operatorname{rank}(\mathcal{O}) = n$, the system is observable if and only if $N$ is the zero subspace.

The following properties for the unobservable subspace are valid: [3]

- $N \subseteq \ker(C)$
- $A(N) \subseteq N$, i.e. $N$ is $A$-invariant
- $N$ is the largest $A$-invariant subspace contained in $\ker(C)$
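A basis of the unobservable subspace can be extracted numerically as the null space of the observability matrix. The sketch below assumes SciPy is available and uses illustrative matrices:

```python
import numpy as np
from scipy.linalg import null_space

# System with an unobservable second mode (illustrative choice).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
C = np.array([[1.0, 0.0]])   # the second state never appears in the output

n = A.shape[0]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
N = null_space(O)              # columns span the unobservable subspace
print("dim(N) =", N.shape[1])  # 1 here, so the system is not observable
```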

Detectability

A slightly weaker notion than observability is detectability. A system is detectable if all the unobservable states are stable. [4]

Detectability conditions are important in the context of sensor networks. [5] [6]
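One standard numerical check for detectability, not spelled out in the text above, is the Hautus (PBH) test: the pair $(A, C)$ is detectable if $\operatorname{rank}\begin{bmatrix}\lambda I - A \\ C\end{bmatrix} = n$ for every eigenvalue $\lambda$ of $A$ with non-negative real part (continuous-time convention). A minimal sketch with illustrative matrices:

```python
import numpy as np

def is_detectable(A, C, tol=1e-9):
    """Hautus test: every unstable (or marginal) eigenvalue of A must be observable."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:  # unstable or marginally stable mode
            M = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M, tol) < n:
                return False
    return True

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
C = np.array([[1.0, 0.0]])
print(is_detectable(A, C))  # True: the unobservable mode (eigenvalue -2) is stable
```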

Linear time-varying systems

Consider the continuous linear time-variant system

$$\dot{\mathbf{x}}(t) = A(t) \mathbf{x}(t) + B(t) \mathbf{u}(t)$$
$$\mathbf{y}(t) = C(t) \mathbf{x}(t).$$

Suppose that the matrices $A$, $B$ and $C$ are given as well as the inputs and outputs $\mathbf{u}$ and $\mathbf{y}$ for all $t \in [t_{0}, t_{1}]$; then it is possible to determine $\mathbf{x}(t_{0})$ to within an additive constant vector which lies in the null space of $M(t_{0}, t_{1})$ defined by

$$M(t_{0}, t_{1}) = \int_{t_{0}}^{t_{1}} \varphi(t, t_{0})^{T} C(t)^{T} C(t) \varphi(t, t_{0}) \, dt,$$

where $\varphi$ is the state-transition matrix.

It is possible to determine a unique $\mathbf{x}(t_{0})$ if $M(t_{0}, t_{1})$ is nonsingular. In fact, it is not possible to distinguish the initial state $\mathbf{x}_{1}$ from that of $\mathbf{x}_{2}$ if $\mathbf{x}_{1} - \mathbf{x}_{2}$ is in the null space of $M(t_{0}, t_{1})$.

Note that the matrix $M$ defined as above has the following properties: [7]

- $M(t_{0}, t_{1})$ is symmetric
- $M(t_{0}, t_{1})$ is positive semidefinite for $t_{1} \geq t_{0}$
- $M(t, t_{1})$ satisfies the linear matrix differential equation
$$\frac{d}{dt} M(t, t_{1}) = -A(t)^{T} M(t, t_{1}) - M(t, t_{1}) A(t) - C(t)^{T} C(t), \qquad M(t_{1}, t_{1}) = 0$$
- $M(t_{0}, t_{1})$ satisfies the equation
$$M(t_{0}, t_{1}) = M(t_{0}, t) + \varphi(t, t_{0})^{T} M(t, t_{1}) \varphi(t, t_{0})$$
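A rough numerical sketch of this construction, assuming SciPy and using made-up time-varying matrices: the state-transition matrix $\varphi(t, t_{0})$ is obtained by integrating $\dot{\Phi} = A(t)\Phi$ from the identity, and the Gramian integral is approximated on a uniform grid.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative time-varying system matrices.
A = lambda t: np.array([[0.0, 1.0], [-1.0 - 0.5 * np.sin(t), -0.2]])
C = lambda t: np.array([[1.0, 0.0]])

def gramian(t0, t1, steps=400):
    """Approximate M(t0, t1) = int phi^T C^T C phi dt with a Riemann sum."""
    n = A(t0).shape[0]
    ts = np.linspace(t0, t1, steps)
    # Integrate the state-transition matrix phi(t, t0), phi(t0, t0) = I.
    sol = solve_ivp(lambda t, ph: (A(t) @ ph.reshape(n, n)).ravel(),
                    (t0, t1), np.eye(n).ravel(), t_eval=ts, rtol=1e-8)
    M = np.zeros((n, n))
    dt = ts[1] - ts[0]
    for k, t in enumerate(ts):
        phi = sol.y[:, k].reshape(n, n)
        M += phi.T @ C(t).T @ C(t) @ phi * dt
    return M

M = gramian(0.0, 5.0)
print("Gramian nonsingular:", np.linalg.matrix_rank(M) == M.shape[0])
```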

Observability matrix generalization

The system is observable in $[t_{0}, t_{1}]$ if and only if there exists an interval $[t_{0}, t_{1}]$ in $\mathbb{R}$ such that the matrix $M(t_{0}, t_{1})$ is nonsingular.

If $A(t)$ and $C(t)$ are analytic, then the system is observable in the interval $[t_{0}, t_{1}]$ if there exists $\bar{t} \in [t_{0}, t_{1}]$ and a positive integer $k$ such that [8]

$$\operatorname{rank} \begin{bmatrix} N_{0}(\bar{t}) \\ N_{1}(\bar{t}) \\ \vdots \\ N_{k}(\bar{t}) \end{bmatrix} = n,$$

where $N_{0}(t) = C(t)$ and $N_{i+1}(t)$ is defined recursively as

$$N_{i+1}(t) = N_{i}(t) A(t) + \frac{d}{dt} N_{i}(t), \qquad i = 0, \ldots, k-1.$$

Example

Consider a system varying analytically in $(-\infty, \infty)$ and matrices

$$A(t) = \begin{bmatrix} t & 1 & 0 \\ 0 & t^{3} & 0 \\ 0 & 0 & t^{2} \end{bmatrix}, \qquad C(t) = \begin{bmatrix} 1 & 0 & 1 \end{bmatrix}.$$

Then

$$\begin{bmatrix} N_{0}(0) \\ N_{1}(0) \\ N_{2}(0) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix},$$

and since this matrix has rank 3, the system is observable on every nontrivial interval of $\mathbb{R}$.
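The recursion for $N_{i}$ is straightforward to mechanize with symbolic differentiation. A minimal SymPy sketch, using the same illustrative matrices as the example above, computes $N_{0}$, $N_{1}$, $N_{2}$ and checks the rank:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1, 0],
               [0, t**3, 0],
               [0, 0, t**2]])
C = sp.Matrix([[1, 0, 1]])

# N_0 = C,  N_{i+1} = N_i * A + d/dt N_i
N = [C]
for _ in range(2):
    N.append(N[-1] * A + N[-1].diff(t))

stacked = sp.Matrix.vstack(*N)
print(stacked.subs(t, 0))                   # Matrix([[1, 0, 1], [0, 1, 0], [1, 0, 0]])
print("rank:", stacked.subs(t, 0).rank())   # 3 -> observable
```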

Nonlinear systems

Given the system $\dot{x} = f(x) + \sum_{j=1}^{m} g_{j}(x) u_{j}$, $y_{i} = h_{i}(x)$, $i \in \{1, \dots, p\}$, where $x \in \mathbb{R}^{n}$ is the state vector, $u \in \mathbb{R}^{m}$ the input vector and $y \in \mathbb{R}^{p}$ the output vector. The maps $f$, $g_{1}, \dots, g_{m}$ and the output functions $h_{1}, \dots, h_{p}$ are assumed to be smooth.

Define the observation space $\mathcal{O}_{s}$ to be the space containing all repeated Lie derivatives of the output functions $h_{1}, \dots, h_{p}$ along $f$ and $g_{1}, \dots, g_{m}$. Then the system is observable in $x_{0}$ if and only if $\dim(\mathrm{d}\mathcal{O}_{s}(x_{0})) = n$, where $\mathrm{d}\mathcal{O}_{s}(x_{0})$ is the space spanned by the gradients (differentials) of the elements of $\mathcal{O}_{s}$ evaluated at $x_{0}$. [9]
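As an illustration of this rank condition, the following SymPy sketch (the drift field $f$ and output $h$ are made-up examples, with no input fields) forms repeated Lie derivatives of the output along the drift and checks the dimension of the span of their differentials at a point:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -sp.sin(x1)])   # drift vector field (illustrative pendulum-like example)
h = x1                             # scalar output

def lie_derivative(h_expr, field, states):
    """L_f h = grad(h) . f"""
    return (sp.Matrix([h_expr]).jacobian(states) * field)[0]

# Observation space spanned by h, L_f h, L_f^2 h, ...
obs = [h]
for _ in range(len(x) - 1):
    obs.append(lie_derivative(obs[-1], f, x))

dO = sp.Matrix(obs).jacobian(x)            # codistribution dO(x)
x0 = {x1: 0, x2: 0}
print("dim dO(x0) =", dO.subs(x0).rank())  # 2 = n -> observable at x0
```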

Early criteria for observability in nonlinear dynamic systems were discovered by Griffith and Kumar, [10] Kou, Elliott and Tarn, [11] and Singh. [12]

There also exist observability criteria for nonlinear time-varying systems. [13]

Static systems and general topological spaces

Observability may also be characterized for steady state systems (systems typically defined in terms of algebraic equations and inequalities), or more generally, for sets in $\mathbb{R}^{n}$. [14] [15] Just as observability criteria are used to predict the behavior of Kalman filters or other observers in the dynamic system case, observability criteria for sets in $\mathbb{R}^{n}$ are used to predict the behavior of data reconciliation and other static estimators. In the nonlinear case, observability can be characterized for individual variables, and also for local estimator behavior rather than just global behavior.

Related Research Articles

<span class="mw-page-title-main">Affine transformation</span> Geometric transformation that preserves lines but not angles nor the origin

In Euclidean geometry, an affine transformation or affinity is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles.

Controllability is an important property of a control system and plays a crucial role in many control problems, such as stabilization of unstable systems by feedback, or optimal control.

<span class="mw-page-title-main">Linear independence</span> Vectors whose linear combinations are nonzero

In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension.

In statistics, the Gauss–Markov theorem states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal for the theorem to apply, nor do they need to be independent and identically distributed.

<span class="mw-page-title-main">Kalman filter</span> Algorithm that estimates unknowns from a series of measurements over time

For statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.

In vector calculus, the Jacobian matrix of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and the determinant are often referred to simply as the Jacobian in literature.

In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal that forces the system to "slide" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system as the system both flows through a continuous state space but also moves through different discrete control modes.

In control engineering and system identification, a state-space representation is a mathematical model of a physical system specified as a set of input, output, and variables related by first-order differential equations or difference equations. Such variables, called state variables, evolve over time in a way that depends on the values they have at any given instant and on the externally imposed values of input variables. Output variables’ values depend on the state variable values and may also depend on the input variable values.

In linear algebra, linear transformations can be represented by matrices. If is a linear transformation mapping to and is a column vector with entries, then

In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a specialization of the tensor product from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.

In quantum information theory, a quantum channel is a communication channel which can transmit quantum information, as well as classical information. An example of quantum information is the state of a qubit. An example of classical information is a text document transmitted over the Internet.

In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

In linear algebra, it is often important to know which vectors have their directions unchanged by a given linear transformation. An eigenvector or characteristic vector is such a vector. Thus an eigenvector of a linear transformation is scaled by a constant factor when the linear transformation is applied to it: . The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor .

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.

<span class="mw-page-title-main">Feedback linearization</span> Approach used in controlling nonlinear systems

Feedback linearization is a common strategy employed in nonlinear control to control nonlinear systems. Feedback linearization techniques may be applied to nonlinear control systems of the form

In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines and natural cubic splines in one dimension.

<span class="mw-page-title-main">Classical group</span>

In mathematics, the classical groups are defined as the special linear groups over the reals , the complex numbers and the quaternions together with special automorphism groups of symmetric or skew-symmetric bilinear forms and Hermitian or skew-Hermitian sesquilinear forms defined on real, complex and quaternionic finite-dimensional vector spaces. Of these, the complex classical Lie groups are four infinite families of Lie groups that together with the exceptional groups exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the complex classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term "classical group" was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups.

The streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations can be used for finite element computations of high Reynolds number incompressible flow using equal order of finite element space by introducing additional stabilization terms in the Navier–Stokes Galerkin formulation.

In statistics, machine learning and algorithms, a tensor sketch is a type of dimensionality reduction that is particularly efficient when applied to vectors that have tensor structure. Such a sketch can be used to speed up explicit kernel methods, bilinear pooling in neural networks and is a cornerstone in many numerical linear algebra algorithms.

References

  1. Kalman, R.E. (1960). "On the general theory of control systems". IFAC Proceedings Volumes. 1: 491–502. doi:10.1016/S1474-6670(17)70094-8.
  2. Kalman, R. E. (1963). "Mathematical Description of Linear Dynamical Systems". Journal of the Society for Industrial and Applied Mathematics, Series A: Control. 1 (2): 152–192. doi:10.1137/0301010.
  3. Sontag, E.D. (1998). "Mathematical Control Theory". Texts in Applied Mathematics.
  4. "Controllability and Observability" (PDF). Retrieved 2024-05-19.
  5. Li, W.; Wei, G.; Ho, D. W. C.; Ding, D. (November 2018). "A Weightedly Uniform Detectability for Sensor Networks". IEEE Transactions on Neural Networks and Learning Systems. 29 (11): 5790–5796. doi:10.1109/TNNLS.2018.2817244. PMID 29993845. S2CID 51615852.
  6. Li, W.; Wang, Z.; Ho, D. W. C.; Wei, G. (2019). "On Boundedness of Error Covariances for Kalman Consensus Filtering Problems". IEEE Transactions on Automatic Control. 65 (6): 2654–2661. doi:10.1109/TAC.2019.2942826. S2CID 204196474.
  7. Brockett, Roger W. (1970). Finite Dimensional Linear Systems. John Wiley & Sons. ISBN 978-0-471-10585-5.
  8. Eduardo D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems.
  9. Lecture notes for Nonlinear Systems Theory by prof. dr. D. Jeltsema, prof. dr. J. M. A. Scherpen and prof. dr. A. J. van der Schaft.
  10. Griffith, E. W.; Kumar, K. S. P. (1971). "On the observability of nonlinear systems: I". Journal of Mathematical Analysis and Applications. 35: 135–147. doi:10.1016/0022-247X(71)90241-1.
  11. Kou, Shauying R.; Elliott, David L.; Tarn, Tzyh Jong (1973). "Observability of nonlinear systems". Information and Control. 22: 89–99. doi:10.1016/S0019-9958(73)90508-1.
  12. Singh, Sahjendra N. (1975). "Observability in non-linear systems with immeasurable inputs". International Journal of Systems Science. 6 (8): 723–732. doi:10.1080/00207727508941856.
  13. Martinelli, Agostino (2022). "Extension of the Observability Rank Condition to Time-Varying Nonlinear Systems". IEEE Transactions on Automatic Control. 67 (9): 5002–5008. doi:10.1109/TAC.2022.3180771. ISSN 0018-9286. S2CID 251957578.
  14. Stanley, G. M.; Mah, R. S. H. (1981). "Observability and redundancy in process data estimation" (PDF). Chemical Engineering Science. 36 (2): 259–272. Bibcode:1981ChEnS..36..259S. doi:10.1016/0009-2509(81)85004-X.
  15. Stanley, G.M.; Mah, R.S.H. (1981). "Observability and redundancy classification in process networks" (PDF). Chemical Engineering Science. 36 (12): 1941–1954. doi:10.1016/0009-2509(81)80034-6.