Controllability

Controllability is an important property of a control system and plays a crucial role in many control problems, such as stabilization of unstable systems by feedback, or optimal control.

Controllability and observability are dual aspects of the same problem.

Roughly, the concept of controllability denotes the ability to move a system around in its entire configuration space using only certain admissible manipulations. The exact definition varies slightly depending on the framework or the type of models considered.

The following are examples of variations of controllability notions which have been introduced in the systems and control literature:

State controllability

The state of a deterministic system, which is the set of values of all the system's state variables (those variables characterized by dynamic equations), completely describes the system at any given time. In particular, no information on the past of a system is needed to help in predicting the future, if the states at the present time are known and all current and future values of the control variables (those whose values can be chosen) are known.

Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input (the vector of control variables) to move the internal state of a system from any initial state to any final state in a finite time interval.[1]: 737

That is, we can informally define controllability as follows: if for any initial state $x_0$ and any final state $x_f$ there exists an input sequence to transfer the system state from $x_0$ to $x_f$ in a finite time interval, then the system modeled by the state-space representation is controllable. For the simplest example of a continuous LTI system $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$, the row dimension $n$ of the state-space expression determines how many independent directions must be reachable; the columns of $B, AB, \ldots, A^{n-1}B$ contribute vectors in the state space of the system. If there are not enough such vectors to span the state space of $\mathbf{x}$, then the system cannot achieve controllability. It may be necessary to modify $A$ and $B$ to better approximate the underlying differential relationships they estimate in order to achieve controllability.

Controllability does not mean that a reached state can be maintained, merely that any state can be reached.

Controllability does not mean that arbitrary paths can be made through state space, only that there exists a path within the prescribed finite time interval.

Continuous linear systems

Consider the continuous linear system [note 1]

$\dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)\mathbf{u}(t)$
$\mathbf{y}(t) = C(t)\mathbf{x}(t) + D(t)\mathbf{u}(t).$

There exists a control $u$ from state $x_0$ at time $t_0$ to state $x_1$ at time $t_1 > t_0$ if and only if $x_1 - \phi(t_0, t_1)x_0$ is in the column space of

$W(t_0, t_1) = \int_{t_0}^{t_1} \phi(t_0, t) B(t) B(t)^{T} \phi(t_0, t)^{T} \, dt,$

where $\phi$ is the state-transition matrix, and $W(t_0, t_1)$ is the controllability Gramian.

In fact, if $\eta_0$ is a solution to $W(t_0, t_1)\eta = x_1 - \phi(t_0, t_1)x_0$, then a control given by $u(t) = -B(t)^{T}\phi(t_0, t)^{T}\eta_0$ would make the desired transfer.

Note that the matrix $W$ defined as above has the following properties:

$W(t_0, t_1)$ is symmetric;
$W(t_0, t_1)$ is positive semidefinite for $t_1 \geq t_0$;
$W(t_0, t_1)$ satisfies the linear matrix differential equation $\frac{d}{dt}W(t, t_1) = A(t)W(t, t_1) + W(t, t_1)A(t)^{T} - B(t)B(t)^{T}$, with $W(t_1, t_1) = 0$;
$W(t_0, t_1)$ satisfies the equation $W(t_0, t_1) = W(t_0, t) + \phi(t_0, t)W(t, t_1)\phi(t_0, t)^{T}$. [2]
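The Gramian's properties can be checked numerically. The sketch below uses a hypothetical LTI pair $(A, B)$ of my own choosing, for which the state-transition matrix reduces to $\phi(t_0, t) = e^{A(t_0 - t)}$, and approximates the integral with a midpoint rule; it then verifies symmetry and positive semidefiniteness.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical LTI pair (A, B), chosen only for illustration.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

def gramian(t0, t1, steps=400):
    """Controllability Gramian W(t0, t1) by midpoint-rule quadrature.
    For an LTI system, phi(t0, t) = expm(A * (t0 - t))."""
    dt = (t1 - t0) / steps
    W = np.zeros((A.shape[0], A.shape[0]))
    for i in range(steps):
        t = t0 + (i + 0.5) * dt
        M = expm(A * (t0 - t)) @ B   # phi(t0, t) B(t)
        W += (M @ M.T) * dt
    return W

W = gramian(0.0, 1.0)
assert np.allclose(W, W.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(W) >= -1e-12)  # positive semidefinite
```

Since this particular pair is controllable, the Gramian is in fact positive definite (full rank), so any state transfer on $[0, 1]$ is achievable.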

Rank condition for controllability

The Controllability Gramian involves integration of the state-transition matrix of a system. A simpler condition for controllability is a rank condition analogous to the Kalman rank condition for time-invariant systems.

Consider a continuous-time linear system $\Sigma$ smoothly varying in an interval $[t_0, t]$ of $\mathbb{R}$:

$\dot{\mathbf{x}} = A(t)\mathbf{x} + B(t)\mathbf{u}.$

The state-transition matrix $\phi$ is also smooth. Introduce the $n \times m$ matrix-valued function $M_0(t) = \phi(t_0, t)B(t)$ and define

$M_k(t) = \frac{d^k M_0}{dt^k}(t)$, $k = 1, 2, \ldots$

Consider the matrix of matrix-valued functions obtained by listing all the columns of the $M_i$, $i = 0, 1, \ldots, k$:

$M^{(k)}(t) := \begin{bmatrix} M_0(t) & M_1(t) & \cdots & M_k(t) \end{bmatrix}.$

If there exists a $\bar{t} \in [t_0, t]$ and a nonnegative integer $k$ such that $\operatorname{rank} M^{(k)}(\bar{t}) = n$, then $\Sigma$ is controllable. [3]

If $\Sigma$ is also analytically varying in an interval $[t_0, t]$, then $\Sigma$ is controllable on every nontrivial subinterval of $[t_0, t]$ if and only if there exists a $\bar{t} \in [t_0, t]$ and a nonnegative integer $k$ such that $\operatorname{rank} M^{(k)}(\bar{t}) = n$. [3]

The above methods can still be complex to check, since they involve the computation of the state-transition matrix $\phi$. Another equivalent condition is defined as follows. Let $B_0(t) = B(t)$, and for each $i \geq 0$, define

$B_{i+1}(t) = A(t)B_i(t) - \frac{d}{dt}B_i(t).$

In this case, each $B_i$ is obtained directly from the data $(A(t), B(t))$. The system is controllable if there exists a $\bar{t}$ and a nonnegative integer $k$ such that $\operatorname{rank}\begin{bmatrix} B_0(\bar{t}) & B_1(\bar{t}) & \cdots & B_k(\bar{t}) \end{bmatrix} = n$. [3]

Example

Consider a system varying analytically in $(-\infty, \infty)$, with matrices $A(t)$ and $B(t)$. Evaluating the derived matrices $\begin{bmatrix} B_0(1) & B_1(1) & B_2(1) \end{bmatrix}$: since this matrix has rank 3, the system is controllable on every nontrivial interval of $\mathbb{R}$.
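The derived-matrix test can be carried out symbolically. The original example's specific matrices are not reproduced in the text above, so the sketch below uses hypothetical analytic matrices $A(t)$, $B(t)$ of my own choosing, for which the test succeeds at $\bar{t} = 1$.

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical analytic matrices, chosen so the rank test succeeds at t = 1.
A = sp.Matrix([[t, 1, 0], [0, t**3, 0], [0, 0, t**2]])
B = sp.Matrix([0, 1, 1])

def derived_matrices(A, B, k):
    """B_0 = B;  B_{i+1} = A B_i - d/dt B_i.
    No state-transition matrix is needed."""
    mats = [B]
    for _ in range(k):
        mats.append(sp.simplify(A * mats[-1] - mats[-1].diff(t)))
    return mats

B0, B1, B2 = derived_matrices(A, B, 2)
M = sp.Matrix.hstack(B0, B1, B2).subs(t, 1)
assert M.rank() == 3  # rank n = 3, so the system is controllable
```

Because $A(t)$ and $B(t)$ are analytic here, rank $3$ at the single point $\bar{t} = 1$ already gives controllability on every nontrivial interval.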

Continuous linear time-invariant (LTI) systems

Consider the continuous linear time-invariant system

$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$
$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$

where

$\mathbf{x} \in \mathbb{R}^{n}$ is the "state vector",
$\mathbf{y} \in \mathbb{R}^{m}$ is the "output vector",
$\mathbf{u} \in \mathbb{R}^{r}$ is the "input (or control) vector",
$A \in \mathbb{R}^{n \times n}$ is the "state matrix",
$B \in \mathbb{R}^{n \times r}$ is the "input matrix",
$C \in \mathbb{R}^{m \times n}$ is the "output matrix",
$D \in \mathbb{R}^{m \times r}$ is the "feedthrough (or feedforward) matrix".

The $n \times nr$ controllability matrix is given by

$R = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix}.$

The system is controllable if the controllability matrix has full row rank (i.e. $\operatorname{rank}(R) = n$).
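The Kalman rank test is a few lines of linear algebra. A minimal sketch, using a hypothetical double-integrator pair $(A, B)$ as the example:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix R = [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Full row rank of R <=> controllable."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Hypothetical example: double integrator with a force input (controllable).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
assert is_controllable(A, B)
```

Note that `matrix_rank` uses an SVD tolerance; for badly scaled systems the numerical rank can be unreliable, which is one reason Gramian-based tests are preferred in practice.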

Discrete linear time-invariant (LTI) systems

For a discrete-time linear state-space system (i.e. time variable $k \in \mathbb{Z}$) the state equation is

$\mathbf{x}(k+1) = A\mathbf{x}(k) + B\mathbf{u}(k),$

where $A$ is an $n \times n$ matrix and $B$ is an $n \times r$ matrix (i.e. $\mathbf{u}$ is $r$ inputs collected in an $r \times 1$ vector). The test for controllability is that the $n \times nr$ matrix

$\mathcal{C} = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix}$

has full row rank (i.e., $\operatorname{rank}(\mathcal{C}) = n$). That is, if the system is controllable, $\mathcal{C}$ will have $n$ columns that are linearly independent; if $n$ columns of $\mathcal{C}$ are linearly independent, each of the $n$ states is reachable by giving the system proper inputs through the variable $\mathbf{u}(k)$.

Derivation

Given the state $\mathbf{x}(0)$ at an initial time, arbitrarily denoted as $k = 0$, the state equation gives $\mathbf{x}(1) = A\mathbf{x}(0) + B\mathbf{u}(0)$, then $\mathbf{x}(2) = A\mathbf{x}(1) + B\mathbf{u}(1) = A^{2}\mathbf{x}(0) + AB\mathbf{u}(0) + B\mathbf{u}(1)$, and so on with repeated back-substitutions of the state variable, eventually yielding

$\mathbf{x}(n) = A^{n}\mathbf{x}(0) + A^{n-1}B\mathbf{u}(0) + \cdots + B\mathbf{u}(n-1)$

or equivalently

$\mathbf{x}(n) - A^{n}\mathbf{x}(0) = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \mathbf{u}(n-1) \\ \vdots \\ \mathbf{u}(0) \end{bmatrix}.$

Imposing any desired value of the state vector $\mathbf{x}(n)$ on the left side, this can always be solved for the stacked vector of control vectors if and only if the matrix of matrices at the beginning of the right side has full row rank.
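The derivation is constructive: when the controllability matrix is square and invertible, the stacked input vector can be solved for directly and the resulting inputs drive the state to the target in $n$ steps. A minimal sketch with a hypothetical single-input pair $(A, B)$ and a hypothetical target state:

```python
import numpy as np

# Hypothetical controllable pair (n = 2, single input).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix [B, AB]; here it is square and invertible.
R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

x0 = np.array([0.0, 0.0])
x_target = np.array([3.0, -1.0])

# Solve  x(n) - A^n x0 = R [u(n-1); ...; u(0)]  for the stacked inputs.
rhs = x_target - np.linalg.matrix_power(A, n) @ x0
u_stacked = np.linalg.solve(R, rhs)
u = u_stacked[::-1]  # reorder so u[k] is the input applied at step k

# Simulate forward to confirm the transfer.
x = x0.copy()
for k in range(n):
    x = A @ x + B.ravel() * u[k]
assert np.allclose(x, x_target)
```

With more inputs than needed ($nr > n$), the same system is underdetermined and a least-squares or minimum-energy solution would be used instead of `solve`.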

Example

For example, consider the case when $n = 2$ and $r = 1$ (i.e. only one control input). Thus, $B$ and $AB$ are $2 \times 1$ vectors. If $\begin{bmatrix} B & AB \end{bmatrix}$ has rank 2 (full rank), then $B$ and $AB$ are linearly independent and span the entire plane. If the rank is 1, then $B$ and $AB$ are collinear and do not span the plane.

Assume that the initial state is zero.

At time $k = 1$: $\mathbf{x}(1) = B\mathbf{u}(0).$

At time $k = 2$: $\mathbf{x}(2) = AB\mathbf{u}(0) + B\mathbf{u}(1).$

At time $k = 1$ all of the reachable states are on the line formed by the vector $B$. At time $k = 2$ all of the reachable states are linear combinations of $AB$ and $B$. If the system is controllable then these two vectors span the entire plane, and this is achieved by time $k = 2$. The assumption made that the initial state is zero is merely for convenience: clearly, if all states can be reached from the origin then any state can be reached from any other state (merely a shift in coordinates).

This example holds for all positive $n$, but the case of $n = 2$ is easier to visualize.

Analogy for example of n = 2

Consider an analogy to the previous example system. You are sitting in your car on an infinite, flat plane, facing north. The goal is to reach any point in the plane by driving a distance in a straight line, coming to a full stop, turning, and then driving another distance, again in a straight line. If your car has no steering then you can only drive straight, which means you can only drive on a line (in this case the north–south line, since you started facing north). The lack of steering is analogous to the rank of $\begin{bmatrix} B & AB \end{bmatrix}$ being 1 (the two distances you drove are on the same line).

Now, if your car did have steering then you could easily drive to any point in the plane, and this is the analogous case to the rank of $\begin{bmatrix} B & AB \end{bmatrix}$ being 2.

If you change this example to $n = 3$, then the analogy is flying in space by a sequence of straight-line maneuvers to reach any position in 3D space (ignoring the orientation of the aircraft).

Although the 3-dimensional case is harder to visualize, the concept of controllability is still analogous.

Nonlinear systems

Nonlinear systems in the control-affine form

$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \sum_{i=1}^{m} \mathbf{g}_i(\mathbf{x})\, u_i$

are locally accessible about $\mathbf{x}_0$ if the accessibility distribution $R$ spans $n$-dimensional space, where $n$ equals the dimension of $\mathbf{x}$ and $R$ is given by: [4]

$R = \begin{bmatrix} \mathbf{g}_1 & \cdots & \mathbf{g}_m & [\operatorname{ad}_{\mathbf{g}_i}^{k} \mathbf{g}_j] & \cdots & [\operatorname{ad}_{\mathbf{f}}^{k} \mathbf{g}_i] \end{bmatrix}.$

Here, $\operatorname{ad}_{\mathbf{f}}^{k} \mathbf{g}$ is the repeated Lie bracket operation defined by

$\operatorname{ad}_{\mathbf{f}}^{k} \mathbf{g} = \big[\mathbf{f}, [\mathbf{f}, [\cdots [\mathbf{f}, \mathbf{g}] \cdots ]]\big].$

The controllability matrix for linear systems in the previous section can in fact be derived from this equation.
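To see how the linear test is recovered: for the linear special case $\mathbf{f}(\mathbf{x}) = A\mathbf{x}$ with a constant input field $\mathbf{g}(\mathbf{x}) = B$, the repeated Lie brackets reproduce the columns $A^{k}B$ up to sign, so the accessibility distribution spans the same space as the Kalman controllability matrix. A symbolic sketch with hypothetical $A$ and $B$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

def lie_bracket(f, g, x):
    """Lie bracket of vector fields: [f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

# Hypothetical linear drift f(x) = A x and constant input field g(x) = B.
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([0, 1])
f = A * x
g = B

ad1 = lie_bracket(f, g, x)    # ad_f g   = -A B
ad2 = lie_bracket(f, ad1, x)  # ad_f^2 g =  A^2 B
assert ad1 == -A * B
assert ad2 == A * A * B

# {g, ad_f g} spans the same space as the Kalman matrix [B, AB].
assert sp.Matrix.hstack(g, ad1).rank() == 2
```

The sign alternation $(-1)^{k}A^{k}B$ does not affect the span, which is why the rank conditions agree.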

Null controllability

If a discrete control system is null-controllable, it means that for every initial state $\mathbf{x}(0) = \mathbf{x}_0$ there exists a control sequence $\mathbf{u}(k)$ such that $\mathbf{x}(k_0) = 0$ for some step $k_0$. Equivalently, null controllability holds if and only if there exists a matrix $F$ such that $A + BF$ is nilpotent.

This can be easily shown by controllable-uncontrollable decomposition.

Output controllability

Output controllability is the related notion for the output of the system (denoted $\mathbf{y}$ in the previous equations); output controllability describes the ability of an external input to move the output from any initial condition to any final condition in a finite time interval. There need not be any relationship between state controllability and output controllability: a state-controllable system is not necessarily output controllable, and an output-controllable system is not necessarily state controllable.

For a linear continuous-time system, like the example above, described by matrices $A$, $B$, $C$, and $D$, the $m \times (n+1)r$ output controllability matrix

$\begin{bmatrix} CB & CAB & CA^{2}B & \cdots & CA^{n-1}B & D \end{bmatrix}$

has full row rank (i.e. rank $m$) if and only if the system is output controllable.[1]: 742
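The output controllability test is the same rank computation with the blocks premultiplied by $C$ and $D$ appended. A minimal sketch with a hypothetical system (the matrices are illustrative, not from the text above):

```python
import numpy as np

def output_controllability_matrix(A, B, C, D):
    """[CB, CAB, ..., C A^(n-1) B, D] — rank m <=> output controllable."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, i) @ B for i in range(n)]
    blocks.append(D)
    return np.hstack(blocks)

# Hypothetical system: double integrator, position output.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

OC = output_controllability_matrix(A, B, C, D)
m = C.shape[0]
assert np.linalg.matrix_rank(OC) == m  # output controllable
```

Since $m \le n$ in typical systems, output controllability is a weaker requirement than full state controllability, consistent with the remark above that neither property implies the other.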

Controllability under input constraints

In systems with limited control authority, it is often no longer possible to move any initial state to any final state inside the controllable subspace. This phenomenon is caused by constraints on the input that could be inherent to the system (e.g. due to a saturating actuator) or imposed on the system for other reasons (e.g. due to safety-related concerns). The controllability of systems with input and state constraints is studied in the context of reachability [5] and viability theory. [6]

Controllability in the behavioral framework

In the so-called behavioral system-theoretic approach due to Willems, the models considered do not directly define an input–output structure. In this framework systems are described by the admissible trajectories of a collection of variables, some of which might be interpreted as inputs or outputs.

A system is then defined to be controllable in this setting if any past part of a behavior (trajectory of the external variables) can be concatenated with any future trajectory of the behavior in such a way that the concatenation is contained in the behavior, i.e. is part of the admissible system behavior.[7]: 151

Stabilizability

A slightly weaker notion than controllability is that of stabilizability. A system is said to be stabilizable when all uncontrollable state variables can be made to have stable dynamics. Thus, even though some of the state variables cannot be controlled (as determined by the controllability test above) all the state variables will still remain bounded during the system's behavior. [8]

Reachable set

Let $T \in \mathbb{T}$ and $x \in X$ (where $X$ is the set of all possible states and $\mathbb{T}$ is an interval of time). The reachable set from $x$ in time $T$ is defined as: [3]

$R^{T}(x) = \{ z \in X : x \overset{T}{\to} z \},$

where $x \overset{T}{\to} z$ denotes that there exists a state transition from $x$ to $z$ in time $T$.

For autonomous systems the reachable set is given by

$\operatorname{Im}(R) = \operatorname{Im}(B) + \operatorname{Im}(AB) + \cdots + \operatorname{Im}(A^{n-1}B),$

where $R$ is the controllability matrix.

In terms of the reachable set, the system is controllable if and only if $\operatorname{Im}(R) = \mathbb{R}^{n}$.

Proof. We have the following equalities:

$R = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}, \qquad \dim \operatorname{Im}(R) = \operatorname{rank}(R).$

Considering that the system is controllable, $R$ must contain $n$ linearly independent columns, so:

$\operatorname{rank}(R) = n = \dim \operatorname{Im}(R), \qquad \text{hence } \operatorname{Im}(R) = \mathbb{R}^{n}.$

A related set to the reachable set is the controllable set, defined by

$C^{T}(x) = \{ z \in X : z \overset{T}{\to} x \},$

i.e. the set of states that can be steered to $x$ in time $T$.

The relation between reachability and controllability is presented by Sontag: [3]

(a) An $n$-dimensional discrete linear system is controllable if and only if

$R(0) = R^{k}(0) = X$

(where $X$ is the set of all possible values or states of $x$ and $k$ is the time step).

(b) A continuous-time linear system is controllable if and only if

$R(0) = R^{\varepsilon}(0) = X$ for all $\varepsilon > 0$,

if and only if $C(0) = C^{\varepsilon}(0) = X$ for all $\varepsilon > 0$.

Example. Let the system be an $n$-dimensional discrete-time-invariant system with transition map $\Phi(n, 0, 0, w)$, where $\Phi$(final time, initial time, state variable, restrictions) denotes the transition of the state variable $x$ from an initial time $0$ to a final time $n$ under the input restrictions $w$.

It follows that the future state is in $R^{n}(0)$ if and only if it is in the image of the linear map

$\operatorname{Im}(R) = R(A, B) := \operatorname{Im}\big(\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}\big),$

which maps

$U^{n} \to X.$

When $U = \mathbb{R}^{m}$ and $X = \mathbb{R}^{n}$, we identify $R(A, B)$ with an $n \times nm$ matrix whose columns are the columns of $B, AB, \ldots, A^{n-1}B$, in that order. If the system is controllable, the rank of $\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is $n$. If this is true, the image of the linear map $R$ is all of $X$. Based on that, we have

$R(0) = R^{n}(0) = X$, with $X = \mathbb{R}^{n}$.
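Membership in the reachable set $R^{n}(0)$ is a column-space test on $R(A, B)$: a state is reachable from the origin in $n$ steps exactly when it lies in $\operatorname{Im}(R)$. A minimal sketch, using a hypothetical uncontrollable pair so that the two outcomes can both be seen:

```python
import numpy as np

def reachable_from_origin(A, B, target, tol=1e-9):
    """Is `target` in Im([B, AB, ..., A^(n-1) B])?"""
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Least-squares projection onto Im(R); zero residual <=> membership.
    w, *_ = np.linalg.lstsq(R, target, rcond=None)
    return np.linalg.norm(R @ w - target) < tol

# Hypothetical uncontrollable pair: the second state is never excited.
A = np.diag([0.5, 0.5])
B = np.array([[1.0], [0.0]])

assert reachable_from_origin(A, B, np.array([2.0, 0.0]))      # in Im(R)
assert not reachable_from_origin(A, B, np.array([0.0, 1.0]))  # not in Im(R)
```

When the pair is controllable, $\operatorname{Im}(R) = \mathbb{R}^{n}$ and the test returns true for every target, matching the identity $R(0) = R^{n}(0) = X$ above.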


Notes

  1. A linear time-invariant system behaves the same but with the coefficients being constant in time.


References

  1. Katsuhiko Ogata (1997). Modern Control Engineering (3rd ed.). Upper Saddle River, NJ: Prentice-Hall. ISBN 978-0-13-227307-7.
  2. Brockett, Roger W. (1970). Finite Dimensional Linear Systems. John Wiley & Sons. ISBN 978-0-471-10585-5.
  3. Eduardo D. Sontag (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). Springer.
  4. Isidori, Alberto (1989). Nonlinear Control Systems, pp. 92–93. Springer-Verlag, London. ISBN 3-540-19916-0.
  5. Claire J. Tomlin; Ian Mitchell; Alexandre M. Bayen; Meeko Oishi (2003). "Computational Techniques for the Verification of Hybrid Systems" (PDF). Proceedings of the IEEE. 91 (7): 986–1001. doi:10.1109/jproc.2003.814621.
  6. Jean-Pierre Aubin (1991). Viability Theory. Birkhäuser. ISBN 978-0-8176-3571-8.
  7. Jan Polderman; Jan Willems (1998). Introduction to Mathematical Systems Theory: A Behavioral Approach (1st ed.). New York: Springer Verlag. ISBN 978-0-387-98266-3.
  8. Brian D. O. Anderson; John B. Moore (1990). Optimal Control: Linear Quadratic Methods. Englewood Cliffs, NJ: Prentice Hall. ISBN 978-0-13-638560-8.