Full state feedback

Full state feedback (FSF), or pole placement, is a method employed in feedback control system theory to place the closed-loop poles of a plant in predetermined locations in the s-plane.[1] Placing poles is desirable because the location of the poles corresponds directly to the eigenvalues of the system, which control the characteristics of the response of the system. The system must be controllable in order to implement this method.

Principle

(Figure: System in open-loop)

If the closed-loop dynamics can be represented by the state space equation (see State space (controls))

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t),$$

with output equation

$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t),$$

then the poles of the system transfer function are the roots of the characteristic equation given by

$$\det[sI - A] = 0.$$

Full state feedback is utilized by commanding the input vector $\mathbf{u}$. Consider an input proportional (in the matrix sense) to the state vector,

$$\mathbf{u} = -K\mathbf{x}.$$

(Figure: System with state feedback (closed-loop))

Substituting into the state space equations above, we have

$$\dot{\mathbf{x}}(t) = (A - BK)\mathbf{x}(t), \qquad \mathbf{y}(t) = (C - DK)\mathbf{x}(t).$$
The poles of the FSF system are given by the characteristic equation of the matrix $A - BK$, namely $\det[sI - (A - BK)] = 0$. Comparing the terms of this equation with those of the desired characteristic equation yields the values of the feedback matrix $K$ which force the closed-loop eigenvalues to the pole locations specified by the desired characteristic equation.[2]
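As a minimal sketch of this coefficient-matching step (assuming SymPy is available; the double-integrator plant and the pole choices below are hypothetical, for illustration only):

```python
import sympy as sp

s, k1, k2 = sp.symbols('s k1 k2')

# Hypothetical double-integrator plant, used only for illustration
A = sp.Matrix([[0, 1], [0, 0]])
B = sp.Matrix([[0], [1]])
K = sp.Matrix([[k1, k2]])

# Closed-loop characteristic polynomial det(sI - (A - B K))
char_poly = (s * sp.eye(2) - (A - B * K)).det().expand()

# Desired characteristic polynomial for poles at s = -2 and s = -3
desired = sp.expand((s + 2) * (s + 3))

# Equate coefficients of like powers of s and solve for the gains
gains = sp.solve(sp.Poly(char_poly - desired, s).coeffs(), [k1, k2])
print(gains)  # {k1: 6, k2: 5}
```

For hand calculation the same comparison is done term by term, as the worked example below shows.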

Example of FSF

Consider a system given by the following state space equations:

$$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\mathbf{u}.$$

The uncontrolled system has open-loop poles at $s = -1$ and $s = -2$. These poles are the eigenvalues of the $A$ matrix and they are the roots of $\det[sI - A] = 0$. Suppose, for considerations of the response, we wish the controlled system eigenvalues to be located at $s = -1$ and $s = -5$, which are not the poles we currently have. The desired characteristic equation is then $s^2 + 6s + 5 = 0$, from $(s + 1)(s + 5) = 0$.

Following the procedure given above, the FSF controlled system characteristic equation is

$$\det[sI - (A - BK)] = s^2 + (3 + k_2)s + (2 + k_1),$$

where

$$K = \begin{bmatrix} k_1 & k_2 \end{bmatrix}.$$

Upon setting this characteristic equation equal to the desired characteristic equation, we find

$$K = \begin{bmatrix} 3 & 3 \end{bmatrix}.$$

Therefore, setting $K = \begin{bmatrix} 3 & 3 \end{bmatrix}$ forces the closed-loop poles to the desired locations, affecting the response as desired.
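The result can be verified numerically, and cross-checked against SciPy's pole-placement routine (a sketch assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[3.0, 3.0]])

# Eigenvalues of the closed-loop matrix A - B K should be -1 and -5
print(np.linalg.eigvals(A - B @ K))

# Cross-check: solve the placement problem directly
print(place_poles(A, B, [-1.0, -5.0]).gain_matrix)  # approx. [[3., 3.]]
```

For a single-input system like this one the placement gain is unique, so the routine reproduces the hand-derived $K$.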

This only works for single-input systems. A multiple-input system has a feedback matrix $K$ that is not unique, so choosing the best values for $K$ is not trivial. A linear-quadratic regulator might be used for such applications.[citation needed]
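A minimal LQR sketch for a multi-input plant (assuming NumPy and SciPy are available; the plant matrices and the weights $Q$ and $R$ below are hypothetical design choices, not prescribed by the method):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # two inputs, so K is not unique under pole placement
Q = np.eye(2)                # state weighting (design choice)
R = np.eye(2)                # input weighting (design choice)

# Solve the continuous-time algebraic Riccati equation for P,
# then form the optimal gain K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

Rather than placing the poles directly, LQR selects a unique $K$ by minimizing a quadratic cost, which resolves the non-uniqueness of multi-input pole placement.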



References

1. Control Design Using Pole Placement
• Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). Springer. ISBN 0-387-98489-5.