Ackermann's formula


In control theory, Ackermann's formula provides a method for designing controllers to achieve desired system behavior by directly calculating the feedback gains needed to place the closed-loop system's poles (eigenvalues)[1] at specific locations (the pole allocation problem).


These poles directly influence how the system responds to inputs and disturbances. Ackermann's formula provides a direct way to calculate the necessary adjustments, specifically the feedback gains, needed to move the system's poles to the target locations. This method, developed by Jürgen Ackermann,[2] is particularly useful for systems that don't change over time (time-invariant systems), allowing engineers to precisely control the system's dynamics, such as its stability and responsiveness.

State feedback control

Consider a linear continuous-time time-invariant system with the state-space representation

$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t),$$

where x is the state vector, u is the input vector, and A, B, C are matrices of compatible dimensions that represent the dynamics of the system. An input-output description of this system is given by the transfer function

$$G(s) = C(sI - A)^{-1}B = C\,\frac{\operatorname{adj}(sI - A)}{\det(sI - A)}\,B,$$

where det is the determinant and adj is the adjugate. Since the denominator of the right-hand expression is the characteristic polynomial of A, the poles of G are eigenvalues of A (the converse is not necessarily true, since there may be cancellations between terms of the numerator and the denominator). If the system is unstable, has a slow response, or has any other characteristic that does not meet the design criteria, it could be advantageous to alter it. The matrices A, B, C, however, may represent physical parameters of a system that cannot be changed. Thus, one approach to this problem is to create a feedback loop with a gain k that feeds the state variable x back into the input u.
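The correspondence between the poles of G and the eigenvalues of A can be checked numerically. A minimal sketch, using an assumed 2×2 system matrix (not from the article):

```python
import numpy as np

# Assumed system matrix for illustration.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Coefficients of the characteristic polynomial det(sI - A) = s^2 + 3s + 2.
char_poly = np.poly(A)

# Its roots (the candidate poles of G) coincide with the eigenvalues of A.
poles = np.roots(char_poly)
eigs = np.linalg.eigvals(A)
```

When the numerator of G cancels one of these roots, that eigenvalue of A is not a pole of G, which is why the converse inclusion can fail.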

If the system is controllable, there is always an input u(t) such that any initial state x0 can be transferred to any other state x(t). With that in mind, a feedback loop can be added to the system with the control input u(t) = r(t) − kx(t), such that the new dynamics of the system will be

$$\dot{x}(t) = Ax(t) + B\left(r(t) - kx(t)\right) = (A - Bk)x(t) + Br(t).$$

In this new realization, the poles will be dependent on the characteristic polynomial Δnew of A − Bk, that is

$$\Delta_{\text{new}}(s) = \det\left(sI - (A - Bk)\right).$$

Ackermann's formula

Computing the characteristic polynomial and choosing a suitable feedback matrix can be a challenging task, especially for larger systems. One way to make computations easier is Ackermann's formula. For simplicity's sake, consider a single-input system with no reference parameter r, that is, a control law

$$u(t) = -k^{T}x(t),$$

where kT is a feedback vector of compatible dimensions. Ackermann's formula states that the design process can be simplified by computing only the following equation:

$$k^{T} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\mathcal{C}^{-1}\Delta_{\text{new}}(A),$$

in which Δnew(A) is the desired characteristic polynomial evaluated at matrix A, and $\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is the controllability matrix of the system.
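The formula translates almost line by line into NumPy. The sketch below is illustrative, not from the article; the double-integrator test system and the target poles are assumptions:

```python
import numpy as np

def ackermann(A, b, desired_poles):
    """Feedback vector k^T placing the eigenvalues of A - b k^T at
    desired_poles, via k^T = [0 ... 0 1] C^{-1} Delta_new(A)."""
    n = A.shape[0]
    # Controllability matrix C = [b  Ab  ...  A^{n-1} b].
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial evaluated at A (Horner's scheme).
    coeffs = np.poly(desired_poles)          # [1, a_{n-1}, ..., a_0]
    Delta = np.zeros((n, n))
    for c in coeffs:
        Delta = Delta @ A + c * np.eye(n)
    # [0 ... 0 1] C^{-1} is the last row of C^{-1}.
    last_row = np.linalg.solve(C.T, np.eye(n)[:, -1])
    return last_row @ Delta

# Usage on an assumed double integrator, placing poles at -1 and -2:
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
k = ackermann(A, b, [-1.0, -2.0])   # k = [2, 3]
```

Solving with the transposed controllability matrix avoids forming the full inverse; only the last row of C⁻¹ is ever needed.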

Proof

This proof is based on the Encyclopedia of Life Support Systems entry on Pole Placement Control.[3] Assume that the system is controllable. The characteristic polynomial of $A_{CL} = A - Bk^{T}$ is given by

$$\Delta(s) = \det(sI - A_{CL}) = s^{n} + \sum_{k=0}^{n-1}\alpha_{k}s^{k}.$$

Calculating the powers of $A_{CL}$ results in

$$A_{CL}^{0} = I$$
$$A_{CL}^{1} = A - Bk^{T}$$
$$A_{CL}^{2} = A^{2} - ABk^{T} - Bk^{T}A_{CL}$$
$$\vdots$$
$$A_{CL}^{n} = A^{n} - A^{n-1}Bk^{T} - \cdots - Bk^{T}A_{CL}^{n-1}.$$

Replacing the previous equations into $\Delta(A_{CL})$ yields

$$\Delta(A_{CL}) = \left(A^{n} - A^{n-1}Bk^{T} - \cdots - Bk^{T}A_{CL}^{n-1}\right) + \alpha_{n-1}\left(A^{n-1} - A^{n-2}Bk^{T} - \cdots - Bk^{T}A_{CL}^{n-2}\right) + \cdots + \alpha_{1}\left(A - Bk^{T}\right) + \alpha_{0}I.$$

Rewriting the above equation as a matrix product and omitting terms in which $k^{T}$ does not appear isolated yields

$$\Delta(A_{CL}) = \Delta(A) - \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \alpha_{1}k^{T} + \alpha_{2}k^{T}A_{CL} + \cdots + k^{T}A_{CL}^{n-1} \\ \vdots \\ \alpha_{n-1}k^{T} + k^{T}A_{CL} \\ k^{T} \end{bmatrix}.$$

From the Cayley–Hamilton theorem, Δ(ACL) = 0, thus

$$\Delta(A) = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \alpha_{1}k^{T} + \alpha_{2}k^{T}A_{CL} + \cdots + k^{T}A_{CL}^{n-1} \\ \vdots \\ \alpha_{n-1}k^{T} + k^{T}A_{CL} \\ k^{T} \end{bmatrix}.$$

Note that $\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is the controllability matrix of the system. Since the system is controllable, $\mathcal{C}$ is invertible. Thus,

$$\mathcal{C}^{-1}\Delta(A) = \begin{bmatrix} \alpha_{1}k^{T} + \alpha_{2}k^{T}A_{CL} + \cdots + k^{T}A_{CL}^{n-1} \\ \vdots \\ \alpha_{n-1}k^{T} + k^{T}A_{CL} \\ k^{T} \end{bmatrix}.$$

To find $k^{T}$, both sides can be multiplied by the vector $\begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}$, which selects the last row, giving

$$\begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}\mathcal{C}^{-1}\Delta(A) = k^{T}.$$

Thus,

$$k^{T} = \begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}\mathcal{C}^{-1}\Delta(A).$$

Example

Consider [4]

$$\dot{x}(t) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u(t).$$

We know from the characteristic polynomial of A that the system is unstable, since

$$\det(sI - A) = (s-1)(s-2) - 1 = s^{2} - 3s + 1$$

has only positive roots: the matrix A will only have positive eigenvalues. Thus, to stabilize the system we shall put a feedback gain

$$k^{T} = \begin{bmatrix} k_{1} & k_{2} \end{bmatrix}.$$

From Ackermann's formula, we can find a matrix k that will change the system so that its characteristic equation will be equal to a desired polynomial. Suppose we want

$$\Delta_{\text{new}}(s) = s^{2} + 11s + 30 = (s+5)(s+6),$$

which places the closed-loop poles at −5 and −6.

Thus, computing the controllability matrix yields

$$\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad \mathcal{C}^{-1} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}.$$

Also, we have that

$$\Delta_{\text{new}}(A) = A^{2} + 11A + 30I = \begin{bmatrix} 43 & 14 \\ 14 & 57 \end{bmatrix}.$$

Finally, from Ackermann's formula

$$k^{T} = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 43 & 14 \\ 14 & 57 \end{bmatrix} = \begin{bmatrix} 14 & 57 \end{bmatrix}.$$
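This example can be checked numerically. The sketch below assumes the matrices A = [[1, 1], [1, 2]], B = [1, 0]ᵀ, and the target polynomial s² + 11s + 30 used in this example:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([[1.0], [0.0]])

# Desired polynomial s^2 + 11s + 30 evaluated at A.
Delta = A @ A + 11.0 * A + 30.0 * np.eye(2)

# Controllability matrix and Ackermann's formula k^T = [0 1] C^{-1} Delta_new(A).
C = np.hstack([b, A @ b])
kT = np.array([0.0, 1.0]) @ np.linalg.inv(C) @ Delta

# Closed-loop matrix A - b k^T should have the requested eigenvalues -5 and -6.
closed_loop = A - b @ kT[np.newaxis, :]
```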

State observer design

Ackermann's formula can also be used for the design of state observers. Consider the linear discrete-time observed system

$$x[k+1] = Ax[k] + Bu[k], \qquad y[k] = Cx[k],$$

with observer gain L. Then Ackermann's formula for the design of state observers is noted as

$$L = \Delta_{\text{new}}(A)\,\mathcal{O}^{-1}\begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}^{T}$$

with the observability matrix

$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}.$$

Here it is important to note that, by duality with the state-feedback case, the transposed observability matrix $\mathcal{O}^{T}$ and the transposed system matrix $A^{T}$ take the roles of $\mathcal{C}$ and $A$: the observer formula follows from applying Ackermann's formula to the pair $(A^{T}, C^{T})$.

Ackermann's formula can also be applied to continuous-time observed systems.
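The observer formula can be sketched the same way. The system below (a double integrator observed through its position) and the desired observer polynomial (s+3)(s+4) = s² + 7s + 12 are assumptions for illustration:

```python
import numpy as np

# Assumed observed system (illustrative values).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
c = np.array([[1.0, 0.0]])          # output row vector

n = A.shape[0]
# Observability matrix O = [c; cA; ...; cA^{n-1}].
O = np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(n)])

# Desired observer polynomial s^2 + 7s + 12 evaluated at A.
Delta = A @ A + 7.0 * A + 12.0 * np.eye(n)

# Ackermann's formula for observers: L = Delta_new(A) O^{-1} [0 ... 0 1]^T.
L = Delta @ np.linalg.inv(O) @ np.eye(n)[:, -1:]

# The error dynamics A - L c should have the requested eigenvalues -3 and -4.
observer_eigs = np.linalg.eigvals(A - L @ c)
```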


References

  1. Shinners, Stanley M. Modern Control System Theory and Design (2nd ed.).
  2. Ackermann, J. (1972). "Der Entwurf linearer Regelungssysteme im Zustandsraum" (PDF). at – Automatisierungstechnik. 20 (1–12): 297–300. doi:10.1524/auto.1972.20.112.297. ISSN 2196-677X. S2CID 111291582.
  3. Ackermann, J. E. (2009). "Pole Placement Control". Control Systems, Robotics and Automation. Unbehauen, Heinz (ed.). Oxford: Eolss Publishers Co. Ltd. ISBN 9781848265905. OCLC 703352455.
  4. "Topic #13: 16.31 Feedback Control" (PDF). web.mit.edu. Retrieved 2017-07-06.