LaSalle's invariance principle

LaSalle's invariance principle (also known as the invariance principle, [1] Barbashin–Krasovskii–LaSalle principle, [2] or Krasovskii–LaSalle principle) is a criterion for the asymptotic stability of an autonomous (possibly nonlinear) dynamical system.

Global version

Suppose a system is represented as

$\dot{\mathbf{x}} = f(\mathbf{x}),$

where $\mathbf{x}$ is the vector of variables, with

$f(\mathbf{0}) = \mathbf{0}.$

If a $C^1$ (see Smoothness) function $V(\mathbf{x})$ can be found such that

$\dot{V}(\mathbf{x}) \le 0$ for all $\mathbf{x}$ (negative semidefinite),

then the set of accumulation points of any bounded trajectory is contained in $M$, where $M$ is the union of complete trajectories contained entirely in the set $\{\mathbf{x} : \dot{V}(\mathbf{x}) = 0\}$.

If we additionally have that the function $V$ is positive definite, i.e.

$V(\mathbf{x}) > 0$ for all $\mathbf{x} \ne \mathbf{0}$, and $V(\mathbf{0}) = 0,$

and if $M$ contains no trajectory of the system except the trivial trajectory $\mathbf{x}(t) = \mathbf{0}$ for $t \ge 0$, then the origin is asymptotically stable.

Furthermore, if $V$ is radially unbounded, i.e.

$V(\mathbf{x}) \to \infty$ as $\|\mathbf{x}\| \to \infty,$

then the origin is globally asymptotically stable.
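
As a brief illustration of these hypotheses (a standard textbook example, not taken from the sources cited here), consider the damped oscillator

$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 - x_2,$

with $V(x_1, x_2) = \tfrac{1}{2}(x_1^{2} + x_2^{2})$. Then $\dot{V} = x_1\dot{x}_1 + x_2\dot{x}_2 = -x_2^{2} \le 0$, so the set $\{\dot{V} = 0\}$ is the line $x_2 = 0$. Any complete trajectory contained in this line must satisfy $\dot{x}_2 = -x_1 = 0$, so $M = \{\mathbf{0}\}$; since $V$ is positive definite and radially unbounded, the origin is globally asymptotically stable.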

Local version

If the conditions

$V(\mathbf{x}) > 0$ when $\mathbf{x} \ne \mathbf{0}$, and $\dot{V}(\mathbf{x}) \le 0$

hold only for $\mathbf{x}$ in some neighborhood $D$ of the origin, and the set

$\{\dot{V}(\mathbf{x}) = 0\} \cap D$

does not contain any trajectories of the system besides the trajectory $\mathbf{x}(t) = \mathbf{0}$, $t \ge 0$, then the local version of the invariance principle states that the origin is locally asymptotically stable.

Relation to Lyapunov theory

If $\dot{V}$ is negative definite, then the global asymptotic stability of the origin is a consequence of Lyapunov's second theorem. The invariance principle gives a criterion for asymptotic stability in the case when $\dot{V}$ is only negative semidefinite.

Examples

[Figure: A plot of the vector field $(\dot{x}, \dot{y}) = (-y - x^{3}, x^{5})$ and the Lyapunov function $V(x, y) = x^{6} + 3y^{2}$.]

Simple example

Example taken from "LaSalle's Invariance Principle, Lecture 23, Math 634", by Christopher Grant. [3]

Consider the vector field $(\dot{x}, \dot{y}) = (-y - x^{3}, x^{5})$ in the plane. The function $V(x, y) = x^{6} + 3y^{2}$ satisfies $\dot{V} = 6x^{5}\dot{x} + 6y\dot{y} = -6x^{8} \le 0$, and is radially unbounded. Moreover, $\dot{V}$ vanishes only on the line $x = 0$, where $\dot{x} = -y$ is non-zero unless $y = 0$, so the only complete trajectory contained in $\{\dot{V} = 0\}$ is the origin itself; the invariance principle therefore shows that the origin is globally asymptotically stable.
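
The conclusion can be checked numerically. The following is a minimal sketch (not part of Grant's notes; SciPy's solve_ivp, the initial condition, and the tolerances are illustrative choices) that integrates the system and verifies that $V$ is non-increasing along the trajectory:

    # Numerical check: integrate (x', y') = (-y - x**3, x**5) and verify
    # that V(x, y) = x**6 + 3*y**2 never increases along the trajectory.
    import numpy as np
    from scipy.integrate import solve_ivp

    def field(t, s):
        x, y = s
        return [-y - x**3, x**5]

    sol = solve_ivp(field, (0.0, 200.0), [1.5, -1.0],
                    rtol=1e-9, atol=1e-12, max_step=0.05)

    V = sol.y[0]**6 + 3.0 * sol.y[1]**2
    assert np.all(np.diff(V) <= 1e-6)  # nonincreasing up to solver tolerance
    print("initial V:", V[0], "final V:", V[-1])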

Pendulum with friction

This section will apply the invariance principle to establish the local asymptotic stability of a simple system, the pendulum with friction. This system can be modeled with the differential equation [4]

$m\ell\ddot{\theta} = -mg\sin\theta - k\ell\dot{\theta},$

where $\theta$ is the angle the pendulum makes with the vertical normal, $m$ is the mass of the pendulum, $\ell$ is the length of the pendulum, $k$ is the friction coefficient, and $g$ is the acceleration due to gravity.

This, in turn, can be written as the system of equations

$\dot{x}_1 = x_2,$
$\dot{x}_2 = -\frac{g}{\ell}\sin x_1 - \frac{k}{m}x_2,$

where $x_1 = \theta$ and $x_2 = \dot{\theta}$.

Using the invariance principle, it can be shown that all trajectories that begin in a ball of a certain size around the origin asymptotically converge to the origin. We define $V(x_1, x_2)$ as

$V(x_1, x_2) = \frac{g}{\ell}(1 - \cos x_1) + \frac{1}{2}x_2^{2}.$

This is simply the scaled energy of the system. [4] Clearly, $V$ is positive definite in an open ball of radius $\pi$ around the origin. Computing the derivative,

$\dot{V}(x_1, x_2) = \frac{g}{\ell}\sin(x_1)\,\dot{x}_1 + x_2\dot{x}_2 = -\frac{k}{m}x_2^{2}.$
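
This computation can be verified symbolically; the following SymPy sketch (an illustrative check, not from the cited lecture notes) confirms that the cross terms cancel and $\dot{V} = -\frac{k}{m}x_2^{2}$:

    # Symbolic check that dV/dt = -(k/m) * x2**2 along trajectories of the
    # pendulum system, using SymPy.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2', real=True)
    g, l, k, m = sp.symbols('g l k m', positive=True)

    # Right-hand side of the pendulum system.
    f1 = x2
    f2 = -(g / l) * sp.sin(x1) - (k / m) * x2

    # Scaled energy.
    V = (g / l) * (1 - sp.cos(x1)) + sp.Rational(1, 2) * x2**2

    # Derivative of V along trajectories: gradient of V dotted with (f1, f2).
    Vdot = sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2

    assert sp.simplify(Vdot + (k / m) * x2**2) == 0
    print("Vdot =", sp.simplify(Vdot))  # prints -k*x2**2/m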

Observe that $\dot{V}(x_1, x_2) \le 0$ and $\dot{V}(0, 0) = 0$. If it were true that $\dot{V} < 0$ away from the origin, we could conclude that every trajectory approaches the origin by Lyapunov's second theorem. Unfortunately, $\dot{V}$ is only negative semidefinite, since it vanishes whenever $x_2 = 0$, and $x_1$ can be non-zero when $x_2 = 0$. However, the set

$S = \{(x_1, x_2) : \dot{V}(x_1, x_2) = 0,\ \|(x_1, x_2)\| < \pi\},$

which is simply the set

$S = \{(x_1, x_2) : x_2 = 0,\ |x_1| < \pi\},$

does not contain any trajectory of the system, except the trivial trajectory $(x_1, x_2) = (0, 0)$. Indeed, if at some time $t$ we have $x_2(t) = 0$ but $x_1(t) \ne 0$, then because $x_1$ is less than $\pi$ away from the origin, $\sin x_1(t) \ne 0$ and therefore $\dot{x}_2(t) = -\frac{g}{\ell}\sin x_1(t) \ne 0$. As a result, the trajectory will not stay in the set $S$.

All the conditions of the local version of the invariance principle are satisfied, and we can conclude that every trajectory that begins in some neighborhood of the origin will converge to the origin as $t \to \infty$. [5]
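
The convergence can also be illustrated numerically. The sketch below assumes the illustrative parameter values $m = \ell = k = g = 1$ (any positive values would do; SciPy is an arbitrary solver choice) and integrates the system from a point inside the ball of radius $\pi$:

    # Simulate the pendulum with friction and watch the scaled energy
    # V(x1, x2) = (g/l)*(1 - cos(x1)) + x2**2 / 2 decay toward zero.
    import numpy as np
    from scipy.integrate import solve_ivp

    g, l, k, m = 1.0, 1.0, 1.0, 1.0  # assumed illustrative parameters

    def pendulum(t, s):
        x1, x2 = s
        return [x2, -(g / l) * np.sin(x1) - (k / m) * x2]

    # Start inside the ball of radius pi around the origin.
    sol = solve_ivp(pendulum, (0.0, 60.0), [2.0, 0.0], rtol=1e-9, atol=1e-12)

    V = (g / l) * (1 - np.cos(sol.y[0])) + 0.5 * sol.y[1]**2
    print("initial V:", V[0], "final V:", V[-1])
    print("final state:", sol.y[:, -1])  # close to (0, 0)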

History

The general result was independently discovered by J.P. LaSalle (then at RIAS) and N.N. Krasovskii, who published it in 1960 and 1959, respectively. While LaSalle was the first author in the West to publish the general theorem, a special case had already been communicated in 1952 by Barbashin and Krasovskii, followed by Krasovskii's publication of the general result in 1959. [6]

Related Research Articles

<span class="mw-page-title-main">Polar coordinate system</span> Coordinates comprising a distance and an angle

In mathematics, the polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction. The reference point is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate, radial distance or simply radius, and the angle is called the angular coordinate, polar angle, or azimuth. Angles in polar notation are generally expressed in either degrees or radians.

<span class="mw-page-title-main">Equations of motion</span> Equations that describe the behavior of a physical system

In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice are generalized coordinates which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions for the differential equations describing the motion of the dynamics.

<span class="mw-page-title-main">Lyapunov exponent</span> The rate of separation of infinitesimally close trajectories

In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector diverge at a rate given by

<span class="mw-page-title-main">Hamiltonian mechanics</span> Formulation of classical mechanics using momenta

In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena.

In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Lyapunov functions are important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state-space Markov chains usually under the name Foster–Lyapunov functions.

Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point stay near forever, then is Lyapunov stable. More strongly, if is Lyapunov stable and all solutions that start out near converge to , then is said to be asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.

In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal that forces the system to "slide" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system as the system both flows through a continuous state space but also moves through different discrete control modes.

In control engineering and system identification, a state-space representation is a mathematical model of a physical system specified as a set of input, output, and variables related by first-order differential equations or difference equations. Such variables, called state variables, evolve over time in a way that depends on the values they have at any given instant and on the externally imposed values of input variables. Output variables’ values depend on the state variable values and may also depend on the input variable values.

In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.

The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems.

<span class="mw-page-title-main">Stability theory</span> Part of mathematics that addresses the stability of solutions

In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.

In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or asymptotically stable. Lyapunov stability means that if the system starts in a state in some domain D, then the state will remain in D for all time. For asymptotic stability, the state is also required to converge to . A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is whether for any state x there exists a control such that the system can be brought to the zero state asymptotically by applying the control u.

In control theory, backstepping is a technique developed circa 1990 by Myroslav Sparavalo, Petar V. Kokotovic, and others for designing stabilizing controls for a special class of nonlinear dynamical systems. These systems are built from subsystems that radiate out from an irreducible subsystem that can be stabilized using some other method. Because of this recursive structure, the designer can start the design process at the known-stable system and "back out" new controllers that progressively stabilize each outer subsystem. The process terminates when the final external control is reached. Hence, this process is known as backstepping.

In mathematical physics, the Hunter–Saxton equation

In nonlinear control and stability theory, the circle criterion is a stability criterion for nonlinear time-varying systems. It can be viewed as a generalization of the Nyquist stability criterion for linear time-invariant (LTI) systems.

In control theory, dynamical systems are in strict-feedback form when they can be expressed as

<span class="mw-page-title-main">Lagrangian mechanics</span> Formulation of classical mechanics

In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle. It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique.

<span class="mw-page-title-main">Gauge theory</span> Physical theory with fields invariant under the action of local "gauge" Lie groups

In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, do not change under local transformations according to certain smooth families of operations. Formally, the Lagrangian is invariant.

Input-to-state stability (ISS) is a stability notion widely used to study stability of nonlinear control systems with external inputs. Roughly speaking, a control system is ISS if it is globally asymptotically stable in the absence of external inputs and if its trajectories are bounded by a function of the size of the input for all sufficiently large times. The importance of ISS is due to the fact that the concept has bridged the gap between input–output and state-space methods, widely used within the control systems community.

In dynamical systems theory, the Olech theorem establishes sufficient conditions for global asymptotic stability of a two-equation system of non-linear differential equations. The result was established by Czesław Olech in 1963, based on joint work with Philip Hartman.

References

  1. Khalil, Hassan K. (2002). Nonlinear Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
  2. Haddad, Wassim M.; Chellaboina, VijaySekhar (2008). Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton University Press.
  3. Grant, Christopher (1999-10-22). "LaSalle's Invariance Principle, Lecture 23, Math 634" (PDF). Archived from the original (PDF) on 2019-07-14. Retrieved 2022-06-28.
  4. Lecture notes on nonlinear control, University of Notre Dame, Instructor: Michael Lemmon, lecture 4.
  5. Lecture notes on nonlinear analysis, National Taiwan University, Instructor: Feng-Li Lian, lecture 4-2.
  6. Vidyasagar, M. (2002). Nonlinear Systems Analysis. SIAM Classics in Applied Mathematics. SIAM Press.