Nonlinear control

A feedback control system. It is desired to control a system (often called the plant) so its output follows a desired reference signal. A sensor monitors the output and a controller subtracts the actual output from the desired reference output, and applies this error signal to the system to bring the output closer to the reference. In a nonlinear control system at least one of the blocks (system, sensor, or controller) is nonlinear.

Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.

Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion.
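
For LTI systems these frequency-domain tools are directly available in standard software. The following minimal sketch (the second-order plant G(s) = 1 / (s^2 + 0.5 s + 1) is an assumption chosen purely for illustration) computes a Bode plot with scipy:

```python
# Minimal sketch: frequency-domain analysis of an LTI system with scipy.
# The plant G(s) = 1 / (s^2 + 0.5 s + 1) is an assumption chosen for
# illustration; any linear time-invariant model would do.
from scipy import signal

G = signal.TransferFunction([1.0], [1.0, 0.5, 1.0])
w, mag, phase = signal.bode(G)  # magnitude (dB) and phase (degrees) vs. frequency
print(mag[0], phase[0])         # low-frequency gain near 0 dB, phase near 0 degrees
```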

Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more rigorous and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used.[1] Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can often have attractive features such as simpler implementation, faster speed, more accuracy, or reduced control energy, which justify the more difficult design procedure.
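
As a concrete sketch of this linearization step (the pendulum model and all parameter values below are assumptions chosen for illustration), the code expands the dynamics about an equilibrium and checks the eigenvalues of the resulting linear system:

```python
# Minimal sketch: Jacobian linearization of a damped pendulum about its
# stable equilibrium (theta, omega) = (0, 0). The model and parameter
# values are illustrative assumptions, not from the article.
import numpy as np

g, L, b = 9.81, 1.0, 0.5  # gravity, pendulum length, damping (assumed)

def f(x, u):
    """Nonlinear dynamics: x = [theta, omega], u = applied torque."""
    theta, omega = x
    return np.array([omega, -(g / L) * np.sin(theta) - b * omega + u])

# Linearization: A = df/dx, B = df/du at the equilibrium. Since
# d(sin theta)/d(theta) = cos(0) = 1 there, the linear model near
# theta = 0 is dx/dt = A @ x + B * u.
A = np.array([[0.0, 1.0],
              [-(g / L), -b]])
B = np.array([[0.0], [1.0]])

# The equilibrium is locally stable iff all eigenvalues of A have
# negative real part; linear (LTI) techniques then apply nearby.
print(np.linalg.eigvals(A))  # both eigenvalues lie in the left half-plane
```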

An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", it does not have the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats. This cycling of the temperature about the desired temperature is called a limit cycle, and is characteristic of nonlinear control systems.
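
This cycling is easy to reproduce numerically. The sketch below simulates a first-order room-temperature model under on/off thermostat control; every numeric value is an assumption chosen only to make the limit cycle appear:

```python
# Minimal sketch of the thermostat limit cycle: a first-order thermal
# model with bang-bang (on/off) control and hysteresis. All numbers
# are illustrative assumptions.
T_on, T_off = 19.0, 21.0     # thermostat turn-on / turn-off setpoints (deg C)
T_outside = 5.0              # ambient temperature
k_loss, k_heat = 0.1, 5.0    # heat-loss rate and furnace heating rate
dt = 0.01                    # integration step (hours)

T, furnace = 15.0, False
for step in range(20000):
    # Nonlinear (switching) control law: the furnace is either on or off.
    if T < T_on:
        furnace = True
    elif T > T_off:
        furnace = False
    # Euler step of dT/dt = -k_loss*(T - T_outside) + k_heat*furnace
    T += dt * (-k_loss * (T - T_outside) + (k_heat if furnace else 0.0))

# After the transient, T cycles between roughly T_on and T_off: a limit cycle.
print(round(T, 2))
```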

Properties of nonlinear systems

Some properties of nonlinear dynamic systems are:

  1. They do not follow the principle of superposition (linearity and homogeneity).
  2. They may have multiple isolated equilibrium points.
  3. They may exhibit properties such as limit cycles, bifurcation, and chaos.
  4. Finite escape time: solutions of nonlinear systems may not exist for all times.

Analysis and control of nonlinear systems

There are several well-developed techniques for analyzing nonlinear feedback systems:

  1. Describing function method
  2. Phase plane method
  3. Lyapunov stability analysis
  4. Singular perturbation method
  5. The Popov criterion and the circle criterion for absolute stability
  6. Center manifold theorem
  7. Small-gain theorem
  8. Passivity analysis

Control design techniques for nonlinear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system in a limited range of operation and use (well-known) linear design techniques for each region:

  1. Gain scheduling

Those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design:

  1. Feedback linearization (see the sketch below)
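
As a rough sketch of the feedback linearization idea (the pendulum plant and the gains are assumptions for illustration, not a prescribed design), the control law cancels the plant nonlinearity so the remaining loop is exactly linear:

```python
# Minimal sketch of feedback linearization for a pendulum
# theta_ddot = -(g/L) * sin(theta) + u. Choosing
# u = (g/L)*sin(theta) + v cancels the nonlinearity, leaving the
# linear system theta_ddot = v, which a linear state-feedback law
# v = -k1*theta - k2*theta_dot stabilizes. All values are assumed.
import math

g, L = 9.81, 1.0
k1, k2 = 4.0, 4.0          # gains placing both closed-loop poles at s = -2

def control(theta, theta_dot):
    v = -k1 * theta - k2 * theta_dot          # outer linear law
    return (g / L) * math.sin(theta) + v      # cancellation + linear input

# Simulate: the nonlinear plant plus this controller behaves linearly.
theta, theta_dot, dt = 2.0, 0.0, 0.001
for _ in range(10000):
    u = control(theta, theta_dot)
    theta_ddot = -(g / L) * math.sin(theta) + u
    theta += dt * theta_dot
    theta_dot += dt * theta_ddot
print(round(theta, 4))  # converges toward 0 from a large initial angle
```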

And Lyapunov-based methods (the underlying stability argument is sketched below):

  1. Lyapunov redesign
  2. Control Lyapunov function
  3. Nonlinear damping
  4. Backstepping
  5. Sliding mode control
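
These techniques all build on Lyapunov's direct method: exhibit a positive definite function V(x) whose derivative along system trajectories is negative definite. A minimal sketch (the example system and the candidate V are assumptions chosen for illustration) verifies this symbolically:

```python
# Minimal sketch: verifying a Lyapunov function candidate with sympy.
# The system and the candidate V are assumptions chosen for illustration.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([-x1**3 + x2,      # x1' = -x1^3 + x2
               -x1 - x2**3])     # x2' = -x1 - x2^3
V = x1**2 + x2**2                # positive definite candidate

# Vdot = grad(V) . f, the derivative of V along system trajectories.
Vdot = sp.simplify((sp.Matrix([V]).jacobian([x1, x2]) * f)[0])
print(Vdot)  # -2*x1**4 - 2*x2**4  (negative definite away from 0)
# Since V > 0 and Vdot < 0 for (x1, x2) != 0, the origin is
# asymptotically stable by Lyapunov's direct method.
```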

Nonlinear feedback analysis – The Lur'e problem

Lur'e problem block diagram

An early nonlinear feedback system analysis problem was formulated by A. I. Lur'e. Control systems described by the Lur'e problem have a forward path that is linear and time-invariant, and a feedback path that contains a memory-less, possibly time-varying, static nonlinearity.

The linear part can be characterized by four matrices (A, B, C, D), while the nonlinear part is Φ(y) with a ≤ Φ(y)/y ≤ b for all y ≠ 0 (a sector nonlinearity).
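
Written out in state-space form (the standard textbook formulation, reproduced here for concreteness), the Lur'e system is:

```latex
% Standard Lur'e form: LTI forward path with a static feedback nonlinearity
\begin{aligned}
  \dot{x} &= A x + B u, \\
  y       &= C x + D u, \\
  u       &= -\Phi(y), \qquad a \le \frac{\Phi(y)}{y} \le b \quad (y \neq 0).
\end{aligned}
```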

Absolute stability problem

Consider:

  1. (A,B) is controllable and (C,A) is observable
  2. two real numbers a, b with a < b, defining a sector for function Φ

The Lur'e problem (also known as the absolute stability problem) is to derive conditions involving only the transfer matrix H(s) and {a,b} such that x = 0 is a globally uniformly asymptotically stable equilibrium of the system.

There are two well-known wrong conjectures on the absolute stability problem:

  1. Aizerman's conjecture
  2. Kalman's conjecture

Graphically, these conjectures can be interpreted in terms of graphical restrictions on the graph of Φ(y) versus y, or on the graph of dΦ/dy versus Φ/y.[2] There are counterexamples to Aizerman's and Kalman's conjectures in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution, a hidden oscillation.

There are two main theorems concerning the Lur'e problem which give sufficient conditions for absolute stability:

  1. The circle criterion
  2. The Popov criterion

Theoretical results in nonlinear control

Frobenius theorem

The Frobenius theorem is a deep result in differential geometry. When applied to nonlinear control, it says the following: Given a system of the form

$\dot{x} = \sum_{i=1}^{k} f_i(x)\, u_i(t)$

where $f_1, \dots, f_k$ are vector fields belonging to a distribution $\Delta$ and $u_i(t)$ are control functions, the integral curves of $x$ are restricted to a manifold of dimension $m$ if $\operatorname{span}(\Delta) = m$ and $\Delta$ is an involutive distribution.
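
Involutivity can be tested computationally: the distribution is involutive if every Lie bracket $[f_i, f_j] = \frac{\partial f_j}{\partial x} f_i - \frac{\partial f_i}{\partial x} f_j$ again lies in the distribution. The sketch below (the two vector fields are assumptions chosen for illustration) performs this check with sympy:

```python
# Minimal sketch: testing involutivity of a distribution with sympy.
# The two vector fields f1, f2 are assumptions chosen for illustration
# (they generate a classic nonholonomic example).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = sp.Matrix([x1, x2, x3])

f1 = sp.Matrix([1, 0, x2])   # first vector field spanning the distribution
f2 = sp.Matrix([0, 1, 0])    # second vector field

def lie_bracket(f, g):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(X) * f - f.jacobian(X) * g

bracket = sp.simplify(lie_bracket(f1, f2))
print(bracket.T)  # Matrix([[0, 0, -1]])

# The distribution is involutive iff [f1, f2] is a pointwise linear
# combination of f1 and f2, i.e. iff [f1 f2 [f1,f2]] has rank 2.
M = sp.Matrix.hstack(f1, f2, bracket)
print(M.rank())   # 3 here, so this particular distribution is not involutive
```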

Related Research Articles

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.

An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.

Lyapunov exponent

In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector $\delta\mathbf{Z}_0$ diverge at a rate given by $|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|$, where $\lambda$ is the Lyapunov exponent.

Negative feedback

Negative feedback occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. A classic example of negative feedback is a heating system thermostat — when the temperature gets high enough, the heater is turned OFF. When the temperature gets too cold, the heat is turned back ON. In each case the "feedback" generated by the thermostat "negates" the trend.

Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point $x_e$ stay near $x_e$ forever, then $x_e$ is Lyapunov stable. More strongly, if $x_e$ is Lyapunov stable and all solutions that start out near $x_e$ converge to $x_e$, then $x_e$ is said to be asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.

Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.

Vasile Mihai Popov is a leading systems theorist and control engineering specialist. He is well known for having developed a method to analyze stability of nonlinear dynamical systems, now known as the Popov criterion.

In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s and extended by Ralph Kochenburger, is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators.
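
As a brief illustration of the method (the ideal relay nonlinearity is an assumption, chosen because it is the classic textbook case), the describing function can be computed numerically as the first Fourier harmonic of the nonlinearity's output divided by the input amplitude; for an ideal relay with output level M the analytic value is 4M/(πA):

```python
# Minimal sketch: numerically computing the describing function of an
# ideal relay y = M*sign(u) driven by u = A*sin(wt). The describing
# function is the complex ratio of the first output harmonic to A.
import numpy as np

def describing_function(nonlinearity, A, n=10000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = A * np.sin(t)
    y = nonlinearity(u)
    # First Fourier harmonic of y (in-phase and quadrature components).
    b1 = (2.0 / n) * np.sum(y * np.sin(t))
    a1 = (2.0 / n) * np.sum(y * np.cos(t))
    return (b1 + 1j * a1) / A

M, A = 1.0, 2.0
relay = lambda u: M * np.sign(u)
print(describing_function(relay, A))  # ~ 0.6366, purely real
print(4.0 * M / (np.pi * A))          # analytic value for comparison
```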

The Kalman–Yakubovich–Popov lemma is a result in system analysis and control theory which states: given a number $\gamma > 0$, two $n$-vectors $B$, $C$ and an $n \times n$ Hurwitz matrix $A$, if the pair $(A, B)$ is completely controllable, then a symmetric matrix $P$ and a vector $Q$ satisfying $A^{T}P + PA = -QQ^{T}$ and $PB - C = \sqrt{\gamma}\,Q$ exist if and only if $\gamma + 2\,\operatorname{Re}\left[C^{T}(j\omega I - A)^{-1}B\right] \ge 0$ for all $\omega \in \mathbb{R}$.

In mathematics, the Markus–Yamabe conjecture is a conjecture on global asymptotic stability. If the Jacobian matrix of a dynamical system at a fixed point is Hurwitz, then the fixed point is asymptotically stable. The Markus–Yamabe conjecture asks whether a similar result holds globally. Precisely, the conjecture states that if a continuously differentiable map on an $n$-dimensional real vector space has a fixed point, and its Jacobian matrix is everywhere Hurwitz, then the fixed point is globally stable.

In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system. Thus the problem can be broken into two separate parts, which facilitates the design.

In nonlinear control, Aizerman's conjecture or Aizerman problem states that a linear system in feedback with a sector nonlinearity would be stable if the linear system is stable for any linear gain of the sector. This conjecture, proposed by Mark Aronovich Aizerman in 1949, was proven false but led to the (valid) sufficient criteria on absolute stability.

In nonlinear control and stability theory, the circle criterion is a stability criterion for nonlinear time-varying systems. It can be viewed as a generalization of the Nyquist stability criterion for linear time-invariant (LTI) systems.

In mathematical physics and the theory of partial differential equations, the solitary wave solution of the form $u(x,t) = e^{-i\omega t}\phi(x)$ is said to be orbitally stable if any solution with the initial data sufficiently close to $\phi(x)$ forever remains in a given small neighborhood of the trajectory of $e^{-i\omega t}\phi(x)$.

In bifurcation theory, a bounded oscillation that is born without loss of stability of a stationary set is called a hidden oscillation. In nonlinear control theory, the birth of a hidden oscillation in a time-invariant control system with bounded states means crossing a boundary, in the domain of the parameters, where local stability of the stationary states implies global stability. If a hidden oscillation attracts all nearby oscillations, then it is called a hidden attractor. For a dynamical system with a unique equilibrium point that is globally attractive, the birth of a hidden attractor corresponds to a qualitative change in behaviour from monostability to bi-stability. In the general case, a dynamical system may turn out to be multistable and have coexisting local attractors in the phase space. While trivial attractors, i.e. stable equilibrium points, can be easily found analytically or numerically, the search for periodic and chaotic attractors can turn out to be a challenging problem.

Kalman's conjecture or Kalman problem is a disproved conjecture on absolute stability of a nonlinear control system with one scalar nonlinearity which belongs to the sector of linear stability. Kalman's conjecture is a strengthening of Aizerman's conjecture and is a special case of the Markus–Yamabe conjecture. This conjecture was proven false but led to the (valid) sufficient criteria on absolute stability.

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.

In nonlinear control and stability theory, the Popov criterion is a stability criterion discovered by Vasile M. Popov for the absolute stability of a class of nonlinear systems whose nonlinearity must satisfy an open-sector condition. While the circle criterion can be applied to nonlinear time-varying systems, the Popov criterion is applicable only to autonomous systems.

Nikolay Vladimirovich Kuznetsov is a specialist in nonlinear dynamics and control theory.

William F. Egan was a well-known expert and author in the area of PLLs. The first and second editions of his book Frequency Synthesis by Phase Lock, as well as his book Phase-Lock Basics, are references among electrical engineers specializing in areas involving PLLs.

References

  1. trim point
  2. Naderi, T.; Materassi, D.; Innocenti, G.; Genesio, R. (2019). "Revisiting Kalman and Aizerman Conjectures via a Graphical Interpretation". IEEE Transactions on Automatic Control. 64 (2): 670–682. doi:10.1109/TAC.2018.2849597. ISSN 0018-9286. S2CID 59553748.
