In the theory of dynamical systems and control theory, a linear time-invariant system is marginally stable if it is neither asymptotically stable nor unstable. Roughly speaking, a system is stable if it always returns to and stays near a particular state (called the steady state), and is unstable if it goes further and further away from any state, without being bounded. A marginal system, sometimes referred to as having neutral stability,[1] is between these two types: when displaced, it does not return to near a common steady state, nor does it go away from where it started without limit.
Marginal stability, like instability, is a feature that control theory seeks to avoid; we wish that, when perturbed by some external force, a system will return to a desired state. This necessitates the use of appropriately designed control algorithms.
In econometrics, the presence of a unit root in observed time series, rendering them marginally stable, can lead to invalid regression results regarding effects of the independent variables upon a dependent variable, unless appropriate techniques are used to convert the system to a stable system.
A homogeneous continuous linear time-invariant system is marginally stable if and only if the real part of every pole (eigenvalue) of the system's transfer function is non-positive, one or more poles have zero real part, and all poles with zero real part are simple roots (i.e. the poles on the imaginary axis are all distinct from one another). In contrast, if all the poles have strictly negative real parts, the system is asymptotically stable. If the system is neither asymptotically stable nor marginally stable, it is unstable.
If the system is in state space representation, marginal stability can be analyzed by deriving the Jordan normal form:[2] the system is marginally stable if and only if the Jordan blocks corresponding to eigenvalues with zero real part are scalar (that is, 1 × 1).
A homogeneous discrete time linear time-invariant system is marginally stable if and only if the greatest magnitude of any of the poles (eigenvalues) of the transfer function is 1, and the poles with magnitude equal to 1 are all distinct. That is, the transfer function's spectral radius is 1. If the spectral radius is less than 1, the system is instead asymptotically stable.
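As a minimal sketch of the discrete-time condition (the 2 × 2 rotation matrix below is a hypothetical example, not taken from the article): a rotation matrix has the two distinct eigenvalues e^{±iθ}, both of magnitude 1, so its spectral radius is exactly 1 and iterating the homogeneous system keeps the state bounded without decaying to zero.

```python
import math

# Rotation by theta: the eigenvalues are exp(+/- i*theta), distinct and on the
# unit circle, so the homogeneous system x[k+1] = A x[k] is marginally stable.
theta = 0.3
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def step(A, x):
    # One iteration of x[k+1] = A x[k] for a 2x2 matrix.
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x = [1.0, 0.0]          # initial perturbation
norms = []
for _ in range(1000):
    x = step(A, x)
    norms.append(math.hypot(x[0], x[1]))

# The state norm neither decays to 0 nor grows: it stays at 1 (up to rounding).
print(min(norms), max(norms))
```

A matrix with an eigenvalue strictly inside the unit circle would instead drive the corresponding component of the state to zero, and one with an eigenvalue outside it would make the norm grow without bound.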
A simple example involves a single first-order linear difference equation: suppose a state variable x evolves according to

x_{t+1} = a x_t

with parameter a > 0. If the system is perturbed to the value x_0, its subsequent sequence of values is x_0, a x_0, a^2 x_0, a^3 x_0, .... If a < 1, these numbers get closer and closer to 0 regardless of the starting value, while if a > 1 the numbers get larger and larger without bound. But if a = 1, the numbers do neither of these: instead, all future values of x equal the value x_0. Thus the case a = 1 exhibits marginal stability.
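The three cases can be sketched numerically (a minimal illustration; the function name and parameter choices are our own):

```python
def simulate(a, x0, steps=50):
    """Iterate x_{t+1} = a * x_t from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

stable   = simulate(0.5, 1.0)   # a < 1: values decay toward 0
marginal = simulate(1.0, 1.0)   # a = 1: every value stays at x0
unstable = simulate(1.5, 1.0)   # a > 1: values grow without bound

print(stable[-1], marginal[-1], unstable[-1])
```

Only the marginal case neither converges to the origin nor diverges: the perturbation simply persists at its initial size.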
A marginally stable system is one that, if given an impulse of finite magnitude as input, will not "blow up" and give an unbounded output, but neither will the output return to zero. A bounded offset or oscillations in the output will persist indefinitely, and so there will in general be no final steady-state output. If a continuous system is given an input at a frequency equal to the frequency of a pole with zero real part, the system's output will increase indefinitely (this is known as pure resonance[3]). This explains why for a system to be BIBO stable, the real parts of the poles have to be strictly negative (and not just non-positive).
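A minimal discrete-time sketch of this effect (the accumulator below is our own hypothetical example): an accumulator has its single pole at z = 1, so it is marginally stable, and feeding it a bounded input at the pole's frequency (a constant, i.e. zero frequency) produces an unbounded output.

```python
# An accumulator y[k] = y[k-1] + u[k] has a single pole at z = 1 (marginally
# stable). Driving it with a bounded input at the pole's frequency (here a
# constant, i.e. zero frequency) makes the output grow without bound, which
# is why BIBO stability requires poles strictly inside the stability region.
def accumulate(u):
    y, out = 0.0, []
    for uk in u:
        y += uk
        out.append(y)
    return out

bounded_input = [1.0] * 100          # |u[k]| <= 1 for every k
output = accumulate(bounded_input)
print(output[-1])                    # the output ramps up to 100.0
```

The input never exceeds 1 in magnitude, yet the output grows linearly with time: bounded input, unbounded output.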
A continuous system with imaginary poles, i.e. poles having zero real part, will produce sustained oscillations in its output. For example, an undamped second-order system such as the suspension system in an automobile (a mass–spring–damper system) with the damper removed and an ideal, frictionless spring will in theory oscillate forever once disturbed. Another example is a frictionless pendulum. A system with a pole at the origin is also marginally stable, but in this case there will be no oscillation in the response, since the imaginary part is also zero (jω = 0 means ω = 0 rad/s). An example of such a system is a mass on a surface with friction: when a sideways impulse is applied, the mass moves but never returns to its original position. It will come to rest due to friction, however, so the sideways displacement remains bounded.
Since the locations of the marginal poles must be exactly on the imaginary axis or unit circle (for continuous time and discrete time systems respectively) for a system to be marginally stable, this situation is unlikely to occur in practice unless marginal stability is an inherent theoretical feature of the system.
Marginal stability is also an important concept in the context of stochastic dynamics. For example, some processes may follow a random walk, given in discrete time as

x_t = x_{t-1} + e_t,

where e_t is an i.i.d. error term. This equation has a unit root (a value of 1 for the eigenvalue of its characteristic equation), and hence exhibits marginal stability, so special time series techniques must be used when empirically modeling a system containing such an equation.
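As an illustrative sketch (the coefficient 0.5, the seed, and the sample size are our own choices), a random walk can be simulated alongside a stable AR(1) process to see the difference: the unit-root process wanders arbitrarily far from the origin, while the stable process stays in a band around it.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Random walk x_t = x_{t-1} + e_t (unit root, marginally stable), compared
# with a stable AR(1) process x_t = 0.5 * x_{t-1} + e_t.
def simulate(a, steps=10000):
    x, path = 0.0, []
    for _ in range(steps):
        x = a * x + random.gauss(0.0, 1.0)
        path.append(x)
    return path

walk = simulate(1.0)   # variance grows with t: the walk wanders off
ar1  = simulate(0.5)   # variance settles near sigma^2 / (1 - a^2)
```

Regressions that treat the wandering series as if it were stationary are what produce the invalid results mentioned above, which is why unit-root processes are typically differenced first.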
Marginally stable Markov processes are those that possess null recurrent classes.
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
Resonance is a phenomenon that occurs when an object or system is subjected to an external force or vibration that matches its natural frequency. When this happens, the object or system absorbs energy from the external force and starts vibrating with a larger amplitude. Resonance can occur in various systems, such as mechanical, electrical, or acoustic systems, and it is often desirable in certain applications, such as musical instruments or radio receivers. However, resonance can also be detrimental, leading to excessive vibrations or even structural failure in some cases.
In mathematics, particularly in dynamical systems, a bifurcation diagram shows the values visited or approached asymptotically of a system as a function of a bifurcation parameter in the system. It is usual to represent stable values with a solid line and unstable values with a dotted line, although often the unstable points are omitted. Bifurcation diagrams enable the visualization of bifurcation theory. In the context of discrete-time dynamical systems, the diagram is also called orbit diagram.
In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.
Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable. More strongly, if x_e is Lyapunov stable and all solutions that start out near x_e converge to x_e, then x_e is said to be asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable.
In control engineering and system identification, a state-space representation is a mathematical model of a physical system specified as a set of input, output, and state variables related by first-order differential equations or difference equations. Such variables, called state variables, evolve over time in a way that depends on the values they have at any given instant and on the externally imposed values of input variables. Output variables' values depend on the state variable values and may also depend on the input variable values.
Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response that does not become exactly zero past a certain point but continues indefinitely. This is in contrast to a finite impulse response (FIR) system, in which the impulse response does become exactly zero at times t > T for some finite T, thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as IIR systems or IIR filters.
The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems.
In control theory, a continuous linear time-invariant system (LTI) is exponentially stable if and only if the system has eigenvalues with strictly negative real parts. A discrete-time input-to-output LTI system is exponentially stable if and only if the poles of its transfer function lie strictly within the unit circle centered on the origin of the complex plane. Systems that are not LTI are exponentially stable if their convergence is bounded by exponential decay. Exponential stability is a form of asymptotic stability, valid for more general dynamical systems.
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family of curves, such as the integral curves of a family of vector fields, and the solutions of a family of differential equations. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values of a system causes a sudden 'qualitative' or topological change in its behavior. Bifurcations occur in both continuous systems and discrete systems.
In mathematics, in the theory of differential equations and dynamical systems, a particular stationary or quasistationary solution to a nonlinear system is called linearly unstable if the linearization of the equation at this solution has the form dr/dt = Ar, where r is the perturbation to the steady state and A is a linear operator whose spectrum contains eigenvalues with positive real part. If all the eigenvalues have negative real part, then the solution is called linearly stable. Other names for linear stability include exponential stability or stability in terms of first approximation. If there exists an eigenvalue with zero real part, then the question of stability cannot be settled on the basis of the first approximation, and we approach the so-called "centre and focus problem".
In the mathematics of evolving systems, the concept of a center manifold was originally developed to determine stability of degenerate equilibria. Subsequently, the concept of center manifolds was realised to be fundamental to mathematical modelling.
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.
The Orr–Sommerfeld equation, in fluid dynamics, is an eigenvalue equation describing the linear two-dimensional modes of disturbance to a viscous parallel flow. The solution to the Navier–Stokes equations for a parallel, laminar flow can become unstable if certain conditions on the flow are satisfied, and the Orr–Sommerfeld equation determines precisely what the conditions for hydrodynamic stability are.
An algebraic Riccati equation is a type of nonlinear equation that arises in the context of infinite-horizon optimal control problems in continuous time or discrete time.
A matrix difference equation is a difference equation in which the value of a vector of variables at one point in time is related to its own value at one or more previous points in time, using matrices. The order of the equation is the maximum time gap between any two indicated values of the variable vector. For example, x_t = A x_{t−1} is a first-order matrix difference equation relating the vector x at time t to its value at time t − 1 through the matrix A.
In fluid dynamics, Rayleigh's equation or Rayleigh stability equation is a linear ordinary differential equation used to study the hydrodynamic stability of a parallel, incompressible and inviscid shear flow. The equation is:

(U − c)(φ″ − k²φ) − U″φ = 0,

where U(z) is the velocity of the base flow, φ(z) is the amplitude of the perturbation streamfunction, k is the wavenumber, and c is the phase speed of the disturbance.