Stability radius

The stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The following picture illustrates this intuitive notion:

[Figure: Radius of stability]

where $\hat{p}$ denotes the nominal point, $P$ denotes the space of all possible values of the object $p$, and the shaded area, $P(s)$, represents the set of points that satisfy the stability conditions. The radius of the blue circle, shown in red, is the stability radius.

Abstract definition

The formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful: [1] [2]

$$\hat{\rho}(\hat{p}) := \max\,\{\rho \ge 0 : p \in P(s),\ \forall p \in B(\rho, \hat{p})\}$$

where $B(\rho, \hat{p})$ denotes a closed ball of radius $\rho$ in $P$ centered at $\hat{p}$.
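
To make the definition concrete, here is a minimal numerical sketch (with purely illustrative assumptions that are not from the source): the object is a point in the plane, the nominal point is `p_hat`, and the stability condition is a hypothetical inequality (membership of the closed unit disk). The candidate ball is probed on a finite set of boundary directions and the radius is found by bisection, so the result is only an approximation.

```python
import numpy as np

def is_stable(p):
    # Hypothetical stability condition P(s): p lies in the closed unit disk.
    return p[0]**2 + p[1]**2 <= 1.0

def stability_radius(p_hat, is_stable, rho_max=2.0, n_dirs=720, tol=1e-6):
    """Approximate the largest rho such that every point of B(rho, p_hat) is stable.

    Bisection over rho; each candidate ball is checked only on a finite set of
    boundary directions, which suffices for this convex example but is in
    general just a heuristic.
    """
    if not is_stable(p_hat):
        return 0.0  # the nominal point itself violates the stability condition

    def ball_is_stable(rho):
        angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
        boundary = p_hat + rho * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        return all(is_stable(p) for p in boundary)

    lo, hi = 0.0, rho_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ball_is_stable(mid) else (lo, mid)
    return lo

print(stability_radius(np.array([0.3, 0.0]), is_stable))  # ~0.7
```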

History

The concept appears to have been introduced in the early 1960s. [3] [4] In the 1980s it became popular in control theory [5] and optimization. [6] It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest.

Relation to Wald's maximin model

It was shown [2] that the stability radius model is an instance of Wald's maximin model. That is,

$$\hat{\rho}(\hat{p}) = \max_{\rho \ge 0}\ \min_{p \in B(\rho, \hat{p})} f(\rho, p)$$

where

$$f(\rho, p) := \begin{cases} \rho, & \text{if } p \in P(s) \\ -\infty, & \text{otherwise} \end{cases}$$

The large penalty ($-\infty$) is a device to force the player not to perturb the nominal value beyond the stability radius of the system. It is an indication that the stability radius model is a model of local stability/robustness, rather than a global one.
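
To see why the two formulations coincide (a sketch, using only the definitions above): for a fixed $\rho$, the inner minimum equals $\rho$ precisely when the whole ball $B(\rho, \hat{p})$ lies in $P(s)$, and $-\infty$ otherwise, so

$$\max_{\rho \ge 0}\,\min_{p \in B(\rho, \hat{p})} f(\rho, p) = \max\,\{\rho \ge 0 : p \in P(s),\ \forall p \in B(\rho, \hat{p})\} = \hat{\rho}(\hat{p}).$$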

Info-gap decision theory

Info-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. But it has been shown [2] that its robustness model, namely

$$\hat{\alpha}(q, \tilde{u}) := \max\,\{\alpha \ge 0 : r_c \le R(q, u),\ \forall u \in U(\alpha, \tilde{u})\}$$

is actually a stability radius model characterized by a simple stability requirement of the form $r_c \le R(q, u)$, where $q$ denotes the decision under consideration, $u$ denotes the parameter of interest, $\tilde{u}$ denotes the estimate of the true value of $u$, and $U(\alpha, \tilde{u})$ denotes a ball of radius $\alpha$ centered at $\tilde{u}$.
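
For intuition, here is a minimal sketch with assumed, purely illustrative ingredients (not from the source): a scalar reward model R(q, u) = q·u with q > 0, a critical reward r_c, and uncertainty intervals U(α, ũ) = [ũ − α, ũ + α]. Under these assumptions the robustness has a simple closed form.

```python
def info_gap_robustness(q, u_tilde, r_c):
    """Largest alpha such that r_c <= R(q, u) = q*u for every u in [u_tilde - alpha, u_tilde + alpha]."""
    if q * u_tilde < r_c:
        return 0.0  # the requirement already fails at the estimate itself
    # With q > 0 the worst case in the interval is u = u_tilde - alpha, so we need
    # q * (u_tilde - alpha) >= r_c, i.e. alpha <= u_tilde - r_c / q.
    return u_tilde - r_c / q

print(info_gap_robustness(q=2.0, u_tilde=5.0, r_c=6.0))  # 2.0
```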

[Figure: Info-gap robustness]

Since stability radius models are designed to deal with small perturbations in the nominal value of a parameter, info-gap's robustness model measures the local robustness of decisions in the neighborhood of the estimate $\tilde{u}$.

Sniedovich [2] argues that for this reason the theory is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space.

Alternate definition

In some cases it is more convenient to define the stability radius slightly differently. For example, in many applications in control theory the radius of stability is defined as the size of the smallest destabilizing perturbation of the nominal value of the parameter of interest. [7] The following picture illustrates this:

[Figure: Radius of stability as the smallest destabilizing perturbation]

More formally,

$$\hat{\rho}(\hat{p}) := \inf_{p \notin P(s)} \operatorname{dist}(p, \hat{p})$$

where $\operatorname{dist}(p, \hat{p})$ denotes the distance of $p$ from $\hat{p}$.
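
For example, for a square matrix A whose eigenvalues all have negative real part (Hurwitz stability), the smallest complex, unstructured perturbation (measured in the spectral norm) that moves an eigenvalue onto the imaginary axis equals the minimum over real ω of the smallest singular value of A − iωI. The sketch below approximates this on a crude frequency grid; the matrix and the grid bounds are assumptions for illustration only.

```python
import numpy as np

def distance_to_instability(A, omega_max=10.0, n_grid=4001):
    """Crude grid approximation of min over w of sigma_min(A - i*w*I) for a Hurwitz matrix A."""
    n = A.shape[0]
    omegas = np.linspace(-omega_max, omega_max, n_grid)
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

# A stable (Hurwitz) but non-normal matrix: eigenvalues -1 and -2, yet the
# stability radius is far smaller than the distance of the spectrum to the
# imaginary axis.
A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])
print(distance_to_instability(A))  # roughly 0.2, much less than 1.0
```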

Stability radius of functions

The stability radius of a continuous function f (in a functional space F) with respect to an open stability domain D is the distance between f and the set of unstable functions (with respect to D). We say that a function is stable with respect to D if its spectrum is in D. Here, the notion of spectrum is defined on a case-by-case basis, as explained below.

Definition

Formally, if we denote the set of stable functions by S(D) and the stability radius by r(f,D), then:

$$r(f, D) = \inf_{g \in C}\,\{\|g\| : f + g \notin S(D)\}$$

where C is a subset of F.

Note that if f is already unstable (with respect to D), then r(f,D)=0 (as long as C contains zero).

Applications

The notion of stability radius is generally applied to special functions such as polynomials (the spectrum is then the set of roots) and matrices (the spectrum is the set of eigenvalues). The case where C is a proper subset of F permits us to consider structured perturbations (e.g. for a matrix, we might allow perturbations only in the last row). It is an interesting measure of robustness, for example in control theory.

Properties

Let f be a (complex) polynomial of degree n, and let C=F be the set of polynomials of degree less than (or equal to) n (which we identify here with the set of their coefficients). We take for D the open unit disk, which means we are looking for the distance between a polynomial and the set of Schur stable polynomials. Then:

$$r(f, D) = \inf_{z \in \partial D} \frac{|f(z)|}{\|q(z)\|}$$

where $\partial D$ denotes the boundary of D and q(z) is the vector of basis functions (e.g. $q(z) = (1, z, \ldots, z^n)$ when q is the usual power basis). This result means that the stability radius is tied to the minimal value that f reaches on the unit circle.
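
As an illustration of this formula (under assumptions not in the source: the power basis and the Euclidean norm on coefficient perturbations, so that ‖q(z)‖ = √(n+1) on the unit circle), the infimum can be approximated on a grid of the unit circle:

```python
import numpy as np

def schur_stability_radius(coeffs, n_grid=100_000):
    """Approximate inf over |z| = 1 of |f(z)| / ||q(z)||, with q the power basis
    and the Euclidean norm on coefficients, so ||q(z)|| = sqrt(n + 1) on the circle.

    coeffs: polynomial coefficients, highest degree first (numpy.polyval convention).
    """
    n = len(coeffs) - 1                                  # degree of f
    z = np.exp(2j * np.pi * np.arange(n_grid) / n_grid)  # grid on the unit circle
    return np.min(np.abs(np.polyval(coeffs, z))) / np.sqrt(n + 1)

# f(z) = z^2 - 0.25 has roots +/- 0.5, well inside the unit disk (Schur stable).
print(schur_stability_radius([1.0, 0.0, -0.25]))  # about 0.75 / sqrt(3) ~ 0.43
```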

Examples

See also

References

  1. Zlobec, S. (2009). Nondifferentiable optimization: Parametric programming. Pp. 2607-2615 in Encyclopedia of Optimization, Floudas, C.A. and Pardalos, P.M. (editors), Springer.
  2. Sniedovich, M. (2010). A bird's view of info-gap decision theory. Journal of Risk Finance, 11(3), 268-283.
  3. Wilf, H.S. (1960). Maximally stable numerical integration. Journal of the Society for Industrial and Applied Mathematics, 8(3), 537-540.
  4. Milne, W.E. and Reynolds, R.R. (1962). Fifth-order methods for the numerical solution of ordinary differential equations. Journal of the ACM, 9(1), 64-70.
  5. Hinrichsen, D. and Pritchard, A.J. (1986). Stability radii of linear systems. Systems and Control Letters, 7, 1-10.
  6. Zlobec, S. (1988). Characterizing optimality in mathematical programming models. Acta Applicandae Mathematicae, 12, 113-180.
  7. Paice, A.D.B. and Wirth, F.R. (1998). Analysis of the local robustness of stability for flows. Mathematics of Control, Signals, and Systems, 11, 289-302.