In the mathematics of evolving systems, the concept of a center manifold was originally developed to determine the stability of degenerate equilibria. Subsequently, center manifolds were recognised as fundamental to mathematical modelling more broadly. Center manifolds play an important role in bifurcation theory, because the interesting behavior takes place on the center manifold, and in multiscale mathematics, because the long-time dynamics of the micro-scale are often attracted to a relatively simple center manifold involving the coarse-scale variables.
Saturn's rings capture much center-manifold geometry. Dust particles in the rings are subject to tidal forces, which act characteristically to "compress and stretch". The forces compress particle orbits into the rings, stretch particles along the rings, and ignore small shifts in ring radius. The compressing direction defines the stable manifold, the stretching direction defines the unstable manifold, and the neutral direction is the center manifold.
While geometrically accurate, the analogy hides one major difference between Saturn's rings and a physical center manifold. Like most physical systems, the particles in the rings are governed by second-order laws of motion, so understanding trajectories requires modeling both position and a velocity/momentum variable, giving a tangent-manifold structure called phase space. Physically speaking, the stable, unstable and neutral manifolds of Saturn's ring system do not partition the coordinate space of a particle's position; they analogously partition phase space instead.
The center manifold typically behaves as an extended collection of saddle points: some position–velocity pairs are driven towards the center manifold, while others are flung away from it. Small perturbations generally push such trajectories about randomly, and often push them out of the center manifold. There are, however, dramatic counterexamples to instability at the center manifold, called Lagrangian coherent structures. The entire unforced rigid-body dynamics of a ball is a center manifold. [1]
A much more sophisticated example is the Anosov flow on tangent bundles of Riemann surfaces. In that case, the tangent space splits very explicitly and precisely into three parts: the unstable and stable bundles, with the neutral manifold wedged between.
The center manifold of a dynamical system is based upon an equilibrium point of that system. A center manifold of the equilibrium then consists of those nearby orbits that neither decay nor grow exponentially quickly.
Mathematically, the first step when studying equilibrium points of dynamical systems is to linearize the system, and then compute its eigenvalues and eigenvectors. The eigenvectors (and generalized eigenvectors if they occur) corresponding to eigenvalues with negative real part form a basis for the stable eigenspace. The (generalized) eigenvectors corresponding to eigenvalues with positive real part form the unstable eigenspace.
Algebraically, let

  dx/dt = f(x)

be a dynamical system with equilibrium point x*, linearized about that equilibrium. The Jacobian matrix (Df)(x*) defines three main subspaces: the stable subspace, spanned by the (generalized) eigenvectors of eigenvalues with negative real part; the unstable subspace, spanned by those with positive real part; and the center subspace, spanned by those with zero real part.
Depending upon the application, other invariant subspaces of the linearized equation may be of interest, including center-stable, center-unstable, sub-center, slow, and fast subspaces.
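As a concrete sketch of this spectral splitting, the eigenvalues of a linearization can be sorted by the sign of their real part. The Jacobian below is illustrative only (it is upper triangular, so its eigenvalues −2, 0 and 3 sit on the diagonal, giving one stable, one center and one unstable direction):

```python
import numpy as np

# Illustrative Jacobian at an equilibrium (not from the source text):
# upper triangular, so its eigenvalues are the diagonal entries -2, 0, 3.
J = np.array([[-2.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [ 0.0, 0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(J)

tol = 1e-9  # numerical tolerance for deciding "zero real part"
stable   = [v for v in eigvals if v.real < -tol]      # stable eigenspace
unstable = [v for v in eigvals if v.real >  tol]      # unstable eigenspace
center   = [v for v in eigvals if abs(v.real) <= tol] # center eigenspace

print(len(stable), len(center), len(unstable))  # prints: 1 1 1
```

In floating point an eigenvalue that is exactly zero in theory may be computed as a tiny nonzero number, hence the tolerance when classifying the center directions.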
If the equilibrium point is hyperbolic (that is, all eigenvalues of the linearization have nonzero real part), then the Hartman-Grobman theorem guarantees that these eigenvalues and eigenvectors completely characterise the system's dynamics near the equilibrium. However, if the equilibrium has eigenvalues whose real part is zero, then the corresponding (generalized) eigenvectors form the center eigenspace. Going beyond the linearization, when we account for perturbations by nonlinearity or forcing in the dynamical system, the center eigenspace deforms to the nearby center manifold. [3]
If the eigenvalues are precisely zero (as they are for the ball), rather than just real-part being zero, then the corresponding eigenspace more specifically gives rise to a slow manifold. The behavior on the center (slow) manifold is generally not determined by the linearization and thus may be difficult to construct.
Analogously, nonlinearity or forcing in the system perturbs the stable and unstable eigenspaces to a nearby stable manifold and nearby unstable manifold. [4] These three types of manifolds are three cases of an invariant manifold.
Corresponding to the linearized system, the nonlinear system has invariant manifolds, each consisting of sets of orbits of the nonlinear system. [5]
The center manifold existence theorem states that if the right-hand side function f is C^r (r times continuously differentiable), then at every equilibrium point there exists a neighborhood of some finite size in which there is at least one of each of the following: [6] a unique C^r local stable manifold, a unique C^r local unstable manifold, and a (not necessarily unique) C^(r−1) local center manifold.
In example applications, a nonlinear coordinate transform to a normal form can clearly separate these three manifolds. [7]
In the case when the unstable manifold does not exist, center manifolds are often relevant to modelling. The center manifold emergence theorem then says that the neighborhood may be chosen so that all solutions of the system staying in the neighborhood tend exponentially quickly to some solution on the center manifold; in formulas,

  x(t) = y(t) + O(e^(−βt)) as t → ∞,

for some solution y(t) on the center manifold and some rate β > 0. [8] This theorem asserts that for a wide variety of initial conditions the solutions of the full system decay exponentially quickly to a solution on the relatively low-dimensional center manifold.
A third theorem, the approximation theorem, asserts that if an approximate expression for such an invariant manifold, say y = h(x), satisfies the differential equation for the system to residuals O(|x|^p) as x → 0, then the invariant manifold is approximated by y = h(x) to an error of the same order, namely O(|x|^p).
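A minimal sketch of this residual check, using a toy system chosen purely for illustration (not taken from the source): for dx/dt = x·y, dy/dt = −y + x², the candidate center manifold y = h(x) = x² satisfies the invariance condition h′(x)·(dx/dt) = dy/dt up to a residual 2x⁴ = O(x⁴), so by the approximation theorem the true manifold agrees with x² to that order:

```python
# Toy system (illustrative assumption): dx/dt = x*y,  dy/dt = -y + x**2.
# Candidate center manifold: y = h(x) = x**2.
def h(x):  return x**2
def hp(x): return 2*x          # h'(x)

def residual(x):
    """Invariance defect h'(x)*xdot - ydot, evaluated on y = h(x)."""
    y = h(x)
    xdot = x * y               # dx/dt on the candidate manifold
    ydot = -y + x**2           # dy/dt on the candidate manifold
    return hp(x) * xdot - ydot # = 2*x**4 for this choice of h

# The residual scales like x**4: halving x divides it by 2**4 = 16.
ratio = residual(0.1) / residual(0.2)
print(ratio)  # ~ 0.0625
```

The numerically observed scaling confirms the residual is O(x⁴), which is exactly the hypothesis the approximation theorem turns into an O(x⁴) error bound on the manifold itself.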
However, some applications, such as dispersion in tubes or channels, require an infinite-dimensional center manifold. [9] The most general and powerful theory was developed by Aulbach and Wanner. [10] [11] [12] They addressed non-autonomous dynamical systems in infinite dimensions, with potentially infinite-dimensional stable, unstable and center manifolds. Further, they usefully generalised the definition of the manifolds so that, for some chosen rate α > 0, the center manifold is associated with eigenvalues λ such that |Re λ| < α, the stable manifold with eigenvalues Re λ < −α, and the unstable manifold with eigenvalues Re λ > α. They proved existence of these manifolds, and the emergence of a center manifold, via nonlinear coordinate transforms.
Pötzsche and Rasmussen established a corresponding approximation theorem for such infinite-dimensional, non-autonomous systems. [13]
All the extant theory mentioned above seeks to establish invariant-manifold properties of a specific given problem. In particular, one constructs a manifold that approximates an invariant manifold of the given system. An alternative approach is to construct exact invariant manifolds for a system that approximates the given system, an approach called a backwards theory. The aim is to usefully apply theory to a wider range of systems, and to estimate errors and sizes of domains of validity. [14] [15]
This approach is cognate to the well-established backward error analysis in numerical modeling.
As the stability of the equilibrium correlates with the "stability" of its manifolds, the existence of a center manifold raises the question of the dynamics on the center manifold. These dynamics are analyzed by the center manifold reduction, which, in combination with some system parameter μ, leads to the concept of bifurcation.
The Wikipedia entry on slow manifolds gives more examples.
Consider the system

  dx/dt = x²,   dy/dt = y.
The unstable manifold at the origin is the y axis, and the stable manifold is the trivial set {(0, 0)}. Any orbit off the y axis satisfies an equation of the form y = A e^(−1/x) for some real constant A. It follows that for any real A, we can create a center manifold by piecing together the curve y = A e^(−1/x) for x > 0 with the negative x axis (including the origin). [16] Moreover, all center manifolds have this potential non-uniqueness, although often the non-uniqueness occurs only in unphysical complex values of the variables.
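This family of center manifolds can be checked numerically. Taking the system here to be dx/dt = x², dy/dt = y (an assumption consistent with the manifolds described), the quantity A = y·e^(1/x) is conserved along any orbit off the y axis, so each value of A labels one member of the family; a basic Runge–Kutta integration confirms the conservation:

```python
import math

# Assumed system (consistent with the text): dx/dt = x**2, dy/dt = y.
def f(state):
    x, y = state
    return (x**2, y)

def rk4_step(state, dt):
    """One classical 4th-order Runge-Kutta step."""
    def shift(s, k, c): return (s[0] + c*k[0], s[1] + c*k[1])
    k1 = f(state)
    k2 = f(shift(state, k1, dt/2))
    k3 = f(shift(state, k2, dt/2))
    k4 = f(shift(state, k3, dt))
    return (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# A = y * exp(1/x) is constant along orbits, since dA/dt = 0 by the
# chain rule; each A labels one center manifold y = A*exp(-1/x).
state = (-0.5, 1.0)                       # start on the x < 0 side
A0 = state[1] * math.exp(1.0 / state[0])  # = exp(-2)
for _ in range(1000):                     # integrate t over [0, 1]
    state = rk4_step(state, 0.001)
A1 = state[1] * math.exp(1.0 / state[0])
print(abs(A1 - A0))  # ~ 0: A is an invariant of the flow
```

Starting at x < 0 keeps the orbit on the algebraically slow (center) side, where x(t) creeps towards the origin while y(t) grows exponentially.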
Another example shows how a center manifold models the Hopf bifurcation that occurs, as a parameter crosses a critical value, in a delay differential equation. Strictly, the delay makes this differential equation infinite-dimensional. Fortunately, we may approximate such delays by a trick that keeps the dimensionality finite: introduce a new variable equal to x(t) and approximate the time-delayed variable by passing this new variable through a short chain of intermediary variables, each governed by a simple first-order differential equation.
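This delay-approximation trick can be sketched numerically. Below, a delay of 1 is mimicked by a chain of two first-order stages of rate 2 (the rates, variable names and the slowly varying forcing signal are all illustrative assumptions, not taken from the source); for slowly varying input the chain output tracks the delayed input:

```python
import math

TAU = 1.0          # delay to approximate (illustrative)
RATE = 2.0 / TAU   # each of the two stages has time constant TAU/2

def u1(t):
    """Slowly varying input standing in for x(t) (illustrative)."""
    return math.sin(0.3 * t)

# Chain of intermediaries: du2/dt = RATE*(u1 - u2), du3/dt = RATE*(u2 - u3).
# For slowly varying u1, the chain output u3(t) approximates u1(t - TAU).
u2, u3 = 0.0, 0.0
dt = 0.001
t = 0.0
worst = 0.0
for _ in range(40000):                   # integrate t over [0, 40]
    du2 = RATE * (u1(t) - u2)
    du3 = RATE * (u2 - u3)
    u2 += dt * du2                       # forward Euler, small step
    u3 += dt * du3
    t += dt
    if t > 20.0:                         # compare after transients decay
        worst = max(worst, abs(u3 - u1(t - TAU)))
print(worst)  # small: u3 tracks the delayed input
```

Each stage is a low-pass filter with transfer function 1/(1 + s·TAU/2), so the two-stage chain matches the delay e^(−s·TAU) to first order in s; the residual error here is a few percent, small for slowly varying signals.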
For parameter values near critical, the delay differential equation is then approximated by a finite-dimensional system of ordinary differential equations in these new variables. In terms of a complex amplitude s(t) and its complex conjugate s̄(t), the center manifold expresses the new variables as a smooth function of s and s̄, and the evolution on the center manifold takes the classic Hopf normal form: a linear growth term in s plus a stabilising cubic term in s|s|². This evolution shows that the origin is linearly unstable for parameter values beyond critical, but the cubic nonlinearity then stabilises nearby limit cycles, as in a classic Hopf bifurcation.
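A minimal numerical sketch of this behavior uses a Hopf normal form with illustrative coefficients (not taken from the source), ds/dt = (α + iω)s − s|s|²: for α > 0 the origin repels, and trajectories settle onto a limit cycle of amplitude √α:

```python
# Hopf normal form with illustrative coefficients (an assumption):
#   ds/dt = (alpha + 1j*omega)*s - s*|s|**2
# For alpha > 0 the origin is linearly unstable and the cubic term
# saturates growth at the limit-cycle amplitude sqrt(alpha).
alpha, omega = 0.25, 1.0

def f(s):
    return (alpha + 1j * omega) * s - s * abs(s) ** 2

s = 0.1 + 0.0j       # small initial amplitude near the unstable origin
dt = 0.01
for _ in range(8000):            # integrate t over [0, 80] with RK4
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(s))  # ~ 0.5 = sqrt(alpha): the stable limit-cycle radius
```

The amplitude r = |s| obeys dr/dt = αr − r³, so r grows away from 0 and is attracted to r = √α, which is exactly the stabilised limit cycle described above.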