Empirical dynamic modeling (EDM) is a framework for the analysis and prediction of nonlinear dynamical systems. Applications include population dynamics, [1] [2] [3] [4] [5] [6] ecosystem services, [7] medicine, [8] neuroscience, [9] [10] [11] dynamical systems, [12] [13] [14] geophysics, [15] [16] [17] and human-computer interaction. [18] EDM was originally developed by Robert May and George Sugihara. It can be considered a methodology for data modeling, predictive analytics, dynamical system analysis, machine learning and time series analysis.
Mathematical models have tremendous power to describe observations of real-world systems. They are routinely used to test hypotheses, explain mechanisms, and predict future outcomes. However, real-world systems are often nonlinear and multidimensional, which in some instances renders explicit equation-based modeling problematic. Empirical models, which infer patterns and associations from the data instead of using hypothesized equations, represent a natural and flexible framework for modeling complex dynamics.
Donald DeAngelis and Simeon Yurek illustrated that canonical statistical models are ill-posed when applied to nonlinear dynamical systems. [19] A hallmark of nonlinear dynamics is state dependence: the transition from one state to the next depends on the current state, which is itself related to previous states. EDM operates in this space, the multidimensional state-space of the system dynamics, rather than on a one-dimensional observational time series. EDM does not presume relationships among states, for example a functional dependence, but projects future states from localised, neighboring states. EDM is thus a state-space, nearest-neighbors paradigm in which system dynamics are inferred from states derived from observational time series. This provides a model-free representation of the system that naturally encompasses nonlinear dynamics.
A cornerstone of EDM is the recognition that time series observed from a dynamical system can be transformed into higher-dimensional state-spaces by time-delay embedding, following Takens's theorem. The resulting state-space models are evaluated on their in-sample fidelity to observations, conventionally measured by the Pearson correlation between predictions and observations.
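A time-delay embedding can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the embedding dimension E and lag tau are illustrative choices.

```python
# Time-delay embedding (Takens's theorem): lift a scalar series into an
# E-dimensional state-space using lagged coordinates
# x(t) = (y(t), y(t - tau), ..., y(t - (E-1)*tau)).

def delay_embed(series, E=3, tau=1):
    """Return the list of E-dimensional lagged state vectors."""
    start = (E - 1) * tau  # first index with a full set of lags
    return [tuple(series[t - i * tau] for i in range(E))
            for t in range(start, len(series))]

y = [0.1, 0.4, 0.9, 0.3, 0.8, 0.5, 0.2]
vectors = delay_embed(y, E=3, tau=1)
print(vectors[0])  # (0.9, 0.4, 0.1)
```

Each vector is one point in the reconstructed state-space; the EDM algorithms below operate on these points rather than on the raw series.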
EDM is continuing to evolve. As of 2022, the main algorithms are Simplex projection, [20] Sequential locally weighted global linear maps (S-Map) projection, [21] Multivariate embedding in Simplex or S-Map, [1] Convergent cross mapping (CCM), [22] and Multiview Embedding, [23] described below.
Parameter | Description |
---|---|
E | embedding dimension |
k | number of nearest neighbors |
Tp | prediction interval |
y | observed time series |
x(t) | vector of lagged observations |
θ | S-Map localization parameter |
X | lagged embedding vectors |
‖v‖ | norm of vector v |
N | list of nearest neighbors |
Nearest neighbors are found by ranking the library vectors by their distance to the prediction origin: the k vectors x(ti) with the smallest Euclidean distance ‖x(t) − x(ti)‖ form the neighbor list N.
Simplex projection [20] [24] [25] [26] is a nearest-neighbor projection. It locates the k nearest neighbors to the location in the state-space from which a prediction is desired. To minimize the number of free parameters, k is typically set to E + 1, defining an E-dimensional simplex in the state-space. The prediction is computed as the weighted average of the simplex vertices projected Tp points ahead. Each neighbor is weighted according to its distance to the prediction origin vector in the state-space, with closer neighbors weighted more heavily.
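A minimal sketch of Simplex projection (toy data; the function name, the 2-D library, and the k = E + 1 = 3 neighbor choice are illustrative assumptions, not the pyEDM API):

```python
import math

def simplex_predict(library, targets, query, k):
    """Predict the future of `query` as a weighted average of the futures
    of its k nearest neighbors in the library (Euclidean metric).
    Weights decay exponentially with distance relative to the nearest neighbor."""
    nn = sorted(range(len(library)),
                key=lambda i: math.dist(query, library[i]))[:k]
    d_min = max(math.dist(query, library[nn[0]]), 1e-12)  # guard divide-by-zero
    w = [math.exp(-math.dist(query, library[i]) / d_min) for i in nn]
    return sum(wi * targets[i] for wi, i in zip(w, nn)) / sum(w)

# Toy 2-D state vectors and the observed value Tp steps ahead of each.
lib = [(0.0, 0.1), (1.0, 0.9), (0.2, 0.15), (0.9, 1.0)]
fut = [0.2, 1.1, 0.25, 1.05]
print(simplex_predict(lib, fut, (0.1, 0.12), k=3))
```

The two distant library points receive near-zero weight, so the prediction lands close to the futures of the two nearby states.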
S-Map [21] extends the state-space prediction in Simplex from an average of the k nearest neighbors to a linear regression fit to all neighbors, but localised with an exponential decay kernel. The exponential localisation function is w = exp(−θd/D), where d is the neighbor distance and D the mean neighbor distance. In this way, depending on the value of θ, neighbors close to the prediction origin point have a higher weight than those further from it, such that a local linear approximation to the nonlinear system is reasonable. This localisation ability allows one to identify an optimal local scale, in effect quantifying the degree of state dependence, and hence the nonlinearity of the system.
Another feature of S-Map is that for a properly fit model, the regression coefficients between variables have been shown to approximate the gradient (directional derivative) of variables along the manifold. [27] These Jacobians represent the time-varying interaction strengths between system variables.
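The S-Map idea can be sketched as a weighted least-squares fit, here reduced to a single predictor for brevity (a toy illustration under stated assumptions, not the full multivariate S-Map):

```python
import math

def smap_predict(xs, ys, query, theta):
    """Fit y = a + b*x over ALL library points with weights exp(-theta*d/D),
    where d is the distance to the query and D the mean distance.
    theta = 0 gives a global linear model; larger theta localizes the fit."""
    d = [abs(query - x) for x in xs]
    D = max(sum(d) / len(d), 1e-12)
    w = [math.exp(-theta * di / D) for di in d]
    # Weighted normal equations for the 1-D regression.
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a + b * query

# On perfectly linear data, theta = 0 recovers the line exactly.
print(smap_predict([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], 1.5, 0.0))
```

In the full method the fit is over E-dimensional embedding vectors, and the fitted coefficients play the role of the Jacobian elements described above.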
Multivariate Embedding [1] [12] [28] recognizes that time-delay embeddings are not the only valid state-space construction. In Simplex and S-Map one can generate a state-space from multiple observational time series, from time-delay embeddings of a single observational time series, or from a combination of both.
Convergent cross mapping (CCM) [22] leverages a corollary to the generalized Takens theorem [12]: it should be possible to cross predict, or cross map, between variables observed from the same system. Suppose that in some dynamical system involving variables X and Y, X causes Y. Since X and Y belong to the same dynamical system, their reconstructions via embeddings, M_X and M_Y, also map to the same system.
The causal variable X leaves a signature on the affected variable Y, and consequently, the reconstructed states based on Y can be used to cross predict values of X. CCM leverages this property to infer causality by predicting X using the M_Y library of points (or vice versa for the other direction of causality), while assessing improvements in cross-map predictability as larger and larger random samplings of M_Y are used. If the prediction skill of X increases and saturates as the entire M_Y is used, this provides evidence that X is causally influencing Y.
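The procedure can be sketched end to end on a toy coupled logistic system in which X drives Y (parameter values follow the style of Sugihara et al.'s examples, but this code and its helper names are an illustrative assumption, not the published implementation):

```python
import math

def coupled_series(n, burn=100):
    """Coupled logistic maps: X is autonomous and forces Y (X causes Y)."""
    x, y = 0.4, 0.2
    xs, ys = [], []
    for i in range(n + burn):
        x, y = x * (3.8 - 3.8 * x), y * (3.5 - 3.5 * y - 0.1 * x)
        if i >= burn:  # discard the transient
            xs.append(x)
            ys.append(y)
    return xs, ys

def embed(s, E=2, tau=1):
    return [tuple(s[t - i * tau] for i in range(E))
            for t in range((E - 1) * tau, len(s))]

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

def cross_map_skill(source, manifold, lib_size, test_start=400, k=3):
    """Estimate `source` from neighbors on the affected variable's manifold,
    using a library of the first `lib_size` points; return Pearson skill."""
    lib = manifold[:lib_size]
    est, true = [], []
    for t in range(test_start, len(manifold)):
        q = manifold[t]
        nn = sorted(range(lib_size), key=lambda i: math.dist(q, lib[i]))[:k]
        d0 = max(math.dist(q, lib[nn[0]]), 1e-12)
        w = [math.exp(-math.dist(q, lib[i]) / d0) for i in nn]
        est.append(sum(wi * source[i] for wi, i in zip(w, nn)) / sum(w))
        true.append(source[t])
    return correlation(est, true)

xs, ys = coupled_series(500)
My = embed(ys)        # shadow manifold of the affected variable Y
x_al = xs[1:]         # align X with the E=2, tau=1 embedding offset
small = cross_map_skill(x_al, My, lib_size=50)
large = cross_map_skill(x_al, My, lib_size=400)
print(small, large)   # skill is expected to grow with library size
```

The convergence of skill with library size, rather than the skill value alone, is the CCM evidence for causation.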
Multiview Embedding [23] is a dimensionality reduction technique in which a large number of candidate state-space vectors are combinatorially assessed toward maximal model predictability.
Extensions to EDM techniques continue to be developed. The following related concepts provide broader context.
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state. A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector δZ₀ diverge at a rate given by |δZ(t)| ≈ e^(λt) |δZ₀|, where λ is the Lyapunov exponent.
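For a one-dimensional map the exponent is the trajectory average of ln|f′(x)|; for the logistic map at r = 4 the analytic value is ln 2 ≈ 0.693, which a short numerical estimate recovers:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    ln|f'(x)| = ln|r*(1 - 2x)| along a trajectory."""
    x = x0
    for _ in range(burn):       # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

lam = lyapunov_logistic()
print(lam)  # close to ln 2 ~ 0.693
```

A positive value signals exponential divergence of nearby trajectories, the quantitative face of chaos.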
Nonlinear dimensionality reduction, also known as manifold learning, is any of various related techniques that aim to project high-dimensional data, potentially existing across non-linear manifolds which cannot be adequately captured by linear decomposition methods, onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.
Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable. More strongly, if x_e is Lyapunov stable and all solutions that start out near x_e converge to x_e, then x_e is said to be asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment i in time, the times j at which the state of a dynamical system returns to roughly the previous state, i.e., when the phase space trajectory visits roughly the same area in the phase space as at time j. In other words, it is a plot of R(i, j) = Θ(ε − ‖x(i) − x(j)‖), where x(i) is the trajectory, ε a threshold distance, and Θ the Heaviside step function.
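The binary recurrence matrix is straightforward to compute directly from the Heaviside form above (toy trajectory and threshold chosen for illustration):

```python
import math

def recurrence_matrix(states, eps):
    """R[i][j] = 1 when states i and j lie within eps of each other."""
    n = len(states)
    return [[1 if math.dist(states[i], states[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

# State 2 recurs near state 0; state 3 is far from everything else.
traj = [(0.0, 0.0), (1.0, 0.0), (0.05, 0.02), (2.0, 2.0)]
R = recurrence_matrix(traj, eps=0.1)
print(R[0][2])  # 1
```

The diagonal is always 1 (every state recurs with itself), and off-diagonal structure reveals periodicity, drift, or chaos.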
In the study of dynamical systems, a delay embedding theorem gives the conditions under which a chaotic dynamical system can be reconstructed from a sequence of observations of the state of that system. The reconstruction preserves the properties of the dynamical system that do not change under smooth coordinate changes, but it does not preserve the geometric shape of structures in phase space.
The competitive Lotka–Volterra equations are a simple model of the population dynamics of species competing for some common resource. They can be further generalised to the generalized Lotka–Volterra equation to include trophic interactions.
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which, for example, a hurricane occurs where it otherwise would not have. The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
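Sensitive dependence is easy to demonstrate numerically. This sketch uses simple forward-Euler integration with the classic chaotic parameters (σ = 10, ρ = 28, β = 8/3); the step size and perturbation are illustrative choices:

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two initial conditions differing by 1e-6 in one coordinate.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-6)
for _ in range(3000):  # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
sep = math.dist(a, b)
print(sep)  # the tiny perturbation has grown to attractor scale
```

A higher-order integrator (e.g. RK4) would be preferred for quantitative work, but Euler suffices to show the trajectories decorrelating.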
In time series analysis, singular spectrum analysis (SSA) is a nonparametric spectral estimation method. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. Its roots lie in the classical Karhunen (1946)–Loève spectral decomposition of time series and random fields and in the Mañé (1981)–Takens (1981) embedding theorem. SSA can be an aid in the decomposition of time series into a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum of eigenvalues in a singular value decomposition of a covariance matrix, and not directly to a frequency domain decomposition.
A coupled map lattice (CML) is a dynamical system that models the behavior of nonlinear systems. They are predominantly used to qualitatively study the chaotic dynamics of spatially extended systems. This includes the dynamics of spatiotemporal chaos where the number of effective degrees of freedom diverges as the size of the system increases.
The dynamical systems approach to neuroscience is a branch of mathematical biology that utilizes nonlinear dynamics to understand and model the nervous system and its functions. In a dynamical system, all possible states are expressed by a phase space. Such systems can experience bifurcation as a function of its bifurcation parameters and often exhibit chaos. Dynamical neuroscience describes the non-linear dynamics at many levels of the brain from single neural cells to cognitive processes, sleep states and the behavior of neurons in large-scale neuronal simulation.
Convergent cross mapping (CCM) is a statistical test for a cause-and-effect relationship between two variables that, like the Granger causality test, seeks to resolve the problem that correlation does not imply causation. While Granger causality is best suited for purely stochastic systems where the influences of the causal variables are separable, CCM is based on the theory of dynamical systems and can be applied to systems where causal variables have synergistic effects. As such, CCM is specifically aimed to identify linkage between variables that can appear uncorrelated with each other.
t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two or three-dimensional map. It is based on Stochastic Neighbor Embedding originally developed by Geoffrey Hinton and Sam Roweis, where Laurens van der Maaten and Hinton proposed the t-distributed variant. It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.
System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.
Mean-field particle methods are a broad class of interacting-type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model.
In dynamical systems, a spectral submanifold (SSM) is the unique smoothest invariant manifold serving as the nonlinear extension of a spectral subspace of a linear dynamical system under the addition of nonlinearities. SSM theory provides conditions for when invariant properties of eigenspaces of a linear dynamical system can be extended to a nonlinear system, and therefore motivates the use of SSMs in nonlinear dimensionality reduction.
Heteroclinic channels are ensembles of trajectories that can connect saddle equilibrium points in phase space. Dynamical systems and their associated phase spaces can be used to describe natural phenomena in mathematical terms; heteroclinic channels, and the cycles that they produce, are features in phase space that can be designed to occupy specific locations in that space. Heteroclinic channels move trajectories from one equilibrium point to another. More formally, a heteroclinic channel is a region in phase space in which nearby trajectories are drawn closer and closer to one unique limiting trajectory, the heteroclinic orbit. Equilibria connected by heteroclinic trajectories form heteroclinic cycles and cycles can be connected to form heteroclinic networks. Heteroclinic cycles and networks naturally appear in a number of applications, such as fluid dynamics, population dynamics, and neural dynamics. In addition, dynamical systems are often used as methods for robotic control. In particular, for robotic control, the equilibrium points can correspond to robotic states, and the heteroclinic channels can provide smooth methods for switching from state to state.