Equation-free modeling

Equation-free modeling is a method for multiscale computation and computer-aided analysis. It is designed for a class of complicated systems in which one observes evolution at a macroscopic, coarse scale of interest, while accurate models are only given at a finely detailed, microscopic, level of description. The framework empowers one to perform macroscopic computational tasks (over large space-time scales) using only appropriately initialized microscopic simulation on short time and small length scales. The methodology eliminates the derivation of explicit macroscopic evolution equations when these equations conceptually exist but are not available in closed form; hence the term equation-free. [1]

Introduction

In a wide range of chemical, physical and biological systems, coherent macroscopic behavior emerges from the interactions of microscopic entities (molecules, cells, grains, animals in a population, agents) among themselves and with their environment. Sometimes, remarkably, a coarse-scale differential equation model (such as the Navier–Stokes equations for fluid flow, or a reaction–diffusion system) can accurately describe the macroscopic behavior. Such macroscale modeling makes use of general conservation principles (for atoms, particles, mass, momentum, energy), closed into a well-posed system through phenomenological constitutive equations or equations of state. However, one increasingly encounters complex systems for which only microscopic, fine-scale models are known. In such cases, although we observe the emergence of coarse-scale, macroscopic behavior, modeling it through explicit closure relations may be impossible or impractical. Non-Newtonian fluid flow, chemotaxis, porous-media transport, epidemiology, brain modeling and neuronal systems are typical examples. Equation-free modeling aims to use such microscale models to predict coarse, macroscale emergent phenomena.

Performing coarse-scale computational tasks directly with fine-scale models is often infeasible: direct simulation over the full space-time domain of interest is frequently computationally prohibitive. Moreover, some modeling tasks, such as numerical bifurcation analysis, are often impossible to perform on the fine-scale model directly: a coarse-scale steady state need not correspond to a steady state of the fine-scale system, since individual molecules or particles do not stop moving when the gas density or pressure becomes stationary. Equation-free modeling circumvents such problems by using short bursts of appropriately initialized fine-scale simulation and, in spatially extended problems, by simulating only on small, well-separated patches of space. [2] [3] A free Matlab/Octave toolbox empowers people to use these equation-free methods. [4]

The coarse time-stepper

Dynamic problems invoke the coarse time-stepper. In essence, short bursts of computational experiments with the fine-scale simulator estimate local time derivatives of the coarse variables. Given an initial condition for the coarse variables at a given time, the coarse time-stepper involves four steps: lifting, which creates microscale initial conditions consistent with the coarse state; a short burst of microscale simulation; restriction, which maps the microscale state back to the coarse variables; and estimation of the coarse time derivative from the restricted states.

Repeating these steps over multiple time steps simulates the system into the macro-future. If the microscale model is stochastic, then an ensemble of microscale simulations may be needed to obtain a sufficiently good extrapolation over the time step. Such a coarse time-stepper may be used in many algorithms of traditional continuum numerical analysis, such as numerical bifurcation analysis, optimization, control, and even accelerated coarse-scale simulation. For deterministic systems, the Matlab/Octave toolbox provides higher-order accurate time-steppers: [4] second- and fourth-order Runge–Kutta schemes, and a general interface scheme.
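
As an illustration only, a minimal Python sketch of one such coarse step, assuming user-supplied lift, micro_burst and restrict routines (all names here are hypothetical placeholders, not functions of the toolbox [4]), might be composed as follows:

```python
import numpy as np

def coarse_time_step(U, lift, micro_burst, restrict, delta):
    """One coarse time step: lift -> short microscale burst -> restrict ->
    estimate the coarse time derivative.

    U           -- current macrostate (array of coarse variables)
    lift        -- maps a macrostate U to a consistent microscale state u
    micro_burst -- advances a microscale state u over a short time delta
    restrict    -- maps a microscale state u back to coarse variables
    delta       -- duration of the short microscale burst
    """
    u0 = lift(U)                   # 1. lifting
    u1 = micro_burst(u0, delta)    # 2. short microscale simulation
    U1 = restrict(u1)              # 3. restriction
    dUdt = (np.asarray(U1) - np.asarray(U)) / delta   # 4. derivative estimate
    return U1, dUdt
```

For a stochastic microscale model, one would average the restricted states over an ensemble of bursts before forming the derivative estimate.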

In traditional modeling, algebraic formulae determine the time derivatives of the coarse model; in the equation-free approach, the macroscale derivative is instead estimated by the inner microscale simulator, in effect performing closure on demand. The name equation-free arises partly by analogy with matrix-free numerical linear algebra; [5] it emphasizes that macro-level equations are never constructed explicitly in closed form.

Restriction

The restriction operator often follows directly from the specific choice of the macroscale variables. For example, when the microscale model evolves an ensemble of many particles, the restriction typically computes the first few moments of the particle distribution (the density, momentum, and energy).
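
For instance, a minimal sketch of such a restriction for a one-dimensional particle ensemble, computing cell-averaged density, momentum and kinetic-energy fields on a given grid (the grid and the particle data below are assumptions for illustration), might be:

```python
import numpy as np

def restrict_particles(x, v, edges):
    """Restrict a particle ensemble (positions x, velocities v) to coarse fields:
    number density, momentum density and kinetic-energy density per grid cell."""
    counts, _ = np.histogram(x, bins=edges)
    momentum, _ = np.histogram(x, bins=edges, weights=v)
    energy, _ = np.histogram(x, bins=edges, weights=0.5 * v**2)
    dx = np.diff(edges)
    return counts / dx, momentum / dx, energy / dx

# example: 10^5 particles restricted to 20 cells on [0, 1]
rng = np.random.default_rng(0)
x = rng.random(100_000)
v = rng.normal(size=100_000)
density, momentum_density, energy_density = restrict_particles(
    x, v, np.linspace(0.0, 1.0, 21))
```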

Lifting

The lifting operator is usually much more involved. For example, consider a particle model: we need to define a mapping from a few low-order moments of the particle distribution to initial conditions for each particle. The assumption that an evolution law closes in these low-order, coarse moments implies that the detailed microscale configurations are functionals of the moments (a relationship sometimes referred to as slaving [6]). We assume this relationship is established, or emerges, on time scales that are fast compared to the overall system evolution (see slow manifold theory and applications [7]). Unfortunately, these slaving relations are algebraically unknown (otherwise the coarse evolution law itself would be known).

Initializing the unknown microscale modes randomly introduces a lifting error: we rely on the separation of macro and micro time scales to ensure a quick relaxation to functionals of the coarse macrostates (healing). A preparatory step may be required, possibly involving microscale simulations constrained to keep the macrostates fixed. [8] When the system has a unique fixed point for the unknown microscale details conditioned upon the coarse macrostates, a constrained runs algorithm may perform this preparatory step using only the microscale time-stepper. [9]
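
A minimal sketch of this idea, for a system with one macroscale variable U and one fast microscale variable v: the unknown v is guessed arbitrarily and then healed by a short constrained run in which only the fast dynamics evolve while U is held fixed. The fast equation used here is an assumption for illustration, and this is a simplification, not the constrained-runs algorithm of [9] itself.

```python
import numpy as np

def lift_with_healing(U, fast_rhs, v_guess=0.0, dt=1e-3, n_heal=200):
    """Lift a macrostate U to a full state (U, v).  The unknown microscale
    variable v is guessed arbitrarily (a lifting error), then 'healed' by a
    short constrained run: only dv/dt = fast_rhs(U, v) is integrated while the
    macrostate U is held fixed, so v relaxes towards its slaved value."""
    v = v_guess
    for _ in range(n_heal):
        v += dt * fast_rhs(U, v)      # forward Euler on the fast variable only
    return np.array([U, v])

# example with an assumed fast relaxation dv/dt = 100*(sin(U) - v)
state = lift_with_healing(0.5, lambda U, v: 100.0 * (np.sin(U) - v))
```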

An illustrative example

A toy problem illustrates the basic concepts. For example, consider a differential equation system for two variables: a presumed macroscale variable, denoted by the capital U, coupled to a fast microscale variable, denoted by a lowercase letter.

This classification means that we assume a coarse model of the form dU/dt = F(U) exists, although we do not necessarily know what it is. The lifting from any given macrostate U to the full pair of variables is defined arbitrarily. A simulation using this lifting and the coarse time-stepper is shown in the figure.

Figure: Equation-free coarse time-stepper applied to the illustrative example differential equation system, using δt = 0.1 and U(0) = −1 (EquationFreeProjectiveIntegration.png).

The solution of the differential equation rapidly moves to the slow manifold for any initial data. The coarse time-stepper solution would agree better with the full solution if the factor of 100 setting the time-scale separation were increased. The left graph shows the lifted solution (blue solid line). At each coarse step, the solution is restricted and then lifted again, which here amounts simply to resetting the microscale variable. The slow manifold is shown as a red line. The right plot shows the time derivative of the restricted solution as a function of time (blue curve), as well as the coarse time derivative observed from a full simulation (red curve).
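
The equations of the toy system are not reproduced above, so the sketch below uses a representative stand-in with the same structure (a hypothetical slow variable U slaved to by a fast variable v through a large rate constant) to show how lifting, a short burst of full simulation, and restriction combine into one coarse step; none of the specific right-hand sides are taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

RATE = 100.0   # large rate constant: v is rapidly slaved to U (assumed system)

def full_rhs(t, y):
    """Hypothetical fast-slow pair: slow variable U, fast variable v."""
    U, v = y
    return [-U + v,                    # slow dynamics of U (assumed)
            RATE * (np.tanh(U) - v)]   # fast relaxation of v onto the slow manifold

def lift(U, v0=0.0):
    """Arbitrary lifting: pair the macrostate U with a guessed microstate v0."""
    return [U, v0]

def restrict(y):
    """Restriction: keep only the macroscale variable U."""
    return y[0]

def coarse_step(U, delta=0.1):
    """Lift, run the full ('microscale') model for a short burst of length delta,
    restrict, and estimate the coarse time derivative."""
    burst = solve_ivp(full_rhs, (0.0, delta), lift(U), rtol=1e-8, atol=1e-10)
    U_new = restrict(burst.y[:, -1])
    return U_new, (U_new - U) / delta

U1, dUdt = coarse_step(-1.0)   # one coarse step from U(0) = -1, as in the figure
```

Increasing RATE widens the time-scale separation, and the burst-based derivative estimate then tracks the slow coarse dynamics more closely.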

On application to concrete multiscale problems

The equation-free approach has been applied to many example problems, which illustrate the various ways to construct and assemble the algorithmic building blocks. Numerical analysis establishes the accuracy and efficiency of the approach; numerical analysis of other methods of this type, such as the heterogeneous multiscale method, has also been carried out. [10]

Applying the equation-free paradigm to a real problem requires considerable care, especially defining the lifting and restriction operators, and the appropriate outer solver.

Coarse bifurcation analysis

The recursive projection method [14] enables the computation of bifurcation diagrams using legacy simulation code. It also empowers the coarse time-stepper to perform equation-free bifurcation computations. Consider the coarse time stepper in its effective form

which includes explicit dependence upon one or more parameters. Bifurcation analysis computes equilibria or periodic orbits, their stability, and their dependence upon these parameters.

Compute a coarse equilibrium as a fixed point of the coarse time stepper

In the equation-free context, the recursive projection method is the outer solver of this equation, and the coarse time-stepper enables this method to be performed using fine scale dynamics.

Additionally, for problems where the macroscale has continuous symmetries, one can use a template based approach [15] to compute coarse self-similar or travelling wave solutions as fixed points of a coarse time-stepper that also encodes appropriate rescaling and/or shifting of space-time and/or solution. For example, self-similar diffusion solutions may be found as the probability density function of detailed molecular dynamics. [16]

An alternative to the recursive projection method is to use Newton–Krylov methods. [17]
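
A minimal sketch of this alternative, reusing the hypothetical fast-slow system assumed in the earlier sketch: SciPy's matrix-free Newton–Krylov solver plays the role of the outer solver, applied to the residual of the fixed-point equation for a coarse equilibrium, so the coarse Jacobian is only ever probed through calls to the coarse time-stepper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import newton_krylov

RATE, DELTA = 100.0, 0.1    # same hypothetical fast-slow system as sketched above

def coarse_map(U):
    """Coarse time-stepper Phi(U): lift (guess v = 0), run the full model for a
    short burst of length DELTA, then restrict back to the coarse variable."""
    rhs = lambda t, y: [-y[0] + y[1], RATE * (np.tanh(y[0]) - y[1])]
    burst = solve_ivp(rhs, (0.0, DELTA), [U[0], 0.0], rtol=1e-10, atol=1e-12)
    return np.array([burst.y[0, -1]])

# Outer solver: matrix-free Newton-Krylov on the residual G(U) = Phi(U) - U.
# rdiff is enlarged so finite differences are robust to the integrator's noise.
U_star = newton_krylov(lambda U: coarse_map(U) - U, np.array([0.5]),
                       f_tol=1e-8, rdiff=1e-4)
```

A bifurcation study would repeat such a solve while continuing in the model parameter(s), and examine the linearization of the coarse map about each computed equilibrium for stability.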

Coarse projective integration

The coarse time-stepper accelerates simulation over large macroscale times. In the scheme described above, let Δt denote the large macro-time-step, chosen on the time scale of the slow coarse dynamics, and let U_k denote the computed value of the coarse variable at time t_k. A short burst of microscale simulation, of duration δt and initialized so that the coarse variable equals U_k, yields the restricted value U(t_k + δt). Then we approximate U_{k+1} ≈ U(t_k + Δt) by extrapolating over the gap,

U_{k+1} = U(t_k + δt) + (Δt − δt) F_k ,

where, for example, simple linear extrapolation over the burst gives the coarse derivative estimate

F_k = [ U(t_k + δt) − U_k ] / δt .

This scheme is called coarse projective forward Euler, and is the simplest of its class.

The steps taken before the extrapolation reflect that we must allow the system to settle onto a quasi-equilibrium (from the microscale point of view), so that we can make a reliable extrapolation of the slow dynamics. Then the size of the projective integration step is limited by stability of the slow modes. [18]
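
As a sketch of this loop, with the burst-based estimate replaced by an assumed cheap surrogate (the function fake_burst below is not a microscale model, only a stand-in so the loop runs):

```python
import numpy as np

def projective_forward_euler(U0, burst_estimate, Dt, delta, n_steps):
    """Coarse projective forward Euler: each macro step runs a short burst
    (wrapped inside burst_estimate) that returns the healed coarse state and an
    estimate F of dU/dt; the state is then extrapolated over the remaining gap,
    U_{k+1} = U(t_k + delta) + (Dt - delta) * F."""
    U = np.asarray(U0, dtype=float)
    history = [U.copy()]
    for _ in range(n_steps):
        U_burst, F = burst_estimate(U, delta)   # short burst, then derivative estimate
        U = U_burst + (Dt - delta) * F          # projective step over the gap
        history.append(U.copy())
    return np.array(history)

# cheap stand-in for the burst (assumed slow dynamics dU/dt = -U + tanh(U))
def fake_burst(U, delta):
    F = -U + np.tanh(U)
    return U + delta * F, F

trajectory = projective_forward_euler([-1.0], fake_burst, Dt=0.5, delta=0.1, n_steps=40)
```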

Higher-order versions of coarse projective integration can be formed, analogous to Adams–Bashforth or Runge–Kutta schemes. [19] Higher-order schemes are more problematic for systems in which the microscale noise is still apparent on the macroscale time step. [20]

Patch dynamics

The spatial analogue of projective integration is the gap-tooth scheme. The idea is to perform simulations only on small patches of space, the teeth, separated by unsimulated space, the gaps. By appropriately coupling the small patch simulations we create a large-scale, coarse-level simulation of the spatially extended system. When the microscale simulator is computationally expensive, the gap-tooth scheme empowers efficient large-scale prediction. Furthermore, it does this without us ever having to identify an algebraic closure for a large-scale model. [21] [22] [23] The Matlab/Octave toolbox supports users implementing such simulations on a rectangular grid of patches in 1D or 2D space. [4]

The combination of the gap-tooth scheme with coarse projective integration is called patch dynamics.

Coupling boundary conditions

The key to the gap-tooth and patch schemes is the coupling of the small patches across unsimulated space. Surprisingly, the generic answer is to simply use classic Lagrange interpolation, whether in one dimension [23] or in multiple dimensions. [24] This coupling is related to that used in holistic discretization, and theoretical support is provided by the theory of slow manifolds. The interpolation provides value or flux boundary conditions, as required by the microscale simulator. High-order consistency between the macroscale gap-tooth/patch scheme and the microscale simulation is achieved through high-order Lagrange interpolation.

However, commonly the microscale is a noisy particle based or agent-based model. In such cases the relevant macroscale variables are averages such as mass and momentum density. Then one generally has to form averages over a core of each tooth/patch, and apply the coupling condition over a finite action region on the edges of each tooth/patch. The provisional recommendation is to make these regions as big as half the tooth/patch. [25] That is, for efficiency one makes the microscale tooth/patch as small as possible, but limited by the need to fit in action and core regions big enough to form accurate enough averages.
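
A minimal sketch of this coupling for a one-dimensional array of patches: the macroscale value of each patch is taken as its core average, and Lagrange interpolation (here in barycentric form) through the core averages of nearby patches supplies the edge values that the microscale simulator in each patch needs. The patch geometry and the choice of four interpolation neighbours are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def patch_edge_values(patch_centres, core_averages, patch_half_width, n_neighbours=4):
    """For each patch, interpolate the core averages of nearby patches (classic
    Lagrange interpolation, via a barycentric form) to obtain the coupling
    values at the patch's left and right edges."""
    patch_centres = np.asarray(patch_centres, dtype=float)
    core_averages = np.asarray(core_averages, dtype=float)
    edges = []
    for j, xj in enumerate(patch_centres):
        # pick the n_neighbours patches closest to patch j (including itself)
        nearest = np.argsort(np.abs(patch_centres - xj))[:n_neighbours]
        interp = BarycentricInterpolator(patch_centres[nearest], core_averages[nearest])
        edges.append((interp(xj - patch_half_width), interp(xj + patch_half_width)))
    return np.array(edges)

# example: 8 patches on [0, 2*pi), core averages of a smooth macroscale field
centres = np.linspace(0.0, 2 * np.pi, 8, endpoint=False) + np.pi / 8
edge_vals = patch_edge_values(centres, np.sin(centres), patch_half_width=0.05)
```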

Lifting

Patch dynamics is the combination of the gap-tooth scheme and coarse projective integration. Just as for normal projective integration, at the start of each burst of microscale simulation, one has to create an initial condition for each patch that is consistent with the local macroscale variables, and the macroscale gradients from neighboring interpolated patches. The same techniques suffice.

Open problems and future directions

Assumptions and choices about the macroscale evolution are crucial in the equation-free scheme. The key assumption is that the variables we choose for the macroscale coupling must effectively close on the chosen macroscale. If the chosen macroscale is too fine, then more coarse-scale variables may be needed: for example, in fluid dynamics we conventionally close the PDEs in terms of density, momentum and energy; yet in high-speed flow, especially at lower densities, we also need to resolve modes of molecular vibration, because they have not equilibrated on the time scales of the fluid flow. Qualitatively the same considerations apply to the equation-free approach.

For many systems appropriate coarse variables are more-or-less known by experience. However, in complex situations there is a need to automatically detect the appropriate coarse variables, and then use them in the macroscale evolution. This needs much more research utilizing techniques from data mining and manifold learning. In some problems it could be that as well as densities, the appropriate coarse variables also need to include spatial correlations, as in the so-called Brownian bugs. [26]

The macroscale may have to be treated as a stochastic system, but then the errors are likely to be much larger and the closures more uncertain.

References

  1. Kevrekidis, I.G.; Samaey, G. (2009), "Equation-free multiscale computation: algorithms and applications", Annual Review of Physical Chemistry, 60: 321–344, Bibcode:2009ARPC...60..321K, doi:10.1146/annurev.physchem.59.032607.093610, PMID   19335220
  2. Kevrekidis, I.G.; et al. (2003), "Equation-free, coarse-grained multiscale computation: enabling microscopic simulators to perform system-level tasks", Comm. Math. Sciences, 1 (4): 715–762, doi: 10.4310/CMS.2003.v1.n4.a5 , MR   2041455
  3. Kevrekidis, I.G. and Samaey, Giovanni (2009), "Equation-Free Multiscale Computation: Algorithms and Applications", Annu. Rev. Phys. Chem., 60: 321–44, Bibcode:2009ARPC...60..321K, doi:10.1146/annurev.physchem.59.032607.093610, PMID   19335220 {{citation}}: CS1 maint: multiple names: authors list (link)
  4. A.J. Roberts, John Maclean and J.E. Bunder (2019), Equation-Free function toolbox for Matlab/Octave
  5. C. T. Kelley. Iterative Methods for linear and nonlinear equations, SIAM, Philadelphia, 1995.
  6. H. Haken. Slaving principle revisited. Physica D, 97:95–103, 1996.
  7. A. J. Roberts. Effectively model dynamics, deterministic and stochastic, across multiple space and time scales. In J. G. Hartnett and P. C. Abbott, editors, Frontiers of Fundamental and Computational Physics: 10th International Symposium, volume 1246, pages 75–87. AIP, 2010.
  8. J. P. Ryckaert, G. Ciccotti, and H. Berendsen. Numerical integration of the Cartesian equation of motion of a system with constraints: molecular dynamics of N-alkanes. J. Comput. Phys., 23:237, 1977.
  9. C. W. Gear, T. J. Kaper, I. G. Kevrekidis, and A. Zagaris. Projecting to a Slow Manifold: Singularly Perturbed Systems and Legacy Codes. SIAM Journal on Applied Dynamical Systems 4(3):711–732, 2005.
  10. W. E and B. Engquist (2003). The heterogeneous multiscale methods Comm. Math. Sciences 1(1):87–132.
  11. W. R. Young, A. J. Roberts, and G. Stuhne. Reproductive pair correlations and the clustering of organisms. Nature, 412:328–331, 2001.
  12. R. R. Coifman et al. (2005). Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps Proceedings of the National Academy of Sciences 102(21):7426–7431.
  13. J. Li, P. G. Kevrekidis, C. W. Gear and I. G. Kevrekidis (2003). Deciding the nature of the coarse equation through microscopic simulations: the baby-bathwater scheme SIAM Multiscale Modeling and Simulation 1(3):391–407.
  14. G.M. Schroff and H.B. Keller (1993). Stabilization of unstable procedures: the recursive projection method SIAM Journal on Numerical Analysis 30: 1099–1120.
  15. C. Rowley and J. Marsden (2000). Reconstruction equations and the Karhunen–Loève expansion for systems with symmetry Physica D: Nonlinear Phenomena 142: 1–19.
  16. L. Chen, P. Debenedetti, C.W. Gear, and I.G. Kevrekidis (2004). From molecular dynamics to coarse self-similar solutions: a simple example using equation-free computation Journal of Non-Newtonian Fluid Mechanics 120: 215–223.
  17. C.T. Kelley (1995). Iterative Methods for linear and nonlinear equations SIAM, Philadelphia.
  18. C.W. Gear and I.G. Kevrekidis. Projective methods for stiff differential equations: problems with gaps in their eigenvalue spectrum . SIAM Journal on Scientific Computing 24(4):1091–1106, 2003.
  19. C.W. Gear; I.G. Kevrekidis and Theodoropoulos. Coarse integration/bifurcation analysis via microscopic simulators: micro-Galerkin methods Computers and Chemical Engineering 26: 941–963, 2002.
  20. X. Chen, A. J. Roberts, and I. G. Kevrekidis. Projective integration of expensive multiscale stochastic simulation. In W. McLean and A. J. Roberts, editors, Proceedings of the 15th Biennial Computational Techniques and Applications Conference, CTAC-2010, volume 52 of ANZIAM J., pages C661–C677, Aug. 2011. http://journal.austms.org.au/ojs/index.php/ANZIAMJ/article/view/3764
  21. Kevrekidis, I.G. et al. (2003). Equation-free, coarse-grained multiscale computation: enabling microscopic simulators to perform system-level tasks Comm. Math. Sciences 1(4): 715–762.
  22. Samaey, G.; Roose, D. and Kevrekidis, I.G. (2005). The gap-tooth scheme for homogenization problems SIAM Multiscale Modeling and Simulation 4: 278–306.
  23. Roberts, A.J. and Kevrekidis, I.G. (2007). General tooth boundary conditions for equation free modelling SIAM J. Scientific Computing 29(4): 1495–1510.
  24. A. J. Roberts, T. MacKenzie, and J. Bunder. A dynamical systems approach to simulating macroscale spatial dynamics in multiple dimensions. J. Engineering Mathematics, 86(1):175–207, 2014.
  25. Bunder, J. E., A. J. Roberts, and I. G. Kevrekidis (2017). “Good coupling for the multiscale patch scheme on systems with microscale heterogeneity”. In: J. Computational Physics 337, pp. 154–174.
  26. W. R. Young, A. J. Roberts, and G. Stuhne. Reproductive pair correlations and the clustering of organisms. Nature, 412:328–331, 2001.