Microscale and macroscale models

Microscale and related macroscale models of coexistence in Phalaris arundinacea, a globally distributed grass. Each color represents the spatial extent of a distinct genotype in a microscale model using stochastic cellular automata. Each curve on the graph represents the population level of a corresponding genotype in a macroscale differential equation model.

Microscale models form a broad class of computational models that simulate fine-scale details, in contrast with macroscale models, which amalgamate details into select categories. [2] [3] Microscale and macroscale models can be used together to understand different aspects of the same problem.


Applications

Macroscale models can include ordinary, partial, and integro-differential equations, where categories and flows between the categories determine the dynamics, or they may involve only algebraic equations. An abstract macroscale model may be combined with more detailed microscale models. Connections between the two scales are the subject of multiscale modeling. One mathematical technique for multiscale modeling of nanomaterials is based on the multiscale Green's function.
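As a minimal sketch of "categories and flows," consider a hypothetical two-compartment model in which material flows from one category to another at a fixed rate. The names (simulate, a0, b0, k) and parameter values are illustrative assumptions, not drawn from the source:

```python
# Minimal sketch of a macroscale "categories and flows" model:
# two compartments A and B with a flow from A to B at rate k.
# All names and parameter values here are illustrative only.

def simulate(a0=100.0, b0=0.0, k=0.3, dt=0.01, t_end=10.0):
    """Integrate dA/dt = -k*A, dB/dt = +k*A with the Euler method."""
    a, b, t = a0, b0, 0.0
    while t < t_end:
        flow = k * a * dt          # amount moving from A to B in one step
        a -= flow
        b += flow
        t += dt
    return a, b

print(simulate())  # A decays exponentially; B accumulates the outflow
```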

In contrast, microscale models can simulate a variety of details, such as individual bacteria in biofilms, [4] individual pedestrians in simulated neighborhoods, [5] individual light beams in ray-tracing imagery, [6] individual houses in cities, [7] fine-scale pores and fluid flow in batteries, [8] fine-scale compartments in meteorology, [9] fine-scale structures in particulate systems, [10] and other models where interactions among individuals and background conditions determine the dynamics.

Discrete-event models, individual-based models, and agent-based models are special cases of microscale models. However, microscale models do not require discrete individuals or discrete events. Fine details of topography, buildings, and trees can add microscale detail to meteorological simulations and can connect to what are called mesoscale models in that discipline. [9] Square-meter landscape resolution available from lidar images allows water flow across land surfaces, such as rivulets and water pockets, to be modeled with gigabyte-sized arrays of detail. [11] Models of neural networks may include individual neurons but may run in continuous time and thereby lack precise discrete events. [12]

History

Ideas for computational microscale models arose in the earliest days of computing and were applied to complex systems that could not accurately be described by standard mathematical forms.

Two themes emerged in the work of two founders of modern computation around the middle of the 20th century. First, pioneer Alan Turing used simplified macroscale models to understand the chemical basis of morphogenesis, but then proposed and used computational microscale models to understand the nonlinearities and other conditions that would arise in actual biological systems. [13] Second, pioneer John von Neumann created a cellular automaton to understand the possibilities for self-replication of arbitrarily complex entities, [14] which had a microscale representation in the cellular automaton but no simplified macroscale form. This second theme is taken to be part of agent-based models, where the entities ultimately can be artificially intelligent agents operating autonomously.

By the last quarter of the 20th century, computational capacity had grown to the point [15] [16] that tens of thousands of individuals or more could be included in microscale models, and sparse arrays could be applied to achieve high performance. [17] Continued increases in computing capacity allowed hundreds of millions of individuals to be simulated on ordinary computers with microscale models by the early 21st century.

The term "microscale model" arose later in the 20th century and now appears in the literature of many branches of physical and biological science. [5] [7] [8] [9] [18]

Example

Figure 1 represents a fundamental macroscale model: population growth in an unlimited environment. Its equation is relevant elsewhere, such as compounding growth of capital in economics or exponential decay in physics. It has one amalgamated variable, N, the number of individuals in the population at some time t. It has an amalgamated parameter r, the annual growth rate of the population, calculated as the difference between the annual birth rate b and the annual death rate d. Time can be measured in years, as shown here for illustration, or in any other suitable unit.
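In symbols, the model just described takes the standard form (notation consistent with Figure 1):

```latex
\frac{dN}{dt} = r\,N = (b - d)\,N,
\qquad
N(t) = N(0)\,e^{(b-d)\,t}.
```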

The macroscale model of Figure 1 amalgamates parameters and incorporates a number of simplifying approximations:

  1. the birth and death rates are constant;
  2. all individuals are identical, with no genetics or age structure;
  3. fractions of individuals are meaningful;
  4. parameters are constant and do not evolve;
  5. habitat is perfectly uniform;
  6. no immigration or emigration occurs; and
  7. randomness does not enter.

These approximations of the macroscale model can all be refined in analogous microscale models. Regarding the first approximation listed above, that birth and death rates are constant, the macroscale model of Figure 1 is exactly the mean of a large number of stochastic trials with the growth rate fluctuating randomly at each instant of time. [19] The microscale stochastic details are subsumed into a partial differential diffusion equation, and that equation is used to establish the equivalence.
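The equivalence can be illustrated numerically. Below is a minimal sketch, assuming Gaussian fluctuations of the growth rate at each Euler step and illustrative parameter values; it compares the ensemble mean of many stochastic trials with the deterministic solution:

```python
import math
import random

# Sketch: many stochastic trials with the growth rate fluctuating at each
# step; the ensemble mean is compared with the deterministic macroscale
# solution N(t) = N(0) * exp(r*t). Parameter values are illustrative.

r, sigma, n0, dt, steps, trials = 0.1, 0.05, 1000.0, 0.01, 1000, 2000
total = 0.0
for _ in range(trials):
    n = n0
    for _ in range(steps):
        # growth rate perturbed by a zero-mean random term in each step
        n += n * (r + sigma * random.gauss(0.0, 1.0)) * dt
    total += n

t = steps * dt
print("ensemble mean:", total / trials)   # close to the deterministic value
print("deterministic:", n0 * math.exp(r * t))
```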

To relax other assumptions, researchers have applied computational methods. Figure 2 is a sample computational microscale algorithm that corresponds to the macroscale model of Figure 1. When all individuals are identical and mutations in birth and death rates are disabled, the microscale dynamics closely parallel the macroscale dynamics (Figures 3A and 3B). The slight differences between the two models arise from stochastic variations in the microscale version that are not present in the deterministic macroscale model. These variations differ each time the algorithm is carried out, because the random number sequences are intentionally varied.
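Since the image of Figure 2 is not reproduced here, the following is a minimal sketch of the kind of algorithm the text describes: an individual-based Euler step with per-individual birth and death rates held in arrays named beta and delta, and a perturbation function using the square-root-of-12 factor mentioned in the caption of Figure 2. Everything beyond those details, including function names and parameter values, is an assumption:

```python
import random

# Minimal individual-based sketch in the spirit of Figure 2 (the exact
# procedure and names in the figure are not reproduced here). Each
# individual i has its own birth rate beta[i] and death rate delta[i];
# offspring inherit rates perturbed by a uniform random number whose
# standard deviation is sigma (sigma = 0 disables mutation).

def perturb(value, sigma):
    """Return value perturbed by a uniform deviate of std. dev. sigma.
    The sqrt(12) factor converts the interval width to a standard deviation."""
    width = sigma * (12 ** 0.5)
    return value + (random.random() - 0.5) * width

def step(beta, delta, dt, sigma):
    """Advance the population by one Euler time step dt."""
    births_b, births_d = [], []
    survivors_b, survivors_d = [], []
    for b, d in zip(beta, delta):
        if random.random() < d * dt:        # death event
            continue
        survivors_b.append(b)
        survivors_d.append(d)
        if random.random() < b * dt:        # birth event
            births_b.append(perturb(b, sigma))
            births_d.append(perturb(d, sigma))
    return survivors_b + births_b, survivors_d + births_d

# Example run: 1000 individuals, b = 0.2 and d = 0.1 per year, dt = 1/12 year.
beta, delta = [0.2] * 1000, [0.1] * 1000
for _ in range(120):                        # ten simulated years
    beta, delta = step(beta, delta, dt=1/12, sigma=0.0)
print("population after 10 years:", len(beta))   # near 1000 * e ~ 2718
```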

When not all individuals are identical, the microscale dynamics can differ significantly from the macroscale dynamics, simulating more realistic situations than can be modeled at the macroscale (Figures 3C and 3D). The microscale model does not explicitly incorporate the differential equation, though for large populations it simulates it closely. When individuals differ from one another, the system has a well-defined behavior but the differential equations governing that behavior are difficult to codify. The algorithm of Figure 2 is a basic example of what is called an equation-free model. [20]

When mutations are enabled in the microscale model, the population grows more rapidly than in the macroscale model (Figures 3C and 3D). Mutations in parameters allow some individuals to have higher birth rates and others to have lower death rates, and those individuals contribute proportionally more to the population. All else being equal, the average birth rate drifts to higher values and the average death rate drifts to lower values as the simulation progresses. This drift is tracked in the data structures named beta and delta of the microscale algorithm of Figure 2.
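Using the sketch above (with its assumed names), the drift can be observed by enabling mutations and tracking the averages of beta and delta:

```python
# Reuses step() from the sketch above. With mutations enabled (sigma > 0),
# the average birth rate drifts upward and the average death rate drifts
# downward over the run, as described in the text. Values are illustrative.
beta, delta = [0.2] * 1000, [0.1] * 1000
for _ in range(120):
    beta, delta = step(beta, delta, dt=1/12, sigma=0.002)
print("mean beta: ", sum(beta) / len(beta))    # drifts above 0.2
print("mean delta:", sum(delta) / len(delta))  # drifts below 0.1
```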

The algorithm of Figure 2 is a simplified microscale model using the Euler method. Other algorithms such as the Gillespie method [21] and the discrete event method [17] are also used in practice. Versions of the algorithm in practical use include efficiencies such as removing individuals from consideration once they die (to reduce memory requirements and increase speed) and scheduling stochastic events into the future (to provide a continuous time scale and to further improve speed). [17] Such approaches can be orders of magnitude faster.
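For contrast with the fixed-step Euler sketch above, here is a minimal Gillespie-style simulation of the same birth-death process. This is the standard direct method with illustrative parameters, not the optimized implementations of [17] or [21]:

```python
import random

# Minimal Gillespie (direct method) simulation of a birth-death process
# with population-level rates b*n (birth) and d*n (death). Parameter
# values are illustrative; compare with the fixed-step Euler sketch above.

def gillespie(n=1000, b=0.2, d=0.1, t_end=10.0):
    t = 0.0
    while n > 0:
        total_rate = (b + d) * n
        t += random.expovariate(total_rate)   # waiting time to next event
        if t > t_end:
            break
        if random.random() < b / (b + d):     # choose birth or death
            n += 1
        else:
            n -= 1
    return n

print("population at t_end:", gillespie())    # again near 1000 * e
```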

Complexity

The complexity of systems addressed by microscale models leads to complexity in the models themselves, and the specification of a microscale model can be tens or hundreds of times larger than its corresponding macroscale model. (The simplified example of Figure 2 has 25 times as many lines in its specification as does Figure 1.) Since bugs occur in computer software and cannot be completely removed by standard methods such as testing, [22] and since complex models often are neither published in detail nor peer-reviewed, their validity has been called into question. [23] Guidelines on best practices for microscale models exist, [24] but no papers on the topic claim a full resolution of the problem of validating complex models.

Future

Computing capacity is reaching levels where populations of entire countries or even the entire world are within the reach of microscale models, and improvements in census and travel data allow further improvements in parameterizing such models. Remote sensors from Earth-observing satellites and from ground-based observatories such as the National Ecological Observatory Network (NEON) provide large amounts of data for calibration. Potential applications range from predicting and reducing the spread of disease to helping understand the dynamics of the Earth.

Figures

Figure 1. Macroscale equations

Figure 1. One of the simplest of macroscale models: an ordinary differential equation describing continuous exponential growth, dN/dt = (b − d) N, where N(t) is the size of the population at time t and dN/dt is its rate of change through time in the single dimension t. N(0) is the initial population at t = 0, b is a birth rate per time unit, and d is a death rate per time unit. At the left is the differential form; at the right is the explicit solution in terms of standard mathematical functions, N(t) = N(0) e^((b − d) t), which follows in this case from the differential form. Almost all macroscale models are more complex than this example, in that they have multiple dimensions, lack explicit solutions in terms of standard mathematical functions, and must be understood from their differential forms.

Figure 2. Microscale algorithm corresponding to equations of Figure 1.

Figure 2. A basic algorithm applying the Euler method to an individual-based model. See text for discussion. The algorithm, represented in pseudocode, begins with the invocation of the main simulation procedure, which uses the data structures to carry out the simulation according to the numbered steps described at the right. It repeatedly invokes a perturbation function, which returns its parameter perturbed by a random number drawn from a uniform distribution with a standard deviation defined by the mutation variable. (The square root of 12 appears because the standard deviation of a uniform distribution includes that factor.) The random-number function in the algorithm is assumed to return a uniformly distributed random number between 0 and 1. The data are assumed to be reset to their initial values on each invocation of the main procedure.

Figure 3. Dynamics

Figure 3. Graphical comparison of the dynamics of macroscale and microscale simulations of Figures 1 and 2, respectively.

(A) The black curve plots the exact solution to the macroscale model of Figure 1, for given values of the birth rate b (per year), the death rate d (per year), and the initial population N(0).
(B) Red dots show the dynamics of the microscale model of Figure 2, shown at intervals of one year, using the same values of b, d, and N(0), and with mutations disabled.
(C) Blue dots show the dynamics of the microscale model with mutations enabled at a small standard deviation.
(D) Green dots show results with mutations at a larger standard deviation.


References

  1. Nelson, Michael France (2014). Experimental and simulation studies of the population genetics, drought tolerance, and vegetative growth of Phalaris arundinacea (Doctoral dissertation). University of Minnesota, USA.
  2. Gustafsson, Leif; Sternad, Mikael (2010). "Consistent micro, macro, and state-based population modelling". Mathematical Biosciences. 225 (2): 94–107. doi:10.1016/j.mbs.2010.02.003. PMID 20171974.
  3. Gustafsson, Leif; Sternad, Mikael (2007). "Bringing consistency to simulation of population models: Poisson Simulation as a bridge between micro and macro simulation" (PDF). Mathematical Biosciences. 209 (2): 361–385. doi:10.1016/j.mbs.2007.02.004. PMID 17412368.
  4. Dillon, Robert; Fauci, Lisa; Fogelson, Aaron; Gaver III, Donald (1996). "Modeling biofilm processes using the immersed boundary method". Journal of Computational Physics. 129 (1): 57–73. Bibcode:1996JCoPh.129...57D. doi:10.1006/jcph.1996.0233.
  5. Bandini, Stefania; Luca Federici, Mizar; Manzoni, Sara (2007). "SCA approach to microscale modelling of paradigmatic emergent crowd behaviors". SCSC: 1051–1056.
  6. Gartley, M. G.; Schott, J. R.; Brown, S. D. (2008). Shen, Sylvia S.; Lewis, Paul E. (eds.). "Micro-scale modeling of contaminant effects on surface optical properties". Optical Engineering Plus Applications, International Society for Optics and Photonics. Imaging Spectrometry XIII. 7086: 70860H. Bibcode:2008SPIE.7086E..0HG. doi:10.1117/12.796428. S2CID 11788408.
  7. O'Sullivan, David (2002). "Toward microscale spatial modeling of gentrification". Journal of Geographical Systems. 4 (3): 251–274. Bibcode:2002JGS.....4..251O. doi:10.1007/s101090200086. S2CID 6954911.
  8. Less, G. B.; Seo, J. H.; Han, S.; Sastry, A. M.; Zausch, J.; Latz, A.; Schmidt, S.; Wieser, C.; Kehrwald, D.; Fell, S. (2012). "Microscale modeling of Li-Ion batteries: Parameterization and validation". Journal of the Electrochemical Society. 159 (6): A697–A704. doi:10.1149/2.096205jes.
  9. Knutz, R.; Khatib, I.; Moussiopoulos, N. (2000). "Coupling of mesoscale and microscale models—an approach to simulate scale interaction". Environmental Modelling and Software. 15 (6–7): 597–602. doi:10.1016/s1364-8152(00)00055-4.
  10. Marchisio, Daniele L.; Fox, Rodney O. (2013). Computational models for polydisperse particulate and multiphase systems. Cambridge University Press.
  11. Barnes, Richard; Lehman, Clarence; Mulla, David (2014). "An efficient assignment of drainage direction over flat surfaces in raster digital elevation models". Computers and Geosciences. 62: 128–135. arXiv:1511.04433. Bibcode:2014CG.....62..128B. doi:10.1016/j.cageo.2013.01.009. S2CID 2155726.
  12. You, Yong; Nikolaou, Michael (1993). "Dynamic process modeling with recurrent neural networks". AIChE Journal. 39 (10): 1654–1667. doi:10.1002/aic.690391009.
  13. Turing, Alan M. (1952). "The chemical basis of morphogenesis". Philosophical Transactions of the Royal Society of London B: Biological Sciences. 237 (641): 37–72. Bibcode:1952RSPTB.237...37T. doi:10.1098/rstb.1952.0012.
  14. Burks, A. W. (1966). Theory of self-reproducing automata. University of Illinois Press.
  15. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits". Electronics. 38 (8).
  16. Berezin, A. A.; Ibrahim, A. M. (2004). "Reliability of Moore's Law: A measure of maintained quality". In G. J. McNulty (ed.). Quality, Reliability and Maintenance. John Wiley and Sons.
  17. Brown, Randy (1988). "Calendar Queues: A fast O(1) priority queue implementation for the simulation event set problem". Communications of the ACM. 31 (10): 1220–1227. doi:10.1145/63039.63045. S2CID 32086497.
  18. Frind, E. O.; Sudicky, E. A.; Schellenberg, S. L. (1987). "Microscale modelling in the study of plume evolution in heterogeneous media". Stochastic Hydrology and Hydraulics. 1 (4): 263–279. Bibcode:1987SHH.....1..263F. doi:10.1007/bf01543098. S2CID 198914966.
  19. May, Robert (1974). "Stability and complexity in model ecosystems". Monographs in Population Biology. Princeton University Press. 6: 114–117. PMID 4723571.
  20. Kevrekidis, Ioannis G.; Samaey, Giovanni (2009). "Equation-free multiscale computation: Algorithms and applications". Annual Review of Physical Chemistry. 60: 321–344. Bibcode:2009ARPC...60..321K. doi:10.1146/annurev.physchem.59.032607.093610. PMID 19335220.
  21. Gillespie, Daniel T. (1977). "Exact stochastic simulation of coupled chemical reactions". Journal of Physical Chemistry. 81 (25): 2340–2361. CiteSeerX 10.1.1.704.7634. doi:10.1021/j100540a008.
  22. Dijkstra, Edsger (1970). Notes on structured programming. T.H. Report 70-WSK-03, EWD249. Eindhoven, The Netherlands: Technological University.
  23. Saltelli, Andrea; Funtowicz, Silvio (2014). "When all models are wrong". Issues in Science and Technology. 30 (2): 79–85.
  24. Baxter, Susan M.; Day, Steven W.; Fetrow, Jacquelyn S.; Reisinger, Stephanie J. (2006). "Scientific software development is not an oxymoron". PLOS Computational Biology. 2 (9): 975–978. Bibcode:2006PLSCB...2...87B. doi:10.1371/journal.pcbi.0020087. PMC 1560404. PMID 16965174.