Hybrid stochastic simulations are a sub-class of stochastic simulations that combine an existing stochastic simulation with other stochastic simulations or algorithms. They are generally used in physics and physics-related research. The goal of a hybrid stochastic simulation varies with context, but it is typically either to improve accuracy or to reduce computational complexity. The first hybrid stochastic simulation was developed in 1985.[1]
The first hybrid stochastic simulation was developed by Simon Duane at the University of Illinois at Urbana-Champaign in 1985.[1] It combined the Langevin equation with microcanonical ensemble dynamics. Duane's hybrid stochastic simulation was based on the idea that the two algorithms complemented each other: the Langevin equation excelled at simulating long-time properties, but the noise it injects into the system leads to inefficient exploration of short-time properties,[2] while the microcanonical ensemble approach explored short-time properties efficiently but became less reliable for long-time properties. By combining the two methods, the weakness of each could be mitigated by the strength of the other. Duane's initial results with this hybrid stochastic simulation were positive: the model supported the existence of an abrupt finite-temperature transition in quantum chromodynamics, a controversial subject at the time.
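The interplay between the two components can be conveyed with a minimal sketch in Python, alternating deterministic constant-energy (leapfrog) blocks with Langevin-type momentum refreshes; the harmonic potential, step sizes and schedule below are illustrative assumptions, not a reconstruction of Duane's original implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D model: a particle in a harmonic potential U(x) = 0.5*k*x^2.
# The potential, step sizes and mixing schedule are assumptions made for the
# sketch; this is not a reconstruction of Duane's original implementation.
k, mass, kT = 1.0, 1.0, 1.0
force = lambda x: -k * x

def microcanonical_block(x, p, n_steps, dt):
    """Deterministic constant-energy (leapfrog) dynamics: good short-time detail."""
    p += 0.5 * dt * force(x)
    for _ in range(n_steps - 1):
        x += dt * p / mass
        p += dt * force(x)
    x += dt * p / mass
    p += 0.5 * dt * force(x)
    return x, p

def langevin_refresh(p, dt, gamma):
    """Langevin-type momentum update: friction plus thermal noise restores
    ergodic long-time (canonical) behaviour."""
    c = np.exp(-gamma * dt)
    return c * p + np.sqrt((1.0 - c**2) * mass * kT) * rng.standard_normal()

x, p = 1.0, 0.0
samples = []
for _ in range(5000):
    x, p = microcanonical_block(x, p, n_steps=20, dt=0.05)
    p = langevin_refresh(p, dt=1.0, gamma=1.0)
    samples.append(x)

# For the harmonic potential the canonical variance of x is kT/k = 1.
print("sample variance of x:", np.var(samples))
```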
Since then, many hybrid stochastic simulations have been developed, aiming to overcome deficiencies in the stochastic methods on which they are based.
The Dobramysl and Holcman mixed analytical–stochastic simulation model was published in 2018 by Ulrich Dobramysl and David Holcman, of the University of Cambridge and the University of Oxford respectively.[3][4] Rather than simulating an entire Brownian trajectory, it simulates only parts of it. This approach is particularly relevant when a Brownian particle evolves in an infinite space: trajectories are simulated explicitly only in the neighborhood of small targets, while elsewhere explicit analytical expressions are used to map the initial point to a distribution located on an imaginary surface around the targets. The method has many possible applications, including generating gradient cues in an open space and simulating the diffusion of molecules that must bind to cell receptors.
The algorithm avoids explicitly simulating long trajectories with large excursions, and thus circumvents the need for an arbitrary cutoff distance for the infinite domain. It consists of mapping the source position to a half-sphere containing the absorbing windows; inside the sphere, classical Brownian simulations are run until the particle is absorbed or exits through the sphere surface. The algorithm relies on an explicit mapping of exterior points onto a sphere around the targets, constructed as follows:
For a ball in 3D, one can map the source position onto the sphere S(R) using the first-passage probability of hitting the sphere before escaping to infinity. For a Brownian particle started at a point $x_0$ outside the sphere, the flux density of first arrivals at a point $x$ on $S(R)$ is the exterior Poisson kernel (harmonic measure) of the sphere,

$$p(x \mid x_0) = \frac{|x_0|^2 - R^2}{4\pi R\,|x - x_0|^3}, \qquad \text{with } |x| = R \text{ and } |x_0| > R.$$

The probability distribution of the hitting point is obtained by normalizing this flux by its integral over the sphere, which equals the probability $R/|x_0|$ of hitting the sphere at all.
The choice of the radius R is arbitrary as long as the sphere S(R) encloses all windows with a sufficient buffer. The radius R′ should be chosen so that frequent re-crossings are avoided. This algorithm can be used to simulate trajectories of Brownian particles at steady state close to a region of interest; note that there is no approximation involved.
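A minimal sketch of this hybrid scheme, under the assumption of a single absorbing spherical target at the origin, an outer sphere S(Rp) beyond which the analytical mapping is reapplied, and illustrative radii, time step and source position. It uses the exterior Poisson kernel above and the classical fact that a Brownian particle started at distance r > R reaches S(R) before escaping to infinity with probability R/r.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative geometry (assumptions, not taken from the published method):
# a single absorbing spherical target of radius `a` at the origin, the mapping
# sphere S(R), and an outer sphere S(Rp) beyond which the analytical mapping
# is reapplied; the source sits at distance r0 = 5 from the target.
a, R, Rp = 0.3, 1.0, 2.0
dt = 5e-4                        # Brownian time step (illustrative)
step = np.sqrt(2.0 * dt)         # step scale for a unit diffusion coefficient

def sample_hit_on_sphere(x0, R):
    """Sample the first hitting point on the sphere S(R) for a Brownian particle
    started at x0 outside it, using the exterior Poisson kernel (exact)."""
    r = np.linalg.norm(x0)
    s = 1.0 / (r + R) + rng.random() * (1.0 / (r - R) - 1.0 / (r + R))
    cos_t = (r**2 + R**2 - 1.0 / s**2) / (2.0 * r * R)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
    phi = 2.0 * np.pi * rng.random()
    e_z = x0 / r                                  # frame with z-axis along x0
    tmp = np.array([1.0, 0.0, 0.0]) if abs(e_z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e_x = np.cross(e_z, tmp); e_x /= np.linalg.norm(e_x)
    e_y = np.cross(e_z, e_x)
    return R * (cos_t * e_z + sin_t * (np.cos(phi) * e_x + np.sin(phi) * e_y))

def hits_target(x_source, max_steps=10_000_000):
    """Hybrid loop: analytical mapping outside S(R), explicit Brownian steps inside."""
    x = np.array(x_source, dtype=float)
    while True:
        r = np.linalg.norm(x)
        if rng.random() > R / r:                  # escapes to infinity (exact probability)
            return False
        x = sample_hit_on_sphere(x, R)            # analytical re-entry point on S(R)
        for _ in range(max_steps):                # explicit Brownian motion inside S(Rp)
            x = x + step * rng.standard_normal(3)
            if np.linalg.norm(x) <= a:            # absorbed by the target
                return True
            if np.linalg.norm(x) >= Rp:           # left the simulated region: re-map
                break

n = 500
hits = sum(hits_target([5.0, 0.0, 0.0]) for _ in range(n))
# Up to time-discretisation error, the hitting probability should approach a/r0.
print("empirical:", hits / n, " analytical a/r0:", a / 5.0)
```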
The Two-Regime Method for reaction–diffusion simulations was created by Mark Flegg, Jonathan Chapman and Radek Erban at the University of Oxford.[5] It couples molecular-based algorithms with compartment-based approaches, switching between them at suitable points in the computation to reduce computational cost. Molecular-based algorithms give highly accurate detail in localized regions of interest, while compartment-based models allow efficient simulation of large regions. The main purpose of the method is to increase both the speed and the accuracy of reaction–diffusion simulations, and to give the modeller more control over how regions of interest are characterized.
The Two-Regime Method divides the computational domain into two regions of interest: one region is event-based and primarily uses compartment-based approaches, while the other is time-based and relies on molecular-based simulation. The two regions are coupled as follows:
Molecules jump between compartments while in the compartment-based region, and have a chance of jumping across the interface into the molecular-based region, where their movement is then simulated using Brownian motion. Many possibilities exist for coupling the two regions, and the choice can vary with the purpose of the simulation.
This algorithm and those built upon it are used to study the conversion of species. They can also be coupled with the Fokker–Planck equation to simulate both populations and single trajectories using Brownian simulations.[6]
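The flavour of such a coupling can be illustrated with a simplified one-dimensional sketch, in which a compartment-based region exchanges molecules with a Brownian (molecular-based) region across an interface; the jump rates and placement rules used at the interface are naive assumptions for illustration, not the interface conditions derived for the published Two-Regime Method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 1D setup. The interface treatment below (jump rates and placement
# rules) is a naive assumption for the sketch, not the carefully derived coupling
# of the published Two-Regime Method.
D = 1.0                  # diffusion coefficient
L, I = 1.0, 0.5          # domain [0, L], interface at x = I
K = 10                   # compartments covering [0, I]
h = I / K                # compartment width
dt = 1e-4                # time step of the molecular (Brownian) regime
d = D / h**2             # diffusive jump rate between neighbouring compartments

compartments = np.zeros(K, dtype=int)
compartments[0] = 1000               # all molecules start near x = 0
positions = np.empty(0)              # Brownian particles in the molecular regime

for _ in range(20000):
    counts = compartments.copy()     # snapshot so each molecule jumps at most once per dt
    for i in range(K):
        n = counts[i]
        left = rng.binomial(n, d * dt) if i > 0 else 0       # reflecting wall at x = 0
        right = rng.binomial(n - left, d * dt)
        compartments[i] -= left + right
        if i > 0:
            compartments[i - 1] += left
        if i < K - 1:
            compartments[i + 1] += right
        else:
            # crossing the interface: place the molecule just past x = I
            positions = np.concatenate([positions, I + h * rng.random(right)])
    # Molecular regime: Brownian steps with a reflecting wall at x = L.
    positions = positions + np.sqrt(2.0 * D * dt) * rng.standard_normal(positions.size)
    positions = np.where(positions > L, 2.0 * L - positions, positions)
    back = positions < I             # particles re-entering the compartment region
    compartments[K - 1] += int(back.sum())
    positions = positions[~back]

print("molecules per compartment:", compartments)
print("molecules in the molecular regime:", positions.size)
```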
Hybrid stochastic simulations have been used to:
Brownian motion is the random motion of particles suspended in a medium.
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a sequence of random variables in a probability space, where the index of the sequence often has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.
In mathematics, a random walk, sometimes known as a drunkard's walk, is a random process that describes a path that consists of a succession of random steps on some mathematical space.
Dynamical simulation, in computational physics, is the simulation of systems of objects that are free to move, usually in three dimensions according to Newton's laws of dynamics, or approximations thereof. Dynamical simulation is used in computer animation to assist animators to produce realistic motion, in industrial design, and in video games. Body movement is calculated using time integration methods.
In science, Brownian noise, also known as Brown noise or red noise, is the type of signal noise produced by Brownian motion, hence its alternative name of random walk noise. The term "Brown noise" comes not from the color but from Robert Brown, who documented the erratic motion of multiple types of inanimate particles in water. The term "red noise" comes from the "white noise"/"white light" analogy; red noise is strong at longer wavelengths, similar to the red end of the visible spectrum.
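A small sketch, assuming unit-variance Gaussian white noise, produces Brownian noise as a running sum and checks that its power spectrum falls off roughly as 1/f².

```python
import numpy as np

rng = np.random.default_rng(3)

# Brownian (red) noise as the running sum of unit-variance white Gaussian noise.
n = 2**16
white = rng.standard_normal(n)
brown = np.cumsum(white)

# Its power concentrates at low frequencies: the spectrum falls off roughly as
# 1/f^2, so a log-log fit over the low-frequency band gives a slope near -2.
freqs = np.fft.rfftfreq(n)
power = np.abs(np.fft.rfft(brown)) ** 2
band = (freqs > 0) & (freqs < 0.05)
slope = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)[0]
print("fitted spectral slope (expected near -2):", slope)
```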
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
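As an illustration, the Euler–Maruyama scheme below integrates an Ornstein–Uhlenbeck process, a Langevin-type SDE; the drift and noise parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
#     dX_t = -theta * X_t dt + sigma * dW_t
# with illustrative parameter values.
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 5000, 2000

x = np.ones(n_paths)                 # every path starts at X_0 = 1
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x = x - theta * x * dt + sigma * dW

# The stationary variance of this process is sigma**2 / (2 * theta) = 0.125.
print("sample variance at t =", n_steps * dt, ":", x.var())
```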
In probability theory, the Gillespie algorithm generates a statistically correct trajectory of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others, presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he used it to simulate chemical and biochemical reaction systems efficiently and accurately using limited computational power. As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to kinetic Monte Carlo methods. It is used heavily in computational systems biology.
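A minimal sketch of the direct method for a single reversible reaction A ⇌ B, with illustrative rate constants and copy numbers.

```python
import numpy as np

rng = np.random.default_rng(5)

# Gillespie direct method for the reversible isomerisation A <-> B.
# Rate constants and copy numbers are illustrative.
k1, k2 = 1.0, 0.5                    # rates of A -> B and B -> A
a_count, b_count = 100, 0
t, t_end = 0.0, 20.0

while t < t_end:
    propensities = np.array([k1 * a_count, k2 * b_count])
    total = propensities.sum()
    if total == 0.0:
        break                        # no reaction can fire
    t += rng.exponential(1.0 / total)            # waiting time to the next reaction
    if rng.random() * total < propensities[0]:   # pick which reaction fires
        a_count, b_count = a_count - 1, b_count + 1
    else:
        a_count, b_count = a_count + 1, b_count - 1

# At equilibrium the mean of A is N * k2 / (k1 + k2) = 100 / 3.
print("state at t ~", round(t, 2), ": A =", a_count, ", B =", b_count)
```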
In mathematics, the law of a stochastic process is the measure that the process induces on the collection of functions from the index set into the state space. The law encodes a lot of information about the process; in the case of a random walk, for example, the law is the probability distribution of the possible trajectories of the walk.
In mathematics, classical Wiener space is the collection of all continuous functions on a given domain, taking values in a metric space. Classical Wiener space is useful in the study of stochastic processes whose sample paths are continuous functions. It is named after the American mathematician Norbert Wiener.
In probability theory, a superprocess is a stochastic process that is usually constructed as a special limit of near-critical branching diffusions.
In mathematics – specifically, in stochastic analysis – an Itô diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô.
In mathematics, especially potential theory, harmonic measure is a concept related to the theory of harmonic functions that arises from the solution of the classical Dirichlet problem.
In mathematics, the Freidlin–Wentzell theorem is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology.
The Monte Carlo method for electron transport is a semiclassical Monte Carlo (MC) approach of modeling semiconductor transport. Assuming the carrier motion consists of free flights interrupted by scattering mechanisms, a computer is utilized to simulate the trajectories of particles as they move across the device under the influence of an electric field using classical mechanics. The scattering events and the duration of particle flight is determined through the use of random numbers.
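The free-flight/scattering structure can be sketched with a toy one-dimensional model: a constant field accelerates the carrier between exponentially distributed scattering events that randomize its velocity. All quantities are illustrative rather than parameters of a real device.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 1D free-flight / scattering Monte Carlo for a single carrier.
# All parameters are illustrative and do not model a real semiconductor.
q_over_m = 1.0        # charge-to-mass ratio (arbitrary units)
E_field = 1.0         # constant applied electric field
scatter_rate = 5.0    # total scattering rate
v_th = 1.0            # thermal velocity scale after a scattering event

v, x, t = 0.0, 0.0, 0.0
pre_scatter_v = []
for _ in range(100_000):
    tau = rng.exponential(1.0 / scatter_rate)          # random free-flight duration
    x += v * tau + 0.5 * q_over_m * E_field * tau**2   # ballistic flight in the field
    v += q_over_m * E_field * tau
    t += tau
    pre_scatter_v.append(v)
    v = v_th * rng.standard_normal()                   # scattering randomises the velocity

# For this toy model the drift velocity is (q/m) * E / scatter_rate = 0.2.
print("mean velocity just before scattering:", np.mean(pre_scatter_v))
print("overall drift estimate x / t:", x / t)
```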
In probability theory, reflected Brownian motion is a Wiener process in a space with reflecting boundaries. In the physical literature, this process describes diffusion in a confined space and it is often called confined Brownian motion. For example it can describe the motion of hard spheres in water confined between two walls.
In mathematics, the walk-on-spheres method (WoS) is a numerical probabilistic algorithm, or Monte-Carlo method, used mainly in order to approximate the solutions of some specific boundary value problem for partial differential equations (PDEs). The WoS method was first introduced by Mervin E. Muller in 1956 to solve Laplace's equation, and was since then generalized to other problems.
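A short sketch of the method for Laplace's equation on the unit disk, with boundary data g(x, y) = x so that the exact harmonic solution is u(x, y) = x; the tolerance and number of walks are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Walk-on-spheres estimate of the harmonic function on the unit disk with
# boundary data g(x, y) = x, so the exact solution is u(x, y) = x.
def wos_estimate(p, n_walks=20000, eps=1e-3):
    p = np.asarray(p, dtype=float)
    total = 0.0
    for _ in range(n_walks):
        z = p.copy()
        while True:
            d = 1.0 - np.linalg.norm(z)          # distance to the circle boundary
            if d < eps:                          # close enough: stop the walk
                break
            angle = 2.0 * np.pi * rng.random()
            z += d * np.array([np.cos(angle), np.sin(angle)])  # jump on the largest inscribed circle
        z /= np.linalg.norm(z)                   # project onto the nearest boundary point
        total += z[0]                            # evaluate the boundary data g(x, y) = x
    return total / n_walks

point = (0.3, 0.4)
print("WoS estimate:", wos_estimate(point), " exact value:", point[0])
```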
In quantum probability, the Belavkin equation, also known as the Belavkin–Schrödinger equation, the quantum filtering equation, or the stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and subsequently studied by Viacheslav Belavkin in 1988.
The redundancy principle in biology expresses the need for many copies of the same entity to fulfill a biological function. Examples are numerous: a disproportionate number of spermatozoa during fertilization compared to one egg, the large number of neurotransmitters released during neuronal communication compared to the number of receptors, the large number of calcium ions released during transients in cells, and many more in molecular and cellular transduction or gene activation and cell signaling. This redundancy is particularly relevant when the sites of activation are physically separated from the initial position of the molecular messengers. The redundancy is often generated for the purpose of resolving the time constraint of fast-activating pathways. It can be expressed in terms of the theory of extreme statistics to determine its laws and to quantify how the shortest paths are selected. The main goal is to estimate these large numbers from physical principles and mathematical derivations.
Single-particle trajectories (SPTs) consist of a collection of successive discrete points ordered causally in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by transiently activating, with a laser, small dyes attached to a moving molecule.