Asymmetric simple exclusion process

In probability theory, the asymmetric simple exclusion process (ASEP) is an interacting particle system introduced in 1970 by Frank Spitzer.[1] Many articles have been published on it in the physics and mathematics literature since then, and it has become a "default stochastic model for transport phenomena".[2]

The process with parameters p, q ≥ 0, p + q = 1, is a continuous-time Markov process on the configuration space {0, 1}^ℤ, the 1s being thought of as particles and the 0s as empty sites. Each particle waits a random amount of time having the distribution of an exponential random variable with mean one and then attempts a jump, one site to the right with probability p and one site to the left with probability q. However, the jump is performed only if there is no particle at the target site. Otherwise, nothing happens and the particle waits another exponential time. All particles do this independently of each other.
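
These dynamics translate directly into a continuous-time simulation: each particle carries an independent rate-one exponential clock, and when a clock rings the particle picks a direction (right with probability p, left with probability q) and jumps only if the target site is empty. The sketch below simulates ASEP on a finite ring; the finite ring, and the function and parameter names, are illustrative choices rather than part of any standard library.

```python
import random

def simulate_asep(occupied, p, t_max, seed=0):
    """Simulate ASEP on a ring of len(occupied) sites up to time t_max.

    The finite ring is a stand-in for the infinite lattice. occupied is a
    list of 0/1 site occupations; p is the probability a jump attempt goes
    right (q = 1 - p goes left). Returns the final configuration.
    """
    rng = random.Random(seed)
    n = len(occupied)
    t = 0.0
    while True:
        particles = [i for i in range(n) if occupied[i]]
        if not particles:
            break
        # Superposition of rate-1 clocks: the next ring comes after Exp(#particles).
        t += rng.expovariate(len(particles))
        if t > t_max:
            break
        i = rng.choice(particles)                  # the particle whose clock rang
        step = 1 if rng.random() < p else -1       # right with prob p, left with prob q
        j = (i + step) % n
        if not occupied[j]:                        # exclusion rule: jump only to empty sites
            occupied[i], occupied[j] = 0, 1
    return occupied

# Example: 20 sites, half filled, slight rightward bias.
config = [1 if i < 10 else 0 for i in range(20)]
print(simulate_asep(config, p=0.6, t_max=50.0))
```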

The model is related to the Kardar–Parisi–Zhang equation in the weakly asymmetric limit, i.e. when p − q tends to zero under a particular scaling. Recently, progress has been made in understanding the statistics of the current of particles, and it appears that the Tracy–Widom distribution plays a key role.

Sources

  1. Spitzer, Frank (1970). "Interaction of Markov Processes". Advances in Mathematics. 5 (2): 246–290. doi:10.1016/0001-8708(70)90034-4.
  2. Yau, H. T. (2004). "(log t)^{2/3} law of the two dimensional asymmetric simple exclusion process". Ann. Math. 159: 377–405. arXiv:math-ph/0201057. doi:10.4007/annals.2004.159.377. S2CID 6691714.

Related Research Articles

Stochastic process (collection of random variables)

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.

The de Broglie–Bohm theory, also known as the pilot wave theory, Bohmian mechanics, Bohm's interpretation, and the causal interpretation, is an interpretation of quantum mechanics. It postulates that in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).

Markov chain (random process independent of past history)

A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
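
As a minimal illustration (the two states and the transition matrix below are invented for this example, not taken from the article), a discrete-time Markov chain is fully specified by a stochastic matrix of one-step transition probabilities, and sampling the next state only ever consults the current one.

```python
import random

# Hypothetical two-state chain; each row of P sums to 1 (stochastic matrix).
P = {"A": {"A": 0.9, "B": 0.1},
     "B": {"A": 0.5, "B": 0.5}}

def step(state, rng):
    """Sample the next state using only the current state (Markov property)."""
    r, acc = rng.random(), 0.0
    for nxt, prob in P[state].items():
        acc += prob
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

rng = random.Random(1)
path = ["A"]
for _ in range(10):
    path.append(step(path[-1], rng))
print(path)
```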

In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
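
A minimal numerical sketch of this idea (the 3-state rate matrix below is invented for illustration): writing the probabilities as a row vector p and the transition rate matrix as Q, whose rows sum to zero, the master equation dp/dt = p Q can be integrated with small Euler steps.

```python
import numpy as np

# Hypothetical transition rate matrix Q: off-diagonal entries are jump rates,
# each diagonal entry makes its row sum to zero.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 0.3, -0.8,  0.5],
              [ 0.2,  0.2, -0.4]])

p = np.array([1.0, 0.0, 0.0])    # start in state 0 with probability 1
dt = 0.001
for _ in range(20_000):          # integrate dp/dt = p @ Q up to t = 20
    p = p + dt * (p @ Q)
print(p, p.sum())                # approaches the stationary distribution; total mass stays 1
```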

A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which the process remains in each state for an exponentially distributed holding time and then moves to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
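
The "race of exponential clocks" formulation described above can be written directly in code; the states and rates in this sketch are placeholders chosen for illustration.

```python
import random

# Hypothetical jump rates: rates[s][s2] is the rate of the exponential clock
# for the move s -> s2; the clock that rings first determines the jump.
rates = {"idle":   {"busy": 2.0},
         "busy":   {"idle": 1.0, "broken": 0.1},
         "broken": {"idle": 0.5}}

def next_jump(state, rng):
    """Attach one exponential clock per possible move and take the minimum."""
    times = {s2: rng.expovariate(r) for s2, r in rates[state].items()}
    s2 = min(times, key=times.get)
    return s2, times[s2]

rng = random.Random(0)
state, t = "idle", 0.0
for _ in range(5):
    state, dt = next_jump(state, rng)
    t += dt
    print(f"t = {t:.3f}: now {state}")
```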

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.

A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed.

A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions. It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases.
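
For instance (a toy example, not from the article), the time to absorption of a Markov process that passes through two exponential phases in sequence is an Erlang distribution, one of the simplest phase-type distributions; the sketch below samples it phase by phase.

```python
import random

def sample_phase_type(rates, rng):
    """Sample the absorption time of a chain that visits exponential phases
    in sequence: phase i lasts Exp(rates[i]); absorption follows the last phase."""
    return sum(rng.expovariate(r) for r in rates)

rng = random.Random(42)
# Toy case: two phases with rate 2 each, i.e. an Erlang(2, 2) distribution.
samples = [sample_phase_type([2.0, 2.0], rng) for _ in range(100_000)]
print(sum(samples) / len(samples))   # Erlang(2, 2) mean = 1/2 + 1/2 = 1.0
```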

In probability theory, the Gillespie algorithm generates a statistically correct trajectory of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others, presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he uses it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power. As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology.
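
A minimal version of the algorithm for a two-reaction toy system (the rate constants and stoichiometry below are assumptions for illustration, not from the source): at each step, draw the waiting time from an exponential whose rate is the total propensity, then pick which reaction fired with probability proportional to its propensity.

```python
import random

def gillespie(a, b, t_max, rng):
    """Toy Gillespie SSA for A -> B (rate k1*A) and B -> A (rate k2*B)."""
    k1, k2 = 1.0, 0.5                        # hypothetical rate constants
    t = 0.0
    while True:
        props = [k1 * a, k2 * b]             # propensities of the two reaction channels
        total = sum(props)
        if total == 0:
            break
        dt = rng.expovariate(total)          # waiting time to the next reaction event
        if t + dt > t_max:
            break
        t += dt
        if rng.random() < props[0] / total:  # choose the channel proportionally to its propensity
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

print(gillespie(a=100, b=0, t_max=10.0, rng=random.Random(3)))
```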

Contact process (mathematics)

The contact process is a stochastic process used to model population growth on the set of sites of a graph in which occupied sites become vacant at a constant rate, while vacant sites become occupied at a rate proportional to the number of occupied neighboring sites. Therefore, if we denote the proportionality constant by λ, each site remains occupied for a random time period which is exponentially distributed with parameter 1 and places descendants at every vacant neighboring site at the times of events of a Poisson process with parameter λ during this period. All processes are independent of one another and of the random period of time a site remains occupied. The contact process can also be interpreted as a model for the spread of an infection by thinking of particles as a bacterium spreading over individuals positioned at the sites of the graph; occupied sites correspond to infected individuals, whereas vacant ones correspond to healthy individuals.
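
A minimal sketch of these dynamics (on a finite ring rather than an infinite graph, purely for illustration): occupied sites recover at rate 1, and each occupied site infects each vacant neighbour at rate λ, here via a Gillespie-style event loop.

```python
import random

def contact_process(occupied, lam, t_max, rng):
    """Contact process on a finite ring (a stand-in for an infinite graph):
    occupied sites recover at rate 1 and infect each vacant neighbour at rate lam."""
    n = len(occupied)
    t = 0.0
    while True:
        infected = [i for i in range(n) if occupied[i]]
        if not infected:
            return occupied                          # absorbed: everyone healthy
        events = []                                  # (rate, site, kind) triples
        for i in infected:
            events.append((1.0, i, "recover"))
            for j in ((i - 1) % n, (i + 1) % n):
                if not occupied[j]:
                    events.append((lam, j, "infect"))
        total = sum(rate for rate, _, _ in events)
        dt = rng.expovariate(total)
        if t + dt > t_max:
            return occupied
        t += dt
        r = rng.random() * total                     # pick one event proportionally to its rate
        for rate, site, kind in events:
            r -= rate
            if r < 0:
                occupied[site] = 0 if kind == "recover" else 1
                break

sites = [0] * 50
sites[25] = 1
print(sum(contact_process(sites, lam=2.0, t_max=20.0, rng=random.Random(7))))
```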

The Ghirardi–Rimini–Weber theory (GRW) is a spontaneous collapse theory in quantum mechanics, proposed in 1986 by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber.

Events are often triggered when a stochastic or random process first encounters a threshold. The threshold can be a barrier, boundary or specified state of a system. The amount of time required for a stochastic process, starting from some initial state, to encounter a threshold for the first time is referred to as a first hitting time. In statistics, first-hitting-time models are a sub-class of survival models. The first hitting time, also called first passage time, of a barrier set B with respect to an instance of a stochastic process is the time until the stochastic process first enters B.
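
As a small illustration (the random walk and barrier below are invented for this example), the first hitting time of a threshold by a biased ±1 random walk can be estimated by simulating paths until they first reach the barrier.

```python
import random

def first_hitting_time(barrier, p_up, rng, max_steps=10_000):
    """Number of +/-1 steps from 0 until the walk first reaches `barrier`."""
    x = 0
    for step in range(1, max_steps + 1):
        x += 1 if rng.random() < p_up else -1
        if x >= barrier:
            return step
    return None                              # did not hit the barrier within max_steps

# Toy parameters: barrier at 10, upward drift 2*p_up - 1 = 0.2.
rng = random.Random(0)
times = [first_hitting_time(10, p_up=0.6, rng=rng) for _ in range(1000)]
hits = [t for t in times if t is not None]
print(sum(hits) / len(hits))                 # mean first passage time is about barrier / drift = 50
```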

Harry Kesten (American mathematician, 1931–2019)

Harry Kesten was a Jewish American mathematician best known for his work in probability, most notably on random walks on groups and graphs, random matrices, branching processes, and percolation theory.

Tracy–Widom distribution (probability distribution)

The Tracy–Widom distribution is a probability distribution from random matrix theory introduced by Craig Tracy and Harold Widom. It is the distribution of the normalized largest eigenvalue of a random Hermitian matrix. The distribution is defined as a Fredholm determinant.
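
A quick way to see the distribution empirically is to diagonalize random GUE matrices and record the centred, rescaled largest eigenvalue; the sketch below does this under one common normalization convention (the matrix size and trial count are arbitrary choices for illustration).

```python
import numpy as np

def gue_largest_eigenvalue_samples(n, trials, seed=0):
    """Sample the centred, rescaled largest eigenvalue of n x n GUE matrices.

    Uses the convention H_ii ~ N(0, 1) and Re/Im of H_ij ~ N(0, 1/2) for i < j,
    under which n**(1/6) * (lambda_max - 2*sqrt(n)) approaches Tracy-Widom (beta = 2).
    """
    rng = np.random.default_rng(seed)
    out = np.empty(trials)
    for k in range(trials):
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (a + a.conj().T) / 2             # Hermitian matrix with the GUE variance profile
        lam_max = np.linalg.eigvalsh(h)[-1]  # eigenvalues come back in ascending order
        out[k] = n ** (1 / 6) * (lam_max - 2 * np.sqrt(n))
    return out

print(gue_largest_eigenvalue_samples(n=200, trials=5))
```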

In mathematics, the Kardar–Parisi–Zhang (KPZ) equation is a non-linear stochastic partial differential equation, introduced by Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang in 1986. It describes the temporal change of a height field h(x, t) with spatial coordinate x and time coordinate t; in its standard one-dimensional form it reads

∂h/∂t = ν ∂²h/∂x² + (λ/2) (∂h/∂x)² + η(x, t),

where η is a space-time white noise, ν is a diffusion coefficient and λ sets the strength of the nonlinearity.
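
A naive finite-difference Euler–Maruyama discretization of this equation on a periodic lattice is sketched below, purely as an illustration of the terms involved; quantitative KPZ simulations typically require more careful discretizations, and all parameter values here are arbitrary.

```python
import numpy as np

def kpz_step(h, dt, dx, nu, lam, noise_amp, rng):
    """One Euler-Maruyama step of a naive finite-difference KPZ discretization
    on a periodic lattice: dh = (nu * h_xx + (lam/2) * h_x**2) dt + noise."""
    h_plus = np.roll(h, -1)
    h_minus = np.roll(h, 1)
    lap = (h_plus - 2 * h + h_minus) / dx**2               # discrete Laplacian
    grad = (h_plus - h_minus) / (2 * dx)                   # centred gradient
    noise = noise_amp * rng.normal(size=h.shape) * np.sqrt(dt / dx)
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

rng = np.random.default_rng(0)
h = np.zeros(256)                                          # flat initial interface
for _ in range(10_000):
    h = kpz_step(h, dt=1e-4, dx=1.0, nu=1.0, lam=1.0, noise_amp=1.0, rng=rng)
print(h.std())                                             # interface width grows with time
```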

In probability theory, an interacting particle system (IPS) is a stochastic process on some configuration space given by a site space (a countably-infinite graph) and a local state space (a compact metric space). More precisely, IPS are continuous-time Markov jump processes describing the collective behavior of stochastically interacting components. IPS are the continuous-time analogue of stochastic cellular automata.

In mathematics, a continuous-time random walk (CTRW) is a generalization of a random walk where the wandering particle waits for a random time between jumps. It is a stochastic jump process with arbitrary distributions of jump lengths and waiting times. More generally it can be seen to be a special case of a Markov renewal process.
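
A minimal sketch (the distributions are chosen only for illustration): a CTRW alternates a random waiting time with a random jump, and neither needs to be exponential or lattice-valued.

```python
import random

def ctrw_position(t_max, rng):
    """Continuous-time random walk with Pareto-distributed waiting times and
    Gaussian jump lengths (both chosen arbitrarily for illustration)."""
    t, x = 0.0, 0.0
    while True:
        wait = rng.paretovariate(1.5)      # heavy-tailed waiting time between jumps
        if t + wait > t_max:
            return x
        t += wait
        x += rng.gauss(0.0, 1.0)           # jump length

rng = random.Random(11)
print([round(ctrw_position(100.0, rng), 2) for _ in range(5)])
```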

Mean-field particle methods are a broad class of interacting-type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model.
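
A toy sketch of the idea (the McKean–Vlasov-type drift below is an invented example, not from the source): evolve N copies of the process, and wherever the nonlinear dynamics would need the unknown law of the state, plug in the empirical measure of the current samples, here through its mean.

```python
import random

def mean_field_step(particles, dt, sigma, rng):
    """One Euler step of N interacting particles whose drift pulls each sample
    toward the empirical mean (the empirical measure stands in for the unknown law)."""
    m = sum(particles) / len(particles)   # empirical measure enters through its mean
    return [x - (x - m) * dt + sigma * rng.gauss(0.0, dt ** 0.5) for x in particles]

rng = random.Random(5)
particles = [rng.uniform(-5.0, 5.0) for _ in range(1000)]
for _ in range(2000):
    particles = mean_field_step(particles, dt=0.01, sigma=0.5, rng=rng)
mean = sum(particles) / len(particles)
var = sum((x - mean) ** 2 for x in particles) / len(particles)
print(mean, var)   # samples concentrate around a common mean; variance near sigma**2 / 2
```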

Jürgen Gärtner (German mathematician)

Jürgen Gärtner is a German mathematician, specializing in probability theory and analysis.

In mathematical physics, two-dimensional Yang–Mills theory is the special case of Yang–Mills theory in which the dimension of spacetime is taken to be two. This special case allows for a rigorously defined Yang–Mills measure, meaning that the (Euclidean) path integral can be interpreted as a measure on the set of connections modulo gauge transformations. This situation contrasts with the four-dimensional case, where a rigorous construction of the theory as a measure is currently unknown.
