Contact process (mathematics)

Figure: The contact process on a 1-D lattice. Active sites are indicated by grey circles and inactive sites by dotted circles. Active sites can activate inactive sites to either side of them at rate r/2, or become inactive at rate 1.

The contact process is a stochastic process used to model population growth on the set of sites $S$ of a graph, in which occupied sites become vacant at a constant rate, while vacant sites become occupied at a rate proportional to the number of occupied neighboring sites. Therefore, if we denote the proportionality constant by $\lambda$, each site remains occupied for a random time period which is exponentially distributed with parameter 1, and places descendants at every vacant neighboring site at the times of events of a Poisson process with parameter $\lambda$ during this period. All processes are independent of one another and of the random periods of time that sites remain occupied. The contact process can also be interpreted as a model for the spread of an infection, by thinking of particles as a bacterium spreading over individuals positioned at the sites of $S$: occupied sites correspond to infected individuals, whereas vacant sites correspond to healthy ones.


The main quantity of interest is the number of particles in the process at time $t$, say $N_t$, in the first interpretation, which corresponds to the number of infected sites in the second one. Therefore, the process survives whenever the number of particles is positive for all times, which corresponds to the case that there are always infected individuals in the second interpretation. For any infinite graph $G$ there exists a positive and finite critical value $\lambda_c$ so that if $\lambda > \lambda_c$ then survival of the process starting from a finite number of particles occurs with positive probability, while if $\lambda < \lambda_c$ extinction is almost certain. Note that, by reductio ad absurdum and the infinite monkey theorem, survival of the process is equivalent to $N_t \to \infty$ as $t \to \infty$, whereas extinction is equivalent to $N_t \to 0$ as $t \to \infty$; it is therefore natural to ask about the rate at which $N_t \to \infty$ when the process survives.

Mathematical definition

If the state of the process at time $t$ is $\xi_t$, then a site $x$ in $G$ is occupied, say by a particle, if $\xi_t(x) = 1$ and vacant if $\xi_t(x) = 0$. The contact process is a continuous-time Markov process with state space $\{0,1\}^G$, where $G$ is a finite or countable graph, usually $\mathbb{Z}^d$, and is a special case of an interacting particle system. More specifically, the dynamics of the basic contact process are defined by the following transition rates: at site $x$,

$$1 \to 0 \quad \text{at rate } 1,$$
$$0 \to 1 \quad \text{at rate } \lambda \sum_{y \sim x} \xi_t(y),$$

where the sum is over all the neighbors $y$ of $x$ in $G$. This means that each site waits an exponential time with the corresponding rate, and then flips (so 0 becomes 1 and vice versa).
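These rates translate directly into a discrete-event (Gillespie) simulation: sample the waiting time to the next event from an exponential distribution with the total rate, then choose a recovery or an infection with probability proportional to its rate. The following is a minimal Python sketch, not part of the standard literature; the function name `simulate_contact_process` and the parameters `lam` and `t_max` are illustrative, and a ring of $n$ sites stands in for the infinite lattice.

```python
import random

def simulate_contact_process(n=200, lam=2.0, t_max=50.0, seed=0):
    """Basic contact process on a ring of n sites (illustrative sketch).

    Occupied sites become vacant at rate 1; a vacant site becomes occupied
    at rate lam times its number of occupied neighbors. Starts from a single
    particle and returns the set of occupied sites at time t_max (an empty
    set signals extinction).
    """
    rng = random.Random(seed)
    occupied = {n // 2}
    t = 0.0
    while occupied and t < t_max:
        deaths = list(occupied)  # each occupied site recovers at rate 1
        # One entry per (occupied site, vacant neighbor) pair, so a vacant
        # site with k occupied neighbors is k times as likely to be chosen.
        births = [(x + d) % n for x in occupied for d in (-1, 1)
                  if (x + d) % n not in occupied]
        total_rate = len(deaths) + lam * len(births)
        t += rng.expovariate(total_rate)  # waiting time to the next event
        if rng.random() * total_rate < len(deaths):
            occupied.discard(rng.choice(deaths))  # transition 1 -> 0
        else:
            occupied.add(rng.choice(births))      # transition 0 -> 1
    return occupied
```

On the ring, boundary effects replace the infinite-lattice behavior, so the sketch only mimics the process on $\mathbb{Z}$ for times short enough that the occupied region does not wrap around.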

Connection to percolation

The contact process is closely connected to percolation theory. Ted Harris (1974) noted that the contact process on $\mathbb{Z}^d$, when infections and recoveries can occur only at discrete times, corresponds to one-step-at-a-time bond percolation on the graph obtained by orienting each edge of $\mathbb{Z}^{d+1}$ in the direction of increasing coordinate-value.
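For intuition, here is a minimal sketch of the discrete-time picture in the case $d = 1$; the helper name `oriented_percolation` and the bond probability `p` are illustrative and not taken from Harris's paper. Each active site at level $n$ opens an oriented bond to each of its two neighbors at level $n + 1$ independently with probability $p$, and the cluster of the origin is grown level by level.

```python
import random

def oriented_percolation(p=0.7, steps=100, seed=0):
    """Oriented bond percolation in two dimensions, grown from the origin.

    Each bond from (x, n) to (x - 1, n + 1) and to (x + 1, n + 1) is open
    independently with probability p; returns the wet sites at the last level.
    """
    rng = random.Random(seed)
    active = {0}
    for _ in range(steps):
        nxt = set()
        for x in active:
            for d in (-1, 1):
                if rng.random() < p:  # is this oriented bond open?
                    nxt.add(x + d)
        if not nxt:
            return set()  # the cluster of the origin died out
        active = nxt
    return active
```

Each bond out of an active site is examined exactly once, so the bond states are independent, as the percolation picture requires.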

The law of large numbers on the integers

A law of large numbers for the number of particles in the process on the integers informally means that for all large $t$, $N_t$ is approximately equal to $ct$ for some positive constant $c$. Harris (1974) proved that, if the process survives, then the rate of growth of $N_t$ is at most and at least linear in time. A weak law of large numbers (that the process converges in probability) was shown by Durrett (1980). A few years later, Durrett and Griffeath (1983) improved this to a strong law of large numbers, giving almost sure convergence of the process.
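The linear growth can be observed numerically in the supercritical regime (numerical estimates place the critical value on the integers at roughly $\lambda_c \approx 1.65$). The snippet below reuses the hypothetical `simulate_contact_process` sketch from the definition section and averages $N_t / t$ over surviving runs as a crude estimate of the constant $c$; all parameter values are illustrative.

```python
# Crude Monte Carlo estimate of the law-of-large-numbers constant
# c = lim N_t / t, averaging N_t / t over runs that survive to time t_max.
t_max = 50.0
runs = [simulate_contact_process(n=1000, lam=3.0, t_max=t_max, seed=s)
        for s in range(10)]
estimates = [len(occ) / t_max for occ in runs if occ]  # condition on survival
if estimates:
    print("estimated c:", sum(estimates) / len(estimates))
```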

Die out at criticality

For the contact process on all the integer lattices, a major breakthrough came in 1990 when Bezuidenhout and Grimmett showed that the contact process also dies out almost surely at the critical value.

Durrett's conjecture and the central limit theorem

In survey papers and lecture notes during the 1980s and early 1990s, Durrett conjectured a central limit theorem for the Harris contact process: namely that, if the process survives, then for all large $t$, $N_t$ equals $ct$ plus $\sqrt{t}$ multiplied by a (random) error distributed according to a standard Gaussian distribution. [1][2][3]
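In symbols, and conditionally on survival, the conjectured statement can be written as follows; this is a paraphrase in modern notation, with $c$ the law-of-large-numbers constant above and $\sigma > 0$ a scaling constant (equal to 1 in the formulation just given):

$$\frac{N_t - ct}{\sigma \sqrt{t}} \;\xrightarrow{d}\; \mathcal{N}(0,1) \qquad \text{as } t \to \infty.$$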

Durrett's conjecture turned out to be correct, for a different value of the scaling constant $\sigma$, as proved in 2018. [4]


References

  1. Durrett, Richard (1984). "Oriented Percolation in Two Dimensions". The Annals of Probability. 12 (4): 999–1040. doi:10.1214/aop/1176993140.
  2. Durrett, Richard (1988). Lecture Notes on Particle Systems and Percolation. Wadsworth.
  3. Durrett, Richard. "The contact process, 1974–1989". Cornell University, Mathematical Sciences Institute.
  4. Tzioufas, Achillefs (2018). "The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions". Journal of Statistical Physics. 171 (5): 802–821. arXiv:1411.4543. Bibcode:2018JSP...171..802T. doi:10.1007/s10955-018-2040-y. S2CID 119174423.