Quasi-stationary distribution

In probability theory, a quasi-stationary distribution describes the long-term behaviour of a random process that admits one or several absorbing states, which are reached almost surely, when the process is conditioned on not yet having been absorbed. Such a process can evolve for a long time before absorption occurs. The most common example is the evolution of a population: the only equilibrium is extinction, but a model of the population size is likely to remain stable for a long period of time before it eventually collapses.

Formal definition

We consider a Markov process $(Y_t)_{t \geq 0}$ taking values in a measurable space $\mathcal{X}$. There is a measurable set $\mathcal{X}^{\mathrm{tr}}$ of absorbing states, and we write $\mathcal{X}^{a} = \mathcal{X} \setminus \mathcal{X}^{\mathrm{tr}}$. We denote by $T$ the hitting time of $\mathcal{X}^{\mathrm{tr}}$, also called the killing time. We denote by $(\mathbb{P}_x)_{x \in \mathcal{X}}$ the family of distributions where $\mathbb{P}_x$ has initial condition $Y_0 = x \in \mathcal{X}$. We assume that $\mathcal{X}^{\mathrm{tr}}$ is almost surely reached, i.e. $\forall x \in \mathcal{X},\ \mathbb{P}_x(T < \infty) = 1$.

The general definition [1] is: a probability measure $\nu$ on $\mathcal{X}^{a}$ is said to be a quasi-stationary distribution (QSD) if for every measurable set $B$ contained in $\mathcal{X}^{a}$ and every $t \geq 0$,

$$\mathbb{P}_\nu(Y_t \in B \mid T > t) = \nu(B),$$

where $\mathbb{P}_\nu = \int_{\mathcal{X}^{a}} \mathbb{P}_x \, \nu(\mathrm{d}x)$.

In particular, for every measurable $B \subseteq \mathcal{X}^{a}$ and every $t \geq 0$,

$$\mathbb{P}_\nu(Y_t \in B,\ T > t) = \nu(B)\, \mathbb{P}_\nu(T > t).$$
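For a chain with finitely many states, the defining property can be checked directly. The following sketch uses a small hypothetical 4-state discrete-time chain (the matrix is purely illustrative, not from the source): the QSD is computed as the normalised left Perron eigenvector of the sub-stochastic matrix obtained by restricting the transition matrix to the non-absorbed states, and then verified to be invariant under conditioning on survival.

```python
import numpy as np

# A hypothetical 4-state discrete-time chain: state 0 is absorbing,
# states {1, 2, 3} are transient (a small birth-death-like example).
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing
    [0.2, 0.5, 0.3, 0.0],   # state 1 can be killed (jump to 0)
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.4, 0.6],
])
# Restriction to the transient states: a sub-stochastic matrix.
Q = P[1:, 1:]

# For a finite chain, the QSD is the left Perron eigenvector of Q,
# normalised to a probability vector.
eigvals, eigvecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)
nu = np.abs(eigvecs[:, k].real)
nu /= nu.sum()

# Stationarity under conditioning on survival: nu Q / (nu Q 1) == nu.
cond = nu @ Q
cond /= cond.sum()
print(np.allclose(cond, nu))
```

This eigenvector characterisation for finite state spaces is the setting studied by Darroch and Seneta [8]; the specific matrix above is only an illustration.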

General results

Killing time

From the assumptions above we know that the killing time is finite with probability 1. A stronger result that we can derive is that the killing time is exponentially distributed: [1] [2] if $\nu$ is a QSD then there exists $\theta(\nu) > 0$ such that $\mathbb{P}_\nu(T > t) = e^{-\theta(\nu)\, t}$ for all $t \geq 0$.

Moreover, for any $\vartheta < \theta(\nu)$ we get $\mathbb{E}_\nu(e^{\vartheta T}) < \infty$.
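In discrete time the exponential law becomes geometric, and this can be illustrated numerically. Assuming an illustrative finite sub-stochastic matrix (hypothetical, not from the source), the survival probability started from the QSD decays exactly geometrically, with rate determined by the Perron eigenvalue $\gamma$, so that $\theta(\nu) = -\log \gamma$:

```python
import numpy as np

# Illustrative sub-stochastic matrix Q on the transient states
# (killing is only possible from the first state).
Q = np.array([
    [0.5, 0.3, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.4, 0.6],
])
# QSD nu = left Perron eigenvector, gamma = Perron eigenvalue.
eigvals, eigvecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)
gamma = eigvals[k].real
nu = np.abs(eigvecs[:, k].real)
nu /= nu.sum()

# Started from nu, survival is exactly geometric:
# P_nu(T > t) = nu Q^t 1 = gamma**t.
ones = np.ones(3)
for t in range(1, 6):
    survival = nu @ np.linalg.matrix_power(Q, t) @ ones
    assert np.isclose(survival, gamma ** t)
print("theta(nu) =", -np.log(gamma))
```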

Existence of a quasi-stationary distribution

The question most often asked is whether a QSD exists in a given framework. From the previous results we can derive a necessary condition for this existence.

Let $\theta_x^* := \sup\{\theta : \mathbb{E}_x(e^{\theta T}) < \infty\}$. A necessary condition for the existence of a QSD is $\theta_x^* > 0$, and we have the equality

$$\theta_x^* = \liminf_{t \to \infty} -\frac{1}{t} \log \mathbb{P}_x(T > t).$$

Moreover, from the previous paragraph, if $\nu$ is a QSD then $\mathbb{E}_\nu(e^{\theta(\nu) T}) = \infty$. As a consequence, if $\vartheta > 0$ satisfies $\sup_x \mathbb{E}_x(e^{\vartheta T}) < \infty$ then there can be no QSD $\nu$ such that $\theta(\nu) = \vartheta$, because otherwise this would lead to the contradiction $\mathbb{E}_\nu(e^{\theta(\nu) T}) \leq \sup_x \mathbb{E}_x(e^{\vartheta T}) < \infty$.
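The characterisation of $\theta_x^*$ as the asymptotic decay rate of the survival probability can also be checked numerically. For an illustrative irreducible sub-stochastic matrix (hypothetical, not from the source), the rate $-\frac{1}{t}\log \mathbb{P}_x(T > t)$ approaches $-\log \gamma$ for every transient starting state $x$, where $\gamma$ is the Perron eigenvalue:

```python
import numpy as np

# Illustrative irreducible sub-stochastic matrix on the transient states.
Q = np.array([
    [0.5, 0.3, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.4, 0.6],
])
gamma = np.max(np.linalg.eigvals(Q).real)  # Perron eigenvalue

# Survival probability from state x after t steps: P_x(T > t) = (Q^t 1)_x.
t = 500
ones = np.ones(3)
Qt = np.linalg.matrix_power(Q, t)
for x in range(3):
    survival = Qt[x] @ ones
    rate = -np.log(survival) / t          # -(1/t) log P_x(T > t)
    # The decay rate is close to -log(gamma) for every starting state.
    assert abs(rate - (-np.log(gamma))) < 1e-2
```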

A sufficient condition for a QSD to exist is obtained by considering the transition semigroup $(P_t)_{t \geq 0}$ of the process before killing. Under the conditions that $\mathcal{X}^{a}$ is a compact Hausdorff space and that $P_1$ preserves the set of continuous functions, i.e. $P_1(C(\mathcal{X}^{a})) \subseteq C(\mathcal{X}^{a})$, there exists a QSD.

History

The works of Wright on gene frequency in 1931 [3] and of Yaglom on branching processes in 1947 [4] already included the idea of such distributions. The term quasi-stationarity applied to biological systems was then used by Bartlett in 1957, [5] who later coined "quasi-stationary distribution". [6]

Quasi-stationary distributions were also part of the classification of killed processes given by Vere-Jones in 1962 [7] and their definition for finite state Markov chains was done in 1965 by Darroch and Seneta. [8]

Examples

Quasi-stationary distributions can be used to model processes that are certain to be absorbed eventually but persist for a long time beforehand, such as population dynamics before extinction and epidemics before the disease dies out.


References

  1. Collet, Pierre; Martínez, Servet; San Martín, Jaime (2013). Quasi-Stationary Distributions. Probability and its Applications. Springer. doi:10.1007/978-3-642-33131-2. ISBN 978-3-642-33130-5.
  2. Ferrari, Pablo A.; Martínez, Servet; Picco, Pierre (1992). "Existence of Non-Trivial Quasi-Stationary Distributions in the Birth-Death Chain". Advances in Applied Probability. 24 (4): 795–813. doi:10.2307/1427713. JSTOR 1427713.
  3. Wright, Sewall (1931). "Evolution in Mendelian populations". Genetics. 16 (2): 97–159.
  4. Yaglom, Akiva M. (1947). "Certain limit theorems of the theory of branching random processes". Doklady Akad. Nauk SSSR (NS). p. 3.
  5. Bartlett, M. S. (1957). "On theoretical models for competitive and predatory biological systems". Biometrika. 44 (1/2): 27–42.
  6. Bartlett, Maurice Stevenson (1960). Stochastic Population Models in Ecology and Epidemiology.
  7. Vere-Jones, D. (1962). "Geometric Ergodicity in Denumerable Markov Chains". The Quarterly Journal of Mathematics. 13 (1): 7–28. doi:10.1093/qmath/13.1.7. ISSN 0033-5606.
  8. Darroch, J. N.; Seneta, E. (1965). "On Quasi-Stationary Distributions in Absorbing Discrete-Time Finite Markov Chains". Journal of Applied Probability. 2 (1): 88–100. doi:10.2307/3211876. JSTOR 3211876.