Moran process

A Moran process or Moran model is a simple stochastic process used in biology to describe finite populations. The process is named after Patrick Moran, who first proposed the model in 1958.[1] It can be used to model variety-increasing processes such as mutation as well as variety-reducing effects such as genetic drift and natural selection. The process can describe the probabilistic dynamics in a finite population of constant size N in which two alleles A and B compete for dominance. The two alleles are considered to be true replicators (i.e., entities that make copies of themselves).

In each time step a random individual (of either type A or B) is chosen for reproduction and a random individual is chosen for death, thus ensuring that the population size remains constant. To model selection, one type has to have a higher fitness and is then more likely to be chosen for reproduction. The same individual can be chosen for death and for reproduction in the same step.
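Because each step only chooses one reproducer and one victim, the process is straightforward to simulate. Below is a minimal Python sketch of the neutral case (the helper name moran_step is illustrative, not standard):

```python
import random

def moran_step(i, N):
    """One step of the neutral Moran process.

    i is the current number of A individuals, N the (constant)
    population size. Returns the new number of A individuals."""
    # Choose one individual for reproduction and one for death,
    # each uniformly at random; they may be the same individual.
    reproducer_is_A = random.random() < i / N
    victim_is_A = random.random() < i / N
    return i + reproducer_is_A - victim_is_A

# One trajectory, run until absorption at 0 or N:
N, i = 20, 10
while 0 < i < N:
    i = moran_step(i, N)
print("absorbed at", i)  # 0 (A extinct) or N (A fixed)
```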

Neutral drift

Neutral drift is the idea that a neutral mutation can spread throughout a population, so that eventually the original allele is lost. A neutral mutation does not bring any fitness advantage or disadvantage to its bearer. The simple case of the Moran process can describe this phenomenon.

The Moran process is defined on the state space i = 0, ..., N, which counts the number of A individuals. Since the number of A individuals can change at most by one at each time step, a transition exists only between state i and states i − 1 and i + 1. Thus the transition matrix of the stochastic process is tri-diagonal in shape and the transition probabilities are

P_{0,0} = 1,
P_{N,N} = 1,
P_{i,i−1} = (N − i)/N · i/N,
P_{i,i+1} = i/N · (N − i)/N,
P_{i,i} = 1 − P_{i,i−1} − P_{i,i+1}   for 1 ≤ i ≤ N − 1.

The entry P_{i,j} denotes the probability to go from state i to state j. To understand the formulas for the transition probabilities one has to look at the definition of the process, which states that one individual is always chosen for reproduction and one is chosen for death. Once the A individuals have died out, they will never be reintroduced into the population, since the process does not model mutations (A cannot be reintroduced into the population once it has died out and vice versa), and thus P_{0,0} = 1. For the same reason the population of A individuals will always stay at N once they have reached that number and taken over the population, and thus P_{N,N} = 1. The states 0 and N are called absorbing, while the states 1, ..., N − 1 are called transient. The intermediate transition probabilities can be explained by considering the first term to be the probability of choosing the individual whose abundance will increase by one and the second term the probability of choosing the other type for death. If the same type is chosen for reproduction and for death, then the abundance of neither type changes.
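For numerical work the probabilities above can be assembled into the full (N + 1) × (N + 1) transition matrix. A sketch using NumPy:

```python
import numpy as np

def transition_matrix(N):
    """Tri-diagonal transition matrix of the neutral Moran process
    on the states i = 0, ..., N."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up = (i / N) * ((N - i) / N)    # an A reproduces and a B dies
        down = ((N - i) / N) * (i / N)  # a B reproduces and an A dies
        if i < N:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1 - up - down         # abundances unchanged
    return P

P = transition_matrix(4)
assert np.allclose(P.sum(axis=1), 1)  # each row is a distribution
assert P[0, 0] == 1 and P[4, 4] == 1  # states 0 and N are absorbing
```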

Eventually the population will reach one of the absorbing states and then stay there forever. In the transient states random fluctuations will occur, but eventually the population of A will either go extinct or reach fixation. This is one of the most important differences from deterministic processes, which cannot model random events. The expected value and the variance of the number of A individuals X(t) at timepoint t can be computed when an initial state X(0) = i is given:

E(X(t) | X(0) = i) = i,
Var(X(t) | X(0) = i) = p(1 − p) N² (1 − (1 − 2/N²)^t),   where p = i/N.

For the expected value the calculation runs as follows. Writing p = i/N,

E(X(t+1) − X(t) | X(t) = i) = 1 · P_{i,i+1} + 0 · P_{i,i} + (−1) · P_{i,i−1} = p(1 − p) − p(1 − p) = 0,

so E(X(t+1) | X(t) = i) = i. Writing Y for X(t+1) and Z for X(t), and applying the law of total expectation,

E(Y) = E(E(Y | Z)) = E(Z).

Applying the argument repeatedly gives E(X(t)) = E(X(t − 1)) = ... = E(X(0)), or

E(X(t) | X(0) = i) = i.

For the variance the calculation runs as follows. Writing V_t = Var(X(t) | X(0) = i), we have

V_{t+1} = Var(X(t+1) | X(0) = i).

For all t, the steps (X(t+1) | X(t) = i) and (X(1) | X(0) = i) are identically distributed, so their variances are equal. Writing as before p = i/N, and applying the law of total variance with E(X(t+1) | X(t)) = X(t) and Var(X(t+1) | X(t)) = 2 (X(t)/N)(1 − X(t)/N),

V_{t+1} = E(Var(X(t+1) | X(t))) + Var(E(X(t+1) | X(t))) = E(2 (X(t)/N)(1 − X(t)/N)) + V_t.

If E(X(t)) = i and E(X(t)²) = V_t + i² are used, we obtain

V_{t+1} = 2 p(1 − p) + (1 − 2/N²) V_t.

Rewriting this equation as

V_{t+1} − p(1 − p) N² = (1 − 2/N²) (V_t − p(1 − p) N²)

yields

V_t = p(1 − p) N² (1 − (1 − 2/N²)^t)

as desired.
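The recursion for V_t from this derivation can be iterated numerically to confirm the closed form; a small sketch:

```python
# Iterate V_{t+1} = (1 - 2/N^2) V_t + 2 p (1 - p) with V_0 = 0 and
# compare with the closed form p (1 - p) N^2 (1 - (1 - 2/N^2)^t).
N, i, T = 50, 20, 200
p = i / N
V = 0.0
for _ in range(T):
    V = (1 - 2 / N**2) * V + 2 * p * (1 - p)
closed = p * (1 - p) * N**2 * (1 - (1 - 2 / N**2) ** T)
print(V, closed)  # the two values coincide up to rounding
```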


The probability that A reaches fixation is called the fixation probability. For the simple Moran process this probability is x_i = i/N.

Since all individuals have the same fitness, they also have the same chance of becoming the ancestor of the whole population; this probability is 1/N, and thus the sum of all i probabilities (one for each of the i A individuals) is just i/N. The mean time to absorption starting in state i is given by

k_i = N [ Σ_{j=1}^{i} (N − i)/(N − j) + Σ_{j=i+1}^{N−1} i/j ].
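Before deriving the formula, both claims can be checked by straightforward Monte Carlo simulation; a sketch (the sample sizes are arbitrary):

```python
import random

def run_to_absorption(i, N):
    """Neutral Moran process from state i; returns (A fixed?, steps)."""
    t = 0
    while 0 < i < N:
        i += (random.random() < i / N) - (random.random() < i / N)
        t += 1
    return i == N, t

N, i, runs = 20, 5, 20000
results = [run_to_absorption(i, N) for _ in range(runs)]
print(sum(f for f, _ in results) / runs)  # close to i/N = 0.25
mean_time = sum(t for _, t in results) / runs
k_i = N * (sum((N - i) / (N - j) for j in range(1, i + 1))
           + sum(i / j for j in range(i + 1, N)))
print(mean_time, k_i)  # simulated vs. analytic mean absorption time
```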

The mean time spent in state j when starting in state i is given by

k_i^j = δ_{ij} + P_{i,i−1} k_{i−1}^j + P_{i,i} k_i^j + P_{i,i+1} k_{i+1}^j.

Here δ_{ij} denotes the Kronecker delta. Using P_{i,i} = 1 − P_{i,i−1} − P_{i,i+1}, this recursive equation can be rewritten as

0 = δ_{ij} + P_{i,i−1} (k_{i−1}^j − k_i^j) + P_{i,i+1} (k_{i+1}^j − k_i^j).

The variable q_i^j = k_i^j − k_{i−1}^j is used, and since P_{i,i−1} = P_{i,i+1} the equation becomes

q_{i+1}^j = q_i^j − δ_{ij}/P_{i,i+1}.

Knowing that k_0^j = 0 (state 0 is absorbing) and q_1^j = k_1^j, we can calculate q_i^j:

q_i^j = k_1^j   for i ≤ j,
q_i^j = k_1^j − N²/(j(N − j))   for i > j.

Therefore, summing k_i^j = Σ_{m=1}^{i} q_m^j,

k_i^j = i k_1^j   for i ≤ j,
k_i^j = i k_1^j − (i − j) N²/(j(N − j))   for i > j,

with k_N^j = 0, which gives k_1^j = N/j and hence

k_i^j = (i/j) N   for i ≤ j,
k_i^j = ((N − i)/(N − j)) N   for i > j.

Now k_i, the total time until fixation starting from state i, can be calculated:

k_i = Σ_{j=1}^{N−1} k_i^j = N [ Σ_{j=1}^{i} (N − i)/(N − j) + Σ_{j=i+1}^{N−1} i/j ].


For large N the approximation

k_i ≈ −N² [ (1 − p) ln(1 − p) + p ln p ],   with p = i/N,

holds.
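A quick numerical comparison of the exact sum with this approximation (a sketch; the values of N and p are arbitrary):

```python
import math

def k_exact(i, N):
    """Mean absorption time k_i from the exact formula above."""
    return N * (sum((N - i) / (N - j) for j in range(1, i + 1))
                + sum(i / j for j in range(i + 1, N)))

N = 1000
for p in (0.1, 0.5):
    i = int(p * N)
    approx = -N**2 * ((1 - p) * math.log(1 - p) + p * math.log(p))
    print(p, k_exact(i, N), approx)  # the ratio tends to 1 as N grows
```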

Selection

If one allele has a fitness advantage over the other allele, it will be more likely to be chosen for reproduction. This can be incorporated into the model by letting individuals with allele A have fitness f_i > 0 and individuals with allele B have fitness g_i > 0, where i is the number of individuals of type A; the model then describes a general birth-death process. The transition matrix of the stochastic process is again tri-diagonal in shape. Let r_i = f_i/g_i; then the transition probabilities are

P_{0,0} = 1,
P_{N,N} = 1,
P_{i,i−1} = (g_i (N − i))/(f_i i + g_i (N − i)) · i/N,
P_{i,i+1} = (f_i i)/(f_i i + g_i (N − i)) · (N − i)/N,
P_{i,i} = 1 − P_{i,i−1} − P_{i,i+1}   for 1 ≤ i ≤ N − 1.

The entry P_{i,j} denotes the probability to go from state i to state j. The difference from the neutral drift case above is that now an individual with allele B is selected for reproduction with probability

(g_i (N − i))/(f_i i + g_i (N − i)),

and an individual with allele A with probability

(f_i i)/(f_i i + g_i (N − i)),

when the number of individuals with allele A is exactly i.

Also in this case the fixation probability x_i when starting in state i is defined by the recurrence

x_0 = 0,
x_i = P_{i,i−1} x_{i−1} + P_{i,i} x_i + P_{i,i+1} x_{i+1}   for 1 ≤ i ≤ N − 1,
x_N = 1,

and the closed form is given by

x_i = (1 + Σ_{j=1}^{i−1} Π_{k=1}^{j} γ_k) / (1 + Σ_{j=1}^{N−1} Π_{k=1}^{j} γ_k)    (1)

where per definition γ_i = P_{i,i−1}/P_{i,i+1}, which is just g_i/f_i for the general case.

Also in this case, fixation probabilities can be computed, but the transition probabilities are not symmetric. The notation β_i = P_{i,i−1}, α_i = P_{i,i+1} and γ_i = β_i/α_i is used. The fixation probability can be defined recursively, and a new variable y_i = x_i − x_{i−1} is introduced:

x_i = β_i x_{i−1} + (1 − α_i − β_i) x_i + α_i x_{i+1}
⟹ β_i y_i = α_i y_{i+1}
⟹ y_{i+1} = γ_i y_i.

Now two properties from the definition of the variable y_i can be used to find a closed form solution for the fixation probabilities:

Σ_{j=1}^{m} y_j = x_m    (2)
y_j = x_1 Π_{k=1}^{j−1} γ_k    (3)

Combining (3) and x_N = 1:

Σ_{j=1}^{N} y_j = x_1 Σ_{j=1}^{N} Π_{k=1}^{j−1} γ_k = 1,

which implies:

x_1 = 1 / (1 + Σ_{j=1}^{N−1} Π_{k=1}^{j} γ_k).

This in turn gives us:

x_i = Σ_{j=1}^{i} y_j = x_1 Σ_{j=1}^{i} Π_{k=1}^{j−1} γ_k = (1 + Σ_{j=1}^{i−1} Π_{k=1}^{j} γ_k) / (1 + Σ_{j=1}^{N−1} Π_{k=1}^{j} γ_k).
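Equation (1) translates directly into a few lines of code. A sketch, assuming the ratios γ_k = g_k/f_k are supplied as a Python list gammas with gammas[k−1] = γ_k:

```python
from itertools import accumulate
from operator import mul

def fixation_probability(i, gammas):
    """Fixation probability x_i from equation (1), given
    gammas[k-1] = g_k / f_k for k = 1, ..., N - 1."""
    prods = list(accumulate(gammas, mul))  # prod_{k=1}^{j} gamma_k
    return (1 + sum(prods[: i - 1])) / (1 + sum(prods))

# With a constant ratio gamma = 1/r this reproduces the closed form
# (1 - r^{-i}) / (1 - r^{-N}) of the constant-fitness case below:
r, N, i = 1.5, 10, 3
print(fixation_probability(i, [1 / r] * (N - 1)))
print((1 - r ** -i) / (1 - r ** -N))  # identical values
```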


This general case where the fitness of A and B depends on the abundance of each type is studied in evolutionary game theory.

Less complex results are obtained if a constant fitness ratio r_i = r for all i is assumed. Individuals of type A reproduce with a constant rate r and individuals with allele B reproduce with rate 1. Thus if A has a fitness advantage over B, r will be larger than one; otherwise it will be smaller than one. The transition matrix of the stochastic process is again tri-diagonal in shape and the transition probabilities are

P_{0,0} = 1,
P_{N,N} = 1,
P_{i,i−1} = (N − i)/(r i + N − i) · i/N,
P_{i,i+1} = (r i)/(r i + N − i) · (N − i)/N,
P_{i,i} = 1 − P_{i,i−1} − P_{i,i+1}   for 1 ≤ i ≤ N − 1.

In this case γ_i = 1/r is a constant factor for each composition of the population, and thus the fixation probability from equation (1) simplifies to

x_i = (1 − r^{−i}) / (1 − r^{−N}),

where the fixation probability of a single mutant A in a population of otherwise all B is often of interest and is denoted by ρ:

ρ = x_1 = (1 − 1/r) / (1 − r^{−N}).
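The closed form for ρ can again be checked against simulation; a sketch with arbitrary parameters:

```python
import random

def mutant_fixes(N, r):
    """Moran process with constant fitness ratio r, started from a
    single A mutant; returns True if A reaches fixation."""
    i = 1
    while 0 < i < N:
        if random.random() < r * i / (r * i + N - i):
            i += random.random() < (N - i) / N  # A reproduces, a B may die
        else:
            i -= random.random() < i / N        # B reproduces, an A may die
    return i == N

N, r, runs = 20, 1.2, 50_000
rho_sim = sum(mutant_fixes(N, r) for _ in range(runs)) / runs
rho = (1 - 1 / r) / (1 - r ** -N)
print(rho_sim, rho)  # simulated vs. analytic fixation probability
```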

Also in the case of selection, the expected value and the variance of the number of A individuals may be computed; for a single time step,

E(X(t+1) − X(t) | X(t) = i) = s p(1 − p)/(1 + s p),
Var(X(t+1) | X(t) = i) = p(1 − p)(2 + s)/(1 + s p) − (s p(1 − p)/(1 + s p))²,

where p = i/N, and r = 1 + s.

For the expected value the calculation runs as follows. The expected change in one step is

E(X(t+1) − X(t) | X(t) = i) = P_{i,i+1} − P_{i,i−1} = (r − 1) i (N − i) / (N (r i + N − i)) = s p(1 − p)/(1 + s p).

For the variance the calculation runs as follows, using the variance of a single step:

Var(X(t+1) | X(t) = i) = P_{i,i+1} + P_{i,i−1} − (P_{i,i+1} − P_{i,i−1})² = p(1 − p)(2 + s)/(1 + s p) − (s p(1 − p)/(1 + s p))².


Rate of evolution

In a population of all B individuals, a single mutant A will take over the whole population with the probability

ρ = (1 − 1/r)/(1 − r^{−N}).

If the mutation rate (to go from the B to the A allele) in the population is u, then the rate at which one member of the population will mutate to A is given by N × u, and the rate at which the whole population goes from all B to all A is the rate at which a single mutant A arises times the probability that it will take over the population (the fixation probability):

R = N × u × ρ.

Thus if the mutation is neutral (i.e. the fixation probability is just 1/N), the rate at which an allele arises and takes over the population is independent of the population size and is equal to the mutation rate. This important result is the basis of the neutral theory of evolution and suggests that the number of observed point mutations in the genomes of two different species would simply be given by the mutation rate multiplied by two times the time since divergence. Thus the neutral theory of evolution provides a molecular clock, given that its assumptions are fulfilled, which may not be the case in reality.
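The independence from population size in the neutral case can be seen numerically; a sketch with illustrative values of u, r and N:

```python
# Rate of evolution R = N * u * rho. For a neutral mutation
# rho = 1/N, so R = u regardless of population size.
u, r = 1e-8, 1.1
for N in (100, 10_000):
    rho_neutral = 1 / N
    rho_selected = (1 - 1 / r) / (1 - r ** -N)
    print(N, N * u * rho_neutral, N * u * rho_selected)
# The neutral rate stays at u = 1e-08; the selected rate does not.
```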


References

  1. Moran, P. A. P. (1958). "Random processes in genetics". Mathematical Proceedings of the Cambridge Philosophical Society. 54 (1): 60–71. doi:10.1017/S0305004100033193.
