Moran's theorem

In population ecology, Moran's theorem (or the Moran effect) states that the time correlation of two separate populations of the same species is equal to the correlation between the environmental variabilities where they live.

The theorem is named after Pat Moran, who stated it in a paper on the dynamics of Canadian lynx populations.[1] It has been used to explain the synchronization of widely dispersed populations. It has an important consequence for conservation ecology: the viability of spatially structured populations is lower than one would expect from the local populations alone, because correlated environments increase the probability that several local populations go extinct simultaneously.[2]

In its original form it stated: if the two populations have population dynamics given by

  N_{1,t+1} = f(N_{1,t}) + ε_{1,t}
  N_{2,t+1} = f(N_{2,t}) + ε_{2,t}

where N_{i,t} is the size of population i at time t, f is a linear renewal function updating both populations in the same way, and ε_{1,t}, ε_{2,t} are the environmental variabilities, then ρ_N = ρ_ε, where ρ_N is the correlation between the populations and ρ_ε the correlation between their environments. This means that the populations are correlated through their environments alone, without any other explicit coupling term, and that the effect does not rely on the particular form of the linear renewal function f.
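Under the stated linear assumptions the result is easy to check numerically. The following sketch (the renewal coefficient, noise correlation, and run length are illustrative choices, not values from the source) simulates two uncoupled populations with the same linear renewal function f(N) = a·N, driven by correlated environmental noise, and compares the population correlation with the environmental one:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100_000       # time steps (long run for a stable correlation estimate)
a = 0.5           # coefficient of the linear renewal function f(N) = a * N
rho_env = 0.6     # chosen correlation between the two environmental noises

# Draw correlated environmental variability for the two populations.
cov = [[1.0, rho_env], [rho_env, 1.0]]
eps = rng.multivariate_normal([0.0, 0.0], cov, size=T)

# Identical linear dynamics N_{i,t+1} = a * N_{i,t} + eps_{i,t}, no coupling term.
N = np.zeros((T, 2))
for t in range(1, T):
    N[t] = a * N[t - 1] + eps[t]

rho_pop = np.corrcoef(N[:, 0], N[:, 1])[0, 1]
print(f"environment correlation: {rho_env}")
print(f"population correlation:  {rho_pop:.3f}")  # Moran: should match rho_env
```

With this choice of f each population is an AR(1) process, and the estimated cross-correlation converges to ρ_ε regardless of the value of a, which is the content of the theorem.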

The original form assumed a strictly linear structure, but this assumption can be weakened to allow for non-linear renewal functions. It has been suggested that the term "Moran effect" should be used for systems that do not strictly follow the original description.[3] In the general case the correlations will be lower, and the accuracy of the Moran description depends on whether the populations tend to converge to an equilibrium state (good accuracy for low-variance environmental variability) or tend to oscillate (eventual breakdown of the correlation).[4]
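The breakdown in the nonlinear case can be illustrated with a stochastic Ricker map (the growth rates, noise level, and environmental correlation below are illustrative assumptions, not values from the source): near a stable equilibrium the population correlation stays close to the environmental correlation, while in the oscillatory/chaotic regime it degrades.

```python
import numpy as np

def ricker_corr(r, rho_env=0.6, sigma=0.1, T=100_000, seed=1):
    """Correlation of two Ricker populations driven by correlated noise:
    N_{i,t+1} = N_{i,t} * exp(r * (1 - N_{i,t}) + eps_{i,t})."""
    rng = np.random.default_rng(seed)
    cov = (sigma ** 2) * np.array([[1.0, rho_env], [rho_env, 1.0]])
    eps = rng.multivariate_normal([0.0, 0.0], cov, size=T)
    N = np.ones((T, 2))
    for t in range(1, T):
        N[t] = N[t - 1] * np.exp(r * (1.0 - N[t - 1]) + eps[t])
    burn = T // 10  # discard the transient before measuring correlation
    return np.corrcoef(N[burn:, 0], N[burn:, 1])[0, 1]

print(ricker_corr(r=0.5))  # stable equilibrium: stays close to rho_env
print(ricker_corr(r=2.8))  # oscillatory/chaotic regime: correlation degrades
```

For small r the map has a stable equilibrium at N = 1 and linearizes to an AR(1) process, so the Moran prediction holds approximately; for large r the deterministic dynamics oscillate and the two trajectories diverge under their independent noise components, lowering the correlation.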

It has been tested experimentally in a number of cases, such as variation in fruit production,[5] acorn production,[6] bird populations,[7] and coral reef fishes.[8]


References

  1. Moran, P. A. P. (1953). "The statistical analysis of the Canadian lynx cycle. II. Synchronization and meteorology". Australian Journal of Zoology. 1: 291–298. doi:10.1071/zo9530291.
  2. Jörgen Ripa, Theoretical Population Ecology and Evolution Group, Equation of the month: the Moran effect
  3. Esa Ranta, Veijo Kaitala, Per Lundberg, Ecology of Populations, Cambridge University Press, 2006 p. 78
  4. Royama, T. (2005). "Moran effect on nonlinear population processes". Ecological Monographs. 75: 277–293. doi:10.1890/04-0770.
  5. Rosenstock, T. S.; Hastings, A.; Koenig, W. D.; Lyles, D. J.; Brown, P. H. (2011). "Testing Moran's theorem in an agroecosystem". Oikos. 120: 1434–1440. doi:10.1111/j.1600-0706.2011.19360.x.
  6. Koenig, WD; Knops, JM (Jan 2013). "Large-scale spatial synchrony and cross-synchrony in acorn production by two California oaks". Ecology. 94 (1): 83–93. doi:10.1890/12-0940.1.
  7. Sæther, B.-E.; Engen, S.; Grøtan, V.; Fiedler, W.; Matthysen, E.; Visser, M. E.; Wright, J.; Møller, A. P.; Adriaensen, F.; van Balen, H.; Balmer, D.; Mainwaring, M. C.; McCleery, R. H.; Pampus, M.; Winkel, W. (2007). "The extended Moran effect and large-scale synchronous fluctuations in the size of great tit and blue tit populations". Journal of Animal Ecology. 76: 315–325. doi:10.1111/j.1365-2656.2006.01195.x.
  8. Cheal, AJ; Delean, S; Sweatman, H; Thompson, AA (Jan 2007). "Spatial synchrony in coral reef fish populations and the influence of climate". Ecology. 88 (1): 158–69. doi:10.1890/0012-9658(2007)88[158:ssicrf]2.0.co;2.