Fitness-density covariance

The fitness-density covariance (also known as *growth-density covariance*) is a coexistence mechanism that can allow similar species to coexist because they are in different locations. [1] The effect is strongest when species are completely segregated, but it can also operate when their populations overlap somewhat. If a fitness-density covariance is operating, then when a species becomes very rare, its population shifts predominantly to locations with favorable conditions (e.g., less competition or good habitat). Similarly, when a species becomes very common, conditions worsen where it is most common, and it spreads into areas where conditions are less favorable. This negative feedback can keep rare species from being driven extinct by competition, and it can prevent stronger species from becoming so common that they crowd out other species.
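As a first numerical illustration (a minimal sketch with made-up fitness and density values, not taken from the source), the same total population grows faster when its individuals sit disproportionately in favorable patches:

```python
import numpy as np

# Hypothetical per-patch fitnesses: two good patches, two poor ones.
lam = np.array([1.5, 1.2, 0.8, 0.5])

# Two ways of distributing the same 100 individuals across the four patches.
uniform      = np.array([25.0, 25.0, 25.0, 25.0])  # spread evenly
concentrated = np.array([55.0, 30.0, 10.0, 5.0])   # crowded into good patches

for name, N in [("uniform", uniform), ("concentrated", concentrated)]:
    growth = (N * lam).sum() / N.sum()  # population-wide growth rate
    print(f"{name:>12}: growth rate = {growth:.2f}")
# uniform: 1.00; concentrated: 1.29 -- the extra growth relative to the
# uniform case equals the covariance between local density and local fitness.
```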

Along with the storage effect and relative nonlinearity, the fitness-density covariance is one of the three variation-dependent mechanisms of modern coexistence theory. [2]

Mathematical derivation

Here, we will consider competition between $n$ species. [1] We define $N_{xj}(t)$ as the number of individuals of species $j$ at patch $x$ and time $t$, and $\lambda_{xj}(t)$ as the fitness (i.e., the per-capita contribution of an individual to the next time period through survival and reproduction) of individuals of species $j$ at patch $x$ and time $t$. [1] $\lambda_{xj}(t)$ is determined by many factors, including habitat, intraspecific competition, and interspecific competition at $x$. Thus, if there are currently $N_{xj}(t)$ individuals at $x$, they will contribute $N_{xj}(t)\lambda_{xj}(t)$ individuals to the next time period (i.e., $t+1$). Those individuals may stay at $x$, or they may move; the net contribution of $x$ to next year's population is the same either way.

With our definitions in place, we want to calculate the finite rate of increase of species $j$ (i.e., its population-wide growth rate), $\tilde{\lambda}_j(t)$. It is defined such that $\overline{N_j}(t+1) = \tilde{\lambda}_j(t)\,\overline{N_j}(t)$, where each average is taken across all space. [1] In essence, it is the average fitness of members of species $j$ in year $t$. We can calculate $\overline{N_j}(t+1)$ by summing $N_{xj}(t)\lambda_{xj}(t)$ across all patches, giving

$$\tilde{\lambda}_j(t) = \frac{\frac{1}{X} \sum_x N_{xj}(t)\,\lambda_{xj}(t)}{\overline{N_j}(t)},$$

where $X$ is the number of patches. Defining $\nu_{xj} = N_{xj}(t)/\overline{N_j}(t)$ as species $j$'s relative density at $x$, this equation becomes

$$\tilde{\lambda}_j(t) = \frac{1}{X} \sum_x \nu_{xj}\,\lambda_{xj}(t) = \overline{\nu_{xj}\,\lambda_{xj}(t)}.$$

Using the theorem that $\overline{ab} = \bar{a}\,\bar{b} + \operatorname{cov}(a, b)$, this simplifies to

$$\tilde{\lambda}_j(t) = \overline{\nu_{xj}}\,\overline{\lambda_{xj}(t)} + \operatorname{cov}(\nu_{xj}, \lambda_{xj}(t)).$$

Since $\nu_{xj}$ is density relative to the spatial average, its average will be 1 (i.e., $\overline{\nu_{xj}} = 1$). Thus,

$$\tilde{\lambda}_j(t) = \overline{\lambda_{xj}(t)} + \operatorname{cov}(\nu_{xj}, \lambda_{xj}(t)).$$

Thus, we have partitioned $\tilde{\lambda}_j(t)$ into two key parts. The first, $\overline{\lambda_{xj}(t)}$, is the fitness of an individual at an average site. If species are distributed uniformly across the landscape, then $\tilde{\lambda}_j(t) = \overline{\lambda_{xj}(t)}$. If, however, they are distributed non-randomly across the environment, then $\operatorname{cov}(\nu_{xj}, \lambda_{xj}(t))$ will be non-zero. If individuals are found predominantly in good sites, then $\operatorname{cov}(\nu_{xj}, \lambda_{xj}(t))$ will be positive; if they are found predominantly in poor sites, it will be negative.
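This partition is easy to check numerically. The short sketch below (with arbitrary, randomly generated patch data assumed purely for illustration) computes $\tilde{\lambda}_j$ both directly and from the decomposition; note that the covariance here is the population covariance over patches (divide by $X$), not a sample covariance:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N   = rng.uniform(0.0, 50.0, size=10)  # densities N_xj over X = 10 patches
lam = rng.uniform(0.5, 2.0, size=10)   # fitnesses lambda_xj in the same patches

nu  = N / N.mean()                     # relative density; nu.mean() equals 1
cov = np.mean(nu * lam) - nu.mean() * lam.mean()  # population covariance

direct      = (N * lam).sum() / N.sum()  # finite rate of increase, direct
partitioned = lam.mean() + cov           # mean fitness + fitness-density covariance
print(np.isclose(direct, partitioned))   # True
```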

To analyze how species coexist, we perform an invasion analysis. [2] In short, we remove one species (called the "invader") from the environment and allow the other species (called the "residents") to come to equilibrium (so that $\tilde{\lambda}_r(t) = 1$ for each resident $r$). We then determine whether the invader has a positive growth rate (i.e., $\tilde{\lambda}_i(t) > 1$). If each species has a positive growth rate as an invader, then they can coexist.

Because $\tilde{\lambda}_r(t) = 1$ for each resident, we can calculate the invader's growth rate, $\tilde{\lambda}_i(t)$, relative to the residents as

$$\tilde{\lambda}_i(t) - 1 = \tilde{\lambda}_i(t) - \frac{1}{n-1} \sum_r \tilde{\lambda}_r(t),$$

where $n-1$ is the number of residents (since $n$ is the number of species), and the sum is over all residents (and thus represents an average). [1] Using our formula for $\tilde{\lambda}_j(t)$, we find that

$$\tilde{\lambda}_i(t) - 1 = \overline{\lambda_{xi}(t)} + \operatorname{cov}(\nu_{xi}, \lambda_{xi}(t)) - \frac{1}{n-1} \sum_r \left[ \overline{\lambda_{xr}(t)} + \operatorname{cov}(\nu_{xr}, \lambda_{xr}(t)) \right].$$

This rearranges to

$$\tilde{\lambda}_i(t) - 1 = \left[ \overline{\lambda_{xi}(t)} - \frac{1}{n-1} \sum_r \overline{\lambda_{xr}(t)} \right] + \Delta\kappa,$$

where

$$\Delta\kappa = \operatorname{cov}(\nu_{xi}, \lambda_{xi}(t)) - \frac{1}{n-1} \sum_r \operatorname{cov}(\nu_{xr}, \lambda_{xr}(t))$$

is the fitness-density covariance, and the bracketed term contains all other mechanisms (such as the spatial storage effect). [1]

Thus, if $\Delta\kappa$ is positive, then the invader is better able than the residents to build up its population in good areas (i.e., $\nu_{xi}$ is high where $\lambda_{xi}(t)$ is large). This can occur if the invader builds up in good areas (i.e., $\operatorname{cov}(\nu_{xi}, \lambda_{xi}(t))$ is strongly positive) or if the residents are forced into poor areas (i.e., $\operatorname{cov}(\nu_{xr}, \lambda_{xr}(t))$ is less positive, or negative). In either case, species gain an advantage when they are invaders, the defining feature of a stabilizing mechanism.
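To see $\Delta\kappa$ at work, the sketch below runs an invasion analysis in a deliberately simple model of our own construction (not the source's): two patch types, Beverton–Holt-style local competition, and a fraction $d$ of offspring dispersing evenly among patches, with all parameter values hypothetical. Each species has higher intrinsic fitness in a different patch, so a rare invader accumulates where its fitness is high while the resident crowds its own best patch:

```python
import numpy as np

# Assumed parameters: rows are patches, columns are species; each species
# has higher intrinsic fitness R in one of the two patches.
R = np.array([[2.0, 1.2],
              [1.2, 2.0]])
a, d = 0.01, 0.2  # competition strength, dispersing fraction

def fitness(N):
    # Beverton-Holt-style competition on total density within each patch.
    return R / (1.0 + a * N.sum(axis=1, keepdims=True))

def step(N):
    offspring = N * fitness(N)
    # A fraction d of offspring is split evenly among patches; the rest stay.
    return (1 - d) * offspring + d * offspring.mean(axis=0, keepdims=True)

def cov_nu_lam(n, lam):
    nu = n / n.mean()  # relative density nu_xj
    return np.mean(nu * lam) - nu.mean() * lam.mean()

# 1. Let the resident (species 1) equilibrate with the invader (species 0) absent.
N = np.array([[0.0, 50.0],
              [0.0, 50.0]])
for _ in range(500):
    N = step(N)

# 2. Introduce the invader at negligible density, rescaling each generation so
#    it stays rare: its spatial distribution, not its abundance, matters here.
N[:, 0] = 1e-6
for _ in range(500):
    N = step(N)
    N[:, 0] *= 1e-6 / N[:, 0].sum()

lam = fitness(N)
growth = (N[:, 0] * lam[:, 0]).sum() / N[:, 0].sum()
dk = cov_nu_lam(N[:, 0], lam[:, 0]) - cov_nu_lam(N[:, 1], lam[:, 1])
print(f"invader growth rate: {growth:.2f} (> 1 means it can invade)")
print(f"fitness-density covariance, delta kappa: {dk:+.2f}")
```

By symmetry the same holds with the species' roles swapped, so in this toy model each species can invade when rare and the pair coexists; Chesson's analysis [1] develops the general, rigorous version of this argument.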

References

1. Chesson, Peter (November 2000). "General Theory of Competitive Coexistence in Spatially-Varying Environments". Theoretical Population Biology. 58 (3): 211–237. doi:10.1006/tpbi.2000.1486. PMID 11120650.
2. Chesson, Peter (November 2000). "Mechanisms of Maintenance of Species Diversity". Annual Review of Ecology and Systematics. 31 (1): 343–366. doi:10.1146/annurev.ecolsys.31.1.343. S2CID 403954.