Leslie matrix

The Leslie matrix, named after Patrick H. Leslie, is a discrete, age-structured model of population growth that is very popular in population ecology. [1] [2] The Leslie matrix (also called the Leslie model) is one of the most well-known ways to describe the growth of a population (and its projected age distribution), in which the population is closed to migration, grows in an unlimited environment, and in which only one sex, usually the female, is considered.

The Leslie matrix is used in ecology to model the changes in a population of organisms over a period of time. In a Leslie model, the population is divided into groups based on age classes. A similar model which replaces age classes with ontogenetic stages is called a Lefkovitch matrix, [3] in which individuals can either remain in the same stage class or move on to the next one. At each time step, the population is represented by a vector with an element for each age class, where each element indicates the number of individuals currently in that class.

The Leslie matrix is a square matrix with the same number of rows and columns as the population vector has elements. The (i,j)th cell of the matrix indicates how many individuals will be in age class i at the next time step for each individual in age class j. At each time step, the population vector is multiplied by the Leslie matrix to generate the population vector for the subsequent time step.

To build a matrix, the following information must be known from the population:

- $n_x$, the number of individuals in each age class $x$;
- $s_x$, the fraction of individuals that survives from age class $x$ to age class $x+1$;
- $f_x$, the fecundity: the per-capita average number of female offspring reaching $n_0$ born to mothers of age class $x$.

From the observations that $n_0$ at time $t+1$ is simply the sum of all offspring born from the previous time step, and that the organisms surviving to time $t+1$ are the organisms at time $t$ surviving at probability $s_x$, one gets $n_0(t+1) = \sum_{x=0}^{\omega-1} f_x\, n_x(t)$ and $n_{x+1}(t+1) = s_x\, n_x(t)$. This implies the following matrix representation:

$$
\begin{bmatrix} n_0 \\ n_1 \\ \vdots \\ n_{\omega-1} \end{bmatrix}_{t+1}
=
\begin{bmatrix}
f_0 & f_1 & f_2 & \cdots & f_{\omega-2} & f_{\omega-1} \\
s_0 & 0 & 0 & \cdots & 0 & 0 \\
0 & s_1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & s_{\omega-2} & 0
\end{bmatrix}
\begin{bmatrix} n_0 \\ n_1 \\ \vdots \\ n_{\omega-1} \end{bmatrix}_{t}
$$

where $\omega$ is the maximum age attainable in the population.
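
As an illustration of the projection step, the following minimal sketch builds a Leslie matrix for a hypothetical three-age-class population and advances the population vector one time step; the vital rates are invented for the example, not taken from the cited sources:

```python
import numpy as np

# Hypothetical vital rates for a three-age-class population
# (illustrative values only, not data from the cited sources).
fecundity = np.array([0.0, 1.5, 1.2])  # f_x: female offspring per female of age x
survival = np.array([0.6, 0.4])        # s_x: fraction surviving from age x to x+1

# Build the Leslie matrix: fecundities along the first row,
# survival probabilities on the subdiagonal, zeros elsewhere.
L = np.zeros((3, 3))
L[0, :] = fecundity
L[1, 0] = survival[0]
L[2, 1] = survival[1]

n_t = np.array([100.0, 50.0, 25.0])  # population vector at time t
n_next = L @ n_t                     # population vector at time t + 1
print(n_next)                        # [105.  60.  20.]
```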

This can be written as:

$$\mathbf{n}_{t+1} = \mathbf{L}\,\mathbf{n}_t$$

or:

$$\mathbf{n}_t = \mathbf{L}^t\,\mathbf{n}_0$$

where $\mathbf{n}_t$ is the population vector at time $t$ and $\mathbf{L}$ is the Leslie matrix. The dominant eigenvalue of $\mathbf{L}$, denoted $\lambda$, gives the population's asymptotic growth rate (growth rate at the stable age distribution). The corresponding eigenvector provides the stable age distribution, the proportion of individuals of each age within the population, which remains constant at this point of asymptotic growth barring changes to vital rates. [4] Once the stable age distribution has been reached, a population undergoes exponential growth at rate $\lambda$.
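
As a sketch (not the cited sources' method), the growth rate and stable age distribution can be computed numerically from the dominant eigenpair; the matrix below reuses the hypothetical vital rates from the previous example:

```python
import numpy as np

# Same hypothetical Leslie matrix as in the previous sketch.
L = np.array([[0.0, 1.5, 1.2],
              [0.6, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(L)

# By the Perron-Frobenius theorem, the dominant eigenvalue of a
# (primitive) Leslie matrix is real and positive.
i = np.argmax(np.abs(eigenvalues))
lam = eigenvalues[i].real            # asymptotic growth rate, ~1.08 here
w = np.abs(eigenvectors[:, i].real)
stable_age = w / w.sum()             # stable age distribution (sums to 1)

print(f"asymptotic growth rate lambda = {lam:.4f}")
print("stable age distribution =", stable_age)
```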

The characteristic polynomial of the matrix is given by the Euler–Lotka equation.
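
Concretely, expanding $\det(\lambda \mathbf{I} - \mathbf{L}) = 0$ and dividing through by $\lambda^{\omega}$ gives the discrete form of the Euler–Lotka equation, written here (as a standard identity, not quoted from the sources) with the cumulative survivorship $\ell_x = s_0 s_1 \cdots s_{x-1}$ and $\ell_0 = 1$:

$$
1 = \sum_{x=0}^{\omega-1} \lambda^{-(x+1)}\, \ell_x\, f_x .
$$

Its unique positive root is the asymptotic growth rate $\lambda$.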

The Leslie model is very similar to a discrete-time Markov chain. The main difference is that in a Markov model one would have $f_x + s_x = 1$ for each $x$, while the Leslie model may have these sums greater or less than 1.

Stable age structure

This age-structured growth model suggests a steady-state, or stable, age-structure and growth rate. Regardless of the initial population size, $N(0)$, or age distribution, the population tends asymptotically to this age-structure and growth rate. It also returns to this state following perturbation. The Euler–Lotka equation provides a means of identifying the intrinsic growth rate. The stable age-structure is determined both by the growth rate and the survival function (i.e. the Leslie matrix). [5] For example, a population with a large intrinsic growth rate will have a disproportionately “young” age-structure. A population with high mortality rates at all ages (i.e. low survival) will have a similar age-structure.
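
This convergence is easy to observe by iterating the projection from two very different starting vectors; a minimal sketch, reusing the hypothetical matrix from above, is shown below. The age proportions approach the same stable distribution in both cases:

```python
import numpy as np

# Same hypothetical Leslie matrix as above.
L = np.array([[0.0, 1.5, 1.2],
              [0.6, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

def age_proportions(n0, steps=50):
    """Project a starting vector `steps` times; return final age proportions."""
    n = np.asarray(n0, dtype=float)
    for _ in range(steps):
        n = L @ n
    return n / n.sum()

# Two very different initial age distributions...
print(age_proportions([1000.0, 0.0, 0.0]))
print(age_proportions([0.0, 0.0, 1000.0]))
# ...yield (numerically) the same stable age structure.
```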

Random Leslie model

There is a generalization of the population growth rate to the case where the Leslie matrix has random elements which may be correlated. [6] When characterizing the disorder, or uncertainties, in vital parameters, a perturbative formalism must be used to deal with linear non-negative random matrix difference equations. The non-trivial effective eigenvalue that defines the long-term asymptotic dynamics of the mean-value population state vector can then be presented as the effective growth rate. This eigenvalue and the associated mean-value invariant state vector can be calculated from the smallest positive root of a secular polynomial and the residue of the mean-value Green function. Exact and perturbative results can thus be analyzed for several models of disorder.
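
The cited analysis is perturbative and analytical; as a purely illustrative alternative, the effective growth rate of the mean-value population vector can be estimated by direct Monte Carlo simulation. In the sketch below, the random fecundity distributions, the fixed survival rates, and all numerical values are assumptions for demonstration, not parameters from [6]:

```python
import numpy as np

rng = np.random.default_rng(0)
survival = np.array([0.6, 0.4])  # kept deterministic here for simplicity

def random_leslie():
    """Leslie matrix with fecundities redrawn each time step (illustrative)."""
    f = rng.uniform(low=[0.0, 1.0, 0.8], high=[0.0, 2.0, 1.6])  # f_0 fixed at 0
    L = np.zeros((3, 3))
    L[0, :] = f
    L[1, 0], L[2, 1] = survival
    return L

def log_effective_growth_rate(steps=500, replicates=200):
    """Estimate the per-step log growth rate of the mean population vector."""
    total = np.zeros(3)
    for _ in range(replicates):
        n = np.array([100.0, 50.0, 25.0])  # initial population, summing to 175
        for _ in range(steps):
            n = random_leslie() @ n
        total += n
    mean_final = total.sum() / replicates
    return (np.log(mean_final) - np.log(175.0)) / steps

print(f"estimated log effective growth rate per step: {log_effective_growth_rate():.4f}")
```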

References

  1. Leslie, P.H. (1945). "On the use of matrices in certain population mathematics". Biometrika, 33(3), 183–212.
  2. Leslie, P.H. (1948). "Some further notes on the use of matrices in population mathematics". Biometrika, 35(3–4), 213–245.
  3. Caswell, H. (2001). Matrix Population Models: Construction, Analysis, and Interpretation (2nd ed.). Sunderland, MA: Sinauer Associates.
  4. Mills, L. Scott (2012). Conservation of Wildlife Populations: Demography, Genetics, and Management. John Wiley & Sons. p. 104. ISBN 978-0-470-67150-4.
  5. Further details on the rate and form of convergence to the stable age-structure are provided in Charlesworth, B. (1980). Evolution in Age-Structured Populations. Cambridge: Cambridge University Press.
  6. Cáceres, M.O.; Cáceres-Saez, I. (2011). "Random Leslie matrices in population dynamics". J. Math. Biol., 63, 519–556. doi:10.1007/s00285-010-0378-0.
