In mathematics, an integrodifference equation is a recurrence relation on a function space, of the following form:

n_{t+1}(x) = ∫_Ω k(x, y) f(n_t(y)) dy,
where {n_t} is a sequence in the function space and Ω is the domain of those functions. In most applications, for any y ∈ Ω, k(x, y) is a probability density function on Ω. Note that in the definition above, n_t can be vector valued, in which case each element of {n_t} has a scalar valued integrodifference equation associated with it. Integrodifference equations are widely used in mathematical biology, especially theoretical ecology, to model the dispersal and growth of populations. In this case, n_t(x) is the population size or density at location x at time t, f(n_t(x)) describes the local population growth at location x, and k(x, y) is the probability of moving from point y to point x, often referred to as the dispersal kernel. Integrodifference equations are most commonly used to describe univoltine populations, including, but not limited to, many arthropod and annual plant species. However, multivoltine populations can also be modeled with integrodifference equations,[1] as long as the organism has non-overlapping generations. In this case, t is not measured in years, but rather the time increment between broods.
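As a concrete illustration (not part of the original text), the recursion n_{t+1}(x) = ∫ k(x − y) f(n_t(y)) dy can be simulated on a discretized domain. The sketch below assumes Ricker growth f(n) = n·e^{r(1−n)} and a Gaussian dispersal kernel, with made-up parameter values, and approximates the integral by a Riemann sum.

```python
import math

# Illustrative sketch of one generation of a 1-D integrodifference equation
#   n_{t+1}(x) = ∫ k(x - y) f(n_t(y)) dy
# with Ricker growth and a Gaussian dispersal kernel. All parameter
# values here are hypothetical, chosen only for demonstration.

def ricker(n, r=1.5):
    """Local growth map f(n) = n * exp(r * (1 - n))."""
    return n * math.exp(r * (1.0 - n))

def gaussian_kernel(d, sigma=1.0):
    """Dispersal kernel k(d): density of a displacement of length d."""
    return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def step(n, xs, dx):
    """Advance the population one generation via a Riemann-sum integral."""
    grown = [ricker(v) for v in n]
    return [dx * sum(gaussian_kernel(x - y) * g for y, g in zip(xs, grown))
            for x in xs]

# Discretize a domain wide enough that boundary losses are negligible.
dx = 0.25
xs = [i * dx for i in range(-120, 121)]   # xs[120] is x = 0, xs[132] is x = 3

# Compact initial condition: a small population near the origin.
n0 = [1.0 if abs(x) <= 1.0 else 0.0 for x in xs]

n1 = step(n0, xs, dx)
n2 = step(n1, xs, dx)
```

Iterating `step` shows the characteristic behavior: the population both grows toward carrying capacity locally and spreads outward from the initially occupied region.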
In one spatial dimension, the dispersal kernel often depends only on the distance between the source and the destination, and can be written as k(x, y) = k(x − y). In this case, some natural conditions on f and k imply that there is a well-defined spreading speed for waves of invasion generated from compact initial conditions. The wave speed is often calculated by studying the linearized equation

n_{t+1}(x) = R ∫_{−∞}^{∞} k(x − y) n_t(y) dy,

where R = f′(0) is the geometric growth rate at low density. This can be written as the convolution

n_{t+1} = R k * n_t.

Using a moment-generating-function transformation

M(s) = ∫_{−∞}^{∞} k(x) e^{sx} dx,

it has been shown that the critical wave speed is

c* = min_{w>0} (1/w) ln(R M(w)).
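The minimization defining c* can be carried out numerically. The sketch below (illustrative, not from the text) uses a Gaussian kernel, whose moment generating function M(w) = exp(σ²w²/2) yields the known closed form c* = σ√(2 ln R), and checks a simple ternary-search minimizer against it; the values of R and σ are assumptions.

```python
import math

# Hedged sketch: computing the critical wave speed
#   c* = min_{w>0} (1/w) * ln(R * M(w))
# for a Gaussian dispersal kernel with M(w) = exp(sigma^2 * w^2 / 2),
# where R = f'(0) is the low-density growth rate. Parameters are illustrative.

def wave_speed(R, M, w_lo=1e-6, w_hi=50.0, iters=200):
    """Minimize g(w) = ln(R*M(w))/w by ternary search (g is unimodal here)."""
    g = lambda w: math.log(R * M(w)) / w
    for _ in range(iters):
        m1 = w_lo + (w_hi - w_lo) / 3
        m2 = w_hi - (w_hi - w_lo) / 3
        if g(m1) < g(m2):
            w_hi = m2
        else:
            w_lo = m1
    return g((w_lo + w_hi) / 2)

R, sigma = 2.0, 1.0
M = lambda w: math.exp(sigma**2 * w**2 / 2)

c_numeric = wave_speed(R, M)
c_closed = sigma * math.sqrt(2 * math.log(R))  # closed form for this kernel
```

For kernels without a closed-form minimum, the same one-dimensional search applies as long as M(w) is finite on the search interval (thin-tailed kernels); fat-tailed kernels have no finite moment generating function and no finite spreading speed of this kind.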
Other types of equations used to model population dynamics through space include reaction–diffusion equations and metapopulation equations. However, diffusion equations do not as easily allow for the inclusion of explicit dispersal patterns and are only biologically accurate for populations with overlapping generations. [2] Metapopulation equations differ from integrodifference equations in that they break the population down into discrete patches rather than a continuous landscape.
In mathematics, the Laplace transform, named after its inventor Pierre-Simon Laplace, is an integral transform that converts a function of a real variable t (often time) to a function of a complex variable s (complex frequency). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms linear differential equations into algebraic equations and convolution into multiplication.
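As a small numerical illustration (an assumption of this edit, not from the text), the transform F(s) = ∫₀^∞ e^{−st} f(t) dt can be approximated by a truncated trapezoidal sum and checked against a transform known in closed form, here f(t) = e^{−2t} with F(s) = 1/(s + 2).

```python
import math

# Hedged sketch: numerically approximate the Laplace transform
#   F(s) = ∫_0^∞ e^{-s t} f(t) dt
# by a trapezoidal sum truncated at t_max. Checked against the known
# transform of f(t) = exp(-2 t), namely F(s) = 1 / (s + 2).

def laplace(f, s, t_max=40.0, n=200_000):
    dt = t_max / n
    total = 0.5 * (f(0.0) + math.exp(-s * t_max) * f(t_max))
    for i in range(1, n):
        t = i * dt
        total += math.exp(-s * t) * f(t)
    return total * dt

F1 = laplace(lambda t: math.exp(-2 * t), s=1.0)   # analytic value: 1/3
```

The truncation at t_max is harmless here because the integrand decays exponentially; slowly decaying f would need a larger t_max or a different quadrature.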
In mathematics, the Dirac delta function, also known as the unit impulse symbol δ, is a generalized function or distribution over the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. It can also be interpreted as a linear functional that maps every function to its value at zero, or as the weak limit of a sequence of bump functions, which are zero over most of the real line, with a tall spike at the origin. Bump functions are thus sometimes called "approximate" or "nascent" delta functions.
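The weak-limit interpretation can be seen numerically. The sketch below (illustrative; the Gaussian family and test function are assumptions of this edit) pairs a narrowing Gaussian "nascent" delta against a smooth test function and shows the pairing approaching the value at zero.

```python
import math

# Hedged illustration: a "nascent" delta function. As eps -> 0, the Gaussian
#   d_eps(x) = exp(-x^2 / (2 eps^2)) / (eps * sqrt(2 pi))
# integrated against a smooth test function f picks out f(0).

def nascent_delta(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def pair(f, eps, lo=-5.0, hi=5.0, n=100_000):
    """Midpoint Riemann sum of ∫ f(x) d_eps(x) dx."""
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) * nascent_delta(lo + (i + 0.5) * dx, eps)
               for i in range(n)) * dx

f = math.cos           # smooth test function with f(0) = 1
v1 = pair(f, eps=0.5)  # wide spike: pairing is still far from f(0)
v2 = pair(f, eps=0.05) # narrow spike: pairing is close to f(0) = 1
```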
In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes functions depending on space or time into functions depending on spatial or temporal frequency, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of space or time.
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function f : U → R, where U is an open subset of Rn, that satisfies Laplace's equation, that is,

∂²f/∂x₁² + ∂²f/∂x₂² + ⋯ + ∂²f/∂xₙ² = 0

everywhere on U.
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the inverse relation of the function f(w) = wew, where w is any complex number and ew is the exponential function.
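As an illustration (an assumption of this edit, not from the text), the principal real branch W₀ can be computed by applying Newton's method to g(w) = w·eʷ − z; the defining identity W(z)·e^{W(z)} = z then serves as a check.

```python
import math

# Hedged sketch: principal branch W_0 of the Lambert W function for real
# z > -1/e, via Newton's method on g(w) = w * exp(w) - z.

def lambert_w(z, w=0.0, tol=1e-12):
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))  # Newton step for w*e^w = z
        w -= step
        if abs(step) < tol:
            break
    return w

omega = lambert_w(1.0)   # the omega constant, W(1)
```

Production code would normally use a library routine (e.g. SciPy's `scipy.special.lambertw`) that handles the other branches and complex arguments; this sketch covers only the real principal branch.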
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
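The rule, and a short worked example of the kind it is usually applied to, can be written as follows (standard material, not specific to this text):

```latex
% Integration by parts: integrate d(uv) = u\,dv + v\,du and rearrange:
\int u \, dv = uv - \int v \, du
% Example: \int x e^x \, dx with u = x, dv = e^x\,dx (so du = dx, v = e^x):
\int x e^x \, dx = x e^x - \int e^x \, dx = (x - 1)\, e^x + C
```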
In mathematics, the Fredholm integral equation is an integral equation whose solution gives rise to Fredholm theory, the study of Fredholm kernels and Fredholm operators. The integral equation was studied by Ivar Fredholm. A useful method to solve such equations, the Adomian decomposition method, is due to George Adomian.
In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero, and an exact form is a differential form, α, that is the exterior derivative of another differential form β. Thus, an exact form is in the image of d, and a closed form is in the kernel of d.
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson.
In mathematics, Harnack's inequality is an inequality relating the values of a positive harmonic function at two points, introduced by A. Harnack (1887). J. Serrin (1955) and J. Moser generalized Harnack's inequality to solutions of elliptic or parabolic partial differential equations. Perelman's solution of the Poincaré conjecture uses a version of the Harnack inequality, found by R. Hamilton (1993), for the Ricci flow. Harnack's inequality is used to prove Harnack's theorem about the convergence of sequences of harmonic functions. Harnack's inequality can also be used to show the interior regularity of weak solutions of partial differential equations.
In complex analysis, functional analysis and operator theory, a Bergman space is a function space of holomorphic functions in a domain D of the complex plane that are sufficiently well-behaved at the boundary that they are absolutely integrable. Specifically, for 0 < p < ∞, the Bergman space Ap(D) is the space of all holomorphic functions f in D for which the p-norm is finite:

‖f‖_{Ap} := ( ∫_D |f(x + iy)|^p dx dy )^{1/p} < ∞.
In the mathematical study of heat conduction and diffusion, a heat kernel is the fundamental solution to the heat equation on a specified domain with appropriate boundary conditions. It is also one of the main tools in the study of the spectrum of the Laplace operator, and is thus of some auxiliary importance throughout mathematical physics. The heat kernel represents the evolution of temperature in a region whose boundary is held fixed at a particular temperature, such that an initial unit of heat energy is placed at a point at time t = 0.
In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are nonnegative real constants C, α > 0, such that

|f(x) − f(y)| ≤ C ‖x − y‖^α

for all x and y in the domain of f.
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm.
In mathematics, the class of Muckenhoupt weights Ap consists of those weights ω for which the Hardy–Littlewood maximal operator is bounded on Lp(dω). Specifically, we consider functions f on Rn and their associated maximal functions M( f ) defined as

M(f)(x) = sup_{r>0} (1/|B_r(x)|) ∫_{B_r(x)} |f(y)| dy,

where B_r(x) is the ball in Rn with radius r and center x.
In mathematics, the Bussgang theorem is a theorem of stochastic analysis. The theorem states that the cross-correlations of a Gaussian signal before and after it has passed through a nonlinear operation are equal up to a constant. It was first published by Julian J. Bussgang in 1952 while he was at the Massachusetts Institute of Technology.
In mathematics, the Neumann–Poincaré operator or Poincaré–Neumann operator, named after Carl Neumann and Henri Poincaré, is a non-self-adjoint compact operator introduced by Poincaré to solve boundary value problems for the Laplacian on bounded domains in Euclidean space. Within the language of potential theory it reduces the partial differential equation to an integral equation on the boundary to which the theory of Fredholm operators can be applied. The theory is particularly simple in two dimensions—the case treated in detail in this article—where it is related to complex function theory, the conjugate Beurling transform or complex Hilbert transform and the Fredholm eigenvalues of bounded planar domains.
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions.
In mathematical analysis, Haar's tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood tauberian theorem.