A functional differential equation is a differential equation with deviating argument. That is, a functional differential equation is an equation that contains a function and some of its derivatives evaluated at different argument values. [1]
Functional differential equations find use in mathematical models that assume a specified behavior or phenomenon depends on the present as well as the past state of a system. [2] In other words, past events explicitly influence future results. For this reason, functional differential equations apply to a wider range of phenomena than ordinary differential equations (ODEs), in which future behavior depends on the past only implicitly, through the present state.
Unlike ordinary differential equations, which contain a function of one variable and its derivatives evaluated with the same input, functional differential equations contain a function and its derivatives evaluated with different input values.
The simplest type of functional differential equation, called the retarded functional differential equation or retarded differential difference equation, is of the form [3]

x′(t) = f(t, x(t), x(t − τ)).
The simplest, fundamental functional differential equation is the linear first-order delay differential equation, [4] which is given by

x′(t) = αx(t) + βx(t − τ) + f(t),

where α and β are constants, f is some continuous function, and τ is a scalar delay. Below is a table comparing several ordinary and functional differential equations.
| | Ordinary differential equation | Functional differential equation |
|---|---|---|
| Examples | x′(t) = x(t); x″(t) + 2x′(t) + x(t) = 0 | x′(t) = x(t − 1); x′(t) = x(t) + x′(t − 2); x′(t) = x(t/2) |
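The need for a history function, rather than a single initial value, is what separates the right-hand column from the left. As a minimal sketch, the linear first-order delay differential equation can be integrated with a forward-Euler scheme that consults the stored past at every step; the coefficients, delay, and constant history below are illustrative assumptions, not values from the text.

```python
# Forward-Euler sketch for the linear first-order delay differential equation
#   x'(t) = a*x(t) + b*x(t - tau)        (forcing term taken as zero here).
# Unlike an ODE, a DDE needs a whole history function on [-tau, 0];
# a = -1, b = -0.5, tau = 1 are illustrative values.

def solve_linear_dde(a, b, tau, history, t_end, dt=0.01):
    """Integrate x'(t) = a x(t) + b x(t - tau), with x(t) = history(t) for t <= 0."""
    n = int(round(tau / dt))                             # steps spanning one delay
    xs = [history(-tau + i * dt) for i in range(n + 1)]  # samples on [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        x_now, x_delayed = xs[-1], xs[-1 - n]            # x(t) and x(t - tau)
        xs.append(x_now + dt * (a * x_now + b * x_delayed))
    return xs[n:]                                        # samples of x on [0, t_end]

# Constant history x(t) = 1 for t <= 0; the solution decays toward zero.
traj = solve_linear_dde(a=-1.0, b=-0.5, tau=1.0, history=lambda t: 1.0, t_end=5.0)
```

The ring of stored values standing in for the history function is the discrete analogue of the infinite-dimensional state discussed below.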
"Functional differential equation" is the general name for a number of more specific types of differential equations that are used in numerous applications. There are delay differential equations, integro-differential equations, and so on.
Differential difference equations are functional differential equations in which the argument values are discrete. [1] The general form for functional differential equations of finitely many discrete deviating arguments is

x^(m)(t) = f(t, x(t), x^(m_1)(t − τ_1(t)), …, x^(m_k)(t − τ_k(t))),

where the delays satisfy τ_i(t) ≥ 0.
Differential difference equations are also referred to as retarded, neutral, advanced, and mixed functional differential equations. This classification depends on whether the rate of change of the current state of the system depends on past values, future values, or both. [5]
| Classifications of differential difference equations [6] | Condition |
|---|---|
| Retarded | m > max(m_1, …, m_k) |
| Neutral | m = max(m_1, …, m_k) |
| Advanced | m < max(m_1, …, m_k) |
Functional differential equations of retarded type occur when m > max(m_1, …, m_k) for the equation given above, so the highest-order derivative appears only at the present time. In other words, this class of functional differential equations depends on the past and present values of the function, with delays.
A simple example of a retarded functional differential equation is

x′(t) = x(t − 1),

whereas a more general form for discrete deviating arguments can be written as

x′(t) = f(t, x(t), x(t − τ_1), …, x(t − τ_k)).
Functional differential equations of neutral type, or neutral differential equations, occur when m = max(m_1, …, m_k), as in

x′(t) = f(t, x(t), x(t − τ), x′(t − τ)).

Neutral differential equations depend on past and present values of the function, as retarded differential equations do, but they also depend on derivatives with delays. In other words, retarded differential equations do not involve the given function's derivative with delays, while neutral differential equations do.
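The distinction matters numerically: a neutral equation must also remember past derivative values. Below is a minimal Euler sketch for a neutral equation of the form x′(t) = a x(t) + b x(t − τ) + c x′(t − τ); the coefficients and the constant history are illustrative assumptions of this example.

```python
# Euler sketch for a neutral delay differential equation
#   x'(t) = a*x(t) + b*x(t - tau) + c*x'(t - tau).
# Because the delayed *derivative* appears, past derivative values are
# stored alongside past function values.  Coefficients are illustrative;
# the constant history x(t) = 1 implies x'(t) = 0 for t <= 0.

def solve_neutral(a, b, c, tau, t_end, dt=0.01):
    n = int(round(tau / dt))
    xs = [1.0] * (n + 1)       # x on [-tau, 0]
    dxs = [0.0] * (n + 1)      # x' on [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        dx = a * xs[-1] + b * xs[-1 - n] + c * dxs[-1 - n]
        xs.append(xs[-1] + dt * dx)
        dxs.append(dx)
    return xs[n:]              # samples of x on [0, t_end]

traj = solve_neutral(a=-1.0, b=0.2, c=0.1, tau=1.0, t_end=10.0)
```

Dropping the `dxs` buffer (taking c = 0) recovers the retarded case above.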
Integro-differential equations of Volterra type are functional differential equations with continuous argument values. [1] Integro-differential equations involve both the integrals and derivatives of some function with respect to its argument.
The continuous analogue of the retarded functional differential equation x′(t) = f(t, x(t), x(t − τ)) can be written as

x′(t) = f(t, x(t), ∫_{t−τ}^{t} K(t, s, x(s)) ds),

where K is a given kernel.
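As a concrete numerical sketch, take the special case x′(t) = −∫_{t−τ}^{t} x(s) ds (kernel K ≡ 1 and a linear right-hand side, both assumptions of this example) and approximate the memory integral with the trapezoidal rule at each Euler step:

```python
# Euler integration of the Volterra-type integro-differential equation
#   x'(t) = -  integral of x(s) ds over [t - tau, t],
# a special case with kernel K = 1.  The memory integral over the last
# tau units of time is approximated with the trapezoidal rule.

def solve_integro_dde(tau, history, t_end, dt=0.01):
    n = int(round(tau / dt))
    xs = [history(-tau + i * dt) for i in range(n + 1)]   # x on [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        window = xs[-1 - n:]                              # x on [t - tau, t]
        integral = dt * (sum(window) - 0.5 * (window[0] + window[-1]))
        xs.append(xs[-1] - dt * integral)
    return xs[n:]

# Constant history x(t) = 1 for t <= 0 gives a damped solution.
traj = solve_integro_dde(tau=1.0, history=lambda t: 1.0, t_end=10.0)
```

Here the state carried forward is an entire window of past values, reflecting the continuous argument values mentioned above.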
Functional differential equations have been used in models that determine the future behavior of a phenomenon from its present and past states. Future behavior of phenomena described by the solutions of ODEs is assumed to depend only on the present state, independent of the past. [2] However, many situations depend on past behavior.

FDEs are applicable to models in multiple fields, such as medicine, mechanics, biology, and economics. FDEs have been used in research on heat transfer, signal processing, the evolution of a species, traffic flow, and the study of epidemics. [1] [4]
A logistic equation for population growth is given by

x′(t) = ρ x(t) (1 − x(t)/k),

where ρ is the reproduction rate and k is the carrying capacity. Here x(t) represents the population size at time t, and ρ(1 − x(t)/k) is the density-dependent reproduction rate. [7]

If we now evaluate the density-dependent factor at an earlier time t − τ, we get the delayed logistic equation

x′(t) = ρ x(t) (1 − x(t − τ)/k).
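A short simulation makes the effect of the delay visible: with a lagged braking term the population can overshoot the carrying capacity instead of approaching it monotonically. The parameter values below are illustrative choices, not taken from the cited model.

```python
# Euler simulation of the delayed logistic equation
#   x'(t) = rho * x(t) * (1 - x(t - tau) / k).
# With rho * tau large enough, the lagged braking term lets the population
# overshoot the carrying capacity k.  rho, k, tau, x0 are illustrative.

def delayed_logistic(rho, k, tau, x0, t_end, dt=0.01):
    n = int(round(tau / dt))
    xs = [x0] * (n + 1)                       # constant history x(t) = x0
    for _ in range(int(round(t_end / dt))):
        x_now, x_past = xs[-1], xs[-1 - n]
        xs.append(x_now + dt * rho * x_now * (1.0 - x_past / k))
    return xs[n:]

traj = delayed_logistic(rho=1.5, k=100.0, tau=1.0, x0=10.0, t_end=40.0)
peak = max(traj)   # exceeds k because the braking term reacts with a lag
```

With the delay set to zero the same loop reproduces the monotone logistic approach to k.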
A common application of ordinary differential equations is the mixing model for a chemical solution.
Suppose there is a container holding V liters of salt water. Salt water flows into, and out of, the container at the same rate of r liters per second; in other words, the rate of water flowing in is equal to the rate of the salt-water solution flowing out. Let V be the volume in liters of salt water in the container, c(t) be the uniform concentration in grams per liter at time t, and γ be the concentration of the inflowing salt water. Then we have the differential equation [8]

c′(t) = (r/V)(γ − c(t)).

The problem with this equation is that it assumes every drop of water entering the container is mixed into the solution instantaneously. This assumption can be eliminated by using an FDE instead of an ODE.

Let c(t) be the average concentration at time t, rather than the uniform concentration, and assume the solution leaving the container at time t has concentration c(t − τ), the average concentration at some earlier time. The equation is then a delay differential equation of the form [8]

c′(t) = (r/V)(γ − c(t − τ)).
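Integrating the two mixing equations side by side illustrates the difference: the instantaneous-mixing concentration approaches the inflow concentration monotonically, while the delayed version can overshoot it. All numbers below are illustrative assumptions.

```python
# Euler integration of the instantaneous-mixing ODE
#   c'(t) = (r/V) * (gamma - c(t))
# next to its delayed counterpart
#   c'(t) = (r/V) * (gamma - c(t - tau)),
# where gamma is the inflow concentration.  Values are illustrative.

def mix(r, V, gamma, tau, c0, t_end, dt=0.01):
    n = int(round(tau / dt))
    ode = [c0]
    dde = [c0] * (n + 1)                      # constant history c(t) = c0
    for _ in range(int(round(t_end / dt))):
        ode.append(ode[-1] + dt * (r / V) * (gamma - ode[-1]))
        dde.append(dde[-1] + dt * (r / V) * (gamma - dde[-1 - n]))
    return ode, dde[n:]

ode, dde = mix(r=2.0, V=10.0, gamma=5.0, tau=3.0, c0=0.0, t_end=30.0)
# ode climbs toward gamma from below; dde overshoots gamma before settling.
```

The overshoot appears once the product (r/V)·τ is large enough, which is the case for the values chosen here.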
The Lotka–Volterra predator–prey model was originally developed to observe the populations of sharks and fish in the Adriatic Sea; however, it has since been used in many other fields for different purposes, such as describing chemical reactions. Modelling predator–prey populations has long been widely researched, and as a result there are many variants of the original equations.
One example, as studied by Xu and Wu (2013), [9] is a Lotka–Volterra model with time delay; a representative delayed form is

x′(t) = x(t) [r_1 − a y(t − τ)]
y′(t) = y(t) [−r_2 + b x(t − τ)],

where x(t) denotes the prey population density at time t, y(t) and y(t − τ) denote the density of the predator population at times t and t − τ, and τ ≥ 0 is the time delay.
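A brief Euler simulation of a delayed Lotka–Volterra system of this kind shows the two populations cycling while remaining positive. The specific form and all parameters here are illustrative assumptions, not the exact system of the cited paper.

```python
# Euler sketch of a delayed Lotka-Volterra predator-prey system
#   x'(t) = x(t) * (r1 - a * y(t - tau))     (prey)
#   y'(t) = y(t) * (-r2 + b * x(t - tau))    (predator).
# This is a generic delayed form; parameters are illustrative.

def delayed_lv(r1, r2, a, b, tau, x0, y0, t_end, dt=0.001):
    n = int(round(tau / dt))
    xs, ys = [x0] * (n + 1), [y0] * (n + 1)   # constant histories
    for _ in range(int(round(t_end / dt))):
        x, y = xs[-1], ys[-1]
        x_d, y_d = xs[-1 - n], ys[-1 - n]     # delayed densities
        xs.append(x + dt * x * (r1 - a * y_d))
        ys.append(y + dt * y * (-r2 + b * x_d))
    return xs[n:], ys[n:]

prey, pred = delayed_lv(r1=1.0, r2=1.0, a=0.5, b=0.5, tau=0.1,
                        x0=1.0, y0=1.0, t_end=5.0)
```

Setting τ = 0 recovers the classical (undelayed) Lotka–Volterra oscillations.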
Examples of other models that have used FDEs, namely RFDEs, appear across the fields noted above. Closely related topics include delay differential equations, the Volterra series, and Grönwall's inequality.