Delay differential equation

In mathematics, delay differential equations (DDEs) are a type of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times. DDEs are also called time-delay systems, systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of systems with a functional state: like partial differential equations (PDEs), they are infinite dimensional, as opposed to ordinary differential equations (ODEs), which have a finite-dimensional state vector. Four points may explain the popularity of DDEs: [1]

  1. Aftereffect is an applied problem: it is well known that, together with increasing expectations of dynamic performance, engineers need their models to behave more like the real process. Many processes include aftereffect phenomena in their inner dynamics. In addition, actuators, sensors, and communication networks that are now involved in feedback control loops introduce such delays. Finally, besides actual delays, time lags are frequently used to simplify very high order models. As a result, interest in DDEs keeps growing in all scientific areas and, especially, in control engineering.
  2. Delay systems are still resistant to many classical controllers: one might think that the simplest approach would consist in replacing them by finite-dimensional approximations. Unfortunately, ignoring effects which are adequately represented by DDEs is not a general alternative: in the best situation (constant and known delays), it leads to the same degree of complexity in the control design. In worse cases (time-varying delays, for instance), it is potentially disastrous in terms of stability and oscillations.
  3. Voluntary introduction of delays can benefit the control system. [2]
  4. In spite of their complexity, DDEs often appear as simple infinite-dimensional models in the very complex area of partial differential equations (PDEs).

A general form of the time-delay differential equation for $x(t)\in\mathbb{R}^{n}$ is

$$\frac{d}{dt}x(t)=f(t,x(t),x_{t}),$$

where $x_{t}=\{x(\tau):\tau\leq t\}$ represents the trajectory of the solution in the past. In this equation, $f$ is a functional operator from $\mathbb{R}\times\mathbb{R}^{n}\times C^{1}(\mathbb{R},\mathbb{R}^{n})$ to $\mathbb{R}^{n}$.

Examples

Typical examples include equations with a continuous (distributed) delay,

$$\frac{d}{dt}x(t)=f\!\left(t,x(t),\int_{-\infty}^{0}x(t+\tau)\,d\mu(\tau)\right),$$

where the integral is taken with respect to a measure $\mu$ on $(-\infty,0]$; equations with discrete delays,

$$\frac{d}{dt}x(t)=f(t,x(t),x(t-\tau_{1}),\dotsc,x(t-\tau_{m})),\qquad \tau_{1}>\dotsb>\tau_{m}\geq 0;$$

their linear special case with constant coefficient matrices $A_{0},\dotsc,A_{m}$,

$$\frac{d}{dt}x(t)=A_{0}x(t)+A_{1}x(t-\tau_{1})+\dotsb+A_{m}x(t-\tau_{m});$$

and the pantograph equation [3] [4]

$$\frac{d}{dt}x(t)=ax(t)+bx(\lambda t),$$

where $a$, $b$ and $\lambda$ are constants and $0<\lambda<1$.

Solving DDEs

DDEs are mostly solved in a stepwise fashion with a principle called the method of steps. For instance, consider the DDE with a single delay

$$\frac{d}{dt}x(t)=f(x(t),x(t-\tau))$$

with given initial condition $x(t)=\varphi(t)$ for $t\in[-\tau,0]$. Then the solution on the interval $[0,\tau]$ is given by $\psi(t)$, which is the solution to the inhomogeneous initial value problem

$$\frac{d}{dt}\psi(t)=f(\psi(t),\varphi(t-\tau)),$$

with $\psi(0)=\varphi(0)$. This can be continued for the successive intervals by using the solution to the previous interval as the inhomogeneous term. In practice, the initial value problem is often solved numerically.
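One way to carry this out numerically is to apply a standard ODE solver one delay interval at a time. The sketch below is an illustration only, not a reference implementation: it assumes NumPy and SciPy are available, treats the right-hand side f and the history function as user-supplied callables, and reuses the solver's dense output from each interval as the delayed term on the next one. The demo values a = -0.5 and tau = 1 are arbitrary and match the worked example in the next subsection.

    import numpy as np
    from scipy.integrate import solve_ivp

    def method_of_steps(f, history, tau, t_end):
        """Integrate x'(t) = f(x(t), x(t - tau)) on [0, t_end],
        given x(t) = history(t) on [-tau, 0], one delay interval at a time."""
        segments = [history]               # callable solution on each interval
        t0, x0 = 0.0, np.atleast_1d(history(0.0))
        while t0 < t_end:
            t1 = min(t0 + tau, t_end)
            delayed = segments[-1]         # solution on the previous interval
            sol = solve_ivp(lambda t, x: f(x, np.atleast_1d(delayed(t - tau))),
                            (t0, t1), x0, dense_output=True, rtol=1e-8, atol=1e-10)
            segments.append(sol.sol)       # dense output: a callable on [t0, t1]
            t0, x0 = t1, sol.y[:, -1]
        return segments

    # Demo: x'(t) = a*x(t - tau) with history x(t) = 1 on [-tau, 0].
    a, tau = -0.5, 1.0
    segs = method_of_steps(lambda x, xd: a * xd, lambda t: np.array([1.0]), tau, 3.0)
    print(segs[2](1.5))  # compare with the exact value 1 + a*1.5 + a**2 * (1.5 - tau)**2 / 2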

Example

Suppose $f(x(t),x(t-\tau))=ax(t-\tau)$ and $\varphi(t)=1$ for $t\in[-\tau,0]$. Then the initial value problem can be solved with integration,

$$x(t)=x(0)+a\int_{0}^{t}\varphi(s-\tau)\,ds=1+at,$$

i.e., $x(t)=1+at$ on $[0,\tau]$, where the initial condition is given by $x(0)=\varphi(0)=1$. Similarly, for the interval $t\in[\tau,2\tau]$ we integrate and fit the initial condition,

$$x(t)=x(\tau)+a\int_{\tau}^{t}x(s-\tau)\,ds=(1+a\tau)+a\int_{\tau}^{t}\bigl(1+a(s-\tau)\bigr)\,ds,$$

i.e., $x(t)=1+at+\frac{a^{2}}{2}(t-\tau)^{2}$ on $[\tau,2\tau]$.
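Continuing the same procedure (this continuation is a routine induction, shown here only to make the pattern visible, not material from the cited sources) gives, on the interval $[2\tau,3\tau]$,

$$x(t)=1+at+\frac{a^{2}}{2}(t-\tau)^{2}+\frac{a^{3}}{6}(t-2\tau)^{3},$$

and more generally, on $[(n-1)\tau,n\tau]$,

$$x(t)=\sum_{k=0}^{n}\frac{a^{k}}{k!}\bigl(t-(k-1)\tau\bigr)^{k},$$

a piecewise polynomial that plays the role of the exponential function for this equation.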

Reduction to ODE

In some cases, differential equations can be represented in a format that looks like delay differential equations.
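For example (an illustrative construction often called the linear chain trick, included here as a sketch rather than as a statement from the cited references), consider an equation with an exponentially distributed delay,

$$\frac{d}{dt}x(t)=f\!\left(x(t),\int_{-\infty}^{0}x(t+\tau)\,e^{\lambda\tau}\,d\tau\right),\qquad \lambda>0.$$

Introducing the auxiliary variable

$$y(t)=\int_{-\infty}^{0}x(t+\tau)\,e^{\lambda\tau}\,d\tau=\int_{-\infty}^{t}x(s)\,e^{\lambda(s-t)}\,ds$$

and differentiating under the integral sign gives $\frac{d}{dt}y(t)=x(t)-\lambda y(t)$, so the delay equation is equivalent to the two-dimensional system of ordinary differential equations

$$\frac{d}{dt}x(t)=f(x(t),y(t)),\qquad \frac{d}{dt}y(t)=x(t)-\lambda y(t).$$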

The characteristic equation

Similar to ODEs, many properties of linear DDEs can be characterized and analyzed using the characteristic equation. [5] The characteristic equation associated with the linear DDE with discrete delays

$$\frac{d}{dt}x(t)=A_{0}x(t)+A_{1}x(t-\tau_{1})+\dotsb+A_{m}x(t-\tau_{m})$$

is

$$\det\left(-\lambda I+A_{0}+A_{1}e^{-\tau_{1}\lambda}+\dotsb+A_{m}e^{-\tau_{m}\lambda}\right)=0.$$

The roots λ of the characteristic equation are called characteristic roots or eigenvalues and the solution set is often referred to as the spectrum. Because of the exponential in the characteristic equation, the DDE has, unlike the ODE case, an infinite number of eigenvalues, making a spectral analysis more involved. The spectrum does however have some properties which can be exploited in the analysis. For instance, even though there are an infinite number of eigenvalues, there are only a finite number of eigenvalues in any vertical strip of the complex plane. [6]

This characteristic equation is a nonlinear eigenproblem and there are many methods to compute the spectrum numerically. [7] [8] In some special situations it is possible to solve the characteristic equation explicitly. Consider, for example, the following DDE:

$$\frac{d}{dt}x(t)=-x(t-1).$$

The characteristic equation is

$$-\lambda-e^{-\lambda}=0.$$

There are an infinite number of solutions to this equation for complex λ. They are given by

$$\lambda=W_{k}(-1),$$

where $W_{k}$ is the $k$th branch of the Lambert W function, so the corresponding solutions are

$$x(t)=e^{W_{k}(-1)\,t}.$$
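As a quick numerical check (a minimal sketch, not taken from the cited references; it assumes NumPy and SciPy are installed), the first few characteristic roots can be evaluated with SciPy's Lambert W implementation and substituted back into the characteristic equation:

    import numpy as np
    from scipy.special import lambertw

    # Characteristic roots of x'(t) = -x(t - 1): lambda = W_k(-1), one per branch k.
    for k in range(-2, 3):
        lam = complex(lambertw(-1, k))     # k-th branch of the Lambert W function
        residual = -lam - np.exp(-lam)     # ~0 if lam solves -lambda - exp(-lambda) = 0
        print(f"k = {k:+d}:  lambda = {lam:.6f},  |residual| = {abs(residual):.2e}")

Each branch k contributes one root, consistent with the infinite spectrum described above.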

Another example

The Fabius function provides an example of a different kind: it is a smooth but nowhere analytic function satisfying a functional differential equation of the form $y'(x)=2y(2x)$, in which the argument of the unknown function is rescaled rather than shifted by a constant delay. [9] [10]

Applications

Delay differential equations appear in many applications, including models of the glucose–insulin regulatory system and diabetes, [11] the epidemiology of tuberculosis, [12] virology and epidemiology more generally, [13] population dynamics, [14] [15] and electrodynamic models of quantum fluctuations. [16]

References

  1. Richard, Jean-Pierre (2003). "Time Delay Systems: An overview of some recent advances and open problems". Automatica. 39 (10): 1667–1694. doi:10.1016/S0005-1098(03)00167-5.
  2. Lavaei, Javad; Sojoudi, Somayeh; Murray, Richard M. (2010). "Simple delay-based implementation of continuous-time controllers". Proceedings of the 2010 American Control Conference. pp. 5781–5788. doi:10.1109/ACC.2010.5530439. ISBN 978-1-4244-7427-1. S2CID 1200900.
  3. Griebel, Thomas (2017-01-01). "The pantograph equation in quantum calculus". Masters Theses.
  4. Ockendon, John Richard; Tayler, A. B.; Temple, George Frederick James (1971-05-04). "The dynamics of a current collection system for an electric locomotive". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 322 (1551): 447–468. Bibcode:1971RSPSA.322..447O. doi:10.1098/rspa.1971.0078. S2CID 110981464.
  5. Michiels, Wim; Niculescu, Silviu-Iulian (2007). Stability and Stabilization of Time-Delay Systems. Advances in Design and Control. Society for Industrial and Applied Mathematics. pp. 3–32. doi:10.1137/1.9780898718645. ISBN 978-0-89871-632-0.
  6. Michiels, Wim; Niculescu, Silviu-Iulian (2007). Stability and Stabilization of Time-Delay Systems. Advances in Design and Control. Society for Industrial and Applied Mathematics. p. 9. doi:10.1137/1.9780898718645. ISBN 978-0-89871-632-0.
  7. Michiels, Wim; Niculescu, Silviu-Iulian (2007). Stability and Stabilization of Time-Delay Systems. Advances in Design and Control. Society for Industrial and Applied Mathematics. pp. 33–56. doi:10.1137/1.9780898718645. ISBN 978-0-89871-632-0.
  8. Appeltans, Pieter; Michiels, Wim (2023-04-29). "Analysis and controller-design of time-delay systems using TDS-CONTROL. A tutorial and manual". arXiv:2305.00341 [math.OC].
  9. Arias de Reyna, Juan (2017). "Arithmetic of the Fabius function". arXiv:1702.06487 [math.NT].
  10. "A288163". The On-Line Encyclopedia of Integer Sequences (OEIS).
  11. Makroglou, Athena; Li, Jiaxu; Kuang, Yang (2006-03-01). "Mathematical models and software tools for the glucose-insulin regulatory system and diabetes: an overview". Applied Numerical Mathematics. Selected Papers, The Third International Conference on the Numerical Solutions of Volterra and Delay Equations. 56 (3): 559–573. doi:10.1016/j.apnum.2005.04.023. ISSN 0168-9274.
  12. Salpeter, Edwin E.; Salpeter, Shelley R. (1998-02-15). "Mathematical Model for the Epidemiology of Tuberculosis, with Estimates of the Reproductive Number and Infection-Delay Function". American Journal of Epidemiology. 147 (4): 398–406. doi:10.1093/oxfordjournals.aje.a009463. ISSN 0002-9262. PMID 9508108.
  13. Kajiwara, Tsuyoshi; Sasaki, Toru; Takeuchi, Yasuhiro (2012-08-01). "Construction of Lyapunov functionals for delay differential equations in virology and epidemiology". Nonlinear Analysis: Real World Applications. 13 (4): 1802–1826. doi:10.1016/j.nonrwa.2011.12.011. ISSN 1468-1218.
  14. Gopalsamy, K. (1992). Stability and Oscillations in Delay Differential Equations of Population Dynamics. Mathematics and Its Applications. Dordrecht, NL: Kluwer Academic Publishers. doi:10.1007/978-94-015-7920-9. ISBN 978-0792315940.
  15. Kuang, Y. (1993). Delay Differential Equations with Applications in Population Dynamics. Mathematics in Science and Engineering. San Diego, CA: Academic Press. ISBN 978-0080960029.
  16. López, Álvaro G. (2020-09-01). "On an electrodynamic origin of quantum fluctuations". Nonlinear Dynamics. 102 (1): 621–634. arXiv:2001.07392. doi:10.1007/s11071-020-05928-5. ISSN 1573-269X. S2CID 210838940.

Further reading