In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) that depends on only a single independent variable. As with any other DE, its unknowns consist of one or more functions, and the equation involves the derivatives of those functions. [1] The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable, [2] and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random. [3]
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

$$a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} + b(x) = 0,$$

where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of the unknown function $y$ of the variable $x$. [4]
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations to make them easier to solve. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example, Riccati equation). [5]
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
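As an illustration of the numerical route, the sketch below applies Euler's method, the simplest such scheme, to the test problem $y' = y$, $y(0) = 1$; the test problem and step size are assumptions chosen for illustration, and the exact solution $e^x$ makes the error easy to see.

```python
# A minimal sketch of Euler's method for the IVP y' = f(x, y), y(x0) = y0.
# The test problem y' = y, y(0) = 1 and the step size are illustrative choices.

def euler(f, x0, y0, h, steps):
    """Approximate y at x0 + steps*h by repeatedly following the tangent line."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # one Euler step: y_{n+1} = y_n + h*f(x_n, y_n)
        x += h
    return y

# Exact value at x = 1 is e = 2.71828...; Euler with h = 0.001 gets close.
print(euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000))  # ~2.7169
```

Halving $h$ roughly halves the error, reflecting the method's first-order accuracy; practical solvers use higher-order or adaptive schemes.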
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations. [6]
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), [7] biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion. The relationship between the displacement $x$ and the time $t$ of an object under the force $F$ is given by the differential equation

$$m \frac{d^2 x(t)}{dt^2} = F(x(t)),$$

which constrains the motion of a particle of constant mass $m$. In general, $F$ is a function of the position $x(t)$ of the particle at time $t$. The unknown function $x(t)$ appears on both sides of the differential equation, and is indicated in the notation $F(x(t))$. [8] [9] [10] [11]
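For the special case of a constant force, the resulting trajectory can be verified symbolically. A minimal sympy sketch, assuming a constant force $F$ and the familiar quadratic trajectory (both assumptions are illustrative, not taken from the text above):

```python
# Check symbolically that x(t) = x0 + v0*t + F*t**2/(2*m) solves m*x'' = F
# when F is a constant force (an assumed special case for illustration).
import sympy as sp

t, m, F, x0, v0 = sp.symbols("t m F x0 v0")
x = x0 + v0 * t + F * t**2 / (2 * m)          # candidate trajectory
print(sp.simplify(m * sp.diff(x, t, 2) - F))  # prints 0, so the ODE holds
```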
In what follows, $y$ is a dependent variable representing an unknown function $y = f(x)$ of the independent variable $x$. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation $\left(\frac{dy}{dx}, \frac{d^2y}{dx^2}, \ldots, \frac{d^ny}{dx^n}\right)$ is more useful for differentiation and integration, whereas Lagrange's notation $\left(y', y'', \ldots, y^{(n)}\right)$ is more useful for representing higher-order derivatives compactly, and Newton's notation $(\dot{y}, \ddot{y})$ is often used in physics for representing derivatives of low order with respect to time.
Given $F$, a function of $x$, $y$, and derivatives of $y$, an equation of the form

$$F\left(x, y, y', \ldots, y^{(n-1)}\right) = y^{(n)}$$

is called an explicit ordinary differential equation of order $n$. [12] [13]

More generally, an implicit ordinary differential equation of order $n$ takes the form: [14]

$$F\left(x, y, y', y'', \ldots, y^{(n)}\right) = 0$$
There are further classifications. An ODE that does not depend explicitly on $x$ is called autonomous, and an ODE is said to be linear if it can be written as

$$y^{(n)} = \sum_{i=0}^{n-1} a_i(x)\, y^{(i)} + r(x),$$

where $a_i(x)$ and $r(x)$ are continuous functions of $x$. [12] [15] [16]
The function $r(x)$ is called the source term, leading to a further classification: if $r(x) = 0$, the linear ODE is homogeneous; otherwise it is nonhomogeneous. [15] [17] A number of coupled differential equations form a system of equations. If $\mathbf{y}$ is a vector whose elements are functions, $\mathbf{y}(x) = \left[y_1(x), y_2(x), \ldots, y_m(x)\right]$, and $\mathbf{F}$ is a vector-valued function of $\mathbf{y}$ and its derivatives, then

$$\mathbf{y}^{(n)} = \mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right)$$

is an explicit system of ordinary differential equations of order $n$ and dimension $m$. In column vector form:

$$\begin{pmatrix} y_1^{(n)} \\ \vdots \\ y_m^{(n)} \end{pmatrix} = \begin{pmatrix} f_1\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right) \\ \vdots \\ f_m\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right) \end{pmatrix}$$
These are not necessarily linear. The implicit analogue is:

$$\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n)}\right) = \boldsymbol{0},$$

where $\boldsymbol{0}$ is the zero vector. In matrix form

$$\begin{pmatrix} f_1\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n)}\right) \\ \vdots \\ f_m\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n)}\right) \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$
For a system of the form $\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}'\right) = \boldsymbol{0}$, some sources also require that the Jacobian matrix $\frac{\partial \mathbf{F}(x, \mathbf{u}, \mathbf{v})}{\partial \mathbf{v}}$ be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (non-singular) ODE systems. [18] [19] [20] Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme,[citation needed] although note that any ODE of order greater than one can be (and usually is) rewritten as a system of first-order ODEs, [21] which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
Given a differential equation

$$F\left(x, y, y', \ldots, y^{(n)}\right) = 0,$$

a function $u: I \subset \mathbb{R} \to \mathbb{R}$, where $I$ is an interval, is called a solution or integral curve for $F$, if $u$ is $n$-times differentiable on $I$, and

$$F\left(x, u, u', \ldots, u^{(n)}\right) = 0 \quad \text{for all } x \in I.$$

Given two solutions $u: J \subset \mathbb{R} \to \mathbb{R}$ and $v: I \subset \mathbb{R} \to \mathbb{R}$, $u$ is called an extension of $v$ if $I \subset J$ and

$$u(x) = v(x) \quad \text{for all } x \in I.$$

A solution that has no extension is called a maximal solution. A solution defined on all of $\mathbb{R}$ is called a global solution.
A general solution of an $n$th-order equation is a solution containing $n$ arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill given initial conditions or boundary conditions. [22] A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution. [23]
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
For non-linear autonomous ODEs it is possible, under some conditions, to develop solutions of finite duration, [24] meaning that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorem for solutions of Lipschitz differential equations.
As an example, the equation

$$y' = -\operatorname{sgn}(y)\sqrt{|y|}, \quad y(0) = 1,$$

admits the finite-duration solution

$$y(x) = \frac{1}{4}\left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^2,$$

which is zero for all $x \geq 2$.
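A quick numerical experiment makes the finite ending time visible. The sketch below integrates the equation above with Euler steps (the step size and stopping point are arbitrary choices) and shows the state sitting at essentially zero well past the ending time $x = 2$:

```python
# Sketch: integrate y' = -sgn(y)*sqrt(|y|), y(0) = 1, with Euler steps.
# Step size and final x are arbitrary; the exact solution ends at x = 2.
import math

def f(y):
    return -math.copysign(math.sqrt(abs(y)), y) if y != 0 else 0.0

y, x, h = 1.0, 0.0, 1e-4
while x < 3.0:
    y += h * f(y)
    x += h
print(y)  # ~0: the solution has already reached zero before x = 3
```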
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the $n$th degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.
Two memoirs by Fuchs [25] inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces under rational one-to-one transformations.
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has two principal merits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations. [26]
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. [27] SLPs are also useful in the analysis of certain partial differential equations.
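As a rough numerical illustration, the simplest Sturm–Liouville problem $-y'' = \lambda y$ on $(0, \pi)$ with $y(0) = y(\pi) = 0$ has exact eigenvalues $1, 4, 9, \ldots$; a finite-difference discretization (the grid size below is an arbitrary choice) recovers them:

```python
# Sketch: discretize -y'' = lambda*y on (0, pi), y(0) = y(pi) = 0, by finite
# differences, turning the Sturm-Liouville problem into a symmetric
# matrix eigenvalue problem. Exact eigenvalues are k**2 for k = 1, 2, 3, ...
import numpy as np

n = 200                     # number of interior grid points (arbitrary)
h = np.pi / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
print(np.linalg.eigvalsh(A)[:3])  # ~[1.0, 4.0, 9.0]
```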
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
Theorem | Assumption | Conclusion |
---|---|---|
Peano existence theorem | $F$ continuous | local existence only |
Picard–Lindelöf theorem | $F$ Lipschitz continuous | local existence and uniqueness |
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone. [28]
The theorem can be stated simply as follows. [29] For the equation and initial value problem

$$y' = F(x, y), \quad y_0 = y(x_0),$$

if $F$ and $\partial F/\partial y$ are continuous in a closed rectangle

$$R = [x_0 - a, x_0 + a] \times [y_0 - b, y_0 + b]$$

in the $x$-$y$ plane, where $a$ and $b$ are real (symbolically: $a, b \in \mathbb{R}$), $\times$ denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval

$$I = [x_0 - h, x_0 + h] \subset [x_0 - a, x_0 + a]$$

for some $h \in \mathbb{R}$ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on $F$ to be linear, this applies to non-linear equations that take the form $y' = F(x, y)$, and it can also be applied to systems of equations.
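The existence half of the theorem is proved by Picard iteration (successive approximations), which can be carried out symbolically. In the sketch below, the test problem $y' = y$, $y(0) = 1$ is an assumption for illustration; its iterates are exactly the Taylor partial sums of the solution $e^x$:

```python
# Picard iteration y_{k+1}(x) = y0 + integral_{x0}^{x} F(t, y_k(t)) dt,
# sketched with sympy for F(x, y) = y, x0 = 0, y0 = 1 (illustrative choices).
import sympy as sp

x, t = sp.symbols("x t")
y = sp.Integer(1)                              # y_0(x) = 1
for _ in range(4):
    y = 1 + sp.integrate(y.subs(x, t), (t, 0, x))
print(sp.expand(y))  # 1 + x + x**2/2 + x**3/6 + x**4/24, converging to exp(x)
```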
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely: [30]
For each initial condition $(x_0, y_0)$ there exists a unique maximum (possibly infinite) open interval

$$I_{\max} = (x_-, x_+), \quad x_\pm \in \mathbb{R} \cup \{\pm\infty\}, \quad x_0 \in I_{\max},$$

such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain $I_{\max}$.
In the case that $x_\pm \neq \pm\infty$, there are exactly two possibilities: explosion in finite time, $\limsup_{x \to x_\pm} \|y(x)\| \to \infty$; or leaving the domain of definition, $\lim_{x \to x_\pm} y(x) \in \partial\bar{\Omega}$, where $\Omega$ is the open set in which $F$ is defined, and $\partial\bar{\Omega}$ is its boundary.
Note that the maximum domain of the solution is always an interval (to ensure uniqueness), may be smaller than $\mathbb{R}$, and may depend on the specific choice of $(x_0, y_0)$. As an example, consider the equation $y' = y^2$. This means that $F(x, y) = y^2$, which is $C^1$ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of $\mathbb{R}$, since the solution is

$$y(x) = \frac{y_0}{(x_0 - x)\,y_0 + 1},$$

which has maximum domain $\mathbb{R}$ if $y_0 = 0$, $\left(-\infty,\ x_0 + \tfrac{1}{y_0}\right)$ if $y_0 > 0$, and $\left(x_0 + \tfrac{1}{y_0},\ +\infty\right)$ if $y_0 < 0$.
This shows clearly that the maximum interval may depend on the initial conditions. The domain of $y$ could be taken as $\mathbb{R} \setminus \{x_0 + 1/y_0\}$, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not $\mathbb{R}$ because

$$\lim_{x \to x_\pm} |y(x)| \to \infty,$$

which is one of the two possible cases according to the above theorem.
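The blow-up can also be observed numerically. The following sketch integrates $y' = y^2$, $y(0) = 1$ with a classical Runge–Kutta stepper (the step size and cutoff are arbitrary choices); by the formula above, the maximal interval is $(-\infty, 1)$:

```python
# Sketch: y' = y**2 with y(0) = 1 blows up at x = 1 (exact solution 1/(1-x)).
# Fixed-step RK4; the step size and the 1e6 cutoff are arbitrary choices.
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: y * y
x, y, h = 0.0, 1.0, 1e-4
while y < 1e6:              # stop once the solution has clearly exploded
    y = rk4_step(f, x, y, h)
    x += h
print(x)  # ~1.0: the maximal interval of existence ends near x = 1
```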
Differential equations are usually easier to solve if the order of the equation can be reduced.
Any explicit differential equation of order $n$,

$$F\left(x, y, y', y'', \ldots, y^{(n-1)}\right) = y^{(n)},$$

can be written as a system of $n$ first-order differential equations by defining a new family of unknown functions

$$y_i = y^{(i-1)}$$

for $i = 1, 2, \ldots, n$. The $n$-dimensional system of first-order coupled differential equations is then

$$y_1' = y_2, \quad y_2' = y_3, \quad \ldots, \quad y_{n-1}' = y_n, \quad y_n' = F(x, y_1, \ldots, y_n);$$

more compactly in vector notation:

$$\mathbf{y}' = \mathbf{F}(x, \mathbf{y}),$$

where $\mathbf{y} = (y_1, \ldots, y_n)$ and $\mathbf{F}(x, \mathbf{y}) = (y_2, \ldots, y_n, F(x, y_1, \ldots, y_n))$.
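The sketch below applies this reduction to a concrete second-order example, Newton's equation for a harmonic oscillator $m x'' = -kx$; the oscillator, the parameter values, and the plain Euler stepper are all assumptions chosen for illustration:

```python
# Reduce m*x'' = -k*x to the first-order system y1' = y2, y2' = -(k/m)*y1
# and integrate it; m, k, step size, and initial data are illustrative.
import math

m, k = 1.0, 1.0

def system(t, y):            # y = (y1, y2) = (position, velocity)
    return (y[1], -(k / m) * y[0])

y, t, h = (1.0, 0.0), 0.0, 1e-4   # x(0) = 1, x'(0) = 0
while t < math.pi:                 # integrate over half a period
    dy = system(t, y)
    y = (y[0] + h * dy[0], y[1] + h * dy[1])
    t += h
print(y[0])  # ~-1, matching the exact solution x(t) = cos(t) at t = pi
```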
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, $P(x)$, $Q(x)$, $P(y)$, $Q(y)$, and $M(x,y)$, $N(x,y)$ are any integrable functions of $x$, of $y$, or of both; $b$ and $c$ are given real constants; and $C_1, C_2, \ldots$ are arbitrary constants (complex in general). The differential equations are given in their equivalent and alternative forms that lead to the solution through integration.

In the integral solutions, $\lambda$ and $\varepsilon$ are dummy variables of integration (the continuum analogues of indices in summation), and the notation $\int^x F(\lambda)\,d\lambda$ just means to integrate $F(\lambda)$ with respect to $\lambda$, then after the integration substitute $\lambda = x$, without adding constants (explicitly stated).
Differential equation | Solution method | General solution |
---|---|---|
First-order, separable in $x$ and $y$ (general case, see below for special cases): [31] $P_1(x)Q_1(y) + P_2(x)Q_2(y)\,\dfrac{dy}{dx} = 0$ | Separation of variables (divide by $P_2 Q_1$). | $\displaystyle\int^x \frac{P_1(\lambda)}{P_2(\lambda)}\,d\lambda + \int^y \frac{Q_2(\lambda)}{Q_1(\lambda)}\,d\lambda = C$ |
First-order, separable in $x$: [29] $\dfrac{dy}{dx} = F(x)$ | Direct integration. | $\displaystyle y = \int^x F(\lambda)\,d\lambda + C$ |
First-order, autonomous, separable in $y$: [29] $\dfrac{dy}{dx} = F(y)$ | Separation of variables (divide by $F$). | $\displaystyle x = \int^y \frac{d\lambda}{F(\lambda)} + C$ |
First-order, separable in $x$ and $y$: [29] $P(y)\dfrac{dy}{dx} + Q(x) = 0$ | Integrate throughout. | $\displaystyle\int^y P(\lambda)\,d\lambda + \int^x Q(\lambda)\,d\lambda = C$ |
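For simple cases like these, a computer algebra system performs the integration directly. A minimal sympy sketch for the "separable in $x$" row; the right-hand side $\cos x$ is an arbitrary choice:

```python
# Solve dy/dx = cos(x) symbolically; the general solution is an
# antiderivative plus an arbitrary constant, as in the table above.
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")
print(sp.dsolve(sp.Eq(y(x).diff(x), sp.cos(x)), y(x)))  # Eq(y(x), C1 + sin(x))
```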
Differential equation | Solution method | General solution |
---|---|---|
First-order, homogeneous: [29] $\dfrac{dy}{dx} = F\left(\dfrac{y}{x}\right)$ | Set y = ux, then solve by separation of variables in u and x. | $\displaystyle \ln(Cx) = \int^{y/x} \frac{d\lambda}{F(\lambda) - \lambda}$ |
First-order, separable: [31] $yM(xy) + xN(xy)\,\dfrac{dy}{dx} = 0$ | Separation of variables (divide by $xy$). | $\displaystyle \ln(Cx) = \int^{xy} \frac{N(\lambda)\,d\lambda}{\lambda\left(N(\lambda) - M(\lambda)\right)}$. If $N = M$, the solution is $xy = C$. |
Exact differential, first-order: [29] $M(x,y)\dfrac{dy}{dx} + N(x,y) = 0$, where $\dfrac{\partial M}{\partial x} = \dfrac{\partial N}{\partial y}$ | Integrate throughout. | $\displaystyle F(x,y) = \int^y M(x,\lambda)\,d\lambda + \int^x N(\lambda,y)\,d\lambda + Y(y) + X(x) = C$, where $Y(y)$ and $X(x)$ are functions arising from the integrals rather than constants, set so that $F(x,y)$ satisfies the original equation. |
Inexact differential, first-order: [29] $M(x,y)\dfrac{dy}{dx} + N(x,y) = 0$, where $\dfrac{\partial M}{\partial x} \neq \dfrac{\partial N}{\partial y}$ | Integration factor $\mu(x,y)$ satisfying $\dfrac{\partial(\mu M)}{\partial x} = \dfrac{\partial(\mu N)}{\partial y}$ | If $\mu(x,y)$ can be found in a suitable way, then $\displaystyle F(x,y) = \int^y \mu(x,\lambda)M(x,\lambda)\,d\lambda + \int^x \mu(\lambda,y)N(\lambda,y)\,d\lambda + Y(y) + X(x) = C$, where $Y(y)$ and $X(x)$ are as above. |
Differential equation | Solution method | General solution |
---|---|---|
Second-order, autonomous: [32] $\dfrac{d^2y}{dx^2} = F(y)$ | Multiply both sides of the equation by $2\dfrac{dy}{dx}$, substitute $2\dfrac{dy}{dx}\dfrac{d^2y}{dx^2} = \dfrac{d}{dx}\left(\dfrac{dy}{dx}\right)^2$, then integrate twice. | $\displaystyle x = \pm\int^y \frac{d\lambda}{\sqrt{2\int^\lambda F(\varepsilon)\,d\varepsilon + C_1}} + C_2$ |
Differential equation | Solution method | General solution |
---|---|---|
First-order, linear, inhomogeneous, function coefficients: [29] $\dfrac{dy}{dx} + P(x)y = Q(x)$ | Integrating factor: $e^{\int^x P(\lambda)\,d\lambda}$ | $\displaystyle y = e^{-\int^x P(\lambda)\,d\lambda}\left[\int^x e^{\int^\lambda P(\varepsilon)\,d\varepsilon} Q(\lambda)\,d\lambda + C\right]$ |
Second-order, linear, inhomogeneous, function coefficients: $\dfrac{d^2y}{dx^2} + 2p(x)\dfrac{dy}{dx} + \left(p(x)^2 + p'(x)\right)y = q(x)$ | Integrating factor: $e^{\int p(x)\,dx}$ | $\displaystyle y = e^{-\int p(x)\,dx}\left[\iint e^{\int p(x)\,dx}\, q(x)\,(dx)^2 + C_1 x + C_2\right]$ |
Second-order, linear, inhomogeneous, constant coefficients: [33] $\dfrac{d^2y}{dx^2} + b\dfrac{dy}{dx} + cy = r(x)$ | Complementary function $y_c$: assume $y_c = e^{\alpha x}$, substitute and solve the polynomial in $\alpha$ to find the linearly independent functions $e^{\alpha_j x}$. Particular integral $y_p$: in general the method of variation of parameters, though for very simple $r(x)$ inspection may work. [29] | General solution $y = y_c + y_p$. If $b^2 > 4c$, then $y_c = C_1 e^{\left(-b + \sqrt{b^2 - 4c}\right)x/2} + C_2 e^{\left(-b - \sqrt{b^2 - 4c}\right)x/2}$. If $b^2 = 4c$, then $y_c = (C_1 x + C_2)e^{-bx/2}$. If $b^2 < 4c$, then $y_c = e^{-bx/2}\left[C_1 \sin\left(\tfrac{\sqrt{4c - b^2}}{2}x\right) + C_2 \cos\left(\tfrac{\sqrt{4c - b^2}}{2}x\right)\right]$. |
$n$th-order, linear, inhomogeneous, constant coefficients: [33] $\dfrac{d^ny}{dx^n} + \displaystyle\sum_{j=0}^{n-1} c_j \dfrac{d^jy}{dx^j} = r(x)$ | Complementary function $y_c$: assume $y_c = e^{\alpha x}$, substitute and solve the polynomial in $\alpha$ to find the linearly independent functions $e^{\alpha_j x}$. Particular integral $y_p$: in general the method of variation of parameters, though for very simple $r(x)$ inspection may work. [29] | Since the $\alpha_j$ are the solutions of the polynomial of degree $n$, $\prod_{j=1}^n (\alpha - \alpha_j) = 0$, then: for $\alpha_j$ all different, $y_c = \sum_{j=1}^n C_j e^{\alpha_j x}$; for each root $\alpha_j$ repeated $k_j$ times, $y_c = \sum_j \left(\sum_{\ell=1}^{k_j} C_{j,\ell}\, x^{\ell - 1}\right) e^{\alpha_j x}$; for some $\alpha_j$ complex, setting $\alpha_j = \chi_j + i\gamma_j$ and using Euler's formula allows some terms in the previous results to be written in the form $C_j e^{\chi_j x} \cos(\gamma_j x + \varphi_j)$, where $\varphi_j$ is an arbitrary constant (phase shift). |
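The constant-coefficient rows can likewise be checked with a computer algebra system. A minimal sympy sketch for the assumed example $y'' + 3y' + 2y = 0$, which falls in the $b^2 > 4c$ case with roots $\alpha = -1, -2$:

```python
# Solve y'' + 3*y' + 2*y = 0 (an illustrative choice with b**2 > 4*c);
# the complementary function combines exp(-x) and exp(-2*x).
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")
ode = sp.Eq(y(x).diff(x, 2) + 3 * y(x).diff(x) + 2 * y(x), 0)
print(sp.dsolve(ode, y(x)))  # a combination of exp(-x) and exp(-2*x)
```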
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we guess a solution to the differential equation, and then plug the solution into the differential equation to check whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has the exponential form $y = Ae^{\alpha t}$, since (for imaginary $\alpha$) this very common form behaves in a sinusoidal way.

In the case of a first-order ODE that is non-homogeneous, we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:

$$y_{\text{general}} = y_{\text{homogeneous}} + y_{\text{particular}}.$$
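A minimal symbolic sketch of the guess-and-check step; the equation $y'' + y = 0$ and the sinusoidal guess are assumptions chosen for illustration:

```python
# Substitute the guess y = A*sin(x) + B*cos(x) into y'' + y = 0 and
# simplify; the result 0 confirms the guess solves the equation for any A, B.
import sympy as sp

x, A, B = sp.symbols("x A B")
guess = A * sp.sin(x) + B * sp.cos(x)   # guessed sinusoidal form (assumed)
print(sp.simplify(guess.diff(x, 2) + guess))  # 0
```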
See also: derivative, gradient, linear map, vector space, wave equation, partial differential equation, spherical harmonics, Green's function, linear differential equation, implicit function theorem, homogeneous function, Helmholtz equation, differential equation, hyperbolic partial differential equation, stiff equation, Navier–Stokes existence and smoothness, direct linear transformation, rough path, Loewy decomposition, system of differential equations.