Exact differential equation

In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering.

Definition

Given a simply connected and open subset $D$ of $\mathbb{R}^2$ and two functions $I$ and $J$ which are continuous on $D$, an implicit first-order ordinary differential equation of the form

$$I(x, y)\,dx + J(x, y)\,dy = 0$$

is called an exact differential equation if there exists a continuously differentiable function $F$, called the potential function, [1] [2] so that

$$\frac{\partial F}{\partial x} = I$$

and

$$\frac{\partial F}{\partial y} = J.$$

An exact equation may also be presented in the following form:

$$I(x, y) + J(x, y)\,y'(x) = 0,$$

where the same constraints on $I$ and $J$ apply for the differential equation to be exact.

The nomenclature of "exact differential equation" refers to the exact differential of a function. For a function $F(x_0, x_1, \ldots, x_{n-1}, x_n)$, the exact or total derivative with respect to $x_0$ is given by

$$\frac{dF}{dx_0} = \frac{\partial F}{\partial x_0} + \sum_{i=1}^{n} \frac{\partial F}{\partial x_i}\,\frac{dx_i}{dx_0}.$$

Example

The function $F : \mathbb{R}^2 \to \mathbb{R}$ given by

$$F(x, y) = \tfrac{1}{2}\left(x^2 + y^2\right)$$

is a potential function for the differential equation

$$x\,dx + y\,dy = 0.$$
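The example can be verified symbolically; the following is a minimal sketch using the sympy library, checking that the partial derivatives of $F$ recover the coefficients of $dx$ and $dy$:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Potential function from the example above.
F = (x**2 + y**2) / 2

# The differential equation x dx + y dy = 0 has I = x and J = y.
I, J = x, y

# F is a potential function: its partial derivatives recover I and J.
assert sp.diff(F, x) == I
assert sp.diff(F, y) == J
```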

First order exact differential equations

Identifying first order exact differential equations

Let the functions $M(x, y)$, $N(x, y)$, $M_y(x, y)$, and $N_x(x, y)$, where the subscripts denote the partial derivative with respect to the relative variable, be continuous in the rectangular region $R : \alpha < x < \beta,\ \gamma < y < \delta$. Then the differential equation

$$M(x, y) + N(x, y)\,\frac{dy}{dx} = 0$$

is exact if and only if

$$M_y(x, y) = N_x(x, y).$$

That is, there exists a function $\psi(x, y)$, called a potential function, such that

$$\psi_x(x, y) = M(x, y) \qquad \text{and} \qquad \psi_y(x, y) = N(x, y).$$

So, in general:

$$M_y(x, y) = N_x(x, y) \iff \exists\,\psi(x, y) \text{ with } \psi_x(x, y) = M(x, y) \text{ and } \psi_y(x, y) = N(x, y).$$
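The criterion is straightforward to check symbolically. A minimal sketch in sympy (the helper name `is_exact` and the two sample equations are chosen here for illustration):

```python
import sympy as sp

x, y = sp.symbols("x y")

def is_exact(M, N):
    """Check the first-order exactness condition M_y == N_x."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Exact: (2xy + 1) + (x**2 + 3y**2) y' = 0, since M_y = 2x = N_x.
assert is_exact(2*x*y + 1, x**2 + 3*y**2)

# Not exact: M_y = x while N_x = y.
assert not is_exact(x*y, x*y)
```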

Proof

The proof has two parts.

First, suppose there is a function $\psi(x, y)$ such that

$$\psi_x(x, y) = M(x, y) \qquad \text{and} \qquad \psi_y(x, y) = N(x, y).$$

It then follows that

$$M_y(x, y) = \psi_{xy}(x, y) \qquad \text{and} \qquad N_x(x, y) = \psi_{yx}(x, y).$$

Since $M_y$ and $N_x$ are continuous, then $\psi_{xy}$ and $\psi_{yx}$ are also continuous, which guarantees their equality and thus $M_y(x, y) = N_x(x, y)$.

The second part of the proof involves the construction of $\psi(x, y)$ and can also be used as a procedure for solving first-order exact differential equations. Suppose that $M_y(x, y) = N_x(x, y)$ and let there be a function $\psi(x, y)$ for which

$$\psi_x(x, y) = M(x, y) \qquad \text{and} \qquad \psi_y(x, y) = N(x, y).$$

Begin by integrating the first equation with respect to $x$. In practice, it does not matter whether you integrate the first or the second equation, so long as the integration is done with respect to the appropriate variable. This gives

$$\psi(x, y) = Q(x, y) + h(y),$$

where $Q(x, y)$ is any differentiable function such that $Q_x = M$. The function $h(y)$ plays the role of a constant of integration, but instead of just a constant, it is a function of $y$, since $\psi$ is a function of both $x$ and $y$ and we are only integrating with respect to $x$.

Now to show that it is always possible to find an $h(y)$ such that $\psi_y = N$. Differentiate both sides with respect to $y$:

$$\psi_y(x, y) = Q_y(x, y) + h'(y).$$

Set the result equal to $N$ and solve for $h'(y)$:

$$h'(y) = N(x, y) - Q_y(x, y).$$

In order to determine $h'(y)$ from this equation, the right-hand side must depend only on $y$. This can be proven by showing that its derivative with respect to $x$ is always zero, so differentiate the right-hand side with respect to $x$:

$$N_x(x, y) - Q_{yx}(x, y).$$

Since $Q_{xy} = Q_{yx}$ and $Q_x = M$,

$$N_x(x, y) - Q_{yx}(x, y) = N_x(x, y) - Q_{xy}(x, y) = N_x(x, y) - M_y(x, y).$$

Now, this is zero based on our initial supposition that

$$M_y(x, y) = N_x(x, y).$$

Therefore,

$$h(y) = \int \left( N(x, y) - Q_y(x, y) \right) dy \qquad \text{and} \qquad \psi(x, y) = Q(x, y) + \int \left( N(x, y) - Q_y(x, y) \right) dy.$$

And this completes the proof.
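The construction in the proof doubles as a solution algorithm. Below is a sketch of the procedure in sympy, applied to the hypothetical exact equation $(2xy + 1) + (x^2 + 3y^2)\,y' = 0$, chosen here for illustration:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical exact equation M + N y' = 0.
M = 2*x*y + 1
N = x**2 + 3*y**2

# Step 1: Q(x, y) is any antiderivative of M with respect to x.
Q = sp.integrate(M, x)                    # x**2*y + x

# Step 2: h'(y) = N - Q_y depends only on y when the equation is exact.
h_prime = sp.simplify(N - sp.diff(Q, y))  # 3*y**2
assert sp.diff(h_prime, x) == 0           # independent of x, as the proof shows

# Step 3: psi = Q + h(y).
psi = Q + sp.integrate(h_prime, y)        # x**2*y + x + y**3

# psi is a potential function: psi_x = M and psi_y = N.
assert sp.expand(sp.diff(psi, x) - M) == 0
assert sp.expand(sp.diff(psi, y) - N) == 0
```

The implicit solutions are then the level sets $\psi(x, y) = c$.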

Solutions to first order exact differential equations

First order exact differential equations of the form

$$M(x, y) + N(x, y)\,\frac{dy}{dx} = 0$$

can be written in terms of the potential function $\psi(x, y)$ as

$$\psi_x + \psi_y\,\frac{dy}{dx} = 0,$$

where

$$\psi_x(x, y) = M(x, y) \qquad \text{and} \qquad \psi_y(x, y) = N(x, y).$$

This is equivalent to taking the exact differential of $\psi(x, y)$.

The solutions to an exact differential equation are then given by

$$\psi(x, y(x)) = c,$$

and the problem reduces to finding $\psi(x, y)$.

This can be done by integrating the two expressions $M(x, y)\,dx$ and $N(x, y)\,dy$ and then writing down each term in the resulting expressions only once and summing them up, in order to get $\psi(x, y)$.

The reasoning behind this is the following. Since

$$\psi_x(x, y) = M(x, y) \qquad \text{and} \qquad \psi_y(x, y) = N(x, y),$$

it follows, by integrating both sides, that

$$\psi(x, y) = \int M(x, y)\,dx + Q(y) \qquad \text{and} \qquad \psi(x, y) = \int N(x, y)\,dy + P(x).$$

Therefore,

$$\int M(x, y)\,dx + Q(y) = \int N(x, y)\,dy + P(x),$$

where $Q(y)$ and $P(x)$ are differentiable functions such that $\partial_x Q(y) = 0$ and $\partial_y P(x) = 0$.

In order for this to be true and for both sides to result in the exact same expression, namely $\psi(x, y)$, then $Q(y)$ must be contained within the expression $\int N(x, y)\,dy$, because it cannot be contained within $P(x)$, since it is entirely a function of $y$ and not $x$ and is therefore not allowed to have anything to do with $x$. By analogy, $P(x)$ must be contained within the expression $\int M(x, y)\,dx$.

Ergo,

$$\int M(x, y)\,dx = g(x, y) + P(x) \qquad \text{and} \qquad \int N(x, y)\,dy = f(x, y) + Q(y)$$

for some expressions $g(x, y)$ and $f(x, y)$. Plugging this into the above equation, we find that

$$g(x, y) + P(x) + Q(y) = f(x, y) + Q(y) + P(x),$$

and so $g(x, y)$ and $f(x, y)$ turn out to be the same function. Therefore,

$$\int M(x, y)\,dx = f(x, y) + P(x) \qquad \text{and} \qquad \int N(x, y)\,dy = f(x, y) + Q(y).$$

Since we already showed that

$$\psi(x, y) = \int M(x, y)\,dx + Q(y),$$

it follows that

$$\psi(x, y) = f(x, y) + P(x) + Q(y).$$

So, we can construct $\psi(x, y)$ by computing $\int M(x, y)\,dx$ and $\int N(x, y)\,dy$, taking the common terms we find within the two resulting expressions (that would be $f(x, y)$), and then adding the terms which are uniquely found in either one of them: $P(x)$ and $Q(y)$.
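This shortcut can be illustrated in sympy; the equation $(y\cos x + 2x e^y) + (\sin x + x^2 e^y - 1)\,y' = 0$ below is a hypothetical example chosen for illustration:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical exact equation M + N y' = 0.
M = y*sp.cos(x) + 2*x*sp.exp(y)
N = sp.sin(x) + x**2*sp.exp(y) - 1

# Integrate M dx and N dy separately.
from_M = sp.integrate(M, x)   # y*sin(x) + x**2*exp(y)
from_N = sp.integrate(N, y)   # y*sin(x) + x**2*exp(y) - y

# Writing each term only once: the common terms y*sin(x) and x**2*exp(y),
# plus the term -y found only in from_N, give the potential function.
psi = y*sp.sin(x) + x**2*sp.exp(y) - y

assert sp.simplify(sp.diff(psi, x) - M) == 0
assert sp.simplify(sp.diff(psi, y) - N) == 0
```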

Second order exact differential equations

The concept of exact differential equations can be extended to second order equations. [3] Consider starting with the first-order exact equation:

$$I(x, y) + J(x, y)\,\frac{dy}{dx} = 0.$$

Since both functions $I(x, y)$ and $J(x, y)$ are functions of two variables, implicitly differentiating the multivariate function yields

$$\frac{dI}{dx} + \left(\frac{dJ}{dx}\right)\frac{dy}{dx} + J(x, y)\,\frac{d^2y}{dx^2} = 0.$$

Expanding the total derivatives gives that

$$\frac{dI}{dx} = I_x + I_y\,\frac{dy}{dx}$$

and that

$$\frac{dJ}{dx} = J_x + J_y\,\frac{dy}{dx}.$$

Combining the $\frac{dy}{dx}$ terms gives

$$I_x + \frac{dy}{dx}\!\left(I_y + J_x + J_y\,\frac{dy}{dx}\right) + J(x, y)\,\frac{d^2y}{dx^2} = 0.$$

If the equation is exact, then $J_x = I_y$. Additionally, the total derivative of $J$ is equal to its implicit ordinary derivative $\frac{dJ}{dx}$. This leads to the rewritten equation

$$I_x + \frac{dy}{dx}\!\left(I_y + \frac{dJ}{dx}\right) + J(x, y)\,\frac{d^2y}{dx^2} = 0.$$

Now, let there be some second-order differential equation

$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\,\frac{d^2y}{dx^2} = 0.$$

If $J_x = I_y$ for exact differential equations, then

$$\int J_x(x, y)\,dy = \int I_y(x, y)\,dy = I(x, y) - h(x)$$

and

$$I(x, y) = \int J_x(x, y)\,dy + h(x),$$

where $h(x)$ is some arbitrary function only of $x$ that was differentiated away to zero upon taking the partial derivative of $I(x, y)$ with respect to $y$. Although the sign on $h(x)$ could be positive, it is more intuitive to think of the integral's result as $I(x, y)$ that is missing some original extra function $h(x)$ that was partially differentiated to zero.

Next, if

$$\frac{dI}{dx} = I_x + I_y\,\frac{dy}{dx},$$

then the term $I_x$ should be a function only of $x$ and $y$, since partial differentiation with respect to $x$ will hold $y$ constant and not produce any derivatives of $\frac{dy}{dx}$. In the second order equation

$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\,\frac{d^2y}{dx^2} = 0,$$

only the term $f(x, y)$ is a term purely of $x$ and $y$. Let $I_x = f(x, y)$. If $I_x = f(x, y)$, then

$$f(x, y) = \frac{\partial}{\partial x}\!\left(\int J_x(x, y)\,dy + h(x)\right).$$

Since the total derivative of $I(x, y)$ with respect to $x$ is equivalent to the implicit ordinary derivative $\frac{dI}{dx}$, then

$$f(x, y) + I_y\,\frac{dy}{dx} = \frac{dI}{dx} = \frac{d}{dx}\!\left(\int J_x(x, y)\,dy + h(x)\right).$$

So,

$$h'(x) = f(x, y) - \frac{\partial}{\partial x}\int J_x(x, y)\,dy$$

and

$$h(x) = \int\!\left( f(x, y) - \frac{\partial}{\partial x}\int J_x(x, y)\,dy \right) dx.$$

Thus, the second order differential equation

$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\,\frac{d^2y}{dx^2} = 0$$

is exact only if $g\!\left(x, y, \frac{dy}{dx}\right) = I_y + \frac{dJ}{dx} = J_x + \frac{dJ}{dx}$, and only if the below expression

$$f(x, y) - \frac{\partial}{\partial x}\int J_x(x, y)\,dy$$

is a function solely of $x$. Once $h(x)$ is calculated with its arbitrary constant, it is added to $\int J_x(x, y)\,dy$ to make $I(x, y)$. If the equation is exact, then we can reduce to the first order exact form

$$I(x, y) + J(x, y)\,\frac{dy}{dx} = c_1,$$

which is solvable by the usual method for first-order exact equations.

Now, however, in the final implicit solution there will be a term from integration of $h'(x)$ with respect to $x$ twice, as well as a $c_1 x$: two arbitrary constants, as expected from a second-order equation.
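Both conditions can be checked mechanically. The following sympy sketch applies them to the hypothetical equation $x + 2y' + x\,y'' = 0$ (i.e. $f = x$, $g = 2$, $J = x$), chosen here for illustration:

```python
import sympy as sp

x, y, yp = sp.symbols("x y yp")  # yp stands for dy/dx

# Hypothetical second-order equation x + 2 y' + x y'' = 0
# in the form f + g*y' + J*y'' = 0.
f, g, J = x, sp.Integer(2), x

# Condition 1: the y' coefficient must equal J_x + dJ/dx
# (the partial plus the total derivative of J, using I_y = J_x).
dJdx = sp.diff(J, x) + sp.diff(J, y)*yp   # total derivative of J
assert sp.simplify(g - (sp.diff(J, x) + dJdx)) == 0

# Condition 2: h'(x) = f - d/dx( integral of J_x dy ) must depend only on x.
I_partial = sp.integrate(sp.diff(J, x), y)        # ∫ J_x dy = y
h_prime = sp.simplify(f - sp.diff(I_partial, x))  # x
assert sp.diff(h_prime, y) == 0
```

Here $h'(x) = x$, so $h(x) = x^2/2$ and the equation reduces to the first-order exact form $y + x^2/2 + x\,y' = c_1$.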

Example

Given the differential equation

$$x^2\,\frac{d^2y}{dx^2} + 4x\,\frac{dy}{dx} + 2y = e^x,$$

one can always easily check for exactness by examining the $\frac{d^2y}{dx^2}$ term. In this case, both the partial and total derivative of $J = x^2$ with respect to $x$ are $2x$, so their sum is $4x$, which is exactly the term in front of $\frac{dy}{dx}$. With one of the conditions for exactness met, one can calculate that

$$\int J_x(x, y)\,dy = \int 2x\,dy = 2xy, \qquad \text{so} \qquad I(x, y) = 2xy + h(x).$$

Letting $f(x, y) = 2y - e^x$ (moving $e^x$ to the left-hand side), then

$$h'(x) = f(x, y) - \frac{\partial}{\partial x}\int J_x(x, y)\,dy = 2y - e^x - 2y = -e^x.$$

So, $h'(x)$ is indeed a function only of $x$ and the second order differential equation is exact. Therefore, $h(x) = -e^x$ and $I(x, y) = 2xy - e^x$. Reduction to a first-order exact equation yields

$$2xy - e^x + x^2\,\frac{dy}{dx} = c_1.$$

Integrating $I(x, y) - c_1$ with respect to $x$ yields

$$x^2 y - e^x - c_1 x + i(y),$$

where $i(y)$ is some arbitrary function of $y$. Differentiating with respect to $y$ gives an equation correlating the derivative $i'(y)$ and the $\frac{dy}{dx}$ term:

$$x^2 + i'(y) = J(x, y) = x^2.$$

So, $i(y)$ is a constant that can be absorbed into $c_2$, and the full implicit solution becomes

$$x^2 y = e^x + c_1 x + c_2.$$

Solving explicitly for $y$ yields

$$y = \frac{e^x + c_1 x + c_2}{x^2}.$$
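Assuming the example equation $x^2 y'' + 4x y' + 2y = e^x$, the explicit solution can be verified by substitution in sympy:

```python
import sympy as sp

x, c1, c2 = sp.symbols("x c1 c2")

# Candidate explicit solution of x**2*y'' + 4*x*y' + 2*y = exp(x).
y = (sp.exp(x) + c1*x + c2) / x**2

# Substitute back into the equation; the residual should vanish identically.
residual = x**2*sp.diff(y, x, 2) + 4*x*sp.diff(y, x) + 2*y - sp.exp(x)
assert sp.simplify(residual) == 0
```

Note that $x^2 y'' + 4x y' + 2y$ is precisely $\frac{d^2}{dx^2}(x^2 y)$, which is why the solution is immediate once the equation is recognized as exact.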

Higher order exact differential equations

The concepts of exact differential equations can be extended to any order. Starting with the exact second order equation

$$J(x, y)\,\frac{d^2y}{dx^2} + \frac{dy}{dx}\!\left(I_y + \frac{dJ}{dx}\right) + I_x = 0,$$

it was previously shown that the equation is defined such that

$$\frac{d}{dx}\!\left(I(x, y) + J(x, y)\,\frac{dy}{dx}\right) = 0.$$

Implicit differentiation of the exact second-order equation $n$ times will yield an $(n+2)$th order differential equation, with new conditions for exactness that can be readily deduced from the form of the equation produced. For example, differentiating the above second-order differential equation once to yield a third-order exact equation gives the following form

$$J(x, y)\,\frac{d^3y}{dx^3} + \frac{dJ}{dx}\,\frac{d^2y}{dx^2} + \frac{d}{dx}\!\left(I_x + \frac{dy}{dx}\!\left(I_y + \frac{dJ}{dx}\right)\right) = 0,$$

where

$$\frac{d}{dx}\!\left(I_x + \frac{dy}{dx}\!\left(I_y + \frac{dJ}{dx}\right)\right) = \frac{dI_x}{dx} + \frac{d^2y}{dx^2}\!\left(I_y + \frac{dJ}{dx}\right) + \frac{dy}{dx}\,\frac{d}{dx}\!\left(I_y + \frac{dJ}{dx}\right)$$

and where $\frac{dI_x}{dx} = I_{xx} + I_{xy}\,\frac{dy}{dx}$ is a function only of $x$, $y$ and $\frac{dy}{dx}$. Combining all $\frac{d^2y}{dx^2}$ and $\frac{d^3y}{dx^3}$ terms not coming from $\frac{dI_x}{dx}$ gives

$$J(x, y)\,\frac{d^3y}{dx^3} + \frac{d^2y}{dx^2}\!\left(I_y + 2\,\frac{dJ}{dx}\right) + \frac{dy}{dx}\,\frac{d}{dx}\!\left(I_y + \frac{dJ}{dx}\right) + \frac{dI_x}{dx} = 0.$$

Thus, the three conditions for exactness for a third-order differential equation are: the $\frac{d^3y}{dx^3}$ term must be $J(x, y)$, the $\frac{d^2y}{dx^2}$ term must be $I_y + 2\,\frac{dJ}{dx}$, and

$$f(x, y) - \frac{\partial^2}{\partial x^2}\int J_x(x, y)\,dy,$$

where $f(x, y)$ denotes the term purely of $x$ and $y$ (this expression equals $h''(x)$), must be a function solely of $x$.

Example

Consider the nonlinear third-order differential equation

$$y\,y''' + 3y'y'' = \cos x.$$

If $J = y$, then the $\frac{d^2y}{dx^2}$ term is $\left(I_y + 2\,\frac{dJ}{dx}\right)y'' = 2y'y''$ (since $I_y = J_x = 0$ and $\frac{dJ}{dx} = y'$) and the $\frac{dy}{dx}\,\frac{d}{dx}\!\left(I_y + \frac{dJ}{dx}\right)$ term is $y'y''$, which together sum to $3y'y''$. Fortunately, this appears in our equation. For the last condition of exactness,

$$f(x, y) - \frac{\partial^2}{\partial x^2}\int J_x(x, y)\,dy = -\cos x = h''(x),$$

which is indeed a function only of $x$. So, the differential equation is exact. Integrating $h''(x)$ twice yields that $h(x) = \cos x$, with the linear terms of integration absorbed into the constants below. Rewriting the equation as a first-order exact differential equation yields

$$\cos x + y\,\frac{dy}{dx} = c_1 x + c_2.$$

Integrating $I(x, y) - c_1 x - c_2 = \cos x - c_1 x - c_2$ with respect to $x$ gives that

$$\sin x - \frac{c_1 x^2}{2} - c_2 x + i(y).$$

Differentiating with respect to $y$ and equating that to the term in front of $\frac{dy}{dx}$ in the first-order equation gives that $i'(y) = y$ and that $i(y) = \frac{y^2}{2}$. The full implicit solution becomes

$$\sin x + \frac{y^2}{2} = \frac{c_1 x^2}{2} + c_2 x + c_3.$$

The explicit solution, then, is

$$y = \pm\sqrt{c_1 x^2 + c_2 x + c_3 - 2\sin x},$$

where the arbitrary constants have been relabeled.
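Assuming the example equation $y\,y''' + 3y'y'' = \cos x$, the explicit solution can be verified by substitution in sympy:

```python
import sympy as sp

x, c1, c2, c3 = sp.symbols("x c1 c2 c3")

# Candidate explicit solution of y*y''' + 3*y'*y'' = cos(x)
# (taking the positive branch of the square root).
y = sp.sqrt(c1*x**2 + c2*x + c3 - 2*sp.sin(x))

# Substitute back; y*y''' + 3*y'*y'' is d^2/dx^2 of y*y', so the
# residual should vanish identically.
residual = y*sp.diff(y, x, 3) + 3*sp.diff(y, x)*sp.diff(y, x, 2) - sp.cos(x)
assert sp.simplify(residual) == 0
```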


References

  1. Walter, Wolfgang (2013). Ordinary Differential Equations. Springer Science & Business Media. ISBN 978-1-4612-0601-9.
  2. Dobrushkin, Vladimir A. (2014). Applied Differential Equations: The Primary Course. CRC Press. ISBN 978-1-4987-2835-5.
  3. Tenenbaum, Morris; Pollard, Harry (1963). "Solution of the Linear Differential Equation with Nonconstant Coefficients. Reduction of Order Method." Ordinary Differential Equations: An Elementary Textbook for Students of Mathematics, Engineering and the Sciences. New York: Dover. p. 248. ISBN 0-486-64940-7.
