In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering.
Given a simply connected and open subset $D$ of $\mathbb{R}^2$ and two functions $I$ and $J$ which are continuous on $D$, an implicit first-order ordinary differential equation of the form
$$I(x, y)\,dx + J(x, y)\,dy = 0$$
is called an exact differential equation if there exists a continuously differentiable function $F$, called the potential function,[1][2] so that
$$\frac{\partial F}{\partial x} = I$$
and
$$\frac{\partial F}{\partial y} = J.$$
An exact equation may also be presented in the following form:
$$I(x, y) + J(x, y)\,y'(x) = 0,$$
where the same constraints on $I$ and $J$ apply for the differential equation to be exact.
The nomenclature of "exact differential equation" refers to the exact differential of a function. For a function $F(x_0, x_1, \ldots, x_{n-1}, x_n)$, the exact or total derivative with respect to $x_0$ is given by
$$\frac{dF}{dx_0} = \frac{\partial F}{\partial x_0} + \sum_{i=1}^{n} \frac{\partial F}{\partial x_i}\frac{dx_i}{dx_0}.$$
For example, the function $F : \mathbb{R}^2 \to \mathbb{R}$ given by
$$F(x, y) = \tfrac{1}{2}\left(x^2 + y^2\right) + c$$
is a potential function for the differential equation
$$x\,dx + y\,dy = 0.$$
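A candidate potential function can be verified symbolically by differentiating it and comparing against the coefficients of $dx$ and $dy$. A minimal sketch in Python, assuming SymPy is available (and taking $c = 0$):

```python
import sympy as sp

x, y = sp.symbols('x y')

# One potential function for x dx + y dy = 0 (constant c taken as 0)
F = (x**2 + y**2) / 2

assert sp.diff(F, x) == x  # partial of F in x recovers I(x, y) = x
assert sp.diff(F, y) == y  # partial of F in y recovers J(x, y) = y
```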
Let the functions $M$, $N$, $M_y$, and $N_x$, where the subscripts denote the partial derivative with respect to the relative variable, be continuous in the region $R : \alpha < x < \beta,\ \gamma < y < \delta$. Then the differential equation
$$M(x, y) + N(x, y)\frac{dy}{dx} = 0$$
is exact if and only if
$$M_y(x, y) = N_x(x, y).$$
That is, there exists a function $\psi(x, y)$, called a potential function, such that
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y).$$
So, in general:
$$M_y(x, y) = N_x(x, y) \iff \exists\,\psi(x, y) \text{ such that } \psi_x = M \text{ and } \psi_y = N.$$
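This criterion is easy to test mechanically. A small sketch, again assuming SymPy is available (`is_exact` is our own helper name, not a library function):

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    """Exactness test for M(x, y) + N(x, y) y' = 0: check M_y == N_x."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

print(is_exact(2*x*y + 1, x**2 + 3*y**2))  # True:  (2xy + 1)_y = 2x = (x^2 + 3y^2)_x
print(is_exact(y, 2*x))                    # False: 1 != 2
```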
The proof has two parts.
First, suppose there is a function $\psi$ such that
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y).$$
It then follows that
$$M_y(x, y) = \psi_{xy}(x, y) \quad \text{and} \quad N_x(x, y) = \psi_{yx}(x, y).$$
Since $M_y$ and $N_x$ are continuous, $\psi_{xy}$ and $\psi_{yx}$ are also continuous, which guarantees their equality.
The second part of the proof involves the construction of $\psi(x, y)$, which can also be used as a procedure for solving first-order exact differential equations. Suppose that $M_y = N_x$ and let there be a function $\psi$ for which
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y).$$
Begin by integrating the first equation with respect to $x$. In practice, it does not matter whether you integrate the first or the second equation, so long as the integration is done with respect to the appropriate variable:
$$\psi(x, y) = \int M(x, y)\,dx + h(y) = Q(x, y) + h(y),$$
where $Q(x, y)$ is any differentiable function such that $Q_x = M$. The function $h(y)$ plays the role of a constant of integration, but instead of just a constant, it is a function of $y$, since $\psi$ is a function of both $x$ and $y$ and we are only integrating with respect to $x$.
Now we show that it is always possible to find an $h(y)$ such that $\psi_y = N$. Differentiate both sides with respect to $y$:
$$\psi_y(x, y) = Q_y(x, y) + h'(y).$$
Set the result equal to $N$ and solve for $h'(y)$:
$$h'(y) = N(x, y) - Q_y(x, y).$$
In order to determine $h'(y)$ from this equation, the right-hand side must depend only on $y$. This can be proven by showing that its derivative with respect to $x$ is always zero, so differentiate the right-hand side with respect to $x$:
$$N_x(x, y) - Q_{yx}(x, y).$$
Since $Q_x = M$ and $Q_{xy} = Q_{yx}$, this equals $N_x(x, y) - M_y(x, y)$. Now, this is zero based on our initial supposition that $M_y = N_x$. Therefore,
$$h(y) = \int \left( N(x, y) - Q_y(x, y) \right) dy,$$
and the potential function $\psi(x, y) = Q(x, y) + h(y)$ exists.
And this completes the proof.
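The second half of the proof is constructive, so it doubles as a solution procedure. A sketch of that procedure in SymPy (`solve_exact` is our own name, and arbitrary constants of integration are omitted):

```python
import sympy as sp

x, y = sp.symbols('x y')

def solve_exact(M, N):
    """Build psi with psi_x = M, psi_y = N by the construction in the proof."""
    Q = sp.integrate(M, x)                    # psi = Q(x, y) + h(y)
    h_prime = sp.simplify(N - sp.diff(Q, y))  # h'(y) = N - Q_y, depends on y only
    return Q + sp.integrate(h_prime, y)

# Example: (2xy + 1) + (x^2 + 3y^2) y' = 0, which passes the exactness test
psi = solve_exact(2*x*y + 1, x**2 + 3*y**2)
print(psi)  # x**2*y + x + y**3, so solutions satisfy x**2*y + x + y**3 = c
```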
First-order exact differential equations of the form
$$M(x, y) + N(x, y)\frac{dy}{dx} = 0$$
can be written in terms of the potential function $\psi(x, y)$:
$$\frac{\partial \psi}{\partial x} + \frac{\partial \psi}{\partial y}\frac{dy}{dx} = 0,$$
where
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y).$$
This is equivalent to taking the total derivative of $\psi(x, y(x))$ with respect to $x$:
$$\frac{d}{dx}\,\psi(x, y(x)) = 0.$$
The solutions to an exact differential equation are then given by
$$\psi(x, y(x)) = c,$$
and the problem reduces to finding $\psi(x, y)$.
This can be done by integrating the two expressions $M(x, y)\,dx$ and $N(x, y)\,dy$ and then writing down each term in the resulting expressions only once and summing them up, in order to get $\psi(x, y)$.
The reasoning behind this is the following. Since
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y),$$
it follows, by integrating both sides, that
$$\psi(x, y) = \int M(x, y)\,dx + h(y) = Q(x, y) + h(y)$$
and
$$\psi(x, y) = \int N(x, y)\,dy + g(x) = P(x, y) + g(x).$$
Therefore,
$$Q(x, y) + h(y) = P(x, y) + g(x),$$
where $Q(x, y)$ and $P(x, y)$ are differentiable functions such that $Q_x = M$ and $P_y = N$.
In order for this to be true and for both sides to result in the exact same expression, namely $\psi(x, y)$, $h(y)$ must be contained within the expression for $P(x, y)$, because it cannot be contained within $g(x)$, since $h(y)$ is entirely a function of $y$ and not $x$ and is therefore not allowed to have anything to do with $x$. By analogy, $g(x)$ must be contained within the expression $Q(x, y)$.
Ergo,
$$Q(x, y) = g(x) + f(x, y) \quad \text{and} \quad P(x, y) = h(y) + d(x, y)$$
for some expressions $f(x, y)$ and $d(x, y)$. Plugging into the above equation, we find that
$$g(x) + f(x, y) + h(y) = h(y) + d(x, y) + g(x),$$
and so $f(x, y)$ and $d(x, y)$ turn out to be the same function. Therefore,
$$\psi(x, y) = g(x) + f(x, y) + h(y).$$
Since we already showed that
$$\psi_x(x, y) = M(x, y) \quad \text{and} \quad \psi_y(x, y) = N(x, y),$$
it follows that
$$\int M(x, y)\,dx = g(x) + f(x, y) \quad \text{and} \quad \int N(x, y)\,dy = h(y) + f(x, y).$$
So, we can construct $\psi(x, y)$ by computing $\int M(x, y)\,dx$ and $\int N(x, y)\,dy$, taking the common terms found within the two resulting expressions (that would be $f(x, y)$), and then adding the terms which are uniquely found in either one of them: $g(x)$ and $h(y)$.
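As a concrete illustration of this recipe (our own example, not taken from the references), consider $\left(3x^2 + 2y\right) + \left(2x + 2y\right)\frac{dy}{dx} = 0$, which is exact since $M_y = 2 = N_x$. Then
$$\int M\,dx = x^3 + 2xy \quad \text{and} \quad \int N\,dy = 2xy + y^2.$$
The common term is $2xy$ (this is $f(x, y)$), and the terms unique to each integral are $x^3$ (this is $g(x)$) and $y^2$ (this is $h(y)$). Hence $\psi(x, y) = x^3 + 2xy + y^2$, and the solutions satisfy $x^3 + 2xy + y^2 = c$.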
The concept of exact differential equations can be extended to second-order equations.[3] Consider starting with the first-order exact equation:
$$I(x, y) + J(x, y)\frac{dy}{dx} = 0.$$
Since both functions $I(x, y)$ and $J(x, y)$ are functions of two variables, implicitly differentiating the multivariate function yields
$$\frac{dI}{dx} + \left(\frac{dJ}{dx}\right)\frac{dy}{dx} + J(x, y)\frac{d^2y}{dx^2} = 0.$$
Expanding the total derivatives gives that
$$\frac{dI}{dx} = \frac{\partial I}{\partial x} + \frac{\partial I}{\partial y}\frac{dy}{dx}$$
and that
$$\frac{dJ}{dx} = \frac{\partial J}{\partial x} + \frac{\partial J}{\partial y}\frac{dy}{dx}.$$
Combining the $\frac{dy}{dx}$ terms gives
$$\frac{\partial I}{\partial x} + \frac{dy}{dx}\left(\frac{\partial I}{\partial y} + \frac{\partial J}{\partial x} + \frac{\partial J}{\partial y}\frac{dy}{dx}\right) + J(x, y)\frac{d^2y}{dx^2} = 0.$$
If the equation is exact, then $\frac{\partial J}{\partial x} = \frac{\partial I}{\partial y}$. Additionally, the total derivative of $J(x, y)$ is equal to its implicit ordinary derivative $\frac{dJ}{dx}$. This leads to the rewritten equation
$$\frac{\partial I}{\partial x} + \frac{dy}{dx}\left(\frac{\partial I}{\partial y} + \frac{dJ}{dx}\right) + J(x, y)\frac{d^2y}{dx^2} = 0.$$
Now, let there be some second-order differential equation
$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\frac{d^2y}{dx^2} = 0.$$
If $\frac{\partial J}{\partial x} = \frac{\partial I}{\partial y}$ for exact differential equations, then
$$\int \frac{\partial I}{\partial y}\,dy = \int \frac{\partial J}{\partial x}\,dy$$
and
$$\int \frac{\partial J}{\partial x}\,dy = I(x, y) - h(x),$$
where $h(x)$ is some arbitrary function only of $x$ that was differentiated away to zero upon taking the partial derivative of $I(x, y)$ with respect to $y$. Although the sign on $h(x)$ could be positive, it is more intuitive to think of the integral's result as $I(x, y)$ that is missing some original extra function $h(x)$ that was partially differentiated to zero.
Next, if
$$\frac{dI}{dx} = \frac{\partial I}{\partial x} + \frac{\partial I}{\partial y}\frac{dy}{dx},$$
then the term $\frac{\partial I}{\partial x}$ should be a function only of $x$ and $y$, since partial differentiation with respect to $x$ holds $y$ constant and does not produce any derivatives of $y$. In the second-order equation
$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\frac{d^2y}{dx^2} = 0,$$
only the term $f(x, y)$ is a term purely of $x$ and $y$. Let $\frac{\partial I}{\partial x} = f(x, y)$. Then
$$f(x, y) = \frac{dI}{dx} - \frac{\partial I}{\partial y}\frac{dy}{dx}.$$
Since the total derivative of $I(x, y)$ with respect to $x$ is equivalent to its implicit ordinary derivative $\frac{dI}{dx}$, comparing with the rewritten exact equation gives
$$f(x, y) + \frac{\partial I}{\partial y}\frac{dy}{dx} + \frac{dJ}{dx}\frac{dy}{dx} + J(x, y)\frac{d^2y}{dx^2} = 0.$$
So,
$$g\!\left(x, y, \frac{dy}{dx}\right) = \frac{dJ}{dx} + \frac{\partial I}{\partial y}$$
and
$$\frac{\partial I}{\partial y} = g\!\left(x, y, \frac{dy}{dx}\right) - \frac{dJ}{dx}.$$
Thus, the second-order differential equation
$$f(x, y) + g\!\left(x, y, \frac{dy}{dx}\right)\frac{dy}{dx} + J(x, y)\frac{d^2y}{dx^2} = 0$$
is exact only if $g\!\left(x, y, \frac{dy}{dx}\right) = \frac{dJ}{dx} + \frac{\partial I}{\partial y}$, and only if the expression below,
$$g\!\left(x, y, \frac{dy}{dx}\right) - \frac{dJ}{dx} = g\!\left(x, y, \frac{dy}{dx}\right) - \left(\frac{\partial J}{\partial x} + \frac{\partial J}{\partial y}\frac{dy}{dx}\right),$$
is a function solely of $x$ and $y$; it equals $\frac{\partial I}{\partial y}$. Once $h(x)$ is calculated with its arbitrary constant (by matching $\frac{\partial I}{\partial x}$ against $f(x, y)$, as in the example below), it is added to $I(x, y) - h(x)$ to make $I(x, y)$. If the equation is exact, then we can reduce to the first-order exact form
$$I(x, y) + J(x, y)\frac{dy}{dx} = 0,$$
which is solvable by the usual method for first-order exact equations.
Now, however, in the final implicit solution there will be a $C_1x$ term, arising from the constant of integration in $h(x)$ being integrated again with respect to $x$, as well as a $C_2$: two arbitrary constants, as expected from a second-order equation.
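The two exactness conditions above can also be checked mechanically. A sketch assuming SymPy, for equations written as $f(x, y) + g\,y' + J(x, y)\,y'' = 0$ (the helper name and the symbol `yp` standing for $dy/dx$ are our own; arbitrary constants are omitted):

```python
import sympy as sp

x, y = sp.symbols('x y')
yp = sp.Symbol('yp')  # stands for dy/dx inside the coefficient g

def reduce_second_order(f, g, J):
    """Return I(x, y) with I + J*y' = C if f + g*y' + J*y'' = 0 is exact, else None."""
    dJdx = sp.diff(J, x) + sp.diff(J, y)*yp    # total derivative dJ/dx
    dIdy = sp.simplify(g - dJdx)               # candidate for dI/dy
    if dIdy.has(yp):                           # must be a function of x and y only
        return None
    I0 = sp.integrate(dIdy, y)                 # I(x, y) - h(x)
    h_prime = sp.simplify(f - sp.diff(I0, x))  # h'(x), recovered by matching f
    if h_prime.has(y):                         # must be a function of x only
        return None
    return I0 + sp.integrate(h_prime, x)

# Example from the text: (1 - x^2) y'' - 4x y' - 2y = 0
print(reduce_second_order(-2*y, -4*x, 1 - x**2))  # -2*x*y  (constant C_1 omitted)
```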
Given the differential equation
$$\left(1 - x^2\right)\frac{d^2y}{dx^2} - 4x\frac{dy}{dx} - 2y = 0,$$
one can always easily check for exactness by examining the $\frac{d^2y}{dx^2}$ term. In this case, both the partial and total derivative of $1 - x^2$ with respect to $x$ are $-2x$, so their sum is $-4x$, which is exactly the term in front of $\frac{dy}{dx}$. With one of the conditions for exactness met, one can calculate that
$$\int \frac{\partial J}{\partial x}\,dy = \int (-2x)\,dy = -2xy = I(x, y) - h(x).$$
Letting $f(x, y) = -2y$, then
$$\frac{\partial I}{\partial x} = -2y + h'(x) = f(x, y) = -2y \implies h'(x) = 0.$$
So, $h'(x)$ is indeed a function only of $x$, and the second-order differential equation is exact. Therefore, $h(x) = C_1$ and $I(x, y) = -2xy + C_1$. Reduction to a first-order exact equation yields
$$-2xy + C_1 + \left(1 - x^2\right)\frac{dy}{dx} = 0.$$
Integrating $I(x, y)$ with respect to $x$ yields
$$-x^2y + C_1x + i(y),$$
where $i(y)$ is some arbitrary function of $y$. Differentiating with respect to $y$ gives an equation correlating the derivative and the $\frac{dy}{dx}$ term:
$$-x^2 + i'(y) = 1 - x^2 \implies i'(y) = 1.$$
So, $i(y) = y$ and the full implicit solution becomes
$$C_1x + y - x^2y = C_2.$$
Solving explicitly for $y$ yields
$$y = \frac{C_1x - C_2}{x^2 - 1}.$$
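The explicit solution can be checked by substituting it back into the differential equation, e.g. with SymPy (a verification sketch, using our own constant names):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# Explicit solution of (1 - x^2) y'' - 4x y' - 2y = 0 derived above
Y = (C1*x - C2) / (x**2 - 1)

residual = (1 - x**2)*sp.diff(Y, x, 2) - 4*x*sp.diff(Y, x) - 2*Y
print(sp.simplify(residual))  # 0
```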
The concepts of exact differential equations can be extended to any order. Starting with the exact second-order equation
$$J(x, y)\frac{d^2y}{dx^2} + \frac{dy}{dx}\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right) + \frac{\partial I}{\partial x} = 0,$$
it was previously shown that the equation is defined such that
$$f(x, y) = \frac{\partial I}{\partial x} \quad \text{and} \quad g\!\left(x, y, \frac{dy}{dx}\right) = \frac{dJ}{dx} + \frac{\partial I}{\partial y}.$$
Implicit differentiation of the exact second-order equation $n$ times will yield an $(n + 2)$th-order differential equation, with new conditions for exactness that can be readily deduced from the form of the equation produced. For example, differentiating the above second-order differential equation once to yield a third-order exact equation gives the following form:
$$J(x, y)\frac{d^3y}{dx^3} + \frac{d^2y}{dx^2}\frac{dJ}{dx} + \frac{d}{dx}\!\left[\frac{dy}{dx}\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right)\right] + \frac{d}{dx}\frac{\partial I}{\partial x} = 0,$$
where
$$\frac{d}{dx}\!\left[\frac{dy}{dx}\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right)\right] = \frac{d^2y}{dx^2}\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right) + \frac{dy}{dx}\,\frac{d}{dx}\!\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right)$$
and where $\frac{\partial I}{\partial x}$ is a function only of $x$ and $y$. Combining all $\frac{d^2y}{dx^2}$ and $\frac{dy}{dx}$ terms not coming from $\frac{d}{dx}\frac{\partial I}{\partial x}$ gives
$$J(x, y)\frac{d^3y}{dx^3} + \frac{d^2y}{dx^2}\left(2\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right) + \frac{dy}{dx}\,\frac{d}{dx}\!\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right) + \frac{d}{dx}\frac{\partial I}{\partial x} = 0.$$
Thus, the three conditions for exactness for a third-order differential equation are: the $\frac{d^2y}{dx^2}$ term must be $2\frac{dJ}{dx} + \frac{\partial I}{\partial y}$, the $\frac{dy}{dx}$ term must be $\frac{d}{dx}\!\left(\frac{dJ}{dx} + \frac{\partial I}{\partial y}\right)$, and the remaining term
$$\frac{d}{dx}\frac{\partial I}{\partial x}$$
must be a function solely of $x$.
Consider the nonlinear third-order differential equation
$$yy''' + 3y'y'' + 12x^2 = 0.$$
If $J(x, y) = y$ (with $\frac{\partial I}{\partial y} = 0$), then $2\frac{dJ}{dx}\frac{d^2y}{dx^2}$ is $2y'y''$ and $\frac{dy}{dx}\,\frac{d}{dx}\frac{dJ}{dx}$ is $y'y''$, which together sum to $3y'y''$. Fortunately, this appears in our equation. For the last condition of exactness,
$$\frac{d}{dx}\frac{\partial I}{\partial x} = 12x^2,$$
which is indeed a function only of $x$. So, the differential equation is exact. Integrating twice yields that $I(x, y) = x^4 + C_1x + C_2$. Rewriting the equation as a first-order exact differential equation yields
$$x^4 + C_1x + C_2 + y\frac{dy}{dx} = 0.$$
Integrating $I(x, y)$ with respect to $x$ gives that
$$\frac{x^5}{5} + \frac{C_1x^2}{2} + C_2x + i(y).$$
Differentiating with respect to $y$ and equating that to the term in front of $\frac{dy}{dx}$ in the first-order equation gives that $i'(y) = y$ and that $i(y) = \frac{y^2}{2}$. The full implicit solution becomes
$$\frac{x^5}{5} + \frac{C_1x^2}{2} + C_2x + \frac{y^2}{2} = C_3.$$
The explicit solution, then, is
$$y = \pm\sqrt{2C_3 - 2C_2x - C_1x^2 - \frac{2x^5}{5}}.$$
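As before, the explicit solution can be verified by substitution, e.g. with SymPy (a verification sketch for the $+$ branch of the square root, with our own constant names):

```python
import sympy as sp

x, C1, C2, C3 = sp.symbols('x C1 C2 C3')

# Explicit solution of y y''' + 3 y' y'' + 12 x^2 = 0 derived above (+ branch)
Y = sp.sqrt(2*C3 - 2*C2*x - C1*x**2 - sp.Rational(2, 5)*x**5)

residual = Y*sp.diff(Y, x, 3) + 3*sp.diff(Y, x)*sp.diff(Y, x, 2) + 12*x**2
print(sp.simplify(residual))  # 0
```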