In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.
A differential equation for the unknown function $f(x)$ will be separable if it can be written in the form
$$\frac{d}{dx} f(x) = g(x)\, h(f(x)),$$
where $g$ and $h$ are given functions. This is perhaps more transparent when written using $y = f(x)$ as:
$$\frac{dy}{dx} = g(x)\, h(y).$$
So now, as long as h(y) ≠ 0, we can rearrange terms to obtain:
$$\frac{dy}{h(y)} = g(x)\, dx,$$
where the two variables x and y have been separated. Note that dx (and dy) can be viewed, at a simple level, as just a convenient notation that provides a handy mnemonic aid for manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.
Those who dislike Leibniz's notation may prefer to write this as
$$\frac{1}{h(y)} \frac{dy}{dx} = g(x),$$
but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to $x$, we have
$$\int \frac{1}{h(y)} \frac{dy}{dx}\, dx = \int g(x)\, dx, \tag{A1}$$
or equivalently,
$$\int \frac{1}{h(y)}\, dy = \int g(x)\, dx,$$
because of the substitution rule for integrals.
If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative $\frac{dy}{dx}$ as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.
(Note that we do not need to use two constants of integration, in equation (A1), as in
$$\int \frac{1}{h(y)}\, dy + C_1 = \int g(x)\, dx + C_2,$$
because a single constant $C = C_2 - C_1$ is equivalent.)
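To make the recipe concrete, here is a minimal sketch using Python's SymPy; the particular equation dy/dx = x·y (that is, g(x) = x and h(y) = y) is an illustrative choice, not one drawn from the text above.

```python
# A minimal sketch, assuming the illustrative separable ODE dy/dx = x*y.
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

ode = sp.Eq(y(x).diff(x), x * y(x))      # dy/dx = g(x)*h(y) with g = x, h = y
print(sp.dsolve(ode, y(x)))              # Eq(y(x), C1*exp(x**2/2))

# The same answer by hand: integrating dy/h(y) = g(x) dx on both sides
# gives log(y) = x**2/2 + C, i.e. y = C1*exp(x**2/2).
```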
Population growth is often modeled by the "logistic" differential equation
$$\frac{dP}{dt} = kP\left(1 - \frac{P}{K}\right),$$
where $P$ is the population with respect to time $t$, $k$ is the rate of growth, and $K$ is the carrying capacity of the environment. Separation of variables now leads to
$$\int \frac{dP}{P\left(1 - P/K\right)} = \int k\, dt,$$
which is readily integrated using partial fractions on the left side, yielding
$$P(t) = \frac{K}{1 + A e^{-kt}},$$
where $A$ is the constant of integration. We can find $A$ in terms of the initial population $P(0) = P_0$ at $t = 0$. Noting that $P_0 = K/(1 + A)$, we get
$$A = \frac{K - P_0}{P_0}.$$
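As a check (not part of the original text), one can substitute the closed form back into the logistic equation with SymPy; the symbols below mirror those in the example:

```python
# A minimal sketch verifying that P(t) = K/(1 + A*exp(-k*t)) with
# A = (K - P0)/P0 solves dP/dt = k*P*(1 - P/K) and satisfies P(0) = P0.
import sympy as sp

t, k, K, P0 = sp.symbols("t k K P0", positive=True)
A = (K - P0) / P0
P = K / (1 + A * sp.exp(-k * t))

print(sp.simplify(P.diff(t) - k * P * (1 - P / K)))  # 0: the ODE is satisfied
print(sp.simplify(P.subs(t, 0)))                     # P0: initial condition holds
```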
Much like one can speak of a separable first-order ODE, one can speak of a separable second-order, third-order, or nth-order ODE. Consider the separable first-order ODE:
$$\frac{dy}{dx} = g(x)\, h(y).$$
The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, y:
$$\frac{dy}{dx} = \frac{d}{dx}(y).$$
Thus, when one separates variables for first-order equations, one in fact moves the $dx$ denominator of the operator to the side with the $x$ variable, and the $d(y)$ is left on the side with the $y$ variable. The second-derivative operator, by analogy, breaks down as follows:
$$\frac{d^2 y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d}{dx}\left(\frac{d}{dx}(y)\right).$$
The third-, fourth- and nth-derivative operators break down in the same way. Thus, much like a first-order separable ODE is reducible to the form
$$\frac{dy}{dx} = g(x)\, h(y),$$
a separable second-order ODE is reducible to the form
$$\frac{d^2 y}{dx^2} = g(x)\, h(y'),$$
and an nth-order separable ODE is reducible to
$$\frac{d^n y}{dx^n} = g(x)\, h\!\left(y^{(n-1)}\right).$$
Consider the simple nonlinear second-order differential equation
$$y'' = (y')^2.$$
This equation is an equation only of $y''$ and $y'$, meaning it is reducible to the general form described above and is, therefore, separable. Since it is a second-order separable equation, collect all $x$ variables on one side and all $y'$ variables on the other to get:
$$\frac{d(y')}{(y')^2} = dx.$$
Now, integrate the right side with respect to $x$ and the left with respect to $y'$:
$$\int \frac{d(y')}{(y')^2} = \int dx.$$
This gives
$$-\frac{1}{y'} = x + C_1,$$
which simplifies to:
$$y' = -\frac{1}{x + C_1}.$$
This is now a simple integral problem that gives the final answer:
$$y = C_2 - \ln|x + C_1|.$$
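The final answer can be verified by direct substitution; a minimal SymPy sketch:

```python
# A minimal sketch checking that y = C2 - ln|x + C1| solves y'' = (y')^2
# (the absolute value is dropped, i.e. we work where x + C1 > 0).
import sympy as sp

x, C1, C2 = sp.symbols("x C1 C2")
y = C2 - sp.log(x + C1)

print(sp.simplify(y.diff(x, 2) - y.diff(x) ** 2))  # 0: the ODE holds
```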
The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation.
The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations. [1]
Consider the one-dimensional heat equation. The equation is
$$\frac{\partial u}{\partial t} - \alpha \frac{\partial^2 u}{\partial x^2} = 0. \tag{1}$$
The variable $u$ denotes temperature. The boundary condition is homogeneous, that is
$$u\big|_{x=0} = u\big|_{x=L} = 0. \tag{2}$$
Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: $u$ is a product in which the dependence of $u$ on $x$, $t$ is separated, that is:
$$u(x,t) = X(x)\, T(t). \tag{3}$$
Substituting $u$ back into equation (1) and using the product rule,
$$\frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)}. \tag{4}$$
Since the right hand side depends only on $x$ and the left hand side only on $t$, both sides are equal to some constant value $-\lambda$. Thus:
$$T'(t) = -\lambda \alpha T(t), \tag{5}$$
and
$$X''(x) = -\lambda X(x). \tag{6}$$
−λ here is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.
We will now show that nontrivial solutions for X(x) cannot occur for values of λ ≤ 0:
Suppose that λ < 0. Then there exist real numbers B, C such that
$$X(x) = B e^{\sqrt{-\lambda}\,x} + C e^{-\sqrt{-\lambda}\,x}.$$
From (2) we get
$$X(0) = 0 = X(L), \tag{7}$$
and therefore B = 0 = C, which implies u is identically 0.
Suppose that λ = 0. Then there exist real numbers B, C such that
$$X(x) = Bx + C.$$
From (7) we conclude in the same manner as in the first case that u is identically 0.
Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
$$T(t) = A e^{-\lambda \alpha t},$$
and
$$X(x) = B \sin(\sqrt{\lambda}\, x) + C \cos(\sqrt{\lambda}\, x).$$
From (7) we get C = 0 and that for some positive integer n,
$$\sqrt{\lambda} = n \frac{\pi}{L}.$$
This solves the heat equation in the special case that the dependence of u has the special form of (3).
In general, the sum of solutions to (1) which satisfy the boundary conditions (2) also satisfies (1) and (2). Hence a complete solution can be given as
$$u(x,t) = \sum_{n=1}^{\infty} D_n \sin\frac{n\pi x}{L} \exp\left(-\frac{n^2 \pi^2 \alpha t}{L^2}\right),$$
where $D_n$ are coefficients determined by the initial condition.
Given the initial condition
$$u\big|_{t=0} = f(x),$$
we can get
$$f(x) = \sum_{n=1}^{\infty} D_n \sin\frac{n\pi x}{L}.$$
This is the sine series expansion of $f(x)$, which is amenable to Fourier analysis. Multiplying both sides with $\sin\frac{n\pi x}{L}$ and integrating over $[0, L]$ results in
$$D_n = \frac{2}{L} \int_0^L f(x) \sin\frac{n\pi x}{L}\, dx.$$
This method requires that the eigenfunctions X, here $\left\{\sin\frac{n\pi x}{L}\right\}_{n=1}^{\infty}$, are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
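As a numerical illustration of the series solution (the values α = 1, L = 1 and the profile f(x) = x(1 − x) are assumptions chosen for this sketch, not taken from the text):

```python
# A minimal numerical sketch of the truncated series solution of the
# heat equation, assuming alpha = 1, L = 1 and f(x) = x*(1 - x).
import numpy as np

alpha, L, N = 1.0, 1.0, 50                  # N = number of retained modes
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]

def f(x):
    return x * (1.0 - x)                    # satisfies f(0) = f(L) = 0

n = np.arange(1, N + 1)
modes = np.sin(np.pi * np.outer(x, n) / L)  # sin(n*pi*x/L), shape (401, N)

# D_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx, via a rectangle rule.
D = (2.0 / L) * (f(x) @ modes) * dx

def u(t):
    """Truncated separated-variables solution, evaluated on the grid x."""
    decay = np.exp(-alpha * (n * np.pi / L) ** 2 * t)
    return modes @ (D * decay)

print(np.max(np.abs(u(0.0) - f(x))))        # small: the series recovers f
```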
Suppose the equation is nonhomogeneous,
$$\frac{\partial u}{\partial t} - \alpha \frac{\partial^2 u}{\partial x^2} = h(x,t), \tag{8}$$
with the boundary condition the same as (2).
Expand h(x,t), u(x,t), and f(x) into
$$h(x,t) = \sum_{n=1}^{\infty} h_n(t) \sin\frac{n\pi x}{L}, \tag{9}$$
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t) \sin\frac{n\pi x}{L}, \tag{10}$$
$$f(x) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}, \tag{11}$$
where $h_n(t)$ and $b_n$ can be calculated by integration, while $u_n(t)$ is to be determined.
Substituting (9) and (10) back into (8) and considering the orthogonality of the sine functions, we get
$$u'_n(t) + \alpha \frac{n^2 \pi^2}{L^2} u_n(t) = h_n(t), \qquad u_n(0) = b_n,$$
which is a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor. Finally, we can get
$$u_n(t) = e^{-\alpha \frac{n^2 \pi^2}{L^2} t} \left( b_n + \int_0^t h_n(s)\, e^{\alpha \frac{n^2 \pi^2}{L^2} s}\, ds \right).$$
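The closed form for u_n(t) can be verified by differentiating under the integral sign; a minimal SymPy sketch, with mu standing for α(nπ/L)²:

```python
# A minimal sketch verifying the integrating-factor formula, where mu
# abbreviates alpha*(n*pi/L)**2 and h is an arbitrary forcing term.
import sympy as sp

t, s, mu, b = sp.symbols("t s mu b", positive=True)
h = sp.Function("h")

u = sp.exp(-mu * t) * (b + sp.integrate(h(s) * sp.exp(mu * s), (s, 0, t)))

print(sp.simplify(u.diff(t) + mu * u - h(t)))  # 0: u' + mu*u = h(t) holds
print(u.subs(t, 0).doit())                     # b: u_n(0) = b_n holds
```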
If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.
For some equations involving mixed derivatives, the equation does not separate as easily as the heat equation did in the first example above, but nonetheless separation of variables may still be applied. Consider the two-dimensional biharmonic equation
$$\frac{\partial^4 u}{\partial x^4} + 2 \frac{\partial^4 u}{\partial x^2 \partial y^2} + \frac{\partial^4 u}{\partial y^4} = 0.$$
Proceeding in the usual manner, we look for solutions of the form
$$u(x,y) = X(x)\, Y(y),$$
and we obtain the equation
$$\frac{X''''(x)}{X(x)} + 2 \frac{X''(x)}{X(x)} \frac{Y''(y)}{Y(y)} + \frac{Y''''(y)}{Y(y)} = 0.$$
Writing this equation in the form
$$E(x) + F(x)\, G(y) + H(y) = 0,$$
taking the derivative of this expression with respect to $x$ gives $E'(x) + F'(x)\, G(y) = 0$, which means $G(y) = -E'(x)/F'(x)$ or $F'(x) = 0$, and likewise, taking the derivative with respect to $y$ leads to $F(x)\, G'(y) + H'(y) = 0$ and thus $F(x) = -H'(y)/G'(y)$ or $G'(y) = 0$; hence either $F(x)$ or $G(y)$ must be a constant, say $-\lambda$. This further implies that either $-E(x) = F(x)G(y) + H(y)$ or $-H(y) = E(x) + F(x)G(y)$ is constant. Returning to the equation for $X$ and $Y$, we have two cases
$$X''(x) = -\lambda_1 X(x), \qquad X''''(x) = \mu_1 X(x), \qquad Y''''(y) - 2\lambda_1 Y''(y) + \mu_1 Y(y) = 0,$$
and
$$Y''(y) = -\lambda_2 Y(y), \qquad Y''''(y) = \mu_2 Y(y), \qquad X''''(x) - 2\lambda_2 X''(x) + \mu_2 X(x) = 0,$$
which can each be solved by considering the separate cases for $\lambda_i < 0$, $\lambda_i = 0$, $\lambda_i > 0$, and noting that $\mu_i = \lambda_i^2$.
In orthogonal curvilinear coordinates, separation of variables can still be used, although the details differ from those in Cartesian coordinates. For instance, regularity or a periodicity condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for example.
For many PDEs, such as the wave equation, Helmholtz equation and Schrödinger equation, the applicability of separation of variables is a result of the spectral theorem. In some cases, separation of variables may not be possible. Separation of variables may be possible in some coordinate systems but not others, [2] and which coordinate systems allow for separation depends on the symmetry properties of the equation. [3] Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).
Consider an initial boundary value problem for a function $u(x,t)$ on $D = \{(x,t) : x \in [0,l],\ t \geq 0\}$ in two variables:
$$(Tu)(x,t) = (Su)(x,t),$$
where $T$ is a differential operator with respect to $t$ and $S$ is a differential operator with respect to $x$, with boundary data:
$$u(0,t) = u(l,t) = 0 \quad \text{for } t \geq 0,$$
$$u(x,0) = h(x) \quad \text{for } 0 \leq x \leq l,$$
where $h$ is a known function.
We look for solutions of the form $u(x,t) = f(x)\, g(t)$. Dividing the PDE through by $f(x)g(t)$ gives
$$\frac{Tg}{g} = \frac{Sf}{f}.$$
The right hand side depends only on $x$ and the left hand side only on $t$, so both must be equal to a constant $\lambda$, which gives two ordinary differential equations
$$Tg = \lambda g, \qquad Sf = \lambda f,$$
which we can recognize as eigenvalue problems for the operators $T$ and $S$. If $S$ is a compact, self-adjoint operator on the space $L^2[0,l]$ along with the relevant boundary conditions, then by the spectral theorem there exists a basis for $L^2[0,l]$ consisting of eigenfunctions of $S$. Let the spectrum of $S$ be $E$, and let $f_\lambda$ be an eigenfunction with eigenvalue $\lambda \in E$. Then for any function which at each time $t$ is square-integrable with respect to $x$, we can write this function as a linear combination of the $f_\lambda$. In particular, we know the solution $u$ can be written as
$$u(x,t) = \sum_{\lambda \in E} c_\lambda(t)\, f_\lambda(x),$$
for some functions $c_\lambda(t)$. In the separation of variables, these functions are given by solutions to
$$Tg = \lambda g.$$
Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions.
For many differential operators, such as $\frac{d^2}{dx^2}$, we can show that they are self-adjoint by integration by parts. While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator (with the possible exception of zero). [4]
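A small numerical illustration of this picture (the grid size and Dirichlet discretization below are assumptions for the sketch): the finite-difference matrix for d²/dx² on [0, 1] with zero boundary values is symmetric, so it has an orthogonal eigenbasis, and its eigenvalues approximate −(nπ)².

```python
# A minimal sketch: the discrete d^2/dx^2 with Dirichlet conditions on
# [0, 1] is a symmetric matrix whose eigenvalues approximate -(n*pi)^2.
import numpy as np

m = 200                                   # interior grid points (assumed)
h = 1.0 / (m + 1)

S = (np.diag(-2.0 * np.ones(m)) +
     np.diag(np.ones(m - 1), 1) +
     np.diag(np.ones(m - 1), -1)) / h**2

assert np.allclose(S, S.T)                # symmetric, hence self-adjoint

vals = np.linalg.eigvalsh(S)              # real spectrum, ascending order
print(vals[-3:])                          # approx -(3*pi)^2, -(2*pi)^2, -pi^2
```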
The matrix form of the separation of variables is the Kronecker sum.
As an example we consider the 2D discrete Laplacian on a regular grid:
$$L = \mathbf{D_{xx}} \oplus \mathbf{D_{yy}} = \mathbf{D_{xx}} \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{D_{yy}},$$
where $\mathbf{D_{xx}}$ and $\mathbf{D_{yy}}$ are the 1D discrete Laplacians in the x- and y-directions, correspondingly, and $\mathbf{I}$ are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
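A minimal sketch of this construction (the grid sizes nx = 4, ny = 3 and unit spacing are illustrative assumptions): building the 2D Laplacian as a Kronecker sum and checking that its eigenvalues are sums of 1D eigenvalues, which is the matrix counterpart of separating variables.

```python
# A minimal sketch of the Kronecker sum L = Dxx (+) Dyy on a small grid.
import numpy as np

def lap1d(n):
    """1D discrete Laplacian (Dirichlet boundary, unit spacing)."""
    return (np.diag(-2.0 * np.ones(n)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1))

nx, ny = 4, 3                                       # illustrative sizes
Dxx, Dyy = lap1d(nx), lap1d(ny)

L2d = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)

# Matrix separation of variables: each eigenvalue of L2d is the sum of
# an eigenvalue of Dxx and an eigenvalue of Dyy.
ex, ey = np.linalg.eigvalsh(Dxx), np.linalg.eigvalsh(Dyy)
sums = np.sort((ex[:, None] + ey[None, :]).ravel())
assert np.allclose(np.sort(np.linalg.eigvalsh(L2d)), sums)
```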
Some mathematical programs are able to do separation of variables: Xcas [5] among others.
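For instance (using SymPy here rather than Xcas, as an illustration), the pde_separate helper performs the multiplicative separation of the heat equation from the example above:

```python
# A minimal sketch of computer-assisted separation of variables with
# SymPy's pde_separate (the heat equation with alpha = 1 is assumed).
from sympy import Derivative as D, Eq, Function, symbols
from sympy.solvers.pde import pde_separate

x, t = symbols("x t")
u, X, T = map(Function, ("u", "X", "T"))

heat = Eq(D(u(x, t), t), D(u(x, t), x, 2))
print(pde_separate(heat, u(x, t), [X(x), T(t)], strategy="mul"))
# [Derivative(X(x), (x, 2))/X(x), Derivative(T(t), t)/T(t)]
```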