Integrating factor

In mathematics, an integrating factor is a function that is chosen to facilitate the solving of a given equation involving differentials. It is commonly used to solve ordinary differential equations, but is also used within multivariable calculus when multiplying through by an integrating factor allows an inexact differential to be made into an exact differential (which can then be integrated to give a scalar field). This is especially useful in thermodynamics, where the reciprocal of the temperature is the integrating factor that turns the inexact heat differential into the exact differential of the entropy.

Use

An integrating factor is any expression that a differential equation is multiplied by to facilitate integration. For example, the nonlinear second order equation

$$\frac{d^{2}y}{dt^{2}} = A\,y^{2/3}$$

admits $\frac{dy}{dt}$ as an integrating factor:

$$\frac{d^{2}y}{dt^{2}}\,\frac{dy}{dt} = A\,y^{2/3}\,\frac{dy}{dt}.$$

To integrate, note that both sides of the equation may be expressed as derivatives by going backwards with the chain rule:

$$\frac{d}{dt}\left(\frac{1}{2}\left(\frac{dy}{dt}\right)^{2}\right) = \frac{d}{dt}\left(\frac{3A}{5}\,y^{5/3}\right).$$

Therefore,

$$\left(\frac{dy}{dt}\right)^{2} = \frac{6A}{5}\,y^{5/3} + C_{0},$$

where $C_{0}$ is a constant.

This form may be more useful, depending on the application. Performing a separation of variables will give

$$\int \frac{dy}{\sqrt{C_{0} + \frac{6A}{5}\,y^{5/3}}} = t + C_{1}.$$

This is an implicit solution which involves a nonelementary integral. The same method is used to find the period of a simple pendulum.
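
As a quick sanity check, here is a minimal SymPy sketch (assuming SymPy is available; the symbol names are illustrative) confirming that, once the equation is multiplied by $dy/dt$, both sides are exact $t$-derivatives of the expressions above:

```python
import sympy as sp

t, A = sp.symbols('t A')
y = sp.Function('y')(t)

# d/dt of (1/2)(y')^2 equals y' * y'' -- the left-hand side times the factor y'
lhs = sp.diff(sp.Rational(1, 2) * sp.diff(y, t)**2, t)
print(sp.simplify(lhs - sp.diff(y, t) * sp.diff(y, t, 2)))          # 0

# d/dt of (3A/5) y^(5/3) equals A y^(2/3) y' -- the right-hand side times y'
rhs = sp.diff(sp.Rational(3, 5) * A * y**sp.Rational(5, 3), t)
print(sp.simplify(rhs - A * y**sp.Rational(2, 3) * sp.diff(y, t)))  # 0
```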

Solving first order linear ordinary differential equations

Integrating factors are useful for solving ordinary differential equations that can be expressed in the form

$$y' + P(x)\,y = Q(x).$$

The basic idea is to find some function, say $M(x)$, called the "integrating factor", by which we can multiply both sides of the differential equation in order to bring the left-hand side under a common derivative. For the canonical first-order linear differential equation shown above, the integrating factor is

$$M(x) = e^{\int P(x)\,dx}.$$

Note that it is not necessary to include the arbitrary constant in the integral, or absolute values in case the integral of $P(x)$ involves a logarithm. Firstly, we only need one integrating factor to solve the equation, not all possible ones; secondly, such constants and absolute values will cancel out even if included. For absolute values, this can be seen by writing $|f(x)| = f(x)\operatorname{sgn} f(x)$, where $\operatorname{sgn}$ refers to the sign function, which will be constant on an interval on which $f$ is continuous and nonzero. As $\ln|f(x)|$ is undefined when $f(x) = 0$, and a logarithm in the antiderivative only appears when the original function involves a logarithm or a reciprocal (neither of which is defined at 0), such an interval will be the interval of validity of our solution.

To derive this, let $M(x)$ be the integrating factor of a first order linear differential equation, chosen so that multiplication by $M(x)$ transforms the left-hand side into a total derivative; then:

1. $M(x)\bigl(y' + P(x)\,y\bigr)$
2. $M(x)\,y' + M(x)\,P(x)\,y$
3. $M(x)\,y' + M'(x)\,y$
4. $\dfrac{d}{dx}\bigl(M(x)\,y\bigr)$

Going from step 2 to step 3 requires that $M(x)\,P(x) = M'(x)$, which is a separable differential equation whose solution yields $M(x)$ in terms of $P(x)$:

$$\frac{M'(x)}{M(x)} = P(x)$$

$$\ln M(x) = \int P(x)\,dx$$

$$M(x) = e^{\int P(x)\,dx}.$$

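The separable equation for $M(x)$ and the product-rule collapse can both be checked symbolically. The following SymPy sketch (symbol names are illustrative, not from the article) verifies that $M(x) = e^{\int P\,dx}$ satisfies $M' = P\,M$ and that $M\,(y' + P\,y) = (M\,y)'$:

```python
import sympy as sp

x = sp.symbols('x')
P = sp.Function('P')(x)
y = sp.Function('y')(x)

# Integrating factor, with the arbitrary constant of integration omitted
M = sp.exp(sp.Integral(P, x))

# M' = P * M  (the separable equation solved above)
print(sp.simplify(sp.diff(M, x) - P * M))    # 0

# M * (y' + P*y) collapses to the total derivative (M*y)'
lhs = sp.expand(M * (sp.diff(y, x) + P * y))
rhs = sp.expand(sp.diff(M * y, x))
print(sp.simplify(lhs - rhs))                # 0
```
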
To verify, multiplying by $M(x)$ gives

$$M(x)\,y' + M(x)\,P(x)\,y = M(x)\,Q(x).$$

By applying the product rule in reverse, we see that the left-hand side can be expressed as a single derivative in $x$:

$$M(x)\,y' + M(x)\,P(x)\,y = M(x)\,y' + M'(x)\,y = \frac{d}{dx}\bigl(M(x)\,y\bigr).$$

We use this fact to simplify our expression to

$$\frac{d}{dx}\bigl(M(x)\,y\bigr) = M(x)\,Q(x).$$

Integrating both sides with respect to $x$ gives

$$e^{\int P(x)\,dx}\,y = \int e^{\int P(x)\,dx}\,Q(x)\,dx + C,$$

where $C$ is a constant.

Moving the exponential to the right-hand side, the general solution to the ordinary differential equation is:

$$y = e^{-\int P(x)\,dx} \int e^{\int P(x)\,dx}\,Q(x)\,dx + C\,e^{-\int P(x)\,dx}.$$

In the case of a homogeneous differential equation, $Q(x) = 0$, and the general solution is:

$$y = C\,e^{-\int P(x)\,dx}.$$
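
As a hedged consistency check (assuming SymPy; not part of the original derivation), the general solution can be substituted back into $y' + P(x)\,y = Q(x)$ symbolically:

```python
import sympy as sp

x, C = sp.symbols('x C')
P = sp.Function('P')(x)
Q = sp.Function('Q')(x)

F = sp.Integral(P, x)                                 # an antiderivative of P
y = sp.exp(-F) * (sp.Integral(sp.exp(F) * Q, x) + C)  # proposed general solution

# Residual of y' + P*y - Q should vanish identically
residual = sp.diff(y, x) + P * y - Q
print(sp.simplify(residual))                          # 0
```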

For example, consider the differential equation

$$y' - \frac{2y}{x} = 0.$$

We can see that in this case

$$P(x) = -\frac{2}{x},$$

so that

$$M(x) = e^{\int -\frac{2}{x}\,dx} = e^{-2\ln x} = x^{-2} = \frac{1}{x^{2}}.$$

Multiplying both sides by $M(x)$ we obtain

$$\frac{y'}{x^{2}} - \frac{2y}{x^{3}} = 0.$$

The above equation can be rewritten as

$$\frac{d\bigl(x^{-2}y\bigr)}{dx} = 0.$$

By integrating both sides with respect to $x$ we obtain

$$x^{-2}y = C$$

or

$$y = C x^{2}.$$

The same result may be achieved using the following approach:

$$\frac{y'}{x^{2}} - \frac{2y}{x^{3}} = 0$$

$$\frac{y'x - 2y}{x^{3}} = 0$$

$$\frac{x^{2}y' - 2xy}{x^{4}} = 0.$$

Reversing the quotient rule gives

$$\left(\frac{y}{x^{2}}\right)' = 0$$

or

$$\frac{y}{x^{2}} = C$$

or

$$y = C x^{2},$$

where $C$ is a constant.
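
For this concrete example, SymPy's dsolve provides a quick cross-check (a sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' - 2y/x = 0
ode = sp.Eq(y(x).diff(x) - 2 * y(x) / x, 0)
print(sp.dsolve(ode, y(x)))   # Eq(y(x), C1*x**2), matching y = C*x^2 above
```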

Solving second order linear ordinary differential equations

The method of integrating factors for first order equations can be naturally extended to second order equations as well. The main goal in solving first order equations was to find an integrating factor $M(x) = e^{\int P(x)\,dx}$ such that multiplying $y' + P(x)\,y = Q(x)$ by it would yield $\bigl(M(x)\,y\bigr)' = M(x)\,Q(x)$, after which subsequent integration and division by $M(x)$ would yield $y$. For second order linear differential equations, if we want $M(x) = e^{\int p(x)\,dx}$ to work as an integrating factor, then

$$\bigl(M(x)\,y\bigr)'' = M(x)\bigl(y'' + 2\,p(x)\,y' + \bigl(p(x)^{2} + p'(x)\bigr)\,y\bigr) = M(x)\,h(x).$$

This implies that a second order equation must be exactly in the form

$$y'' + 2\,p(x)\,y' + \bigl(p(x)^{2} + p'(x)\bigr)\,y = h(x)$$

for the integrating factor to be usable.
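
The identity above can be verified symbolically. The following SymPy sketch (illustrative names) expands $\bigl(e^{\int p}\,y\bigr)''$ and compares it with the stated right-hand side:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)
y = sp.Function('y')(x)

M = sp.exp(sp.Integral(p, x))   # second order integrating factor

lhs = sp.diff(M * y, x, 2)
rhs = M * (sp.diff(y, x, 2) + 2 * p * sp.diff(y, x) + (p**2 + sp.diff(p, x)) * y)
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
```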

Example 1

For example, the differential equation

$$y'' + 2x\,y' + \bigl(x^{2} + 1\bigr)\,y = 0$$

can be solved exactly with integrating factors. The appropriate $p(x)$ can be deduced by examining the $y'$ term. In this case, $2\,p(x) = 2x$, so $p(x) = x$. After examining the $y$ term, we see that we do in fact have $p(x)^{2} + p'(x) = x^{2} + 1$, so we will multiply all terms by the integrating factor $e^{\int x\,dx} = e^{x^{2}/2}$. This gives us

$$e^{x^{2}/2}\,y'' + 2x\,e^{x^{2}/2}\,y' + \bigl(x^{2} + 1\bigr)\,e^{x^{2}/2}\,y = 0,$$

which can be rearranged to give

$$\bigl(e^{x^{2}/2}\,y\bigr)'' = 0.$$

Integrating twice yields

$$e^{x^{2}/2}\,y = c_{1}x + c_{2}.$$

Dividing by the integrating factor gives:

$$y = \bigl(c_{1}x + c_{2}\bigr)\,e^{-x^{2}/2}.$$

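As a sketch of a consistency check (assuming SymPy), the solution can be substituted back into the original equation:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = (c1 * x + c2) * sp.exp(-x**2 / 2)   # candidate solution from Example 1

# Residual of y'' + 2x y' + (x^2 + 1) y should vanish identically
residual = sp.diff(y, x, 2) + 2 * x * sp.diff(y, x) + (x**2 + 1) * y
print(sp.simplify(residual))            # 0
```
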
Example 2

A slightly less obvious application of second order integrating factors involves the following differential equation:

$$y'' + 2\cot(x)\,y' - y = 1.$$

At first glance, this is clearly not in the form needed for second order integrating factors. We have a $2\,p(x)$ term in front of $y'$ but no $p(x)^{2} + p'(x)$ in front of $y$. However,

$$p(x)^{2} + p'(x) = \cot^{2}(x) - \csc^{2}(x),$$

and from the Pythagorean identity relating cotangent and cosecant,

$$\cot^{2}(x) - \csc^{2}(x) = -1,$$

so we actually do have the required term in front of $y$ and can use integrating factors.

Multiplying each term by $e^{\int \cot(x)\,dx} = e^{\ln\sin(x)} = \sin(x)$ gives

$$\sin(x)\,y'' + 2\cos(x)\,y' - \sin(x)\,y = \sin(x),$$

which rearranged is

$$\bigl(\sin(x)\,y\bigr)'' = \sin(x).$$

Integrating twice gives

$$\sin(x)\,y = -\sin(x) + c_{1}x + c_{2}.$$

Finally, dividing by the integrating factor gives

$$y = c_{1}x\,\csc(x) + c_{2}\csc(x) - 1.$$

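Again as a hedged consistency check (assuming SymPy), the result can be substituted back into the equation from Example 2:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1 * x * sp.csc(x) + c2 * sp.csc(x) - 1   # candidate solution from Example 2

# Residual of y'' + 2 cot(x) y' - y - 1 should vanish identically
residual = sp.diff(y, x, 2) + 2 * sp.cot(x) * sp.diff(y, x) - y - 1
print(sp.simplify(residual))                  # 0
```
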
Solving nth order linear differential equations

Integrating factors can be extended to any order, though the form of the equation needed to apply them gets more and more specific as the order increases, making them less useful for orders 3 and above. The general idea is to differentiate the function $M(x)\,y$ $n$ times for an $n$th order differential equation and combine like terms. This will yield an equation in the form

$$\bigl(M(x)\,y\bigr)^{(n)} = \sum_{k=0}^{n} \binom{n}{k}\,M^{(n-k)}(x)\,y^{(k)},$$

with each derivative of $M(x) = e^{\int p(x)\,dx}$ reducing to a polynomial in $p(x)$ and its derivatives times $M(x)$.

If an $n$th order equation matches the form that is gotten after differentiating $n$ times, one can multiply all terms by the integrating factor and integrate $n$ times, dividing by the integrating factor on both sides to achieve the final result.

Example

A third order usage of integrating factors gives

$$\bigl(M(x)\,y\bigr)''' = M(x)\bigl(y''' + 3\,p(x)\,y'' + \bigl(3\,p(x)^{2} + 3\,p'(x)\bigr)\,y' + \bigl(p(x)^{3} + 3\,p(x)\,p'(x) + p''(x)\bigr)\,y\bigr),$$

thus requiring our equation to be in the form

$$y''' + 3\,p(x)\,y'' + \bigl(3\,p(x)^{2} + 3\,p'(x)\bigr)\,y' + \bigl(p(x)^{3} + 3\,p(x)\,p'(x) + p''(x)\bigr)\,y = h(x).$$

For example, in the differential equation

$$y''' + 3x^{2}\,y'' + \bigl(3x^{4} + 6x\bigr)\,y' + \bigl(x^{6} + 6x^{3} + 2\bigr)\,y = 0$$

we have $p(x) = x^{2}$, so our integrating factor is $e^{\int x^{2}\,dx} = e^{x^{3}/3}$. Rearranging gives

$$\bigl(e^{x^{3}/3}\,y\bigr)''' = 0.$$

Integrating thrice and dividing by the integrating factor yields

$$y = \bigl(c_{1}x^{2} + c_{2}x + c_{3}\bigr)\,e^{-x^{3}/3}.$$

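A final SymPy sketch (illustrative, assuming SymPy) substituting this result back into the third order equation above:

```python
import sympy as sp

x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
y = (c1 * x**2 + c2 * x + c3) * sp.exp(-x**3 / 3)   # candidate solution

# Residual of the third order equation should vanish identically
residual = (sp.diff(y, x, 3) + 3 * x**2 * sp.diff(y, x, 2)
            + (3 * x**4 + 6 * x) * sp.diff(y, x)
            + (x**6 + 6 * x**3 + 2) * y)
print(sp.simplify(residual))                        # 0
```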