Integrating factor

In mathematics, an integrating factor is a function that is chosen to facilitate the solving of a given equation involving differentials. It is commonly used to solve non-exact ordinary differential equations, but is also used within multivariable calculus when multiplying through by an integrating factor allows an inexact differential to be made into an exact differential (which can then be integrated to give a scalar field). This is especially useful in thermodynamics, where the reciprocal of temperature is the integrating factor that turns the inexact heat differential into the exact differential of entropy.

Use

An integrating factor is any expression that a differential equation is multiplied by to facilitate integration. For example, the nonlinear second order equation

$$\frac{d^2 y}{dt^2} = A y^{2/3}$$

admits $\frac{dy}{dt}$ as an integrating factor:

$$\frac{d^2 y}{dt^2} \frac{dy}{dt} = A y^{2/3} \frac{dy}{dt}.$$

To integrate, note that both sides of the equation may be expressed as derivatives by going backwards with the chain rule:

$$\frac{d}{dt}\left(\frac{1}{2}\left(\frac{dy}{dt}\right)^2\right) = \frac{d}{dt}\left(\frac{3A}{5} y^{5/3}\right).$$

Therefore,

$$\left(\frac{dy}{dt}\right)^2 = \frac{6A}{5} y^{5/3} + C_0,$$

where $C_0$ is a constant.

This form may be more useful, depending on the application. Performing a separation of variables gives

$$\int \frac{dy}{\sqrt{\frac{6A}{5} y^{5/3} + C_0}} = t + C_1.$$

This is an implicit solution which involves a nonelementary integral. The same method is used to solve the period of a simple pendulum.

Solving first order linear ordinary differential equations

Integrating factors are useful for solving ordinary differential equations that can be expressed in the form

$$y' + P(x) y = Q(x).$$

The basic idea is to find some function, say $M(x)$, called the "integrating factor", which we can multiply through our differential equation in order to bring the left-hand side under a common derivative. For the canonical first-order linear differential equation shown above, the integrating factor is

$$M(x) = e^{\int P(x)\,dx}.$$

Note that it is not necessary to include the arbitrary constant in the integral, or absolute values in case the integral of $P(x)$ involves a logarithm. Firstly, we only need one integrating factor to solve the equation, not all possible ones; secondly, such constants and absolute values will cancel out even if included. For absolute values, this can be seen by writing $|f(x)| = f(x) \operatorname{sgn} f(x)$, where $\operatorname{sgn}$ refers to the sign function, which will be constant on an interval if $f$ is continuous. As $\ln |f(x)|$ is undefined when $f(x) = 0$, and a logarithm in the antiderivative only appears when the original function involved a logarithm or a reciprocal (neither of which is defined at 0), such an interval will be the interval of validity of our solution.

To derive this, let $M(x)$ be the integrating factor of a first order linear differential equation such that multiplication by $M(x)$ transforms a non-integrable expression into an integrable derivative, then:

1. $M(x) \left(y' + P(x) y\right)$
2. $= M(x) y' + M(x) P(x) y$
3. $= M(x) y' + M'(x) y$
4. $= \left(M(x) y\right)'$

Going from step 2 to step 3 requires that $M(x) P(x) = M'(x)$, which is a separable differential equation whose solution yields $M$ in terms of $x$:

$$\frac{M'(x)}{M(x)} = P(x) \quad\Rightarrow\quad \ln M(x) = \int P(x)\,dx \quad\Rightarrow\quad M(x) = e^{\int P(x)\,dx}.$$

To verify, multiplying by $M(x)$ gives

$$M(x) y' + M(x) P(x) y = M(x) Q(x).$$

By applying the product rule in reverse, we see that the left-hand side can be expressed as a single derivative in $x$:

$$M(x) y' + M(x) P(x) y = M(x) y' + M'(x) y = \frac{d}{dx}\left(M(x) y\right).$$

We use this fact to simplify our expression to

$$\frac{d}{dx}\left(M(x) y\right) = M(x) Q(x).$$

Integrating both sides with respect to $x$ gives

$$e^{\int P(x)\,dx} y = \int Q(x)\, e^{\int P(x)\,dx}\,dx + C,$$

where $C$ is a constant.

Moving the exponential to the right-hand side, the general solution to the ordinary differential equation is:

$$y = e^{-\int P(x)\,dx} \left(\int Q(x)\, e^{\int P(x)\,dx}\,dx + C\right).$$

In the case of a homogeneous differential equation, $Q(x) = 0$, and the general solution to the ordinary differential equation is:

$$y = C e^{-\int P(x)\,dx}.$$

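As a sanity check of this formula, the following Python sketch uses the third-party SymPy library to build the integrating factor and the general solution for one illustrative choice of coefficients ($P(x) = 2x$ and $Q(x) = x$ are assumptions for the example, not taken from the text), then verifies that the result satisfies $y' + P(x) y = Q(x)$:

```python
import sympy as sp

x, C = sp.symbols('x C')
P = 2*x   # assumed example coefficient
Q = x     # assumed example right-hand side

# Integrating factor M(x) = e^(∫ P dx)
M = sp.exp(sp.integrate(P, x))

# General solution y = e^(-∫ P dx) (∫ Q M dx + C)
y = sp.exp(-sp.integrate(P, x)) * (sp.integrate(Q * M, x) + C)

# Substitute back into y' + P y - Q; this should simplify to 0
residual = sp.simplify(sp.diff(y, x) + P*y - Q)
print(residual)
```

Because $C$ is left symbolic, the check confirms the entire one-parameter family of solutions at once.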
For example, consider the differential equation

$$y' - \frac{2y}{x} = 0.$$

We can see that in this case $P(x) = \frac{-2}{x}$, so

$$M(x) = e^{\int \frac{-2}{x}\,dx} = e^{-2 \ln x} = x^{-2} = \frac{1}{x^2}.$$

Multiplying both sides by $M(x)$ we obtain

$$\frac{y'}{x^2} - \frac{2y}{x^3} = 0.$$

The above equation can be rewritten as

$$\frac{d}{dx}\left(\frac{y}{x^2}\right) = 0.$$

By integrating both sides with respect to $x$ we obtain

$$\frac{y}{x^2} = C$$

or

$$y = C x^2.$$

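This worked example can be verified mechanically. A minimal sketch using the third-party SymPy library (assuming $x > 0$ so the logarithm inside the integrating factor is defined):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C = sp.symbols('C')

# Integrating factor M(x) = e^(∫ -2/x dx), expected to equal 1/x^2
M = sp.exp(sp.integrate(-2/x, x))

# Candidate solution y = C x^2 from the text
y = C * x**2

# Residual of y' - 2y/x = 0; should simplify to 0
residual = sp.simplify(sp.diff(y, x) - 2*y/x)
print(sp.simplify(M), residual)
```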
The same result may be achieved using the following approach:

$$\frac{y'}{x^2} - \frac{2y}{x^3} = \frac{x^2 y' - 2xy}{x^4} = 0.$$

Reversing the quotient rule gives

$$\left(\frac{y}{x^2}\right)' = 0$$

or

$$\frac{y}{x^2} = C$$

or

$$y = C x^2,$$

where $C$ is a constant.

Solving second order linear ordinary differential equations

The method of integrating factors for first order equations can be naturally extended to second order equations as well. The main goal in solving first order equations was to find an integrating factor $M(x)$ such that multiplying $y' + P(x) y$ by it would yield $\left(M(x) y\right)'$, after which subsequent integration and division by $M(x)$ would yield $y$. For second order linear differential equations, if we want $M(x) = e^{\frac{1}{2} \int p(x)\,dx}$ to work as an integrating factor, then

$$\left(M(x) y\right)'' = M(x) \left(y'' + p(x) y' + \left(\frac{p(x)^2}{4} + \frac{p'(x)}{2}\right) y\right).$$

This implies that a second order equation must be exactly in the form

$$y'' + p(x) y' + \left(\frac{p(x)^2}{4} + \frac{p'(x)}{2}\right) y = h(x)$$

for the integrating factor to be usable.

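The expansion of $\left(M(x) y\right)''$ above can be checked symbolically for a generic coefficient $p(x)$. A sketch using the third-party SymPy library, which leaves $\int p\,dx$ as an unevaluated integral and differentiates through it:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)
y = sp.Function('y')(x)

# M = e^((1/2) ∫ p dx); the integral stays unevaluated but differentiates correctly
M = sp.exp(sp.integrate(p, x) / 2)

lhs = sp.diff(M * y, x, 2)                                 # (M y)''
rhs = M * (sp.diff(y, x, 2) + p * sp.diff(y, x)
           + (p**2/4 + sp.diff(p, x)/2) * y)               # claimed expansion

residual = sp.simplify(sp.expand(lhs - rhs))
print(residual)
```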
Example 1

For example, the differential equation

$$y'' + 2x y' + \left(x^2 + 1\right) y = 0$$

can be solved exactly with integrating factors. The appropriate $M(x)$ can be deduced by examining the $y'$ term. In this case, $p(x) = 2x$, so $M(x) = e^{\frac{1}{2} \int 2x\,dx} = e^{x^2/2}$. After examining the $y$ term, we see that we do in fact have $\frac{p(x)^2}{4} + \frac{p'(x)}{2} = x^2 + 1$, so we will multiply all terms by the integrating factor $e^{x^2/2}$. This gives us

$$e^{x^2/2} y'' + 2x e^{x^2/2} y' + \left(x^2 + 1\right) e^{x^2/2} y = 0,$$

which can be rearranged to give

$$\left(e^{x^2/2} y\right)'' = 0.$$

Integrating twice yields

$$e^{x^2/2} y = c_1 x + c_2.$$

Dividing by the integrating factor gives:

$$y = \left(c_1 x + c_2\right) e^{-x^2/2}.$$

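To confirm Example 1, substitute the solution back into the original equation. A minimal sketch using the third-party SymPy library (an assumed tool, not part of the text):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Solution obtained above via the integrating factor e^(x^2/2)
y = (c1*x + c2) * sp.exp(-x**2 / 2)

# Residual of y'' + 2x y' + (x^2 + 1) y = 0; should simplify to 0
residual = sp.simplify(sp.diff(y, x, 2) + 2*x*sp.diff(y, x) + (x**2 + 1)*y)
print(residual)
```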
Example 2

A slightly less obvious application of second order integrating factors involves the following differential equation:

$$y'' + 2 \cot(x)\, y' - y = 1.$$

At first glance, this is clearly not in the form needed for second order integrating factors. We have a $2 \cot(x)$ term in front of $y'$ but no obvious term of the form $\frac{p(x)^2}{4} + \frac{p'(x)}{2}$ in front of $y$. However, with $p(x) = 2 \cot(x)$,

$$\frac{p(x)^2}{4} + \frac{p'(x)}{2} = \cot^2(x) - \csc^2(x),$$

and from the Pythagorean identity relating cotangent and cosecant,

$$\cot^2(x) - \csc^2(x) = -1,$$

so we actually do have the required term in front of $y$ and can use integrating factors.

Multiplying each term by $M(x) = e^{\frac{1}{2} \int 2 \cot(x)\,dx} = e^{\ln \sin(x)} = \sin(x)$ gives

$$\sin(x)\, y'' + 2 \cos(x)\, y' - \sin(x)\, y = \sin(x),$$

which rearranged is

$$\left(\sin(x)\, y\right)'' = \sin(x).$$

Integrating twice gives

$$\sin(x)\, y = -\sin(x) + c_1 x + c_2.$$

Finally, dividing by the integrating factor gives

$$y = \frac{c_1 x + c_2}{\sin(x)} - 1.$$

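As with Example 1, the result of Example 2 can be checked by substitution. A sketch using the third-party SymPy library:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Solution obtained above via the integrating factor sin(x)
y = (c1*x + c2) / sp.sin(x) - 1

# Residual of y'' + 2 cot(x) y' - y = 1; should simplify to 0
residual = sp.simplify(sp.diff(y, x, 2) + 2*sp.cot(x)*sp.diff(y, x) - y - 1)
print(residual)
```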
Solving nth order linear differential equations

Integrating factors can be extended to any order, though the form of the equation needed to apply them gets more and more specific as the order increases, making them less useful for orders 3 and above. The general idea is to differentiate the function $M(x) y$ $n$ times for an $n$th order differential equation and combine like terms. This will yield an equation in the form

$$\left(M(x) y\right)^{(n)} = M(x) \left(y^{(n)} + \cdots\right).$$

If an $n$th order equation matches the form that is obtained after differentiating $n$ times, one can multiply all terms by the integrating factor and integrate $n$ times, dividing by the integrating factor on both sides to achieve the final result.

Example

A third order usage of integrating factors, with $M(x) = e^{\frac{1}{3} \int p(x)\,dx}$, gives

$$\left(M(x) y\right)''' = M(x) \left(y''' + p(x) y'' + \left(p'(x) + \frac{p(x)^2}{3}\right) y' + \left(\frac{p''(x)}{3} + \frac{2 p(x) p'(x)}{9} + \frac{p(x)^3}{27}\right) y\right),$$

thus requiring our equation to be in the form

$$y''' + p(x) y'' + \left(p'(x) + \frac{p(x)^2}{3}\right) y' + \left(\frac{p''(x)}{3} + \frac{2 p(x) p'(x)}{9} + \frac{p(x)^3}{27}\right) y = h(x).$$

For example, in the differential equation

$$y''' + 3x^2 y'' + \left(3x^4 + 6x\right) y' + \left(x^6 + 4x^3 + 2\right) y = 0$$

we have $p(x) = 3x^2$, so our integrating factor is $e^{\frac{1}{3} \int 3x^2\,dx} = e^{x^3/3}$. Multiplying through and rearranging gives

$$\left(e^{x^3/3} y\right)''' = 0.$$

Integrating thrice and dividing by the integrating factor yields

$$y = \left(c_1 x^2 + c_2 x + c_3\right) e^{-x^3/3}.$$
