Adomian decomposition method

The Adomian decomposition method (ADM) is a semi-analytical method for solving ordinary and partial nonlinear differential equations. The method was developed from the 1970s to the 1990s by George Adomian, chair of the Center for Applied Mathematics at the University of Georgia. [1] It is further extensible to stochastic systems by using the Itô integral. [2] The aim of the method is a unified theory for the solution of partial differential equations (PDE), an aim which has been superseded by the more general theory of the homotopy analysis method. [3] The crucial aspect of the method is the use of the "Adomian polynomials", which allow the nonlinear portion of the equation to be treated convergently without simply linearizing the system. These polynomials mathematically generalize the Maclaurin series to an expansion about an arbitrary external parameter, which gives the solution method more flexibility than a direct Taylor series expansion. [4]
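In the form in which the method is most often stated, the unknown u and the nonlinear term N(u) are expanded as

u = \sum_{n=0}^{\infty} u_n , \qquad N(u) = \sum_{n=0}^{\infty} A_n(u_0, u_1, \dots, u_n), \qquad A_n = \frac{1}{n!} \left[ \frac{d^n}{d\lambda^n} \, N\!\left( \sum_{k=0}^{\infty} \lambda^k u_k \right) \right]_{\lambda = 0} ,

where λ is a formal grouping parameter; this is the standard definition of the Adomian polynomials An used throughout the literature.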

Ordinary differential equations

The Adomian method is well suited to Cauchy problems, an important class of problems which includes initial-value problems.

Application to a first order nonlinear system

An example of an initial-value problem for an ordinary differential equation is the following:

To solve the problem, the highest-degree differential operator (written here as L) is put on the left-hand side, in the following way:

with L = d/dt and L−1 the corresponding integral operator from 0 to t. Now the solution is assumed to be an infinite series of contributions:

Substituting this series into the previous expression, we obtain:

Now we identify y0 with some explicit expression on the right, and each yi, i = 1, 2, 3, ..., with an expression on the right containing only terms of order lower than i. For instance:

In this way, any contribution can be explicitly calculated at any order. If we settle for the first four terms, the approximant is the following:
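As a concrete illustration of the scheme, the following sketch carries out the iteration for the hypothetical initial-value problem y' = −y2, y(0) = 1 (a test problem chosen only for this illustration), whose exact solution is 1/(1 + t); the Adomian polynomials of the nonlinear term y2 are generated with the standard λ-derivative rule.

# Adomian decomposition for the hypothetical test problem  y' = -y**2,  y(0) = 1
# (exact solution 1/(1 + t)).  A minimal sketch; the problem and all names are
# choices made for this illustration.
import sympy as sp

t, lam = sp.symbols('t lambda')

def f(u):
    return u**2                          # nonlinear term N(y) = y**2

N_TERMS = 5
y = [sp.Integer(1)]                      # y0 = the initial condition
for n in range(1, N_TERMS):
    # Adomian polynomial A_{n-1} from the standard lambda-derivative rule
    u_lam = sum(lam**k * y[k] for k in range(len(y)))
    A = sp.diff(f(u_lam), lam, n - 1).subs(lam, 0) / sp.factorial(n - 1)
    # y_n = -L^{-1} A_{n-1},  with  L^{-1} = integration from 0 to t
    y.append(sp.expand(-sp.integrate(A, (t, 0, t))))

print(sum(y))                            # 4th-order approximant: 1 - t + t**2 - t**3 + t**4
print(sp.series(1/(1 + t), t, 0, N_TERMS))   # agrees with the exact solution term by term

Each contribution yn reproduces the next term of the Taylor expansion of the exact solution, which is the behaviour described above.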

Application to the Blasius equation

A second example, with more complex boundary conditions, is the Blasius equation for the flow in a boundary layer:

With the following conditions at the boundaries:

The linear and non-linear operators are now called L and N, respectively. Then, the expression becomes:

and the solution may be expressed, in this case, in the following simple way:

where: If:

and:

The Adomian polynomials that linearize the non-linear term can be obtained systematically by using the following rule:

where:
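In the notation most commonly used in the ADM literature, writing the nonlinear term as f(u), the first polynomials generated by the standard rule are

A_0 = f(u_0), \qquad A_1 = u_1 f'(u_0), \qquad A_2 = u_2 f'(u_0) + \tfrac{1}{2} u_1^2 f''(u_0), \qquad A_3 = u_3 f'(u_0) + u_1 u_2 f''(u_0) + \tfrac{1}{6} u_1^3 f'''(u_0),

and so on; these expressions do not depend on the particular equation being solved.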

Boundary conditions must be applied, in general, at the end of each approximation. In this case, the integration constants must be grouped into three final independent constants. However, in our example, the three constants appear grouped from the beginning in the form shown in the formal solution above. After applying the first two boundary conditions we obtain the so-called Blasius series:

To obtain γ we have to apply the boundary condition at ∞, which may be done by writing the series as a Padé approximant:

where L = M. The limit of this expression at ∞ is aL/bM.

If we choose b0 = 1, M linear equations for the b coefficients are obtained:

Then, we obtain the a coefficients by means of the following sequence:

In our example:

which, when γ = 0.0408, becomes:

with the limit:

which is approximately equal to 1 (from boundary condition (3)) with an accuracy of 4/1000.
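The Padé step itself can be checked with a short computation. Because the coefficients of the Blasius series depend on the unknown constant γ, the sketch below uses a hypothetical stand-in function with a known limit at infinity; the function, the orders and all names are choices made for this illustration.

# Diagonal Pade approximant of a stand-in function with known limit 3 at infinity;
# for L = M the value at infinity is the ratio of the leading coefficients aL/bM.
import sympy as sp
from scipy.interpolate import pade

x = sp.symbols('x')
f = (2*x + 3*x**2) / (1 + x + x**2)           # known limit at infinity: 3

# first five Taylor coefficients c0..c4 (enough for a [2/2] approximant)
taylor_poly = sp.Poly(sp.series(f, x, 0, 5).removeO(), x)
coeffs = [float(c) for c in taylor_poly.all_coeffs()[::-1]]

p, q = pade(coeffs, 2)                        # denominator order M = 2
print(p.coeffs[0] / q.coeffs[0])              # ratio of leading coefficients: ~3.0

In the Blasius case the same ratio is set equal to 1, the value required by boundary condition (3), and this fixes γ.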

Partial differential equations

Application to a rectangular system with nonlinearity

One of the most frequent problems in physical sciences is to obtain the solution of a (linear or nonlinear) partial differential equation which satisfies a set of functional values on a rectangular boundary. An example is the following problem:

with the following boundary conditions defined on a rectangle:

This kind of partial differential equation appears frequently coupled with others in science and engineering. For instance, in the incompressible fluid flow problem, the Navier–Stokes equations must be solved in parallel with a Poisson equation for the pressure.

Decomposition of the system

Let us use the following notation for the problem (1):

where Lx and Ly are the second-derivative operators in x and y, and N is a non-linear operator.

The formal solution of (2) is:

Expanding now u as a set of contributions to the solution we have:

By substitution in (3) and making a one-to-one correspondence between the contributions on the left side and the terms on the right side we obtain the following iterative scheme:

where the couple {an(y), bn(y)} is the solution of the following system of equations:

here is the nth-order approximant to the solution and N u has been consistently expanded in Adomian polynomials:

where, in the example (1), f(u) = u2.

Here C(ν, n) are products (or sums of products) of ν components of u whose subscripts sum to n, divided by the factorial of the number of repeated subscripts. This is only a rule of thumb for ordering the decomposition systematically, ensuring that all the combinations that appear are eventually used.

The sum of the An is equal to a generalized Taylor series of f(u) about u0. [1]

For the example (1) the Adomian polynomials are:
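With the standard convention for the Adomian polynomials, the expansion of f(u) = u2 gives

A_0 = u_0^2, \qquad A_1 = 2 u_0 u_1, \qquad A_2 = 2 u_0 u_2 + u_1^2, \qquad A_3 = 2 u_0 u_3 + 2 u_1 u_2, \qquad A_4 = 2 u_0 u_4 + 2 u_1 u_3 + u_2^2, \; \dots

in agreement with the rule stated above: each An collects the products of components whose subscripts sum to n.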

Other choices are also possible for the expression of An.

Series solutions

Cherruault established that the series terms obtained by Adomian's method approach zero as 1/(mn)!, where m is the order of the highest linear differential operator. [5] With this method the solution can be found by systematically integrating along either of the two directions: in the x-direction we would use expression (3); in the alternative y-direction we would use the following expression:

where c(x) and d(x) are obtained from the boundary conditions at y = −yl and y = yl:

If we call the two respective solutions x-partial solution and y-partial solution, one of the most interesting consequences of the method is that the x-partial solution uses only the two boundary conditions (1-a) and the y-partial solution uses only the conditions (1-b).

Thus, one of the two sets of boundary functions {f1, f2} or {g1, g2} is redundant, and this implies that a partial differential equation with boundary conditions on a rectangle cannot have arbitrary boundary conditions on the borders, since the conditions at x = x1, x = x2 must be consistent with those imposed at y = y1 and y = y2.

An example to clarify this point is the solution of the Poisson problem with the following boundary conditions:

By using Adomian's method and a symbolic processor (such as Mathematica or Maple) it is easy to obtain the third-order approximant to the solution. This approximant has an error lower than 5×10−16 at any point, as can be verified by substituting it into the initial problem and displaying the absolute value of the residual as a function of (x, y). [6]

The solution at y = -0.25 and y = 0.25 is given by specific functions that in this case are:

and g2(x) = g1(x) respectively.

If a (double) integration is now performed in the y-direction using these two boundary functions, the same solution is obtained; it satisfies u(x=0, y) = 0 and u(x=0.5, y) = 0 and cannot satisfy any other condition on those borders.

Some people are surprised by these results; it seems strange that not all initial-boundary conditions must be explicitly used to solve a differential system. However, it is a well-established fact that any elliptic equation has one and only one solution for any functional conditions on the four sides of a rectangle, provided there is no discontinuity on the edges. The cause of the misconception is that scientists and engineers normally think of a boundary condition in terms of weak convergence in a Hilbert space (the distance to the boundary function is small enough for practical purposes). In contrast, Cauchy problems impose a point-to-point convergence to a given boundary function and to all its derivatives (and this is a quite strong condition). For the former, a function satisfies a boundary condition when the area (or another functional distance) between it and the true function imposed on the boundary is as small as desired; for the latter, however, the function must tend to the true function imposed at each and every point of the interval.

The Poisson problem discussed above does not have a solution for arbitrary functional boundary conditions f1, f2, g1, g2; however, given f1, f2 it is always possible to find boundary functions g1*, g2* as close to g1, g2 as desired (in the sense of weak convergence) for which the problem has a solution. This property makes it possible to solve Poisson's and many other problems with arbitrary boundary conditions, but never with analytic functions exactly specified on the boundaries. The reader can verify the high sensitivity of PDE solutions to small changes in the boundary conditions by solving this problem integrating along the x-direction with boundary functions that are slightly different, even though visually indistinguishable. For instance, the solution with the boundary conditions:

at x = 0 and x = 0.5, and the solution with the boundary conditions:

at x = 0 and x = 0.5, produce lateral functions with a different sign of convexity, even though both boundary functions are visually indistinguishable.

Solutions of elliptic problems and other partial differential equations are highly sensitive to small changes in the boundary function imposed when only two sides are used. This sensitivity is not easily compatible with models that are supposed to represent real systems, which are described by measurements containing experimental errors and are normally expressed as initial-boundary value problems in a Hilbert space.

Improvements to the decomposition method

At least three methods have been reported [6] [7] [8] to obtain the boundary functions g1*, g2* that are compatible with any lateral set of conditions {f1, f2} imposed. This makes it possible to find the analytical solution of any PDE boundary problem on a closed rectangle with the required accuracy, thus allowing a wide range of problems to be solved that the standard Adomian method could not address.

The first one perturbs the two boundary functions imposed at x = 0 and x = x1 (condition 1-a) with an Nth-order polynomial in y: p1, p2, in such a way that f1' = f1 + p1, f2' = f2 + p2, where the norm of the two perturbation functions is smaller than the accuracy needed at the boundaries. These p1, p2 depend on a set of polynomial coefficients ci, i = 1, ..., N. Then the Adomian method is applied, and functions are obtained at the four boundaries that depend on the set of ci, i = 1, ..., N. Finally, a boundary function F(c1, c2, ..., cN) is defined as the sum of these four functions, and the distance between F(c1, c2, ..., cN) and the real boundary functions ((1-a) and (1-b)) is minimized. The problem has thus been reduced to the global minimization of the function F(c1, c2, ..., cN), which has a global minimum for some combination of the parameters ci, i = 1, ..., N. This minimum may be found by means of a genetic algorithm or by some other optimization method, such as the one proposed by Cherruault (1999). [9]
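The minimization step can be sketched as follows. The function boundary_trace below is a hypothetical, cheap stand-in for "apply the Adomian decomposition with the perturbed lateral data and read off the resulting values on the remaining boundaries"; it is not the actual solver, and all names and data are choices made for this sketch.

# Toy sketch of the boundary-matching idea: perturb the imposed lateral data with a
# polynomial p(y; c) and choose the coefficients c that minimize the mismatch on the
# remaining boundaries.
import numpy as np
from scipy.optimize import minimize

y = np.linspace(-0.5, 0.5, 101)
g_imposed = np.cos(np.pi * y)                 # hypothetical data wanted on the y-boundaries

def boundary_trace(c):
    # Stand-in for the boundary values produced by the decomposition when the
    # lateral data are perturbed by the polynomial p(y; c).
    p = np.polyval(c, y)                      # perturbation polynomial p(y; c)
    return 0.9 * g_imposed + p                # toy response to the perturbation

def F(c):
    # distance between the produced boundary values and the imposed ones
    return np.sum((boundary_trace(c) - g_imposed) ** 2)

res = minimize(F, x0=np.zeros(4))             # N = 4 polynomial coefficients
print(res.x, F(res.x))                        # optimal coefficients and residual mismatch

In the actual method the call inside F is a full decomposition run, and a global optimizer (for example a genetic algorithm) may be preferred when F has several local minima.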

A second method to obtain analytic approximants of initial-boundary problems is to combine Adomian decomposition with spectral methods. [7]

Finally, the third method proposed by García-Olivares is based on imposing analytic solutions at the four boundaries, but modifying the original differential operator in such a way that it differs from the original one only in a narrow region close to the boundaries, where it forces the solution to satisfy exactly the analytic conditions at the four boundaries. [8]

Integral equations

The Adomian decomposition method may also be applied to linear and nonlinear integral equations to obtain solutions. [10] This reflects the fact that many differential equations can be converted into integral equations. [10]

Adomian Decomposition Method

The Adomian decomposition method for a nonhomogeneous Fredholm integral equation of the second kind goes as follows: [10]

Given an integral equation of the form:

We assume we may express the solution in series form:

Plugging the series form into the integral equation then yields:

Assuming that the sum converges absolutely to the solution, we may interchange the sum and the integral as follows:

Expanding the sum on both sides yields:

Hence we may associate each term of the series in the following recurrent manner:

which gives the solution in the series form above.
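In the notation most often used for this scheme (with forcing term f, kernel K and parameter λ, symbols which may differ from those of the displays above), the recursion reads

u(x) = f(x) + \lambda \int_a^b K(x,t)\, u(t)\, dt, \qquad u_0(x) = f(x), \qquad u_{n+1}(x) = \lambda \int_a^b K(x,t)\, u_n(t)\, dt, \quad n \ge 0,

and the solution is the sum u(x) = \sum_{n=0}^{\infty} u_n(x).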

Example

Given the Fredholm integral equation:

Since , we can set:

...

Hence the solution may be written as:

Since this is a telescoping series, we can see that the terms after the leading one cancel and may be regarded as "noise". [10] Thus, the solution becomes:
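The same recursion is easy to run symbolically. The sketch below uses a hypothetical Fredholm equation chosen only for this illustration, u(x) = e^x − x + x ∫_0^1 t u(t) dt, whose exact solution is u(x) = e^x; the −x part of u0 is the "noise" that the later contributions progressively cancel.

# ADM recursion for the hypothetical Fredholm equation of the second kind
#     u(x) = exp(x) - x + x * Integral_0^1 t*u(t) dt
# (exact solution u(x) = exp(x)); a minimal sketch with an assumed kernel x*t.
import sympy as sp

x, t = sp.symbols('x t')
u0 = sp.exp(x) - x                    # u_0(x) = f(x), the forcing term
K = x * t                             # separable kernel K(x, t)

terms = [u0]
for _ in range(4):                    # u_{k+1}(x) = Integral_0^1 K(x, t) u_k(t) dt
    u_next = sp.integrate(K * terms[-1].subs(x, t), (t, 0, 1))
    terms.append(sp.simplify(u_next))

print(terms)                          # [exp(x) - x, 2*x/3, 2*x/9, 2*x/27, 2*x/81]
print(sp.simplify(sum(terms)))        # exp(x) - x/81: the -x "noise" in u0 is
                                      # progressively cancelled by the later terms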

Animated plots of Adomian decomposition solutions: a cosine-type solution of the Dym equation, a tanh-type solution of the Burgers–Fisher equation, and a sine-type solution of the Kuramoto–Sivashinsky equation.


References

  1. Adomian, G. (1994). Solving Frontier Problems of Physics: The Decomposition Method. Kluwer Academic Publishers.
  2. Adomian, G. (1986). Nonlinear Stochastic Operator Equations. Kluwer Academic Publishers. ISBN 978-0-12-044375-8.
  3. Liao, S.J. (2012). Homotopy Analysis Method in Nonlinear Differential Equations. Berlin & Beijing: Springer & Higher Education Press. ISBN 978-3642251313.
  4. Wazwaz, Abdul-Majid (2009). Partial Differential Equations and Solitary Waves Theory. Higher Education Press. p. 15. ISBN 978-90-5809-369-1.
  5. Cherruault, Y. (1989). "Convergence of Adomian's Method". Kybernetes. 18 (2): 31–38. doi:10.1108/eb005812.
  6. García-Olivares, A. (2003). "Analytic solution of partial differential equations with Adomian's decomposition". Kybernetes. 32 (3): 354–368. doi:10.1108/03684920310458584.
  7. García-Olivares, A. (2002). "Analytical approximants of time-dependent partial differential equations with tau methods". Mathematics and Computers in Simulation. 61: 35–45. doi:10.1016/s0378-4754(02)00133-7. hdl:10261/51182.
  8. García-Olivares, A. (2003). "Analytical solution of nonlinear partial differential equations of physics". Kybernetes. 32 (4): 548–560. doi:10.1108/03684920310463939. hdl:10261/51176.
  9. Cherruault, Y. (1999). Optimisation: Méthodes locales et globales. Presses Universitaires de France. ISBN 978-2-13-049910-7.
  10. Wazwaz, Abdul-Majid (2015). A First Course in Integral Equations. World Scientific Publishing Company. ISBN 978-981-4675-16-1. OCLC 1020691303.