The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
The HAM was first devised in 1992 by Liao Shijun of Shanghai Jiaotong University in his PhD dissertation [1] and further modified [2] in 1997 to introduce a non-zero auxiliary parameter, referred to as the convergence-control parameter, c0, to construct a homotopy on a differential system in general form. [3] The convergence-control parameter is a non-physical variable that provides a simple way to verify and enforce convergence of a solution series. The capability of the HAM to naturally show convergence of the series solution is unusual in analytical and semi-analytic approaches to nonlinear partial differential equations.
The HAM distinguishes itself from various other analytical methods in four important aspects. First, it is a series expansion method that is not directly dependent on small or large physical parameters. Thus, it is applicable for not only weakly but also strongly nonlinear problems, going beyond some of the inherent limitations of the standard perturbation methods. Second, the HAM is a unified method for the Lyapunov artificial small parameter method, the delta expansion method, the Adomian decomposition method, [4] and the homotopy perturbation method. [5] [6] The greater generality of the method often allows for strong convergence of the solution over larger spatial and parameter domains. Third, the HAM gives excellent flexibility in the expression of the solution and how the solution is explicitly obtained. It provides great freedom to choose the basis functions of the desired solution and the corresponding auxiliary linear operator of the homotopy. Finally, unlike the other analytic approximation techniques, the HAM provides a simple way to ensure the convergence of the solution series.
The homotopy analysis method can also be combined with other techniques employed in nonlinear differential equations, such as spectral methods [7] and Padé approximants. It may further be combined with computational methods, such as the boundary element method, to allow a linear method to solve nonlinear systems. Unlike the numerical technique of homotopy continuation, the homotopy analysis method is an analytic approximation method rather than a discrete computational method. Further, the HAM uses the homotopy parameter only at a theoretical level to demonstrate that a nonlinear system may be split into an infinite set of linear systems that are solved analytically, while continuation methods require solving a discrete linear system as the homotopy parameter is varied.
In the last twenty years, the HAM has been applied to solve a growing number of nonlinear ordinary/partial differential equations in science, finance, and engineering. [8] [9] For example, multiple steady-state resonant waves in deep and finite water depth [10] were found with the wave resonance criterion of arbitrary number of traveling gravity waves; this agreed with Phillips' criterion for four waves with small amplitude. Further, a unified wave model applied with the HAM, [11] admits not only the traditional smooth progressive periodic/solitary waves, but also the progressive solitary waves with peaked crest in finite water depth. This model shows peaked solitary waves are consistent solutions along with the known smooth ones. Additionally, the HAM has been applied to many other nonlinear problems such as nonlinear heat transfer, [12] the limit cycle of nonlinear dynamic systems, [13] the American put option, [14] the exact Navier–Stokes equation, [15] the option pricing under stochastic volatility, [16] the electrohydrodynamic flows, [17] the Poisson–Boltzmann equation for semiconductor devices, [18] and others.
Consider a general nonlinear differential equation

$$\mathcal{N}[u(x)] = 0,$$

where $\mathcal{N}$ is a nonlinear operator. Let $\mathcal{L}$ denote an auxiliary linear operator, $u_0(x)$ an initial guess of $u(x)$, and $c_0$ a constant (called the convergence-control parameter), respectively. Using the embedding parameter $q \in [0,1]$ from homotopy theory, one may construct a family of equations,

$$(1-q)\,\mathcal{L}\bigl[U(x;q) - u_0(x)\bigr] = q\,c_0\,\mathcal{N}\bigl[U(x;q)\bigr],$$
called the zeroth-order deformation equation, whose solution varies continuously with respect to the embedding parameter $q \in [0,1]$. At $q = 0$ it reduces to the linear equation

$$\mathcal{L}\bigl[U(x;0) - u_0(x)\bigr] = 0,$$

with known initial guess $U(x;0) = u_0(x)$, while at $q = 1$ it is equivalent to the original nonlinear equation $\mathcal{N}[u(x)] = 0$, i.e. $U(x;1) = u(x)$. Therefore, as $q$ increases from 0 to 1, the solution $U(x;q)$ of the zeroth-order deformation equation varies (or deforms) from the chosen initial guess $u_0(x)$ to the solution $u(x)$ of the considered nonlinear equation.
Expanding $U(x;q)$ in a Taylor series about $q = 0$, we have the homotopy-Maclaurin series

$$U(x;q) = u_0(x) + \sum_{m=1}^{\infty} u_m(x)\, q^m, \qquad u_m(x) = \frac{1}{m!} \left. \frac{\partial^m U(x;q)}{\partial q^m} \right|_{q=0}.$$

Assuming that the so-called convergence-control parameter $c_0$ of the zeroth-order deformation equation is properly chosen so that the above series converges at $q = 1$, we have the homotopy-series solution

$$u(x) = u_0(x) + \sum_{m=1}^{\infty} u_m(x).$$
From the zeroth-order deformation equation, one can directly derive the governing equation of $u_m(x)$,

$$\mathcal{L}\bigl[u_m(x) - \chi_m\, u_{m-1}(x)\bigr] = c_0\, R_m(x),$$

called the mth-order deformation equation, where $\chi_1 = 0$ and $\chi_k = 1$ for $k > 1$, and the right-hand side

$$R_m(x) = \frac{1}{(m-1)!} \left. \frac{\partial^{m-1} \mathcal{N}[U(x;q)]}{\partial q^{m-1}} \right|_{q=0}$$

is dependent only upon the known results $u_0, u_1, \ldots, u_{m-1}$ and can be obtained easily using computer algebra software. In this way, the original nonlinear equation is transformed into an infinite sequence of linear ones, without the assumption of any small/large physical parameters.
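The recursion described above is straightforward to carry out with a computer algebra system. The following is a minimal sketch (an illustrative example, not taken from the article or from any HAM software package) applying it to the hypothetical test problem u′(t) + u(t)² = 0 with u(0) = 1, whose exact solution is 1/(1 + t), using the auxiliary linear operator L[u] = u′ and the initial guess u₀(t) = 1:

```python
import sympy as sp

t, c0, q = sp.symbols('t c0 q')

# Nonlinear operator N[u] = u' + u^2 (hypothetical test problem with
# exact solution u(t) = 1/(1+t) for the initial condition u(0) = 1)
def N(u):
    return sp.diff(u, t) + u**2

u = [sp.Integer(1)]   # initial guess u0(t) = 1
M = 4                 # truncation order of the homotopy series

for m in range(1, M + 1):
    # Partial sum U(t; q) = sum_{k<m} u_k(t) q^k; higher-order terms
    # cannot affect the (m-1)th q-derivative at q = 0
    U = sum(u[k] * q**k for k in range(m))
    # R_m = (1/(m-1)!) * d^{m-1} N[U] / dq^{m-1} evaluated at q = 0
    Rm = sp.diff(N(U), q, m - 1).subs(q, 0) / sp.factorial(m - 1)
    chi = 0 if m == 1 else 1
    # mth-order deformation equation with L[u] = u':
    # u_m' - chi_m * u_{m-1}' = c0 * R_m
    rhs = chi * sp.diff(u[m - 1], t) + c0 * Rm
    # Integrating from 0 enforces the homogeneous condition u_m(0) = 0
    um = sp.integrate(rhs, (t, 0, t))
    u.append(sp.expand(um))

series = sp.expand(sum(u))
print(series.subs(c0, -1))   # 1 - t + t**2 - t**3 + t**4
```

With the classical choice c₀ = −1 and this particular linear operator, the recursion reproduces the Maclaurin series 1 − t + t² − t³ + ⋯ of the exact solution; other values of c₀ deform the partial sums and can enlarge the region of convergence.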
Since the HAM is based on a homotopy, one has great freedom to choose the initial guess $u_0(x)$, the auxiliary linear operator $\mathcal{L}$, and the convergence-control parameter $c_0$ in the zeroth-order deformation equation. Thus, the HAM gives the practitioner freedom to choose the equation type of the high-order deformation equations and the base functions of their solution. Once the deformation equations have been solved for the chosen initial guess and linear operator, the optimal value of $c_0$ is determined by minimizing the squared residual error of the governing equations and/or boundary conditions. The convergence-control parameter $c_0$ thus provides a simple way to guarantee the convergence of the homotopy series solution and differentiates the HAM from other analytic approximation methods. The method overall gives a useful generalization of the concept of homotopy.
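To illustrate how the convergence-control parameter can be chosen in practice, the squared residual of a low-order HAM approximation can be evaluated as a function of c₀ and minimized numerically. The sketch below is a hypothetical example: it hard-codes the second-order HAM approximation of the test problem u′ + u² = 0, u(0) = 1, and scans a grid of c₀ values.

```python
import numpy as np
import sympy as sp

t, c0 = sp.symbols('t c0')

# Second-order HAM approximation u0 + u1 + u2 of the hypothetical test
# problem u' + u^2 = 0, u(0) = 1 (with L[u] = u', u0(t) = 1), kept
# symbolic in the convergence-control parameter c0
approx = 1 + c0*t + (c0*t + c0**2*t + c0**2*t**2)

# Squared residual of the governing equation, integrated over t in [0, 1]
residual = sp.diff(approx, t) + approx**2
E = sp.integrate(sp.expand(residual**2), (t, 0, 1))

# Scan a grid of c0 values and keep the minimizer of the squared residual
E_num = sp.lambdify(c0, E, 'numpy')
grid = np.linspace(-2.0, -0.1, 381)
best = grid[np.argmin(E_num(grid))]
print(best)
```

Note that at a fixed low truncation order the minimizer need not equal −1; it depends on the truncation order and on the chosen linear operator. Plotting E as a function of c₀ is a common way to locate a suitable convergence-control parameter.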
The HAM is an analytic approximation method designed for the computer era, with the goal of "computing with functions instead of numbers." In conjunction with a computer algebra system such as Mathematica or Maple, one can gain analytic approximations of a highly nonlinear problem to arbitrarily high order by means of the HAM in only a few seconds. Inspired by the recent successful applications of the HAM in different fields, a Mathematica package based on the HAM, called BVPh, has been made available online for solving nonlinear boundary-value problems. BVPh is a solver package for highly nonlinear ODEs with singularities, multiple solutions, and multipoint boundary conditions in either a finite or an infinite interval, and includes support for certain types of nonlinear PDEs. [8] Another HAM-based Mathematica code, APOh, has been produced to solve for an explicit analytic approximation of the optimal exercise boundary of an American put option, and is also available online.
The HAM has recently been reported to be useful for obtaining analytical solutions for nonlinear frequency response equations. Such solutions are able to capture various nonlinear behaviors such as hardening-type, softening-type or mixed behaviors of the oscillator. [19] [20] These analytical equations are also useful in prediction of chaos in nonlinear systems. [21]