Method of dominant balance

In mathematics, the method of dominant balance approximates the solution to an equation by solving a simplified form of the equation containing two or more of the equation's terms that most influence (dominate) the solution, while excluding terms that contribute only small modifications to this approximate solution. Following an initial solution, iterating the procedure may generate additional terms of an asymptotic expansion, providing a more accurate solution. [1] [2]
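
As an elementary illustration (an assumed example, not one drawn from this article's references), consider the equation $\varepsilon x^{2} + x - 1 = 0$ with a small parameter $\varepsilon > 0$. Balancing the last two terms gives the reduced equation $x - 1 = 0$ and the approximate root $x \approx 1$; the excluded term $\varepsilon x^{2} \approx \varepsilon$ is then indeed a small correction, so the balance is consistent. The same assumed equation is reused in the code sketches below.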

An early example of the dominant balance method is the Newton polygon method. Newton developed this method to find an explicit approximation for a function defined implicitly by an algebraic equation. He expressed the function as proportional to the independent variable raised to a power, retained only the lowest-degree polynomial terms (dominant terms) arising from this approximation, and solved this simplified reduced equation to obtain an approximate solution. [3] [4] Dominant balance has a broad range of applications, including the solution of differential equations arising in fluid mechanics, plasma physics, turbulence, combustion, nonlinear optics, geophysical fluid dynamics, and neuroscience. [5] [6]

Asymptotic relations

The functions $f(x)$ and $g(x)$ of the parameter or independent variable $x$ are such that the quotient $f(x)/g(x)$ has a limit as $x$ approaches $x_0$.

The function $f(x)$ is much less than $g(x)$ as $x$ approaches $x_0$, written $f \ll g$, if the limit of the quotient is zero as $x$ approaches $x_0$: [7]

$\lim_{x \to x_0} \dfrac{f(x)}{g(x)} = 0$.

The relation $f$ is of lower order than $g$ as $x$ approaches $x_0$, written using little-o notation as $f = o(g)$, is identical to the relation $f$ is much less than $g$ as $x$ approaches $x_0$. [7]

The function $f(x)$ is equivalent to $g(x)$ as $x$ approaches $x_0$, written $f \sim g$, if the limit of the quotient is one as $x$ approaches $x_0$: [7]

$\lim_{x \to x_0} \dfrac{f(x)}{g(x)} = 1$.

This result indicates that the zero function, $g(x) \equiv 0$, can never be equivalent to any other function. [7]
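
As elementary illustrations (assumed examples, not drawn from the cited references): $x^{2} \ll x$ as $x \to 0$, while $x \ll x^{2}$ as $x \to \infty$; and $\sinh x \sim \tfrac{1}{2}e^{x}$ as $x \to \infty$, since the quotient of the two functions approaches one.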

Asymptotically equivalent functions remain asymptotically equivalent under integration if requirements related to convergence are met. There are more specific requirements for asymptotically equivalent functions to remain asymptotically equivalent under differentiation. [8]
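
For example (an illustration not taken from the cited references), $x + \sin x \sim x$ as $x \to \infty$, but the derivatives $1 + \cos x$ and $1$ are not asymptotically equivalent, because their quotient has no limit as $x \to \infty$.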

Equation properties

Dominant balance applies to an equation with at least three terms that contain an unknown function.

Balancing two terms means making them equal and asymptotically equivalent by finding the function that solves the reduced equation formed from those two terms alone. [9]

A solution is consistent if the two balanced terms are dominant; dominant means that all other equation terms are much less than the balanced terms as the variable approaches the limit point. [10] [11] A consistent solution that balances two equation terms may provide an accurate approximation to the full equation's solution for values near the limit point. [11] [12] Balancing different pairs of terms of the same equation may generate distinct approximate solutions, e.g. inner- and outer-layer solutions. [5]

Substituting a scaled function into the equation and taking the limit as the small parameter approaches zero may generate simplified reduced equations for distinct values of the scaling exponent. [9] These simplified equations are called distinguished limits, and they identify the balanced dominant equation terms. [13] Scaled functions are often used when attempting to balance an equation term containing one power of a small parameter against a term containing a different power. Scaled functions are applied to differential equations when the small quantity is an equation parameter, not the differential equation's independent variable. [5] The Kruskal–Newton diagram facilitates identifying the scaled functions needed for dominant balance of algebraic and differential equations. [5]
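
A minimal sketch of this scaling procedure, applied to the assumed example $\varepsilon x^{2} + x - 1 = 0$ from the lead (the equation, the symbol names, and the use of SymPy are illustrative assumptions, not taken from the cited references):

```python
import sympy as sp
from itertools import combinations

eps = sp.symbols('epsilon', positive=True)
x, p = sp.symbols('x p')

# Assumed illustrative equation: eps*x**2 + x - 1 = 0.
terms = [eps * x**2, x, -sp.Integer(1)]

# Under the scaling x = X * eps**p with X of order one, a term
# c * eps**a * x**n scales like eps**(a + n*p); record that exponent.
def scaling_exponent(term):
    a = sp.Poly(term, eps).degree()   # power of eps in the term
    n = sp.Poly(term, x).degree()     # power of x in the term
    return a + n * p

exponents = [scaling_exponent(t) for t in terms]

# A distinguished limit balances two terms (equal exponents of eps) while
# every neglected term is of higher order (larger exponent) as eps -> 0.
for (i, ei), (j, ej) in combinations(enumerate(exponents), 2):
    solutions = sp.solve(sp.Eq(ei, ej), p)
    if not solutions:
        continue
    p_val = solutions[0]
    balanced = ei.subs(p, p_val)
    neglected = [ek.subs(p, p_val)
                 for k, ek in enumerate(exponents) if k not in (i, j)]
    consistent = all(o > balanced for o in neglected)
    print(f"balance terms {i + 1} and {j + 1}: p = {p_val}, "
          f"{'consistent' if consistent else 'inconsistent'}")
```

Running the sketch reports that balancing the first two terms ($p = -1$, the singular root) and the last two terms ($p = 0$, the regular root) is consistent, while balancing the first and third terms ($p = -1/2$) is not.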

For differential equations whose solutions have an irregular singularity, the leading behavior is the first term of an asymptotic series solution that remains as the independent variable approaches the irregular singularity. The controlling factor is the most rapidly varying part of the leading behavior. To verify consistency, it is advised to "show that the equation for the function obtained by factoring off the dominant balance solution from the exact solution itself has a solution that varies less rapidly than the dominant balance solution." [11]
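
For instance (an assumed model equation, used again in the sketches below rather than reproduced from the cited references), for $y'' = y/x^{3}$ near $x = 0^{+}$ the leading behavior of a solution is $y \sim c\, x^{3/4} e^{\pm 2/\sqrt{x}}$, and its controlling factor is the exponential $e^{\pm 2/\sqrt{x}}$.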

Algorithm

Improved accuracy

Examples

Algebraic function

The dominant balance method finds an explicit approximate expression for the multi-valued function defined by the equation, for values of the small parameter near zero. [14]

Balance 1st and 2nd terms

  • Select and .
  • Scaled function is unnecessary.
  • Solve reduced equation
  • Verify consistency for
  • Accept solution

Balance 2nd and 3rd terms

  • Select and .
  • Apply scaled function
  • Transformed equation
  • Solve reduced equation
  • Verify consistency for
  • Accept solutions

Balance 1st and 3rd terms

The consistency condition fails for balance of 1st and 3rd terms.

Perturbation series solution

The approximate solutions are the first terms in the perturbation series solutions. [14]
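
A quick numerical check of this idea, reusing the assumed equation $\varepsilon x^{2} + x - 1 = 0$ from the sketch above rather than the article's own example (which is cited to [14]):

```python
import numpy as np

eps = 0.01

# Exact roots of the assumed equation eps*x**2 + x - 1 = 0.
exact = np.roots([eps, 1.0, -1.0])

# Leading dominant-balance approximations: x ~ -1/eps (singular root,
# from balancing the first two terms) and x ~ 1 (regular root, from
# balancing the last two terms).
approx = np.array([-1.0 / eps, 1.0])

print(exact)    # roots near -100.99 and 0.99
print(approx)   # [-100.    1.]
```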

Differential equation

The differential equation is known to have a solution with an exponential leading term. [15] Writing the solution as an exponential transforms it into a differential equation for the exponent function. The dominant balance method finds an approximate solution for the independent variable near 0. Scaled functions are not used because the small quantity is the differential equation's independent variable, not a differential equation parameter. [10]
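
A minimal sketch of this exponential substitution and the resulting balance, under the assumption that the equation has the form $y'' = y/x^{3}$ near $x = 0^{+}$ (a standard textbook-style model; the article's actual equation is cited to [15] and is not reproduced here):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Exponential substitution y = exp(S(x)) turns the assumed equation
# y'' = y / x**3 into  S'' + (S')**2 = x**(-3).
# Dominant balance: keep (S')**2 = x**(-3) and neglect S''.
S_prime = x ** sp.Rational(-3, 2)      # one sign choice, S' ~ x**(-3/2)
S = sp.integrate(S_prime, x)           # S ~ -2/sqrt(x) (up to a constant)

# Consistency check: the neglected term S'' must be much less than the
# balanced term (S')**2 as x -> 0+, i.e. their quotient must tend to zero.
ratio = sp.diff(S_prime, x) / S_prime**2
print(sp.simplify(ratio))              # -3*sqrt(x)/2
print(sp.limit(ratio, x, 0, '+'))      # 0, so the balance is consistent

# Controlling factor of the leading behavior: exp(S).
print(sp.exp(S))                       # exp(-2/sqrt(x))
```

The quotient of the neglected term and the balanced term tends to zero, so the balance is consistent, and the controlling factor of this solution is $e^{-2/\sqrt{x}}$.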

Find 1-term solution

.
.
Balance 1st and 2nd terms
  • Select and .
  • Solve reduced equation
  • Verify consistency for .
  • Accept solution
Balance 1st and 3rd terms
  • Select and
  • Solve reduced equation
  • Not consistent for .
  • Reject solution .
Balance 2nd and 3rd terms
  • Select and .
  • Solve reduced equation .
  • Not consistent and for .
  • Reject solution

Find 2-term solution

.
.
.
.
Balance 1st and 2nd terms
  • Select and .
  • Solve reduced equation .
  • Verify consistency
for
for
  • Accept solutions [10]
Balance other terms

The consistency condition fails for balance of other terms. [10]

Asymptotic expansion

The next iteration generates a correction that is much less than the terms already found, which means that an asymptotic expansion can represent the remainder of the solution. [10] The dominant balance method generates the leading term of this asymptotic expansion, with the constant and the expansion coefficients determined by substitution into the full equation. [10]

A partial sum of this non-convergent series generates an approximate solution. The leading term corresponds to the Liouville–Green (LG) or Wentzel–Kramers–Brillouin (WKB) approximation. [15]
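
For the assumed model equation $y'' = y/x^{3}$ used in the sketches above (not the article's own example), the leading term produced by dominant balance is

$y(x) \sim c\, x^{3/4} \exp\!\left(\pm \dfrac{2}{\sqrt{x}}\right) \quad \text{as } x \to 0^{+},$

which matches the Liouville–Green/WKB form $y \sim c\, Q^{-1/4} \exp\left(\pm \int \sqrt{Q(x)}\, dx\right)$ with $Q(x) = x^{-3}$.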
