Multiple-scale analysis

In mathematics and physics, multiple-scale analysis (also called the method of multiple scales) comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, for small as well as large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they are independent. In the solution process of the perturbation problem thereafter, the resulting additional freedom – introduced by the new independent variables – is used to remove (unwanted) secular terms. This imposes constraints on the approximate solution, which are called solvability conditions.

Mathematics research from about the 1980s proposes that coordinate transforms and invariant manifolds provide sounder support for multiscale modelling (for example, see center manifold and slow manifold).

Example: undamped Duffing equation

Figure: Comparison of the $\mathcal{O}(\varepsilon)$ approximations from regular perturbation theory and from multiple-scale analysis with the exact solution of the undamped Duffing equation for $\varepsilon = \tfrac{1}{4}$.

Differential equation and energy conservation

As an example for the method of multiple-scale analysis, consider the undamped and unforced Duffing equation: [1]

$$\frac{d^2 y}{dt^2} + y + \varepsilon\, y^3 = 0, \qquad y(0) = 1, \qquad \frac{dy}{dt}(0) = 0,$$

which is a second-order ordinary differential equation describing a nonlinear oscillator. A solution y(t) is sought for small values of the (positive) nonlinearity parameter 0 < ε ≪ 1. The undamped Duffing equation is known to be a Hamiltonian system:

$$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}, \qquad \text{with} \quad H = \tfrac{1}{2} p^2 + \tfrac{1}{2} q^2 + \tfrac{1}{4} \varepsilon\, q^4,$$

with q = y(t) and p = dy/dt. Consequently, the Hamiltonian H(p, q) is a conserved quantity, a constant, equal to H = ½ + ¼ ε for the given initial conditions. This implies that both y and dy/dt have to be bounded:

$$|y| \leq \sqrt{1 + \tfrac{1}{2}\varepsilon} \qquad \text{and} \qquad \left|\frac{dy}{dt}\right| \leq \sqrt{1 + \tfrac{1}{2}\varepsilon} \qquad \text{for all } t.$$
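
The conservation of H provides a useful numerical check. The following sketch (not part of the original example) integrates the Duffing equation with SciPy's solve_ivp and monitors the Hamiltonian along the orbit; the value ε = 1/4, the time span and the tolerances are illustrative choices.

    # Numerical check that H = p^2/2 + q^2/2 + eps*q^4/4 is conserved along
    # solutions of the undamped Duffing equation.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.25

    def duffing(t, state):
        q, p = state                      # q = y, p = dy/dt
        return [p, -q - eps * q**3]

    sol = solve_ivp(duffing, (0.0, 100.0), [1.0, 0.0],
                    rtol=1e-10, atol=1e-12)

    q, p = sol.y
    H = 0.5 * p**2 + 0.5 * q**2 + 0.25 * eps * q**4
    H0 = 0.5 + 0.25 * eps                 # value fixed by y(0) = 1, dy/dt(0) = 0
    print("max |H - H0| =", np.max(np.abs(H - H0)))   # should stay very small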

Straightforward perturbation-series solution

A regular perturbation-series approach to the problem proceeds by writing $y(t) = y_0(t) + \varepsilon\, y_1(t) + \cdots$ and substituting this into the undamped Duffing equation. Matching powers of $\varepsilon$ gives the system of equations

$$\frac{d^2 y_0}{dt^2} + y_0 = 0, \qquad \frac{d^2 y_1}{dt^2} + y_1 = -y_0^3.$$

Solving these subject to the initial conditions yields

$$y(t) = \cos t + \varepsilon \left[ \tfrac{1}{32} \left( \cos 3t - \cos t \right) - \tfrac{3}{8}\, t \sin t \right] + \mathcal{O}(\varepsilon^2).$$

Note that the last term between the square braces is secular: it grows without bound for large |t|. In particular, for $t = \mathcal{O}(\varepsilon^{-1})$ this term is $\mathcal{O}(1)$ and has the same order of magnitude as the leading-order term. Because the terms have become disordered, the series is no longer an asymptotic expansion of the solution.
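
The disordering of the series can be seen numerically: the error of the O(ε) regular-perturbation approximation grows roughly linearly in t and reaches O(1) by t ≈ 1/ε. A minimal sketch, reusing the illustrative setup from above:

    # Compare the O(eps) regular-perturbation approximation, including its
    # secular term -(3/8)*eps*t*sin(t), with an accurate numerical solution.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.25

    def duffing(t, state):
        y, p = state
        return [p, -y - eps * y**3]

    t = np.linspace(0.0, 8.0 / eps, 4001)
    sol = solve_ivp(duffing, (t[0], t[-1]), [1.0, 0.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)

    y_reg = (np.cos(t)
             + eps * ((np.cos(3 * t) - np.cos(t)) / 32
                      - 3.0 / 8.0 * t * np.sin(t)))
    err = np.abs(y_reg - sol.y[0])

    for t_check in (1.0, 1.0 / eps, 4.0 / eps, 8.0 / eps):
        i = np.argmin(np.abs(t - t_check))
        print(f"t = {t[i]:6.2f}   |y_reg - y_num| = {err[i]:.3f}")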

Method of multiple scales

To construct a solution that is valid beyond $t = \mathcal{O}(\varepsilon^{-1})$, the method of multiple-scale analysis is used. Introduce the slow scale t1:

$$t_1 = \varepsilon t$$

and assume the solution y(t) is a perturbation-series solution dependent both on t and t1, treated as:

$$y(t) = Y_0(t, t_1) + \varepsilon\, Y_1(t, t_1) + \cdots.$$

So:

$$\frac{dy}{dt} = \left( \frac{\partial Y_0}{\partial t} + \frac{dt_1}{dt} \frac{\partial Y_0}{\partial t_1} \right) + \varepsilon \left( \frac{\partial Y_1}{\partial t} + \frac{dt_1}{dt} \frac{\partial Y_1}{\partial t_1} \right) + \cdots = \frac{\partial Y_0}{\partial t} + \varepsilon \left( \frac{\partial Y_0}{\partial t_1} + \frac{\partial Y_1}{\partial t} \right) + \mathcal{O}(\varepsilon^2),$$

using dt1/dt = ε. Similarly:

$$\frac{d^2 y}{dt^2} = \frac{\partial^2 Y_0}{\partial t^2} + \varepsilon \left( 2\, \frac{\partial^2 Y_0}{\partial t\, \partial t_1} + \frac{\partial^2 Y_1}{\partial t^2} \right) + \mathcal{O}(\varepsilon^2).$$

Then the zeroth- and first-order problems of the multiple-scales perturbation series for the Duffing equation become:

$$\frac{\partial^2 Y_0}{\partial t^2} + Y_0 = 0, \qquad \frac{\partial^2 Y_1}{\partial t^2} + Y_1 = -Y_0^3 - 2\, \frac{\partial^2 Y_0}{\partial t\, \partial t_1}.$$
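
The bookkeeping behind this hierarchy can be reproduced symbolically. The following sketch (an illustration, not part of the original derivation) expands the two-scale ansatz with SymPy and collects powers of ε; the O(1) and O(ε) coefficients are exactly the zeroth- and first-order problems above.

    # Symbolic derivation of the two-scale hierarchy for the Duffing equation.
    import sympy as sp

    t, t1, eps = sp.symbols('t t1 epsilon')
    Y0 = sp.Function('Y0')(t, t1)
    Y1 = sp.Function('Y1')(t, t1)

    y = Y0 + eps * Y1

    # With t1 = eps*t the total derivative is d/dt -> partial_t + eps*partial_t1.
    def Dt(expr):
        return sp.diff(expr, t) + eps * sp.diff(expr, t1)

    residual = sp.expand(Dt(Dt(y)) + y + eps * y**3)

    print("O(1):  ", residual.coeff(eps, 0), "= 0")
    print("O(eps):", residual.coeff(eps, 1), "= 0")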

Solution

The zeroth-order problem has the general solution:

$$Y_0(t, t_1) = A(t_1)\, e^{it} + A^{\ast}(t_1)\, e^{-it},$$

with A(t1) a complex-valued amplitude of the zeroth-order solution Y0(t, t1) and i² = −1. Now, in the first-order problem the forcing in the right hand side of the differential equation is

$$-\left( 3 A^2 A^{\ast} + 2i\, \frac{dA}{dt_1} \right) e^{it} - A^3\, e^{3it} + \text{c.c.},$$

where c.c. denotes the complex conjugate of the preceding terms. The occurrence of secular terms can be prevented by imposing on the – yet unknown – amplitude A(t1) the solvability condition

$$\frac{dA}{dt_1} = \tfrac{3}{2}\, i\, A^2 A^{\ast}.$$

The solution to the solvability condition, also satisfying the initial conditions y(0) = 1 and dy/dt(0) = 0, is:

$$A(t_1) = \tfrac{1}{2}\, \exp\!\left( \tfrac{3}{8}\, i\, t_1 \right).$$
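
As a quick sanity check (again an illustrative sketch, not from the original text), SymPy confirms that this amplitude satisfies the solvability condition:

    # Verify that A(t1) = (1/2)*exp(3*I*t1/8) solves dA/dt1 = (3/2)*I*A^2*conj(A).
    import sympy as sp

    t1 = sp.symbols('t1', real=True)
    A = sp.Rational(1, 2) * sp.exp(sp.Rational(3, 8) * sp.I * t1)

    lhs = sp.diff(A, t1)
    rhs = sp.Rational(3, 2) * sp.I * A**2 * sp.conjugate(A)
    print(sp.simplify(lhs - rhs))   # prints 0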

As a result, the approximate solution by the multiple-scales analysis is

$$y(t) = \cos\!\left( \left( 1 + \tfrac{3}{8}\, \varepsilon \right) t \right) + \mathcal{O}(\varepsilon),$$

using t1 = εt and valid for εt = O(1). This agrees with the nonlinear frequency changes found by employing the Lindstedt–Poincaré method.
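
Unlike the regular expansion, this two-scale approximation stays close to the numerical solution over times of order 1/ε, as the following sketch (same illustrative ε as above) shows:

    # Compare the multiple-scales approximation cos((1 + 3*eps/8)*t) with an
    # accurate numerical solution of the Duffing equation on a long interval.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.25

    def duffing(t, state):
        y, p = state
        return [p, -y - eps * y**3]

    t = np.linspace(0.0, 8.0 / eps, 4001)
    sol = solve_ivp(duffing, (t[0], t[-1]), [1.0, 0.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)

    y_ms = np.cos((1.0 + 3.0 / 8.0 * eps) * t)
    err = np.abs(y_ms - sol.y[0])

    for t_check in (1.0, 1.0 / eps, 4.0 / eps, 8.0 / eps):
        i = np.argmin(np.abs(t - t_check))
        print(f"t = {t[i]:6.2f}   |y_ms - y_num| = {err[i]:.3f}")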

This new solution is valid until $t = \mathcal{O}(\varepsilon^{-2})$. Higher-order solutions – using the method of multiple scales – require the introduction of additional slow scales, i.e., $t_2 = \varepsilon^2 t$, $t_3 = \varepsilon^3 t$, etc. However, this introduces possible ambiguities in the perturbation series solution, which require a careful treatment (see Kevorkian & Cole 1996; Bender & Orszag 1999). [2]

Coordinate transform to amplitude/phase variables

Alternatively, modern approaches derive these sorts of models using coordinate transforms, like in the method of normal forms, [3] as described next.

A solution $y \approx r \cos \theta$ is sought in new coordinates $(r, \theta)$ where the amplitude $r(t)$ varies slowly and the phase $\theta(t)$ varies at an almost constant rate, namely $d\theta/dt \approx 1$. Straightforward algebra finds the coordinate transform

$$y = r \cos \theta + \tfrac{1}{32}\, \varepsilon\, r^3 \cos 3\theta + \cdots,$$

which transforms Duffing's equation into the pair stating that the radius is constant, $dr/dt = 0$, and that the phase evolves according to

$$\frac{d\theta}{dt} = 1 + \tfrac{3}{8}\, \varepsilon\, r^2 + \cdots.$$

That is, Duffing's oscillations are of constant amplitude, but their frequency depends upon that amplitude. [4]
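
This amplitude-dependent frequency can be checked numerically: for several initial amplitudes, the measured oscillation frequency should match 1 + (3/8)εr² up to O(ε²) corrections. In the sketch below the initial displacement is used as a proxy for the radius r, which is exact only to leading order.

    # Measure the oscillation frequency of the Duffing equation for several
    # amplitudes and compare with the prediction d(theta)/dt ~ 1 + (3/8)*eps*r^2.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.25

    def duffing(t, state):
        y, p = state
        return [p, -y - eps * y**3]

    t = np.linspace(0.0, 200.0, 40001)
    for r in (0.5, 1.0, 1.5):
        sol = solve_ivp(duffing, (t[0], t[-1]), [r, 0.0], t_eval=t,
                        rtol=1e-10, atol=1e-12)
        y = sol.y[0]

        # Upward zero crossings of y, refined by linear interpolation;
        # their average spacing is the oscillation period.
        i = np.where((y[:-1] < 0.0) & (y[1:] >= 0.0))[0]
        t_cross = t[i] - y[i] * (t[i + 1] - t[i]) / (y[i + 1] - y[i])
        omega = 2.0 * np.pi / np.mean(np.diff(t_cross))

        print(f"r = {r:3.1f}   measured omega = {omega:.4f}   "
              f"1 + (3/8)*eps*r^2 = {1.0 + 3.0 / 8.0 * eps * r**2:.4f}")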

More difficult examples are better treated using a time-dependent coordinate transform involving complex exponentials (as also invoked in the previous multiple time-scale approach). A web service will perform the analysis for a wide range of examples. [5]

Notes

  1. This example is treated in: Bender & Orszag (1999) pp. 545–551.
  2. Bender & Orszag (1999) p. 551.
  3. Lamarque, C.-H.; Touzé, C.; Thomas, O. (2012), "An upper bound for validity limits of asymptotic analytical approaches based on normal form theory" (PDF), Nonlinear Dynamics, 70 (3): 1931–1949, doi:10.1007/s11071-012-0584-y, hdl:10985/7473, S2CID 254862552.
  4. Roberts, A.J., Modelling emergent dynamics in complex systems, retrieved 2013-10-03.
  5. Roberts, A.J., Construct centre manifolds of ordinary or delay differential equations (autonomous), retrieved 2013-10-03.

References

  - Bender, C.M.; Orszag, S.A. (1999), Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory, Springer.
  - Kevorkian, J.; Cole, J.D. (1996), Multiple Scale and Singular Perturbation Methods, Applied Mathematical Sciences, Springer.