Perturbation theory

In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. [1] [2] A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. [3] In regular perturbation theory, the solution is expressed as a power series in a small parameter ε. [1] [2] The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of ε usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction.

Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field remains actively researched across multiple disciplines.

Description

Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution $A$ a series in the small parameter (here called ε), like the following:

$$A = A_0 + \varepsilon^1 A_1 + \varepsilon^2 A_2 + \varepsilon^3 A_3 + \cdots$$

In this example, $A_0$ would be the known solution to the exactly solvable initial problem, and the terms $A_1, A_2, A_3, \ldots$ represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small ε these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction:

$$A \approx A_0 + \varepsilon A_1 .$$

Some authors use big O notation to indicate the order of the error in the approximate solution: [2]

$$A = A_0 + \varepsilon A_1 + O(\varepsilon^2).$$

If the power series in ε converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. [1] In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. [1] However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at the point at which its terms are smallest. This is called an asymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers such as $\varepsilon^{1/2}$ or negative powers such as $\varepsilon^{-2}$), then the perturbation problem is called a singular perturbation problem. [1] Many special techniques in perturbation theory have been developed to analyze singular perturbation problems. [1] [2]
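As a concrete illustration (a constructed example, not taken from the article's sources), the Python sketch below applies regular perturbation theory to a quadratic equation whose exact root is known, so the order of the truncation error can be checked numerically; the equation and all names in the code are chosen only for this demonstration.

```python
# A minimal sketch of regular perturbation for an algebraic equation
# (an illustrative, made-up example): the positive root of
#     x**2 + eps*x - 1 = 0.
# The unperturbed problem (eps = 0) gives x0 = 1; matching powers of eps
# gives the corrections x1 = -1/2 and x2 = 1/8, so
#     x  =  1 - eps/2 + eps**2/8 + ...
import numpy as np

def exact_root(eps):
    """Positive root from the quadratic formula."""
    return (-eps + np.sqrt(eps**2 + 4.0)) / 2.0

def series_root(eps, order):
    """Perturbation series truncated after the given order."""
    terms = [1.0, -0.5 * eps, 0.125 * eps**2]
    return sum(terms[: order + 1])

for eps in [0.2, 0.1, 0.05]:
    err = abs(exact_root(eps) - series_root(eps, order=1))
    print(f"eps={eps:5.2f}  first-order error={err:.2e}  "
          f"error/eps^2={err / eps**2:.3f}")
# The ratio error/eps^2 settles near 1/8, the coefficient of the first
# neglected term: the truncation error behaves as O(eps^2), exactly what
# the big O notation above expresses.
```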

Prototypical example

The earliest use of what would now be called perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun. [4]

Perturbation methods start with a simplified form of the original problem, which is simple enough to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System) and not quite correct when the gravitational interaction is stated using formulations from general relativity.

Perturbative expansion

Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution, and the equations describing the system in full. Write $D$ for this collection of equations; that is, let the symbol $D$ stand in for the problem to be solved. Quite often, these are differential equations, thus, the letter "D".

The process is generally mechanical, if laborious. One begins by writing the equations $D$ so that they split into two parts: some collection of equations $D_0$ which can be solved exactly, and some additional remaining part $\varepsilon D_1$ for some small $\varepsilon \ll 1$. The solution $A_0$ (to $D_0$) is known, and one seeks the general solution $A$ to $D = D_0 + \varepsilon D_1$.

Next the approximation $A \approx A_0 + \varepsilon A_1$ is inserted into $\varepsilon D_1$. This results in an equation for $A_1$, which, in the general case, can be written in closed form as a sum over integrals over $A_0$. Thus, one has obtained the first-order correction $A_1$, and thus $A \approx A_0 + \varepsilon A_1$ is a good approximation to $A$. It is a good approximation, precisely because the parts that were ignored were of size $\varepsilon^2$. The process can then be repeated, to obtain corrections $A_2$, and so on.
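The order-by-order matching just described can be carried out mechanically with a computer algebra system. The sympy sketch below is an illustrative assumption: the quadratic equation and the symbol names are invented for this example. It substitutes the ansatz into the full problem and solves the coefficient of each power of ε in turn.

```python
# A small sketch of the mechanical order-by-order procedure, applied to the
# made-up algebraic problem  x**2 + eps*x - 1 = 0  (assumes sympy is installed).
import sympy as sp

eps = sp.symbols("epsilon", positive=True)
x0, x1, x2 = sp.symbols("x0 x1 x2")

# Ansatz A = A0 + eps*A1 + eps**2*A2, inserted into the full problem D.
ansatz = x0 + eps * x1 + eps**2 * x2
equation = sp.expand(ansatz**2 + eps * ansatz - 1)

solution = {x0: sp.Integer(1)}   # unperturbed root of x**2 - 1 = 0, branch x0 = +1
for order, unknown in [(1, x1), (2, x2)]:
    # Coefficient of eps**order, with lower-order results substituted in;
    # it is linear in the current unknown, so it can be solved directly.
    coeff = equation.coeff(eps, order).subs(solution)
    solution[unknown] = sp.solve(sp.Eq(coeff, 0), unknown)[0]

print(solution)   # {x0: 1, x1: -1/2, x2: 1/8}
```

Each pass reuses the lower-order results, mirroring how the corrections $A_1$, $A_2$, and so on are obtained iteratively in the general recipe.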

In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache." [5] This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher order terms. One of the fundamental breakthroughs in quantum mechanics for controlling the expansion is the Feynman diagram, which allows quantum mechanical perturbation series to be represented by a sketch.

Examples

Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" include algebraic equations, [6] differential equations [7] (e.g., the equations of motion [8] and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer, [9] and Hamiltonian operators in quantum mechanics.

Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., the trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization), and the ground state energy of a quantum mechanical problem.

Examples of exactly solvable problems that can be used as starting points include linear equations, such as linear equations of motion (the harmonic oscillator, the linear wave equation), and statistical or quantum-mechanical systems of non-interacting particles (or, in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom).

Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, and terms of higher powers in the Hamiltonian/free energy.

For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams.

History

Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two bodies interacting. The gradually increasing accuracy of astronomical observations led to incremental demands in the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th and 19th century mathematicians, notably Joseph-Louis Lagrange and Pierre-Simon Laplace, to extend and generalize the methods of perturbation theory.

These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This was later named Fermi's golden rule. [10] [11] Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also since the quantum mechanical notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom.

Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams, and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile).

In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions.

The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the small denominator problem or small divisor problem. In the 19th century Poincaré observed (as perhaps had earlier mathematicians) that sometimes 2nd and higher order terms in the perturbative series have "small denominators": That is, they have the general form $\psi_n V \phi_m / (\omega_n - \omega_m)$, where $\psi_n$, $V$ and $\phi_m$ are some complicated expressions pertinent to the problem to be solved, and $\omega_n$ and $\omega_m$ are real numbers; very often they are the energies of normal modes. The small divisor problem arises when the difference $\omega_n - \omega_m$ is small, causing the perturbative correction to "blow up", becoming as large as or maybe larger than the zeroth order term. This situation signals a breakdown of perturbation theory: It stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an asymptotic series: a useful approximation for a few terms, but one that at some point becomes less accurate if even more terms are added. The breakthrough from chaos theory was an explanation of why this happened: The small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other.
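A small numerical sketch of this breakdown (the unperturbed energies and the perturbation matrix below are invented for the illustration): the textbook second-order Rayleigh–Schrödinger energy correction contains exactly such a small denominator, and shrinking the gap between two unperturbed levels makes the "correction" explode.

```python
# Illustration of the small-denominator problem with the standard
# second-order correction  E_n^(2) = sum_{m != n} |V_nm|^2 / (E_n - E_m).
# All numbers here are made up for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))
V = (V + V.T) / 2                        # a symmetric perturbation

def second_order_correction(E0, V, n):
    """Second-order correction to level n for unperturbed energies E0."""
    return sum(V[n, m] ** 2 / (E0[n] - E0[m])
               for m in range(len(E0)) if m != n)

for gap in [1.0, 0.1, 0.01, 0.001]:
    E0 = np.array([0.0, gap, 2.0, 3.0])  # levels 0 and 1 separated by `gap`
    print(f"gap={gap:6.3f}  E0^(2) = {second_order_correction(E0, V, 0):12.3f}")
# As the gap shrinks, the correction grows without bound and dwarfs the
# zeroth- and first-order terms -- the signal that naive perturbation theory
# has stopped working at this point.
```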

Beginnings in the study of planetary motion

Since the planets are very remote from each other, and since their mass is small as compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun. [12]

As astronomical data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon-Earth-Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory". [12]

Perturbation theory was investigated by the classical scholars – Laplace, Siméon Denis Poisson, Carl Friedrich Gauss – as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier, based on the deviations in the motion of the planet Uranus, was a triumph of perturbation theory: Le Verrier sent the predicted coordinates to J.G. Galle, who successfully observed Neptune through his telescope. [12]

Perturbation orders

The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate.
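A hedged sketch of the extra care needed in the degenerate case (the matrices below are invented for the example): instead of dividing by a vanishing energy difference, one diagonalizes the perturbation restricted to the degenerate subspace to obtain the first-order splittings.

```python
# Degenerate first-order perturbation theory for a made-up 3-level system:
# levels 0 and 1 of H0 coincide, so their first-order corrections are the
# eigenvalues of the perturbation V restricted to that 2-dimensional subspace.
import numpy as np

H0 = np.diag([1.0, 1.0, 3.0])              # levels 0 and 1 are degenerate
V = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.1],
              [0.1, 0.1, 0.0]])             # symmetric perturbation
lam = 0.05                                  # small parameter

# First-order splittings: eigenvalues of V in the degenerate subspace.
first_order = np.linalg.eigvalsh(V[:2, :2])
print("degenerate first-order levels:", 1.0 + lam * first_order)

# Compare with the exact eigenvalues of H0 + lam*V.
exact = np.linalg.eigvalsh(H0 + lam * V)
print("exact lowest two levels:      ", exact[:2])
```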

In chemistry

Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory [13] works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zeroth-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method.
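As a practical sketch (assuming the PySCF package is available; the water geometry and basis set below are arbitrary choices for the illustration), a Møller–Plesset calculation in code amounts to running a Hartree–Fock reference and then the second-order correction on top of it:

```python
# Minimal Moller-Plesset (MP2) sketch with PySCF; the molecule, geometry and
# basis set are arbitrary illustrative choices.
from pyscf import gto, scf, mp

# Zeroth/first order: the Hartree-Fock problem for a water molecule.
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="cc-pvdz")
mf = scf.RHF(mol).run()            # mf.e_tot is the Hartree-Fock energy

# Second order: electron correlation enters as the MP2 perturbative correction.
pt = mp.MP2(mf).run()
print("HF energy:       ", mf.e_tot)
print("MP2 correlation: ", pt.e_corr)
print("MP2 total energy:", pt.e_tot)
```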

Shell-crossing

A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity. [14] This limits the predictive power of physical simulations at small scales.

Related Research Articles

<span class="mw-page-title-main">Hydrogen atom</span> Atom of the element hydrogen

A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral hydrogen atom contains a nucleus of a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe.

In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one. In effect, it is describing a complicated unsolved system using a simple, solvable system.
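For reference, the textbook Rayleigh–Schrödinger result (quoted here in its standard non-degenerate form, with $H = H_0 + \lambda V$) gives the energy of level $n$ as

$$E_n \approx E_n^{(0)} + \lambda \langle n^{(0)} | V | n^{(0)} \rangle + \lambda^2 \sum_{m \neq n} \frac{\left| \langle m^{(0)} | V | n^{(0)} \rangle \right|^2}{E_n^{(0)} - E_m^{(0)}},$$

the first two terms being the unperturbed energy and the first-order correction discussed above.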

<span class="mw-page-title-main">Quantum chaos</span> Branch of physics seeking to explain chaotic dynamical systems in terms of quantum theory

Quantum chaos is a branch of physics focused on how chaotic classical dynamical systems can be described in terms of quantum theory. The primary question that quantum chaos seeks to answer is: "What is the relationship between quantum mechanics and classical chaos?" The correspondence principle states that classical mechanics is the classical limit of quantum mechanics, specifically in the limit as the ratio of the Planck constant to the action of the system tends to zero. If this is true, then there must be quantum mechanisms underlying classical chaos. If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, how can exponential sensitivity to initial conditions arise in classical chaos, which must be the correspondence principle limit of quantum mechanics?

<span class="mw-page-title-main">Path integral formulation</span> Formulation of quantum mechanics

The path integral formulation is a description in quantum mechanics that generalizes the stationary action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.

In mathematics, a singular perturbation problem is a problem containing a small parameter that cannot be approximated by setting the parameter value to zero. More precisely, the solution cannot be uniformly approximated by an asymptotic expansion as the parameter tends to zero.

In quantum physics, Fermi's golden rule is a formula that describes the transition rate from one energy eigenstate of a quantum system to a group of energy eigenstates in a continuum, as a result of a weak perturbation. This transition rate is effectively independent of time and is proportional to the strength of the coupling between the initial and final states of the system as well as the density of states. It is also applicable when the final state is discrete, i.e. it is not part of a continuum, if there is some decoherence in the process, like relaxation or collision of the atoms, or like noise in the perturbation, in which case the density of states is replaced by the reciprocal of the decoherence bandwidth.
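In symbols (the standard statement of the rule, quoted for reference), the transition rate is

$$\Gamma_{i \to f} = \frac{2\pi}{\hbar} \left| \langle f | H' | i \rangle \right|^2 \rho(E_f),$$

where $H'$ is the perturbing Hamiltonian, $|i\rangle$ and $|f\rangle$ are the initial and final states, and $\rho(E_f)$ is the density of final states.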

<span class="mw-page-title-main">Perturbation (astronomy)</span> Classical approach to the many-body problem of astronomy

In astronomy, perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body. The other forces can include a third body, resistance, as from an atmosphere, and the off-center attraction of an oblate or otherwise misshapen body.

<span class="mw-page-title-main">Two-state quantum system</span> Simple quantum mechanical system

In quantum mechanics, a two-state system is a quantum system that can exist in any quantum superposition of two independent quantum states. The Hilbert space describing such a system is two-dimensional. Therefore, a complete basis spanning the space will consist of two independent states. Any two-state system can also be seen as a qubit.

Møller–Plesset perturbation theory (MP) is one of several quantum chemistry post-Hartree–Fock ab initio methods in the field of computational chemistry. It improves on the Hartree–Fock method by adding electron correlation effects by means of Rayleigh–Schrödinger perturbation theory (RS-PT), usually to second (MP2), third (MP3) or fourth (MP4) order. Its main idea was published as early as 1934 by Christian Møller and Milton S. Plesset.

In quantum mechanics, and in particular in scattering theory, the Feshbach–Fano method, named after Herman Feshbach and Ugo Fano, separates (partitions) the resonant and the background components of the wave function and therefore of the associated quantities like cross sections or phase shift. This approach allows us to define rigorously the concept of resonance in quantum mechanics.

In solid-state physics, the nearly free electron model is a quantum mechanical model of physical properties of electrons that can move almost freely through the crystal lattice of a solid. The model is closely related to the more conceptual empty lattice approximation. The model enables understanding and calculation of the electronic band structures, especially of metals.

Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations $g_{ab} \to \Omega^2(x)\, g_{ab}$, where $g_{ab}$ is the metric tensor and $\Omega(x)$ is a function on spacetime.

In mathematics, more specifically in dynamical systems, the method of averaging exploits systems containing time-scale separation: a fast oscillation versus a slow drift. It suggests performing an average over a given amount of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting dynamics. The approximate solution holds for a finite time inversely proportional to the parameter denoting the slow time scale. There is a trade-off between how accurate the approximate solution is and for how long it stays close to the original solution.

In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.

<span class="mw-page-title-main">Hamilton's principle</span> Formulation of the principle of stationary action

In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.

In perturbation theory, the Poincaré–Lindstedt method or Lindstedt–Poincaré method is a technique for uniformly approximating periodic solutions to ordinary differential equations, when regular perturbation approaches fail. The method removes secular terms—terms growing without bound—arising in the straightforward application of perturbation theory to weakly nonlinear problems with finite oscillatory solutions.

<span class="mw-page-title-main">Stokes wave</span> Nonlinear and periodic surface wave on an inviscid fluid layer of constant mean depth

In fluid dynamics, a Stokes wave is a nonlinear and periodic surface wave on an inviscid fluid layer of constant mean depth. This type of modelling has its origins in the mid 19th century when Sir George Stokes – using a perturbation series approach, now known as the Stokes expansion – obtained approximate solutions for nonlinear wave motion.

The Krylov–Bogolyubov averaging method is a mathematical method for approximate analysis of oscillating processes in non-linear mechanics. The method is based on the averaging principle when the exact differential equation of the motion is replaced by its averaged version. The method is named after Nikolay Krylov and Nikolay Bogoliubov.

In mathematics and physics, multiple-scale analysis comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, both for small as well as large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they are independent. In the solution process of the perturbation problem thereafter, the resulting additional freedom – introduced by the new independent variables – is used to remove (unwanted) secular terms. The latter puts constraints on the approximate solution, which are called solvability conditions.

Phase reduction is a method used to reduce a multi-dimensional dynamical equation describing a nonlinear limit cycle oscillator into a one-dimensional phase equation. Many phenomena in our world such as chemical reactions, electric circuits, mechanical vibrations, cardiac cells, and spiking neurons are examples of rhythmic phenomena, and can be considered as nonlinear limit cycle oscillators.

References

  1. Bender, Carl M.; Orszag, Steven A. (1999). Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. New York, NY: Springer. ISBN 978-1-4757-3069-2. OCLC 851704808.
  2. Holmes, Mark H. (2013). Introduction to Perturbation Methods (2nd ed.). New York: Springer. ISBN 978-1-4614-5477-9. OCLC 821883201.
  3. Wiesel, William E. (2010). Modern Astrodynamics. Ohio: Aphelion Press. p. 107. ISBN 978-145378-1470.
  4. Gutzwiller, Martin C. (1998). "Moon-Earth-Sun: The oldest three-body problem". Reviews of Modern Physics. 70: 589. Published 1 April 1998.
  5. Cropper, William H. (2004). Great Physicists: The Life and Times of Leading Physicists from Galileo to Hawking. Oxford University Press. p. 34. ISBN 978-0-19-517324-6.
  6. Romero, L. A. (2013). "Perturbation theory for polynomials" (PDF). Lecture notes, University of New Mexico. Archived from the original on 2018-04-17. Retrieved 2017-04-30.
  7. Shivamoggi, Bhimsen K. (2003). Perturbation Methods for Differential Equations. Springer. ISBN 978-1-4612-0047-5.
  8. Winitzki, Sergei (2006). "Perturbation theory for anharmonic oscillations". Lecture notes, LMU.
  9. Box, Michael A. (2002). "Radiative perturbation theory: a review". Environmental Modelling & Software. 17: 95–106.
  10. Bransden, B.H.; Joachain, C.J. (1999). Quantum Mechanics (2nd ed.). Prentice Hall. p. 443. ISBN 978-0-58235691-7.
  11. Dirac, P.A.M. (1 March 1927). "The quantum theory of emission and absorption of radiation". Proceedings of the Royal Society A. 114 (767): 243–265. Bibcode:1927RSPSA.114..243D. doi:10.1098/rspa.1927.0039. JSTOR 94746. See equations (24) and (32).
  12. "Perturbation theory". Encyclopedia of Mathematics (encyclopediaofmath.org).
  13. King, Matcha (1976). "Theory of the Chemical Bond". Journal of the American Chemical Society. 98 (12): 3415–3420. doi:10.1021/ja00428a004.
  14. Rampf, Cornelius; Hahn, Oliver (2021). "Shell-crossing in a ΛCDM Universe". Monthly Notices of the Royal Astronomical Society. 501 (1): L71–L75. arXiv:2010.12584. Bibcode:2021MNRAS.501L..71R. doi:10.1093/mnrasl/slaa198. ISSN 0035-8711.