Exponential integrator

Exponential integrators are a class of numerical methods for the solution of ordinary differential equations, specifically initial value problems. This large class of methods from numerical analysis is based on the exact integration of the linear part of the initial value problem. Because the linear part is integrated exactly, this can help to mitigate the stiffness of a differential equation. Exponential integrators can be constructed to be explicit or implicit for numerical ordinary differential equations or serve as the time integrator for numerical partial differential equations.

Background

These methods date back to at least the 1960s, when they were recognized by Certaine [1] and Pope. [2] More recently, exponential integrators have become an active area of research; see Hochbruck and Ostermann (2010). [3] Originally developed for solving stiff differential equations, the methods have been used to solve partial differential equations, including hyperbolic as well as parabolic problems [4] such as the heat equation.

Introduction

We consider initial value problems of the form

$$u'(t) = L u(t) + N\bigl(u(t)\bigr), \qquad u(t_0) = u_0, \qquad (1)$$

where $L$ is composed of linear terms and $N$ is composed of the non-linear terms. These problems can come from a more typical initial value problem

$$u'(t) = f\bigl(u(t)\bigr), \qquad u(t_0) = u_0,$$

after linearizing locally about a fixed or local state $u^*$:

$$L = \frac{\partial f}{\partial u}(u^*), \qquad N(u) = f(u) - L u.$$

Here, $\frac{\partial f}{\partial u}$ refers to the partial derivative of $f$ with respect to $u$ (the Jacobian of $f$).

Exact integration of this problem from time 0 to a later time $t$ can be performed using matrix exponentials to define an integral equation for the exact solution: [3]

$$u(t) = e^{L t} u_0 + \int_0^t e^{L(t-\tau)} N\bigl(u(\tau)\bigr)\, d\tau. \qquad (2)$$

This is similar to the exact integral used in the Picard–Lindelöf theorem. In the case of $N \equiv 0$, this formulation is the exact solution to the linear differential equation.
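To make the role of the matrix exponential concrete, here is a minimal Python sketch (assuming NumPy/SciPy and an illustrative 2×2 stiff linear system) comparing the exact propagator $e^{Lh}$ with forward Euler on the purely linear problem $u' = Lu$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stiff linear system u' = L u with eigenvalues -1 and -1000.
L = np.diag([-1.0, -1000.0])
u0 = np.array([1.0, 1.0])
h = 0.01          # step size
n_steps = 100

E = expm(L * h)   # exact propagator over one step

u_exact = u0.copy()
u_euler = u0.copy()
for _ in range(n_steps):
    u_exact = E @ u_exact                   # exact for the linear part, stable for any h
    u_euler = u_euler + h * (L @ u_euler)   # forward Euler: |1 - 1000 h| = 9 > 1, unstable

print("matrix exponential:", u_exact)
print("forward Euler     :", u_euler)
```

With this step size the forward Euler recursion amplifies the fast mode while the exponential propagator damps it correctly; exponential integrators carry this exact treatment of the linear part over to semilinear problems by approximating the remaining nonlinear integral.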

Numerical methods require a discretization of equation (2). They can be based on Runge-Kutta discretizations, [5] [6] [7] linear multistep methods or a variety of other options.

Exponential Rosenbrock methods

Exponential Rosenbrock methods were shown to be very efficient in solving large systems of stiff ordinary differential equations, usually resulting from the spatial discretization of time-dependent (parabolic) PDEs. These integrators are constructed based on a continuous linearization of (1) along the numerical solution $u_n$:

$$u'(t) = J_n u(t) + g_n\bigl(u(t)\bigr), \qquad u(t_n) = u_n, \qquad (3)$$

where $J_n = \frac{\partial f}{\partial u}(u_n)$ and $g_n(u) = f(u) - J_n u$. This procedure enjoys the advantage, in each step, that $\frac{\partial g_n}{\partial u}(u_n) = 0$. This considerably simplifies the derivation of the order conditions and improves the stability when integrating the nonlinearity $g_n\bigl(u(t)\bigr)$. Again, applying the variation-of-constants formula (2) gives the exact solution at time $t_{n+1}$ as

$$u(t_{n+1}) = e^{h_n J_n} u(t_n) + \int_0^{h_n} e^{(h_n - \tau) J_n} g_n\bigl(u(t_n + \tau)\bigr)\, d\tau. \qquad (4)$$

The idea now is to approximate the integral in (4) by some quadrature rule with nodes $c_i$ and weights $b_i(h_n J_n)$ ($1 \le i \le s$). This yields the following class of $s$-stage explicit exponential Rosenbrock methods, see Hochbruck and Ostermann (2006), Hochbruck, Ostermann and Schweitzer (2009):

$$U_{ni} = e^{c_i h_n J_n} u_n + h_n \sum_{j=1}^{i-1} a_{ij}(h_n J_n)\, g_n(U_{nj}),$$
$$u_{n+1} = e^{h_n J_n} u_n + h_n \sum_{i=1}^{s} b_i(h_n J_n)\, g_n(U_{ni}),$$

with $u_n \approx u(t_n)$. The coefficients $a_{ij}(h_n J_n)$ and $b_i(h_n J_n)$ are usually chosen as linear combinations of the entire functions $\varphi_k(c_i h_n J_n)$ and $\varphi_k(h_n J_n)$, respectively, where

$$\varphi_k(z) = \int_0^1 e^{(1-\theta) z}\, \frac{\theta^{k-1}}{(k-1)!}\, d\theta, \quad k \ge 1, \qquad \varphi_0(z) = e^{z}.$$

These functions satisfy the recursion relation

$$\varphi_{k+1}(z) = \frac{\varphi_k(z) - \varphi_k(0)}{z}, \qquad \varphi_k(0) = \frac{1}{k!}, \quad k \ge 0.$$

By introducing the difference $D_{ni} = g_n(U_{ni}) - g_n(u_n)$, they can be reformulated in a more efficient way for implementation (see also [3]) as

$$U_{ni} = u_n + c_i h_n \varphi_1(c_i h_n J_n) f(u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n J_n)\, D_{nj},$$
$$u_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n) + h_n \sum_{i=2}^{s} b_i(h_n J_n)\, D_{ni}.$$
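As an illustration (not part of the original exposition), the $\varphi_k$ can be evaluated for a small dense matrix argument directly from the recursion above. The helper below assumes NumPy/SciPy and a nonsingular argument; a naive evaluation like this is prone to cancellation for small arguments, a point taken up in the ETDRK4 section below.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi_functions(Z, k_max):
    """Return [phi_0(Z), ..., phi_{k_max}(Z)] for a nonsingular matrix Z,
    using the recursion phi_{k+1}(Z) = Z^{-1} (phi_k(Z) - I / k!).
    Note: this direct recursion loses accuracy when ||Z|| is small."""
    n = Z.shape[0]
    I = np.eye(n)
    phis = [expm(Z)]          # phi_0(Z) = e^Z
    fact = 1.0                # running value of k!, starting at 0! = 1
    for k in range(k_max):
        phis.append(solve(Z, phis[k] - I / fact))
        fact *= (k + 1)
    return phis

# example usage on a small stiff Jacobian (illustrative values)
J = np.array([[-2.0, 1.0], [0.0, -500.0]])
h = 0.1
phi0, phi1, phi2, phi3 = phi_functions(h * J, 3)
```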

In order to implement this scheme with adaptive step size, one can consider, for the purpose of local error estimation, the following embedded methods

$$\hat{u}_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n) + h_n \sum_{i=2}^{s} \hat{b}_i(h_n J_n)\, D_{ni},$$

which use the same stages $U_{ni}$ but with weights $\hat{b}_i$.

For convenience, the coefficients of the explicit exponential Rosenbrock methods together with their embedded methods can be represented by using the so-called reduced Butcher tableau as follows:

$$
\begin{array}{c|cccc}
c_2 & & & & \\
c_3 & a_{32} & & & \\
\vdots & \vdots & \ddots & & \\
c_s & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_2 & \cdots & b_{s-1} & b_s \\
\hline
 & \hat{b}_2 & \cdots & \hat{b}_{s-1} & \hat{b}_s
\end{array}
$$

Stiff order conditions

Moreover, it is shown in Luan and Ostermann (2014a) [8] that the reformulation approach offers a new and simple way to analyze the local errors and thus to derive the stiff order conditions for exponential Rosenbrock methods up to order 5. With the help of this new technique together with an extension of the B-series concept, a theory for deriving the stiff order conditions for exponential Rosenbrock integrators of arbitrary order was finally given in Luan and Ostermann (2013). [9] As an example, in that work the stiff order conditions for exponential Rosenbrock methods up to order 6 were derived, which are stated in the following table:

Here $Z$ and $K$ denote arbitrary square matrices.

Convergence analysis

The stability and convergence results for exponential Rosenbrock methods are proved in the framework of strongly continuous semigroups on a Banach space.

Examples

All the schemes presented below fulfill the stiff order conditions and thus are also suitable for solving stiff problems.

Second-order method

The simplest exponential Rosenbrock method is the exponential Rosenbrock–Euler scheme, which has order 2; see, for example, Hochbruck et al. (2009):

$$u_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n).$$
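A minimal sketch of one exponential Rosenbrock–Euler step in Python (dense linear algebra via SciPy; the test problem, its Jacobian, and the step size are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm, solve

def rosenbrock_euler_step(f, jac, u, h):
    """One step of the exponential Rosenbrock-Euler scheme:
    u_{n+1} = u_n + h * phi_1(h * J_n) f(u_n), with J_n = f'(u_n)."""
    Z = h * jac(u)
    # phi_1(Z) f(u) = Z^{-1} (e^Z - I) f(u); valid for nonsingular Z
    phi1_fu = solve(Z, (expm(Z) - np.eye(len(u))) @ f(u))
    return u + h * phi1_fu

# illustrative stiff test problem: u' = A u + u**2 (componentwise nonlinearity)
A = np.array([[-1.0, 0.0], [0.0, -200.0]])
f = lambda u: A @ u + u**2
jac = lambda u: A + 2.0 * np.diag(u)

u = np.array([0.5, 0.5])
for _ in range(10):
    u = rosenbrock_euler_step(f, jac, u, h=0.05)
print(u)
```

For large systems one would not form $\varphi_1(h_n J_n)$ explicitly but rather approximate its action on $f(u_n)$, for instance with the Krylov subspace techniques mentioned in the Applications section.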

Third-order methods

A class of third-order exponential Rosenbrock methods was derived in Hochbruck et al. (2009); a representative scheme, named exprb32, is given by the reduced Butcher tableau:

exprb32:

$$
\begin{array}{c|c}
1 & \\
\hline
 & 2\varphi_3 \\
\hline
 & 0
\end{array}
$$

which reads as

$$U_{n2} = u_n + h_n \varphi_1(h_n J_n) f(u_n),$$
$$u_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n) + 2 h_n \varphi_3(h_n J_n)\, D_{n2},$$

where $D_{n2} = g_n(U_{n2}) - g_n(u_n)$.

For a variable step size implementation of this scheme, one can embed it with the exponential Rosenbrock–Euler scheme:

$$\hat{u}_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n),$$

which corresponds to the embedded weight $\hat{b}_2 = 0$ in the tableau above.
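The scheme and its embedded error estimator fit in a few lines; the sketch below uses dense matrix functions and is meant only to illustrate how the tableau entries $b_2 = 2\varphi_3$ and $\hat{b}_2 = 0$ enter one adaptive step:

```python
import numpy as np
from scipy.linalg import expm, solve

def exprb32_step(f, jac, u, h):
    """One exprb32 step with the embedded Rosenbrock-Euler error estimate.
    Returns (third_order_update, error_estimate)."""
    J = jac(u)
    Z = h * J
    I = np.eye(len(u))
    phi1 = solve(Z, expm(Z) - I)        # phi_1(Z)
    phi2 = solve(Z, phi1 - I)           # phi_2(Z) via the recursion
    phi3 = solve(Z, phi2 - I / 2.0)     # phi_3(Z)

    g = lambda v: f(v) - J @ v          # nonlinear remainder g_n
    U2 = u + h * (phi1 @ f(u))          # stage with c_2 = 1 (Rosenbrock-Euler value)
    D2 = g(U2) - g(u)

    u_hat = U2                          # embedded 2nd-order solution (b̂_2 = 0)
    u_new = U2 + 2.0 * h * (phi3 @ D2)  # 3rd-order solution (b_2 = 2*phi_3)
    return u_new, np.linalg.norm(u_new - u_hat)
```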

Fourth-order ETDRK4 method of Cox and Matthews

Cox and Matthews [5] describe a fourth-order exponential time differencing (ETD) method that they used Maple to derive.

We use their notation, and assume that the unknown function is $u$, and that we have a known solution $u_n$ at time $t_n$. Furthermore, we will make explicit use of a possibly time-dependent right-hand side: $\mathcal{N} = \mathcal{N}(u, t)$.

Three stage values are first constructed:

$$a_n = e^{L h/2}\, u_n + L^{-1}\bigl(e^{L h/2} - I\bigr)\, \mathcal{N}(u_n, t_n),$$
$$b_n = e^{L h/2}\, u_n + L^{-1}\bigl(e^{L h/2} - I\bigr)\, \mathcal{N}(a_n, t_n + h/2),$$
$$c_n = e^{L h/2}\, a_n + L^{-1}\bigl(e^{L h/2} - I\bigr)\, \bigl(2\,\mathcal{N}(b_n, t_n + h/2) - \mathcal{N}(u_n, t_n)\bigr).$$

The final update is given by

$$
u_{n+1} = e^{L h}\, u_n + h^{-2} L^{-3} \Bigl\{
\bigl[-4 - L h + e^{L h}\bigl(4 - 3 L h + (L h)^2\bigr)\bigr]\, \mathcal{N}(u_n, t_n)
+ 2\bigl[2 + L h + e^{L h}(-2 + L h)\bigr]\, \bigl(\mathcal{N}(a_n, t_n + h/2) + \mathcal{N}(b_n, t_n + h/2)\bigr)
+ \bigl[-4 - 3 L h - (L h)^2 + e^{L h}(4 - L h)\bigr]\, \mathcal{N}(c_n, t_n + h)
\Bigr\}.
$$
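For the common case where $L$ is diagonal (for instance after a Fourier spectral discretization), the matrix functions reduce to elementwise operations, and one ETDRK4 step can be sketched as follows. Here `L` is assumed to be a vector of eigenvalues and `N(u, t)` the user-supplied nonlinear term; the coefficients are evaluated naively (see the cancellation discussion below).

```python
import numpy as np

def etdrk4_step(u, t, h, L, N):
    """One ETDRK4 step for u' = L*u + N(u, t) with diagonal L (given as a vector).
    Naive evaluation of the ETD coefficients; inaccurate when |h*L| is near zero."""
    E  = np.exp(h * L)          # e^{Lh}
    E2 = np.exp(h * L / 2.0)    # e^{Lh/2}
    z  = h * L

    # stage values
    a = E2 * u + (E2 - 1.0) / L * N(u, t)
    b = E2 * u + (E2 - 1.0) / L * N(a, t + h / 2.0)
    c = E2 * a + (E2 - 1.0) / L * (2.0 * N(b, t + h / 2.0) - N(u, t))

    # ETDRK4 weights (Cox & Matthews)
    f1 = (-4.0 - z + E * (4.0 - 3.0 * z + z**2)) / (h**2 * L**3)
    f2 = (2.0 + z + E * (-2.0 + z)) / (h**2 * L**3)
    f3 = (-4.0 - 3.0 * z - z**2 + E * (4.0 - z)) / (h**2 * L**3)

    return (E * u
            + f1 * N(u, t)
            + 2.0 * f2 * (N(a, t + h / 2.0) + N(b, t + h / 2.0))
            + f3 * N(c, t + h))
```

For a non-diagonal $L$ the same formulas apply with matrix exponentials and linear solves in place of the elementwise operations.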

If implemented naively, the above algorithm suffers from numerical instabilities due to floating point round-off errors. [10] To see why, consider the first function,

$$\varphi_1(z) = \frac{e^z - 1}{z},$$

which is present in the first-order Euler method, as well as all three stages of ETDRK4. For small values of $z$, this function suffers from numerical cancellation errors. However, these numerical issues can be avoided by evaluating the function via a contour integral approach [10] or by a Padé approximant. [11]
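The cancellation, and the contour-integral remedy of Kassam and Trefethen, can be illustrated directly: because $\varphi_1$ is entire, its value at $z$ equals its mean over a circle centered at $z$, and a circle of radius 1 keeps all evaluation points away from the problematic neighbourhood of 0. The radius and number of quadrature points below are illustrative choices.

```python
import numpy as np

def phi1_naive(z):
    """(e^z - 1)/z evaluated directly -- suffers from cancellation for small |z|."""
    return (np.exp(z) - 1.0) / z

def phi1_contour(z, M=32, r=1.0):
    """phi_1(z) as the mean over M points on a circle of radius r around z
    (mean-value property of analytic functions, cf. Kassam & Trefethen)."""
    theta = (np.arange(M) + 0.5) * 2.0 * np.pi / M
    w = z + r * np.exp(1j * theta)
    return np.mean((np.exp(w) - 1.0) / w).real

z = 1e-9
print(phi1_naive(z))     # contaminated by round-off beyond the 7th digit
print(phi1_contour(z))   # close to the true value 1 + z/2 ≈ 1.0000000005
```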

Applications

Exponential integrators are used for the simulation of stiff scenarios in scientific and visual computing, for example in molecular dynamics, [12] for VLSI circuit simulation, [13] [14] and in computer graphics. [15] They are also applied in the context of hybrid Monte Carlo methods. [16] In these applications, exponential integrators show the advantage of large time-stepping capability and high accuracy. To accelerate the evaluation of matrix functions in such complex scenarios, exponential integrators are often combined with Krylov subspace projection methods.
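As a sketch of that combination, the action $\varphi_1(hA)v$ needed in the schemes above can be approximated from a small Arnoldi basis without ever forming the matrix function; the Krylov dimension and breakdown tolerance below are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1_action_krylov(A, v, h, m=20):
    """Approximate phi_1(h*A) @ v with an m-dimensional Arnoldi/Krylov projection:
    phi_1(h*A) v  ~=  ||v|| * V_m @ phi_1(h*H_m) @ e_1."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta

    for j in range(m):                       # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: basis is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]

    Hm = h * H[:m, :m]
    # dense phi_1 on the small projected matrix (assumes Hm nonsingular)
    phi1_Hm = solve(Hm, expm(Hm) - np.eye(m))
    return beta * V[:, :m] @ phi1_Hm[:, 0]
```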

Notes

  1. Certaine (1960)
  2. Pope (1963)
  3. Hochbruck & Ostermann (2010)
  4. Hochbruck & Ostermann (2005a), Hochbruck & Ostermann (2005b)
  5. Cox & Matthews (2002)
  6. Tokman (2006)
  7. Tokman (2011)
  8. Luan & Ostermann (2014a)
  9. Luan & Ostermann (2013)
  10. Kassam & Trefethen (2005)
  11. Berland, Skaflestad & Wright (2007)
  12. Michels & Desbrun (2015)
  13. Zhuang et al. (2014)
  14. Weng, Chen & Cheng (2012)
  15. Michels, Sobottka & Weber (2014)
  16. Chao et al. (2015)

References