Truncation error (numerical integration)

Truncation errors in numerical integration are of two kinds:

- local truncation errors – the error caused by one iteration, and
- global truncation errors – the cumulative error caused by many iterations.

Definitions

Suppose we have a continuous differential equation

$$y' = f(t, y), \qquad y(t_0) = y_0, \qquad t \ge t_0,$$

and we wish to compute an approximation $y_n$ of the true solution $y(t_n)$ at discrete time steps $t_1, t_2, \ldots, t_N$. For simplicity, assume the time steps are equally spaced:

$$h = t_n - t_{n-1}, \qquad n = 1, 2, \ldots, N.$$

Suppose we compute the sequence $y_n$ with a one-step method of the form

$$y_{n+1} = y_n + h\, A(t_n, y_n, h, f).$$

The function $A$ is called the increment function, and can be interpreted as an estimate of the slope $\frac{y(t_{n+1}) - y(t_n)}{h}$.
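As a concrete illustration, here is a minimal Python sketch of such a one-step method (the names one_step_solve and euler_increment are illustrative, not from the source), using the forward Euler increment function $A(t, y, h, f) = f(t, y)$:

    import math

    def one_step_solve(f, t0, y0, h, n_steps, A):
        """Generic one-step method: y_{n+1} = y_n + h * A(t_n, y_n, h, f)."""
        t, y = t0, y0
        ts, ys = [t0], [y0]
        for _ in range(n_steps):
            y = y + h * A(t, y, h, f)   # advance the solution by one step
            t = t + h
            ts.append(t)
            ys.append(y)
        return ts, ys

    def euler_increment(t, y, h, f):
        """Forward Euler increment function: A(t, y, h, f) = f(t, y)."""
        return f(t, y)

    # Example: y' = y, y(0) = 1, whose exact solution is exp(t).
    ts, ys = one_step_solve(lambda t, y: y, 0.0, 1.0, 0.1, 10, euler_increment)
    print(ys[-1], math.exp(ts[-1]))     # approximate vs. exact value at t = 1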

Local truncation error

The local truncation error is the error that our increment function, $A$, causes during a single iteration, assuming perfect knowledge of the true solution at the previous iteration.

More formally, the local truncation error, $\tau_n$, at step $n$ is computed from the difference between the left- and the right-hand side of the equation for the increment $y_{n+1} \approx y_n + h\, A(t_n, y_n, h, f)$:

$$\tau_{n+1} = y(t_{n+1}) - y(t_n) - h\, A(t_n, y(t_n), h, f).$$ [1] [2]

The numerical method is consistent if the local truncation error is $o(h)$ (this means that for every $\varepsilon > 0$ there exists an $H$ such that $|\tau_n| < \varepsilon h$ for all $h < H$; see little-o notation). If the increment function $A$ is continuous, then the method is consistent if, and only if, $A(t, y, 0, f) = f(t, y)$. [3]

Furthermore, we say that the numerical method has order $p$ if for any sufficiently smooth solution of the initial value problem, the local truncation error is $O(h^{p+1})$ (meaning that there exist constants $C$ and $H$ such that $|\tau_n| \le C h^{p+1}$ for all $h < H$). [4]
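For example, for the forward Euler method the increment function is $A(t_n, y_n, h, f) = f(t_n, y_n)$, and a Taylor expansion of the exact solution about $t_n$ gives (a standard computation, shown here for concreteness)

$$\tau_{n+1} = y(t_{n+1}) - y(t_n) - h\, f(t_n, y(t_n)) = \frac{h^2}{2}\, y''(t_n) + O(h^3),$$

so the Euler method is consistent and has order $p = 1$.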

Global truncation error

The global truncation error is the accumulation of the local truncation error over all of the iterations, assuming perfect knowledge of the true solution at the initial time step.[citation needed]

More formally, the global truncation error, $e_n$, at time $t_n$ is defined by:

$$e_n = y(t_n) - y_n = y(t_n) - \left( y_0 + h\, A(t_0, y_0, h, f) + h\, A(t_1, y_1, h, f) + \cdots + h\, A(t_{n-1}, y_{n-1}, h, f) \right).$$ [5]

The numerical method is convergent if the global truncation error goes to zero as the step size goes to zero; in other words, the numerical solution converges to the exact solution: $\lim_{h \to 0} \max_n |e_n| = 0$. [6]
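As a quick empirical check of convergence (an illustrative sketch for the model problem $y' = y$, $y(0) = 1$ on $[0, 1]$; the helper name euler_max_error is hypothetical), the maximum global error of the forward Euler method should shrink roughly in proportion to $h$:

    import math

    def euler_max_error(n_steps):
        """Solve y' = y, y(0) = 1 on [0, 1] with forward Euler and
        return the step size h and the maximum global error max_n |e_n|."""
        h = 1.0 / n_steps
        t, y = 0.0, 1.0
        max_err = 0.0
        for _ in range(n_steps):
            y += h * y                     # Euler step for f(t, y) = y
            t += h
            max_err = max(max_err, abs(math.exp(t) - y))
        return h, max_err

    for n in (10, 20, 40, 80):
        h, err = euler_max_error(n)
        print(f"h = {h:.4f}   max global error = {err:.2e}")

Halving $h$ roughly halves the error, as expected for a first-order method.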

Relationship between local and global truncation errors

Sometimes it is possible to calculate an upper bound on the global truncation error if we already know the local truncation error. This requires that our increment function be sufficiently well behaved.

The global truncation error satisfies the recurrence relation

$$e_{n+1} = e_n + h\left( A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) \right) + \tau_{n+1}.$$

This follows immediately from the definitions: subtracting the method $y_{n+1} = y_n + h\, A(t_n, y_n, h, f)$ from the identity $y(t_{n+1}) = y(t_n) + h\, A(t_n, y(t_n), h, f) + \tau_{n+1}$, which is just the definition of $\tau_{n+1}$ rearranged, yields the relation above. Now assume that the increment function is Lipschitz continuous in the second argument, that is, there exists a constant $L$ such that for all $t$ and $y_1$ and $y_2$, we have:

$$\left| A(t, y_1, h, f) - A(t, y_2, h, f) \right| \le L \, |y_1 - y_2|.$$

Then the global error satisfies the bound

$$|e_n| \le \frac{\max_j |\tau_j|}{hL} \left( e^{L(t_n - t_0)} - 1 \right).$$ [7]
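To make the bound concrete, consider the Euler method again (a standard specialization, included here for illustration): its increment function is $A = f$, so $L$ can be taken to be the Lipschitz constant of $f$, and $|\tau_j| \le C h^2$ for smooth solutions. The bound then gives

$$|e_n| \le \frac{C h^2}{hL} \left( e^{L(t_n - t_0)} - 1 \right) = \frac{C h}{L} \left( e^{L(t_n - t_0)} - 1 \right) = O(h),$$

so a local error of order $O(h^2)$ translates into a global error of order $O(h)$: one power of $h$ is lost to the accumulation over $O(1/h)$ steps.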

It follows from the above bound for the global error that if the function $f$ in the differential equation is continuous in the first argument and Lipschitz continuous in the second argument (the condition from the Picard–Lindelöf theorem), and the increment function $A$ is continuous in all arguments and Lipschitz continuous in the second argument, then the global error tends to zero as the step size $h$ approaches zero (in other words, the numerical method converges to the exact solution). [8]

Extension to linear multistep methods

Now consider a linear multistep method, given by the formula

$$y_{n+s} + a_{s-1} y_{n+s-1} + \cdots + a_0 y_n = h\left( b_s f(t_{n+s}, y_{n+s}) + b_{s-1} f(t_{n+s-1}, y_{n+s-1}) + \cdots + b_0 f(t_n, y_n) \right).$$

Thus, the next value for the numerical solution is computed according to

$$y_{n+s} = - a_{s-1} y_{n+s-1} - \cdots - a_0 y_n + h\left( b_s f(t_{n+s}, y_{n+s}) + \cdots + b_0 f(t_n, y_n) \right).$$

The next iterate of a linear multistep method depends on the previous $s$ iterates. Thus, in the definition for the local truncation error, it is now assumed that the previous $s$ iterates all correspond to the exact solution:

$$\tau_{n+s} = y(t_{n+s}) + a_{s-1} y(t_{n+s-1}) + \cdots + a_0 y(t_n) - h\left( b_s f(t_{n+s}, y(t_{n+s})) + \cdots + b_0 f(t_n, y(t_n)) \right).$$ [9]

Again, the method is consistent if $\tau_n = o(h)$ and it has order $p$ if $\tau_n = O(h^{p+1})$. The definition of the global truncation error is also unchanged.
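For example, for the two-step Adams–Bashforth method $y_{n+2} = y_{n+1} + h\left( \tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n) \right)$ (here $s = 2$, $a_1 = -1$, $a_0 = 0$, $b_2 = 0$, $b_1 = \tfrac{3}{2}$, $b_0 = -\tfrac{1}{2}$), a Taylor expansion of the exact solution about $t_n$ gives, as a standard computation,

$$\tau_{n+2} = y(t_{n+2}) - y(t_{n+1}) - h\left( \tfrac{3}{2}\, y'(t_{n+1}) - \tfrac{1}{2}\, y'(t_n) \right) = \tfrac{5}{12}\, h^3 y'''(t_n) + O(h^4),$$

so this method has order $p = 2$.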

The relation between local and global truncation errors is slightly different from that in the simpler setting of one-step methods. For linear multistep methods, an additional concept called zero-stability is needed to explain the relation between local and global truncation errors. Linear multistep methods that satisfy the condition of zero-stability have the same relation between local and global errors as one-step methods. In other words, if a linear multistep method is zero-stable and consistent, then it converges. And if a linear multistep method is zero-stable and has local error $\tau_n = O(h^{p+1})$, then its global error satisfies $e_n = O(h^p)$. [10]
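Zero-stability is not a technicality. The following sketch (an illustrative demo, not from the source) applies the classic textbook two-step method $y_{n+2} + 4 y_{n+1} - 5 y_n = h\left( 4 f(t_{n+1}, y_{n+1}) + 2 f(t_n, y_n) \right)$, which is consistent (indeed of order 3) but not zero-stable: its first characteristic polynomial $z^2 + 4z - 5 = (z - 1)(z + 5)$ has a root at $-5$, so the parasitic root amplifies perturbations like $5^n$.

    import math

    def unstable_two_step_error(n_steps):
        """Integrate y' = -y, y(0) = 1 on [0, 1] with the consistent but
        zero-unstable method y_{n+2} + 4*y_{n+1} - 5*y_n = h*(4*f_{n+1} + 2*f_n).
        Returns the global error |y(1) - y_N| at the final time."""
        h = 1.0 / n_steps
        f = lambda t, y: -y          # right-hand side of the ODE
        y0, y1 = 1.0, math.exp(-h)   # exact starting values y(0) and y(h)
        t = h
        for _ in range(n_steps - 1):
            y2 = -4.0 * y1 + 5.0 * y0 + h * (4.0 * f(t, y1) + 2.0 * f(t - h, y0))
            y0, y1 = y1, y2
            t += h
        return abs(math.exp(-1.0) - y1)

    for n in (10, 20, 40):
        # Refining h makes the error worse, not better: the method diverges.
        print(f"n = {n:3d}   error at t = 1: {unstable_two_step_error(n):.3e}")

Even with exact starting values, the local truncation errors and floating-point rounding are multiplied by roughly a factor of 5 per step, so halving the step size makes the computed answer dramatically worse.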

Notes

  1. Gupta, G. K.; Sacks-Davis, R.; Tischer, P. E. (March 1985). "A review of recent developments in solving ODEs". Computing Surveys. 17 (1): 5–47. CiteSeerX 10.1.1.85.783. doi:10.1145/4078.4079.
  2. Süli & Mayers 2003, p. 317, calls $\tau_n / h$ the truncation error.
  3. Süli & Mayers 2003, pp. 321–322
  4. Iserles 1996, p. 8; Süli & Mayers 2003, p. 323
  5. Süli & Mayers 2003, p. 317
  6. Iserles 1996, p. 5
  7. Süli & Mayers 2003, p. 318
  8. Süli & Mayers 2003, p. 322
  9. Süli & Mayers 2003, p. 337, uses a different definition, dividing this essentially by $h$.
  10. Süli & Mayers 2003, p. 340

References

Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.

Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1.