Romberg's method

In numerical analysis, Romberg's method [1] is used to estimate the definite integral

$$\int_a^b f(x)\,dx$$

by applying Richardson extrapolation [2] repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula: it evaluates the integrand at equally spaced points.

The integrand must have continuous derivatives, though fairly good results may be obtained if only a few derivatives exist. If it is possible to evaluate the integrand at unequally spaced points, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally more accurate.

The method is named after Werner Romberg, who published the method in 1955.

Method

Using $h_n = \tfrac{b-a}{2^n}$, the method can be inductively defined by

$$R(0,0) = h_1\,\bigl(f(a) + f(b)\bigr)$$
$$R(n,0) = \tfrac{1}{2}\,R(n-1,0) + h_n \sum_{k=1}^{2^{n-1}} f\bigl(a + (2k-1)h_n\bigr)$$
$$R(n,m) = R(n,m-1) + \frac{1}{4^m - 1}\bigl(R(n,m-1) - R(n-1,m-1)\bigr)$$

where $n \ge m$ and $m \ge 1$. In big O notation, the error for R(n, m) is: [3]

$$O\bigl(h_n^{2m+2}\bigr)$$

The zeroth extrapolation, R(n, 0), is equivalent to the trapezoidal rule with 2^n + 1 points; the first extrapolation, R(n, 1), is equivalent to Simpson's rule with 2^n + 1 points; and the second extrapolation, R(n, 2), is equivalent to Boole's rule with 2^n + 1 points. Further extrapolations, however, differ from Newton–Cotes formulas: they modify Boole's rule only slightly, keeping weight ratios similar to those of Boole's rule, whereas higher-order Newton–Cotes methods produce increasingly divergent weights, eventually including large positive and negative ones. This reflects how Newton–Cotes methods based on high-degree interpolating polynomials fail to converge for many integrals, while Romberg integration is more stable.

By labelling our $O(h^2)$ approximations as $A_0\bigl(\tfrac{h}{2^n}\bigr)$ instead of $R(n,0)$, we can perform Richardson extrapolation with the error formula defined below:

$$\int_a^b f(x)\,dx = A_0\Bigl(\frac{h}{2^n}\Bigr) + a_0\Bigl(\frac{h}{2^n}\Bigr)^2 + a_1\Bigl(\frac{h}{2^n}\Bigr)^4 + a_2\Bigl(\frac{h}{2^n}\Bigr)^6 + \cdots$$

Once we have obtained our $O\bigl(h^{2m+2}\bigr)$ approximations $A_m\bigl(\tfrac{h}{2^n}\bigr)$, we can label them as $R(n,m)$.
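A sketch of the standard cancellation argument shows how one extrapolation step raises the order. Writing $I = \int_a^b f(x)\,dx$, subtracting the full-step expansion from four times the half-step expansion eliminates the $h^2$ term:

```latex
\begin{aligned}
I &= A_0(h) + a_0 h^2 + a_1 h^4 + \cdots \\
I &= A_0\!\left(\tfrac{h}{2}\right) + a_0 \tfrac{h^2}{4} + a_1 \tfrac{h^4}{16} + \cdots \\
\Rightarrow\ I &= \underbrace{\frac{4 A_0\!\left(\tfrac{h}{2}\right) - A_0(h)}{3}}_{A_1\left(\tfrac{h}{2}\right)}
\;-\; \frac{a_1}{4}\,h^4 + \cdots
\end{aligned}
```

The combination $A_1(h/2)$ is accurate to $O(h^4)$; repeating the argument with weight $4^m$ in place of $4$ produces each further column of the triangular array.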

When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch & Stoer (1967).

A geometric example

To estimate the area under a curve, the trapezoid rule is applied first with one piece, then two, then four, and so on.

[Figures: one-piece, two-piece, four-piece, and eight-piece trapezoid approximations. The one-piece approximation yields zero area, since the curve starts and ends at zero.]

After trapezoid rule estimates are obtained, Richardson extrapolation is applied.

Number of pieces | Trapezoid estimates | First iteration          | Second iteration                   | Third iteration
                 |                     | (4 MA − LA)/3            | (16 MA − LA)/15                    | (64 MA − LA)/63
1                | 0                   | (4×16 − 0)/3 = 21.333…   | (16×34.667 − 21.333)/15 = 35.556…  | (64×42.489 − 35.556)/63 = 42.599…
2                | 16                  | (4×30 − 16)/3 = 34.667…  | (16×42 − 34.667)/15 = 42.489…      |
4                | 30                  | (4×39 − 30)/3 = 42       |                                    |
8                | 39                  |                          |                                    |

(MA = more accurate estimate, LA = less accurate estimate.)

Example

As an example, the Gaussian function $\tfrac{2}{\sqrt{\pi}} e^{-x^2}$ is integrated from 0 to 1, i.e. the error function erf(1) ≈ 0.842700792949715 is evaluated. The triangular array is calculated row by row, and the calculation is terminated if the two last entries in the last row differ by less than 10⁻⁸.

0.77174333
0.82526296  0.84310283
0.83836778  0.84273605  0.84271160
0.84161922  0.84270304  0.84270083  0.84270066
0.84243051  0.84270093  0.84270079  0.84270079  0.84270079

The result in the lower right corner of the triangular array is accurate to the digits shown. It is remarkable that this result is derived from the less accurate approximations obtained by the trapezium rule in the first column of the triangular array.

Implementation

Here is an example of a computer implementation of the Romberg method (in the C programming language):

#include <stdio.h>
#include <math.h>

void print_row(size_t i, double *R) {
    printf("R[%2zu] = ", i);
    for (size_t j = 0; j <= i; ++j) {
        printf("%f ", R[j]);
    }
    printf("\n");
}

/*
INPUT:
(*f)      : pointer to the function to be integrated
a         : lower limit
b         : upper limit
max_steps : maximum steps of the procedure
acc       : desired accuracy
OUTPUT:
Rp[max_steps-1]: approximate value of the integral of the function f
                 for x in [a,b] with accuracy 'acc' and steps 'max_steps'.
*/
double romberg(double (*f)(double), double a, double b, size_t max_steps, double acc)
{
    double R1[max_steps], R2[max_steps]; // buffers
    double *Rp = &R1[0], *Rc = &R2[0];   // Rp is previous row, Rc is current row
    double h = b - a;                    // step size
    Rp[0] = (f(a) + f(b)) * h * 0.5;     // first trapezoidal step

    print_row(0, Rp);
    for (size_t i = 1; i < max_steps; ++i) {
        h /= 2.;
        double c = 0;
        size_t ep = 1 << (i - 1); // 2^(i-1)
        for (size_t j = 1; j <= ep; ++j) {
            c += f(a + (2 * j - 1) * h);
        }
        Rc[0] = h * c + .5 * Rp[0]; // R(i,0)
        for (size_t j = 1; j <= i; ++j) {
            double n_k = pow(4, j);
            Rc[j] = (n_k * Rc[j - 1] - Rp[j - 1]) / (n_k - 1); // compute R(i,j)
        }
        // Print ith row of R; R[i,i] is the best estimate so far
        print_row(i, Rc);
        if (i > 1 && fabs(Rp[i - 1] - Rc[i]) < acc) {
            return Rc[i];
        }
        // swap Rp and Rc as we only need the last row
        double *rt = Rp;
        Rp = Rc;
        Rc = rt;
    }
    return Rp[max_steps - 1]; // return our best guess
}


References

Bibliography

  • Richardson, L. F. (1911), "The Approximate Arithmetical Solution by Finite Differences of Physical Problems Involving Differential Equations, with an Application to the Stresses in a Masonry Dam", Philosophical Transactions of the Royal Society A, 210 (459–470): 307–357, doi:10.1098/rsta.1911.0009, JSTOR 90994
  • Romberg, W. (1955), "Vereinfachte numerische Integration", Det Kongelige Norske Videnskabers Selskab Forhandlinger, 28 (7), Trondheim: 30–36
  • Thacher Jr., Henry C. (July 1964), "Remark on Algorithm 60: Romberg integration", Communications of the ACM, 7 (7): 420–421, doi:10.1145/364520.364542
  • Bauer, F. L.; Rutishauser, H.; Stiefel, E. (1963), Metropolis, N. C.; et al. (eds.), "New aspects in numerical quadrature", Experimental Arithmetic, High-Speed Computing and Mathematics, Proceedings of Symposia in Applied Mathematics (15), AMS: 199–218
  • Bulirsch, Roland; Stoer, Josef (1967), "Handbook Series Numerical Integration. Numerical quadrature by extrapolation", Numerische Mathematik, 9: 271–278, doi:10.1007/bf02162420
  • Mysovskikh, I. P. (2002) [1994], "Romberg method", in Hazewinkel, Michiel (ed.), Encyclopedia of Mathematics, Springer-Verlag, ISBN 1-4020-0609-8
  • Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 4.3. Romberg Integration", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8