Ridders' method


In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a continuous function $f(x)$. The method is due to C. Ridders.[1][2]

Ridders' method is simpler than Muller's method or Brent's method but with similar performance.[3] The formula below converges quadratically when the function is well-behaved, which implies that the number of additional significant digits found at each step approximately doubles; but the function has to be evaluated twice for each step, so the overall order of convergence of the method is $\sqrt{2}$. If the function is not well-behaved, the root remains bracketed and the length of the bracketing interval at least halves on each iteration, so convergence is guaranteed.

Method

Given two values of the independent variable, $x_0$ and $x_2$, which are on two different sides of the root being sought, i.e., $f(x_0)f(x_2) < 0$, the method begins by evaluating the function at the midpoint $x_1 = (x_0 + x_2)/2$. One then finds the unique exponential function $e^{ax}$ such that the function $h(x) = f(x)e^{ax}$ satisfies $h(x_1) = \tfrac{1}{2}\bigl(h(x_0) + h(x_2)\bigr)$. Specifically, the parameter $a$ is determined by

$$e^{a(x_1 - x_0)} = \frac{f(x_1) - \operatorname{sign}[f(x_0)]\sqrt{f(x_1)^2 - f(x_0)f(x_2)}}{f(x_2)}.$$

The false position method is then applied to the points $(x_0, h(x_0))$ and $(x_2, h(x_2))$, leading to a new value $x_3$ between $x_0$ and $x_2$,

$$x_3 = x_1 + (x_1 - x_0)\,\frac{\operatorname{sign}[f(x_0)]\, f(x_1)}{\sqrt{f(x_1)^2 - f(x_0) f(x_2)}},$$

which will be used as one of the two bracketing values in the next step of the iteration.

The other bracketing value is taken to be $x_1$ if $f(x_1)f(x_3) < 0$ (the well-behaved case), or otherwise whichever of $x_0$ and $x_2$ has a function value of opposite sign to $f(x_3)$. The procedure can be terminated when a given accuracy is obtained.
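The procedure above translates directly into a short program. Below is a minimal Python sketch of the method as described (function and variable names are illustrative, not taken from the cited references):

```python
import math

def ridders(f, x0, x2, tol=1e-12, max_iter=60):
    """Find a root of f in [x0, x2] by Ridders' method.

    Requires f(x0) and f(x2) to have opposite signs.
    """
    f0, f2 = f(x0), f(x2)
    if f0 * f2 >= 0:
        raise ValueError("root must be bracketed: f(x0)*f(x2) < 0")
    for _ in range(max_iter):
        x1 = 0.5 * (x0 + x2)              # midpoint
        f1 = f(x1)
        d = math.sqrt(f1 * f1 - f0 * f2)  # > 0 because f0*f2 < 0
        # False-position step on the exponentially scaled function h:
        x3 = x1 + (x1 - x0) * math.copysign(1.0, f0) * f1 / d
        f3 = f(x3)
        if f3 == 0 or abs(x2 - x0) < tol:
            return x3
        # Re-bracket: pair x3 with x1 if possible (well-behaved case),
        # otherwise with whichever endpoint has sign opposite to f3.
        if f1 * f3 < 0:
            x0, f0, x2, f2 = x1, f1, x3, f3
        elif f0 * f3 < 0:
            x2, f2 = x3, f3
        else:
            x0, f0 = x3, f3
    return x3

# Example: the positive root of x^2 - 4 (expected 2.0)
print(ridders(lambda x: x * x - 4.0, 1.0, 5.0))
```

Note that the square root in the step is always real and positive, because the bracketing condition guarantees $f(x_0)f(x_2) < 0$.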


Related Research Articles

Numerical analysis Field of mathematics

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Numerical analysis naturally finds application in all fields of engineering and the physical sciences, and in the 21st century the life sciences, social sciences, medicine, business, and even the arts have also adopted elements of scientific computation. The growth in computing power has revolutionized the use of realistic mathematical models in science and engineering, and subtle numerical analysis is required to implement these detailed models of the world. For example, ordinary differential equations appear in celestial mechanics; numerical linear algebra is important for data analysis; and stochastic differential equations and Markov chains are essential in simulating living cells in medicine and biology.

In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function. The most basic version starts with a single-variable function f defined for a real variable x, the function's derivative f′, and an initial guess x0 for a root of f. If the function satisfies sufficient assumptions and the initial guess is close, then $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$ is a better approximation of the root than $x_0$, and the step can be repeated.
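As a concrete illustration, here is a minimal Newton iteration in Python (names and tolerances are illustrative):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example: root of cos(x) - x near 1 (approx. 0.739085)
print(newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0))
```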

In algebra, a quadratic equation is any equation that can be rearranged in standard form as $ax^2 + bx + c = 0$, where $x$ represents an unknown and $a$, $b$, and $c$ represent known numbers, with $a \neq 0$.

In mathematics and computing, a root-finding algorithm is an algorithm for finding zeroes, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeroes of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeroes, expressed either as floating point numbers or as small isolating intervals, or disks for complex roots.

Error function Sigmoid shape special function

In mathematics, the error function, often denoted by erf, is a complex function of a complex variable defined as:

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt.$$

Integration is the basic operation in integral calculus. While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives.

Tetration Repeated or iterated exponentiation

In mathematics, tetration is an operation based on iterated, or repeated, exponentiation. It is the next hyperoperation after exponentiation, but before pentation. The word was coined by Reuben Louis Goodstein from tetra- (four) and iteration.
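For non-negative integer heights the definition is a short loop; a minimal Python sketch (the function name is illustrative):

```python
def tetrate(a, n):
    """n-fold iterated exponentiation: a^(a^(...^a)) with n copies of a."""
    result = 1  # the zeroth tetration is defined as 1
    for _ in range(n):
        result = a ** result
    return result

print(tetrate(2, 4))  # 2^(2^(2^2)) = 65536
```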

Secant method Root-finding method

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years.
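A minimal Python sketch of the secant iteration (names and tolerances are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: Newton's method with f' replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # root of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(secant(lambda x: x**2 - 612, 10.0, 30.0))  # ~24.7386
```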

Bisection method Algorithm for finding a zero of a function

In mathematics, the bisection method is a root-finding method that applies to any continuous functions for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.
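A minimal Python sketch of interval halving (names and tolerances are illustrative):

```python
def bisect(f, a, b, tol=1e-12):
    """Halve [a, b] repeatedly, keeping the half where f changes sign."""
    fa = f(a)
    if fa * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m          # sign change in [a, m]
        else:
            a, fa = m, fm  # sign change in [m, b]
    return 0.5 * (a + b)

print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))  # ~1.5214
```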

In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
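In its modern bracketing form, the method replaces the bisection midpoint with the x-intercept of the chord through the bracketing points; a minimal Python sketch (names and tolerances are illustrative):

```python
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Like bisection, but split [a, b] at the secant-line root, not the midpoint."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)  # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(false_position(lambda x: x**2 - 2, 0.0, 2.0))  # ~1.41421
```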

In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p(x). One of the most useful properties of this method is that it is, from extensive empirical study, very close to being a "sure-fire" method, meaning that it is almost guaranteed to always converge to some root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known that are guaranteed to find all roots, or all real roots.
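A compact sketch of the Laguerre iteration in Python, using NumPy for polynomial evaluation (function names and tolerances are illustrative assumptions, not from any cited source):

```python
import cmath
import numpy as np

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """One root of the polynomial with the given coefficients (highest degree first)."""
    p = np.asarray(coeffs, dtype=complex)
    dp, ddp = np.polyder(p), np.polyder(p, 2)
    n = len(p) - 1  # degree of the polynomial
    x = complex(x0)
    for _ in range(max_iter):
        px = np.polyval(p, x)
        if abs(px) < tol:
            return x
        G = np.polyval(dp, x) / px
        H = G * G - np.polyval(ddp, x) / px
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Pick the sign giving the larger denominator (the smaller, safer step).
        denom = max(G + root, G - root, key=abs)
        x -= n / denom
    return x

# Example: a root of x^3 - 1 (converges to a cube root of unity)
print(laguerre([1, 0, 0, -1], 0.5 + 0.5j))
```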

In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker. Consequently, the method is also known as the Brent–Dekker method.
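Rather than re-implementing the hybrid logic, one would typically call an existing implementation; for example, SciPy exposes a Brent-Dekker root finder as scipy.optimize.brentq:

```python
from scipy.optimize import brentq  # SciPy's Brent-Dekker implementation

# Root of x^3 - 2x - 5 in [2, 3] (a classic test problem, ~2.09455)
root = brentq(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)
```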

Golden-section search

The golden-section search is a technique for finding an extremum of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema, it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio 2 − φ : 2φ − 3 : 2 − φ, where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953).
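A minimal Python sketch of the narrowing step (for clarity this version re-evaluates both interior points each iteration, whereas a careful implementation reuses one stored function value and so needs only one new evaluation per step):

```python
import math

INVPHI = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618

def golden_section_min(f, a, b, tol=1e-8):
    """Minimum of a unimodal f on [a, b], shrinking the interval by 1/phi per step."""
    c = b - INVPHI * (b - a)
    d = a + INVPHI * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):   # minimum lies in [a, d]
            b, d = d, c
            c = b - INVPHI * (b - a)
        else:             # minimum lies in [c, b]
            a, c = c, d
            d = a + INVPHI * (b - a)
    return 0.5 * (a + b)

print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))  # ~2.0
```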

Methods of computing square roots are numerical analysis algorithms for finding the principal, or non-negative, square root of a real number. Arithmetically, it means given S, a procedure for finding a number which when multiplied by itself yields S; algebraically, it means a procedure for finding the non-negative root of the equation $x^2 - S = 0$; geometrically, it means given the area of a square, a procedure for constructing a side of the square.
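The classical Babylonian (Heron's) iteration, which is Newton's method applied to $x^2 - S = 0$, illustrates this; a minimal Python sketch:

```python
def heron_sqrt(S, tol=1e-12):
    """Babylonian (Heron's) method: Newton's iteration applied to x^2 - S = 0."""
    if S <= 0:
        raise ValueError("S must be positive")
    x = S if S > 1 else 1.0  # any positive starting guess converges
    while abs(x * x - S) > tol * S:
        x = 0.5 * (x + S / x)  # average the guess with S / guess
    return x

print(heron_sqrt(2.0))  # ~1.41421356
```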

Newton fractal

The Newton fractal is a boundary set in the complex plane which is characterized by Newton's method applied to a fixed polynomial or transcendental function. It is the Julia set of the meromorphic function $z \mapsto z - \frac{p(z)}{p'(z)}$ which is given by Newton's method. When there are no attractive cycles, it divides the complex plane into regions, each of which is associated with one root of the polynomial. In this way the Newton fractal is similar to the Mandelbrot set, and like other fractals it exhibits an intricate appearance arising from a simple description. It is relevant to numerical analysis because it shows that the Newton method can be very sensitive to its choice of start point.
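A minimal NumPy sketch that labels each starting point of a grid by the root of $p(z) = z^3 - 1$ it converges to (grid size and iteration count are arbitrary illustrative choices):

```python
import numpy as np

# Iterate z <- z - p(z)/p'(z) for p(z) = z^3 - 1 from every grid point.
n = 400
re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
z = re + 1j * im
with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(40):
        z = z - (z**3 - 1) / (3 * z**2)
roots = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of unity
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)  # 0, 1, or 2 per pixel
# `basin` can now be rendered, e.g. with matplotlib.pyplot.imshow(basin).
```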

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926. Its early form was known to Seki Kōwa and was applied to the rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.
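A minimal Python sketch of the transform, applied to the slowly converging Leibniz series for π/4 (the example is illustrative):

```python
def aitken(seq):
    """Aitken's delta-squared transform: x_n - (dx_n)^2 / (d2x_n)."""
    return [
        seq[i] - (seq[i + 1] - seq[i]) ** 2 / (seq[i + 2] - 2 * seq[i + 1] + seq[i])
        for i in range(len(seq) - 2)
    ]

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... converge linearly;
# the transformed sequence approaches pi/4 far more quickly.
partial, s = [], 0.0
for k in range(10):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)
print(partial[-1] * 4, aitken(partial)[-1] * 4)  # ~3.0418 vs. ~3.1416
```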

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
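A minimal sketch of the classic Robbins-Monro scheme with step sizes $a_n = 1/n$, here applied to a toy root-finding problem observed through noise (all names and parameters are illustrative assumptions):

```python
import random

def robbins_monro(noisy_f, theta0, target=0.0, n_steps=10000):
    """Robbins-Monro: theta <- theta - a_n * (noisy_f(theta) - target),
    with a_n = 1/n, so that sum a_n = inf and sum a_n^2 < inf."""
    theta = theta0
    for n in range(1, n_steps + 1):
        theta -= (noisy_f(theta) - target) / n
    return theta

# Example: find the root of f(x) = x - 3 observed with Gaussian noise.
print(robbins_monro(lambda x: x - 3 + random.gauss(0, 0.5), 0.0))  # ~3.0
```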

In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s.
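The one-stage member of the family (order 2) is the implicit midpoint rule; a minimal Python sketch that resolves the implicit stage equation by fixed-point iteration (step size and iteration counts are illustrative):

```python
def implicit_midpoint_step(f, t, y, h, fixed_point_iters=10):
    """One step of the s = 1 Gauss-Legendre method (implicit midpoint rule):
    solve k = f(t + h/2, y + (h/2) k) by fixed-point iteration, then advance."""
    k = f(t, y)  # initial guess for the stage value
    for _ in range(fixed_point_iters):
        k = f(t + h / 2, y + h / 2 * k)
    return y + h * k

# Example: integrate y' = -y, y(0) = 1 to t = 1 (exact value e^-1 ~ 0.3679)
y, h = 1.0, 0.01
for i in range(100):
    y = implicit_midpoint_step(lambda t, y: -y, i * h, y, h)
print(y)
```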

In numerical analysis, the ITP method, short for Interpolate, Truncate and Project, is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation-based and hybrid strategies, since it not only converges superlinearly over well-behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail.

In mathematics, the matrix sign function is a matrix function on square matrices analogous to the complex sign function.
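One standard way to compute it is the Newton iteration $X \leftarrow \tfrac{1}{2}(X + X^{-1})$; a minimal NumPy sketch (assuming the input matrix has no eigenvalues on the imaginary axis):

```python
import numpy as np

def matrix_sign(A, iters=50):
    """Newton's iteration for the matrix sign function: X <- (X + X^-1)/2."""
    X = np.asarray(A, dtype=float)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

# Eigenvalues 3 and -2 map to +1 and -1; the result S satisfies S @ S = I.
A = np.array([[3.0, 1.0], [0.0, -2.0]])
print(matrix_sign(A))  # [[1.0, 0.4], [0.0, -1.0]]
```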

References

  1. Ridders, C. (1979). "A new algorithm for computing a single root of a real continuous function". IEEE Transactions on Circuits and Systems. 26: 979–980. doi:10.1109/TCS.1979.1084580.
  2. Kiusalaas, Jaan (2010). Numerical Methods in Engineering with Python (2nd ed.). Cambridge University Press. pp. 146–150. ISBN 978-0-521-19132-6.
  3. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 9.2.1. Ridders' Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.