Brent's method

In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent [1] and builds on an earlier algorithm by Theodorus Dekker. [2] Consequently, the method is also known as the Brent–Dekker method.

Modern improvements on Brent's method include Chandrupatla's method, which is simpler and faster for functions that are flat around their roots; [3] [4] Ridders' method, which performs exponential interpolation instead of quadratic, yielding a simpler closed-form formula for the iterations; and the ITP method, which is a hybrid between regula falsi and bisection that achieves optimal worst-case and asymptotic guarantees.

Dekker's method

The idea to combine the bisection method with the secant method goes back to Dekker (1969).

Suppose that we want to solve the equation f(x) = 0. As with the bisection method, we need to initialize Dekker's method with two points, say a0 and b0, such that f(a0) and f(b0) have opposite signs. If f is continuous on [a0, b0], the intermediate value theorem guarantees the existence of a solution between a0 and b0.

Three points are involved in every iteration: bk, the current iterate, i.e., the current guess for the root of f; ak, the "contrapoint", i.e., a point such that f(ak) and f(bk) have opposite signs, so the interval [ak, bk] contains the solution (furthermore, |f(bk)| should be less than or equal to |f(ak)|, so that bk is the better guess for the unknown solution); and bk−1, the previous iterate (for the first iteration, we set bk−1 = a0).

Two provisional values for the next iterate are computed. The first one is given by linear interpolation, also known as the secant method:

s = bk − f(bk) (bk − bk−1) / (f(bk) − f(bk−1))   (if f(bk) ≠ f(bk−1); otherwise s = m is used)

and the second one is given by the bisection method:

m = (ak + bk) / 2.
If the result of the secant method, s, lies strictly between bk and m, then it becomes the next iterate (bk+1 = s), otherwise the midpoint is used (bk+1 = m).

Then, the value of the new contrapoint is chosen such that f(ak+1) and f(bk+1) have opposite signs. If f(ak) and f(bk+1) have opposite signs, then the contrapoint remains the same: ak+1 = ak. Otherwise, f(bk+1) and f(bk) have opposite signs, so the new contrapoint becomes ak+1 = bk.

Finally, if |f(ak+1)| < |f(bk+1)|, then ak+1 is probably a better guess for the solution than bk+1, and hence the values of ak+1 and bk+1 are exchanged.

This ends the description of a single iteration of Dekker's method.
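For concreteness, a single pass of this iteration can be sketched in Python as follows; the function name dekker and the arguments tol and max_iter are illustrative and not part of the original description.

    def dekker(f, a, b, tol=1e-12, max_iter=100):
        """Sketch of Dekker's method; assumes f(a) and f(b) have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb >= 0:
            raise ValueError("root is not bracketed")
        if abs(fa) < abs(fb):              # b should be the better guess
            a, b, fa, fb = b, a, fb, fa
        b_prev, fb_prev = a, fa            # previous iterate b_{k-1}, initialised to a0
        for _ in range(max_iter):
            if fb == 0 or abs(b - a) < tol:
                return b
            m = (a + b) / 2                # bisection candidate
            if fb != fb_prev:
                s = b - fb * (b - b_prev) / (fb - fb_prev)   # secant candidate
            else:
                s = m
            b_new = s if min(b, m) < s < max(b, m) else m    # accept s only if strictly between b and m
            fb_new = f(b_new)
            if fa * fb_new < 0:            # keep the old contrapoint if it still brackets
                a_new, fa_new = a, fa
            else:                          # otherwise the old iterate becomes the contrapoint
                a_new, fa_new = b, fb
            if abs(fa_new) < abs(fb_new):  # swap so that b remains the better guess
                a_new, b_new, fa_new, fb_new = b_new, a_new, fb_new, fa_new
            b_prev, fb_prev = b, fb
            a, b, fa, fb = a_new, b_new, fa_new, fb_new
        return b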

Dekker's method performs well if the function f is reasonably well-behaved. However, there are circumstances in which every iteration employs the secant method, but the iterates bk converge very slowly (in particular, |bk − bk−1| may be arbitrarily small). Dekker's method requires far more iterations than the bisection method in this case.

Brent's method

Brent (1973) proposed a small modification to avoid the problem with Dekker's method. He inserts an additional test which must be satisfied before the result of the secant method is accepted as the next iterate. Two inequalities must be simultaneously satisfied:

Given a specific numerical tolerance δ, if the previous step used the bisection method, the inequality |δ| < |bk − bk−1| must hold to perform interpolation, otherwise the bisection method is performed and its result used for the next iteration.

If the previous step performed interpolation, then the inequality |δ| < |bk−1 − bk−2| is used instead to choose the next action: interpolation (when the inequality is true) or bisection (when the inequality is not true).

Also, if the previous step used the bisection method, the inequality |s − bk| < |bk − bk−1| / 2 must hold, otherwise the bisection method is performed and its result used for the next iteration. If the previous step performed interpolation, then the inequality |s − bk| < |bk−1 − bk−2| / 2 is used instead.

This modification ensures that at the kth iteration, a bisection step will be performed in at most 2·log2(|bk−1 − bk−2| / δ) additional iterations, because the above conditions force consecutive interpolation step sizes to halve every two iterations, and after at most 2·log2(|bk−1 − bk−2| / δ) iterations, the step size will be smaller than δ, which invokes a bisection step. Brent proved that his method requires at most N² iterations, where N denotes the number of iterations for the bisection method. If the function f is well-behaved, then Brent's method will usually proceed by either inverse quadratic or linear interpolation, in which case it will converge superlinearly.
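As a rough numerical illustration of these bounds, assuming an initial bracket of width 1 and a tolerance δ = 10⁻¹² (the figures below follow directly from the quantities defined above):

    import math

    width, delta = 1.0, 1e-12
    N = math.ceil(math.log2(width / delta))   # bisection iterations needed to shrink the bracket below delta
    print(N)                                  # 40
    print(N ** 2)                             # 1600, Brent's worst-case iteration bound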

Furthermore, Brent's method uses inverse quadratic interpolation instead of linear interpolation (as used by the secant method). If f(bk), f(ak) and f(bk−1) are distinct, it slightly increases the efficiency. As a consequence, the condition for accepting s (the value proposed by either linear interpolation or inverse quadratic interpolation) has to be changed: s has to lie between (3ak + bk) / 4 and bk.

Algorithm

input a, b, and (a pointer to) a function for f
calculate f(a)
calculate f(b)
if f(a)f(b) ≥ 0 then
    exit function because the root is not bracketed.
end if
if |f(a)| < |f(b)| then
    swap (a, b)
end if
c := a
set mflag
repeat until f(b or s) = 0 or |b − a| is small enough (convergence)
    if f(a) ≠ f(c) and f(b) ≠ f(c) then
        s := af(b)f(c) / ((f(a) − f(b))(f(a) − f(c))) + bf(a)f(c) / ((f(b) − f(a))(f(b) − f(c))) + cf(a)f(b) / ((f(c) − f(a))(f(c) − f(b)))  (inverse quadratic interpolation)
    else
        s := b − f(b)(b − a) / (f(b) − f(a))  (secant method)
    end if
    if (condition 1) s is not between (3a + b)/4 and b, or
       (condition 2) (mflag is set and |s − b| ≥ |b − c|/2), or
       (condition 3) (mflag is cleared and |s − b| ≥ |c − d|/2), or
       (condition 4) (mflag is set and |b − c| < |δ|), or
       (condition 5) (mflag is cleared and |c − d| < |δ|)
    then
        s := (a + b)/2  (bisection method)
        set mflag
    else
        clear mflag
    end if
    calculate f(s)
    d := c  (d is assigned for the first time here; it won't be used above on the first iteration because mflag is set)
    c := b
    if f(a)f(s) < 0 then
        b := s
    else
        a := s
    end if
    if |f(a)| < |f(b)| then
        swap (a, b)
    end if
end repeat
output b or s (return the root)
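The pseudocode above translates fairly directly into Python. The following is a minimal sketch rather than a production implementation; the name brent and the arguments tol and delta (the numerical tolerance δ) are illustrative.

    def brent(f, a, b, tol=1e-12, delta=1e-12, max_iter=200):
        """Sketch of the Brent–Dekker iteration described above."""
        fa, fb = f(a), f(b)
        if fa * fb >= 0:
            raise ValueError("root is not bracketed")
        if abs(fa) < abs(fb):
            a, b, fa, fb = b, a, fb, fa
        c, fc = a, fa          # c plays the role of the previous iterate b_{k-1}
        d = c                  # d is the iterate before that, b_{k-2}
        mflag = True           # True if the last step was a bisection
        for _ in range(max_iter):
            if fb == 0 or abs(b - a) < tol:
                return b
            if fa != fc and fb != fc:
                # inverse quadratic interpolation
                s = (a * fb * fc / ((fa - fb) * (fa - fc))
                     + b * fa * fc / ((fb - fa) * (fb - fc))
                     + c * fa * fb / ((fc - fa) * (fc - fb)))
            else:
                # secant method
                s = b - fb * (b - a) / (fb - fa)
            cond1 = not (min((3 * a + b) / 4, b) < s < max((3 * a + b) / 4, b))
            cond2 = mflag and abs(s - b) >= abs(b - c) / 2
            cond3 = (not mflag) and abs(s - b) >= abs(c - d) / 2
            cond4 = mflag and abs(b - c) < delta
            cond5 = (not mflag) and abs(c - d) < delta
            if cond1 or cond2 or cond3 or cond4 or cond5:
                s = (a + b) / 2            # fall back to bisection
                mflag = True
            else:
                mflag = False
            fs = f(s)
            d, c, fc = c, b, fb            # shift the history: d := c, c := b
            if fa * fs < 0:
                b, fb = s, fs
            else:
                a, fa = s, fs
            if abs(fa) < abs(fb):          # keep b as the better guess
                a, b, fa, fb = b, a, fb, fa
        return b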

Example

Suppose that we are seeking a zero of the function defined by f(x) = (x + 3)(x − 1)².

We take [a0, b0] = [−4, 4/3] as our initial interval.

We have f(a0) = −25 and f(b0) = 0.48148 (all numbers in this section are rounded), so the conditions f(a0) f(b0) < 0 and |f(b0)| ≤ |f(a0)| are satisfied.

[Graph of f(x) = (x + 3)(x − 1)²]
  1. In the first iteration, we use linear interpolation between (b−1, f(b−1)) = (a0, f(a0)) = (−4, −25) and (b0, f(b0)) = (1.33333, 0.48148), which yields s = 1.23256. This lies between (3a0 + b0) / 4 and b0, so this value is accepted. Furthermore, f(1.23256) = 0.22891, so we set a1 = a0 and b1 = s = 1.23256.
  2. In the second iteration, we use inverse quadratic interpolation between (a1, f(a1)) = (−4, −25) and (b0, f(b0)) = (1.33333, 0.48148) and (b1, f(b1)) = (1.23256, 0.22891). This yields 1.14205, which lies between (3a1 + b1) / 4 and b1. Furthermore, the inequality |1.14205 − b1| ≤ |b0 − b−1| / 2 is satisfied, so this value is accepted. Furthermore, f(1.14205) = 0.083582, so we set a2 = a1 and b2 = 1.14205.
  3. In the third iteration, we use inverse quadratic interpolation between (a2, f(a2)) = (−4, −25) and (b1, f(b1)) = (1.23256, 0.22891) and (b2, f(b2)) = (1.14205, 0.083582). This yields 1.09032, which lies between (3a2 + b2) / 4 and b2. But here Brent's additional condition kicks in: the inequality |1.09032 − b2| ≤ |b1 − b0| / 2 is not satisfied, so this value is rejected. Instead, the midpoint m = −1.42897 of the interval [a2, b2] is computed. We have f(m) = 9.26891, so we set a3 = a2 and b3 = −1.42897.
  4. In the fourth iteration, we use inverse quadratic interpolation between (a3, f(a3)) = (−4, −25) and (b2, f(b2)) = (1.14205, 0.083582) and (b3, f(b3)) = (−1.42897, 9.26891). This yields 1.15448, which is not in the interval between (3a3 + b3) / 4 and b3. Hence, it is replaced by the midpoint m = −2.71449. We have f(m) = 3.93934, so we set a4 = a3 and b4 = −2.71449.
  5. In the fifth iteration, inverse quadratic interpolation yields −3.45500, which lies in the required interval. However, the previous iteration was a bisection step, so the inequality |−3.45500 − b4| ≤ |b4 − b3| / 2 needs to be satisfied. This inequality is false, so we use the midpoint m = −3.35724. We have f(m) = −6.78239, so m becomes the new contrapoint (a5 = −3.35724) and the iterate remains the same (b5 = b4).
  6. In the sixth iteration, we cannot use inverse quadratic interpolation because b5 = b4. Hence, we use linear interpolation between (a5, f(a5)) = (−3.35724, −6.78239) and (b5, f(b5)) = (−2.71449, 3.93934). The result is s = −2.95064, which satisfies all the conditions. But since the iterate did not change in the previous step, we reject this result and fall back to bisection. We update s = −3.03587, and f(s) = −0.58418.
  7. In the seventh iteration, we can again use inverse quadratic interpolation. The result is s = −3.00219, which satisfies all the conditions. Now, f(s) = −0.03515, so we set a7 = b6 and b7 = −3.00219 (a7 and b7 are exchanged so that the condition |f(b7)| ≤ |f(a7)| is satisfied). (Correction: this step actually uses linear interpolation.)
  8. In the eighth iteration, we cannot use inverse quadratic interpolation because a7 = b6. Linear interpolation yields s = −2.99994, which is accepted.
  9. In the following iterations, the root x = −3 is approached rapidly: b9 = −3 + 6·10⁻⁸ and b10 = −3 − 3·10⁻¹⁵. (Correction: iteration 9 gives f(s) = −1.4 × 10⁻⁷ and iteration 10 gives f(s) = 6.96 × 10⁻¹².)
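As a rough numerical check of this walkthrough, the Python sketch given after the pseudocode (the illustrative brent function) can be applied to the same function and bracket:

    f = lambda x: (x + 3) * (x - 1) ** 2

    root = brent(f, -4, 4 / 3)     # same bracket as the example above
    print(root)                    # approximately -3.0
    print(f(root))                 # approximately 0.0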

Implementations
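For example, SciPy exposes Brent's method as scipy.optimize.brentq. A minimal usage sketch for the function from the example above (the xtol value shown is illustrative):

    from scipy.optimize import brentq

    f = lambda x: (x + 3) * (x - 1) ** 2

    # brentq requires a sign change on [a, b]; here f(-4) < 0 < f(4/3)
    root = brentq(f, -4, 4 / 3, xtol=1e-12)
    print(root)    # close to -3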

Related Research Articles

The Gauss–Legendre algorithm is an algorithm to compute the digits of π. It is notable for being rapidly convergent, with only 25 iterations producing 45 million correct digits of π. However, it has some drawbacks and therefore all record-breaking calculations for many years have used other methods, almost always the Chudnovsky algorithm. For details, see Chronology of computation of π.


In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.


In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function. The most basic version starts with a real-valued function f, its derivative f ′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then x1 = x0 − f(x0) / f ′(x0) is a better approximation of the root than x0.
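A minimal Python sketch of this basic iteration (the name newton and the tolerance are illustrative):

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Basic Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x = x - fx / df(x)
        return x

    # Example: the square root of 2 as a root of x^2 - 2
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # approximately 1.41421356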

In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction, the iteration/recursion is terminated after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence, other than the first, must be positive. The integers are called the coefficients or terms of the continued fraction.

In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).

In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.


In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years.


In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.
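A minimal Python sketch of this procedure (the name bisect and the tolerance are illustrative):

    def bisect(f, a, b, tol=1e-12):
        """Repeatedly halve [a, b], keeping the half on which f changes sign."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while abs(b - a) / 2 > tol:
            m = (a + b) / 2
            fm = f(m)
            if fa * fm <= 0:      # the sign change is in [a, m]
                b, fb = m, fm
            else:                 # the sign change is in [m, b]
                a, fa = m, fm
        return (a + b) / 2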

In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.


In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function f, which are solutions to the equation f(x) = 0. As such, Newton's method can be applied to the derivative f ′ of a twice-differentiable function f to find the roots of the derivative (solutions to f ′(x) = 0), also known as the critical points of f. These solutions may be minima, maxima, or saddle points; see section "Several variables" in Critical point (mathematics) and also section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function f.

In numerical analysis, inverse quadratic interpolation is a root-finding algorithm, meaning that it is an algorithm for solving equations of the form f(x) = 0. The idea is to use quadratic interpolation to approximate the inverse of f. This algorithm is rarely used on its own, but it is important because it forms part of the popular Brent's method.

In optimization, line search is a basic iterative approach to find a local minimum of an objective function. It first finds a descent direction along which the objective function will be reduced, and then computes a step size that determines how far to move along that direction. The descent direction can be computed by various methods, such as gradient descent or quasi-Newton methods. The step size can be determined either exactly or inexactly.

Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations via a generalized secant method.

Limited-memory BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function.

Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable, but not necessarily convex.

Quasi-Newton methods are methods used to find either zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration. The "full" Newton's method requires the Jacobian in order to search for zeros, or the Hessian for finding extrema. Some iterative methods that reduce to Newton's method, such as SLSQP, may be considered quasi-Newtonian.

Sidi's generalized secant method is a root-finding algorithm, that is, a numerical method for solving equations of the form f(x) = 0. The method was published by Avram Sidi.

In numerical analysis, the ITP method, short for Interpolate Truncate and Project, is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation and hybrid based strategies, since it not only converges super-linearly over well behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail.

References

  1. Brent 1973
  2. Dekker 1969
  3. Chandrupatla, Tirupathi R. (1997). "A new hybrid quadratic/Bisection algorithm for finding the zero of a nonlinear function without using derivatives". Advances in Engineering Software. 28 (3): 145–149. doi:10.1016/S0965-9978(96)00051-8.
  4. "Ten Little Algorithms, Part 5: Quadratic Extremum Interpolation and Chandrupatla's Method - Jason Sachs".

Further reading