Secant method


Figure: The first two iterations of the secant method. The red curve shows the function f, and the blue lines are the secants. For this particular case, the secant method will not converge to the visible root.

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years. [1]


The method

For finding a zero of a function f, the secant method is defined by the recurrence relation

x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})} = \frac{x_{n-2} f(x_{n-1}) - x_{n-1} f(x_{n-2})}{f(x_{n-1}) - f(x_{n-2})}.

As can be seen from this formula, two initial values x0 and x1 are required. Ideally, they should be chosen close to the desired zero.
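
For a concrete illustration, one step of this recurrence applied to f(x) = x2 − 612 with starting values x0 = 10 and x1 = 30 (the same function and starting points as in the computational example below) gives

x_2 = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)} = 30 - 288 \cdot \frac{30 - 10}{288 - (-512)} = 30 - 7.2 = 22.8,

which is already closer to the root √612 ≈ 24.7386 than either starting value.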

Derivation of the method

Starting with initial values x0 and x1, we construct a line through the points (x0, f(x0)) and (x1, f(x1)), as shown in the picture above. In slope–intercept form, the equation of this line is

y = \frac{f(x_1) - f(x_0)}{x_1 - x_0} (x - x_1) + f(x_1).

The root of this linear function, that is, the value of x such that y = 0, is

x = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)}.

We then use this new value of x as x2 and repeat the process, using x1 and x2 instead of x0 and x1. We continue this process, solving for x3, x4, and so on, until we reach a sufficiently high level of precision (a sufficiently small difference between xn and xn−1):

x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})}.

Convergence

The iterates x_n of the secant method converge to a root of f if the initial values x0 and x1 are sufficiently close to the root. The order of convergence is φ, where

\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618

is the golden ratio. This convergence is superlinear but subquadratic.

This convergence order only holds under some technical conditions, namely that f is twice continuously differentiable and the root in question is a simple root (i.e., it has multiplicity 1).

If the initial values are not close enough to the root, then there is no guarantee that the secant method converges at all. There is no general definition of "close enough", but the criterion for convergence has to do with how "wiggly" the function is on the interval [x0, x1]. For example, if f is differentiable on that interval and there is a point where f′ = 0 on the interval, then the algorithm may not converge.
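
As a rough empirical check of this order (a sketch only; the helper secant_iterates is written just for this illustration and is not part of the computational example below), the order p can be estimated from successive errors of the iterates for f(x) = x2 − 612:

import math

def f_example(x):
    return x ** 2 - 612

def secant_iterates(f, x0, x1, n):
    """Return [x0, x1, x2, ...] produced by the secant recurrence (at most n steps)."""
    xs = [x0, x1]
    for _ in range(n):
        denominator = f(x1) - f(x0)
        if denominator == 0:  # stop once the iterates have effectively converged
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / denominator
        xs.append(x1)
    return xs

root = math.sqrt(612)
errors = [abs(x - root) for x in secant_iterates(f_example, 10, 30, 7)]

# If e_{n+1} ≈ C · e_n^p, then p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    if e0 > e1 > e2 > 0:
        print(math.log(e2 / e1) / math.log(e1 / e0))

The printed estimates approach the golden ratio, roughly 1.618, until rounding error takes over.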

Comparison with other root-finding methods

The secant method does not require that the root remain bracketed, as the bisection method does, and hence it does not always converge. The false position method (or regula falsi) uses the same formula as the secant method. However, it does not apply the formula to x_{n−1} and x_n, like the secant method, but to x_{n−1} and to the last iterate x_k such that f(x_k) and f(x_{n−1}) have a different sign. This means that the false position method always converges, but only with a linear order of convergence. Bracketing with a superlinear order of convergence, as in the secant method, can be attained with improvements to the false position method (see Regula falsi § Improvements in regula falsi) such as the ITP method or the Illinois method.
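
To make the bracketing behaviour concrete, here is a minimal sketch of the plain false position update (not the improved Illinois or ITP variants); the name false_position and the tolerance are illustrative choices, not part of the original article:

def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi: the same update formula as the secant method, but the new
    point always replaces the endpoint whose function value has the same sign,
    so the root stays bracketed between a and b."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "the initial interval must bracket a root"
    c = b
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant / false-position formula
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:        # the root lies in [a, c]: keep a, move b
            b, fb = c, fc
        else:                  # the root lies in [c, b]: keep b, move a
            a, fa = c, fc
    return c

print(false_position(lambda x: x ** 2 - 612, 10, 30))  # approximately 24.7386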

The recurrence formula of the secant method can be derived from the formula for Newton's method

x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}

by using the finite-difference approximation, for small x_{n−1} − x_{n−2}:

f'(x_{n-1}) \approx \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}}.

The secant method can be interpreted as a method in which the derivative is replaced by an approximation and is thus a quasi-Newton method.

If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against φ ≈ 1.6). However, Newton's method requires the evaluation of both f and its derivative f′ at every step, while the secant method only requires the evaluation of f. Therefore, the secant method may occasionally be faster in practice. For instance, if we assume that evaluating f takes as much time as evaluating its derivative and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor of φ² ≈ 2.6) for the same cost as one step of Newton's method (decreasing the logarithm of the error by a factor of 2), so the secant method is faster. If, however, we consider parallel processing for the evaluation of the derivative, Newton's method proves its worth, being faster in time, though still spending more steps.
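
The two updates can be placed side by side in a short sketch (the names newton_step and secant_step are illustrative, and the derivative f′(x) = 2x of the example function is supplied analytically):

def f(x):
    return x ** 2 - 612

def f_prime(x):          # analytic derivative, needed only by Newton's method
    return 2 * x

def newton_step(x):
    # Newton's method: x_new = x - f(x) / f'(x)
    return x - f(x) / f_prime(x)

def secant_step(x_prev, x):
    # Secant method: the derivative is replaced by a finite difference
    return x - f(x) * (x - x_prev) / (f(x) - f(x_prev))

x_newton = 30.0
x_prev, x_sec = 10.0, 30.0
for _ in range(5):
    x_newton = newton_step(x_newton)
    x_prev, x_sec = x_sec, secant_step(x_prev, x_sec)
    print(f"Newton: {x_newton:.12f}   secant: {x_sec:.12f}")

Both sequences approach the same root, with Newton's method needing fewer steps but one derivative evaluation per step.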

Generalization

Broyden's method is a generalization of the secant method to more than one dimension.
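
As a rough, illustrative sketch of that generalization (the example system F, the starting point, and the identity initial Jacobian guess are all chosen just for this demonstration), Broyden's "good" update can be written with NumPy as follows:

import numpy as np

def F(x):
    """Example system: intersection of the circle x^2 + y^2 = 4 with the line y = x."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[1] - x[0]])

def broyden(F, x0, B0, tol=1e-10, max_iter=50):
    """Broyden's ("good") method: a multidimensional analogue of the secant
    method, updating an approximation B of the Jacobian from observed steps."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x_new = x + dx
        Fx_new = F(x_new)
        if np.linalg.norm(Fx_new) < tol:
            return x_new
        dF = Fx_new - Fx
        # Rank-one secant update enforcing B_new @ dx = dF
        B = B + np.outer(dF - B @ dx, dx) / (dx @ dx)
        x, Fx = x_new, Fx_new
    return x

# Expected to approach (sqrt(2), sqrt(2)) ≈ (1.414, 1.414)
print(broyden(F, x0=[1.0, 2.0], B0=np.eye(2)))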

The graph below, produced by the computational example that follows, shows the function f in red and the last secant line in bold blue. In the graph, the x-intercept of the secant line appears to be a good approximation of the root of f.

Figure: Secant method example code result.

Computational example

Below, the secant method is implemented in the Python programming language.

It is then applied to find a root of the function f(x) = x2 − 612, with initial points x0 = 10 and x1 = 30.

def secant_method(f, x0, x1, iterations):
    """Return the root calculated using the secant method."""
    for i in range(iterations):
        x2 = x1 - f(x1) * (x1 - x0) / float(f(x1) - f(x0))
        x0, x1 = x1, x2
        # Apply a stopping criterion here (see below)
    return x2

def f_example(x):
    return x ** 2 - 612

root = secant_method(f_example, 10, 30, 5)

print(f"Root: {root}")  # Root: 24.738633748750722

It is very important to have a good stopping criterion above; otherwise, due to the limited numerical precision of floating-point numbers, the algorithm can return inaccurate results if run for too many iterations. For example, the loop above can stop when the first of these conditions is met: abs(x0 - x1) < tol, or abs(x0/x1 - 1) < tol, or abs(f(x1)) < tol. [2]
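
A sketch of how such a criterion might be wired into the loop (the tolerance value, the guard against a zero denominator, and the name secant_method_tol are illustrative additions, not part of the original example):

def secant_method_tol(f, x0, x1, tol=1e-12, max_iterations=100):
    """Secant method that stops once successive iterates agree to within tol."""
    x2 = x1
    for _ in range(max_iterations):
        denominator = f(x1) - f(x0)
        if denominator == 0:          # avoid division by zero once f values coincide
            break
        x2 = x1 - f(x1) * (x1 - x0) / denominator
        if abs(x2 - x1) < tol:        # the abs(x0 - x1) < tol criterion from above
            return x2
        x0, x1 = x1, x2
    return x2

root = secant_method_tol(lambda x: x ** 2 - 612, 10, 30)
print(f"Root: {root}")  # approximately 24.7386337537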

Notes

  1. Papakonstantinou, Joanna; Tapia, Richard (2013). "Origin and evolution of the secant method in one dimension". American Mathematical Monthly. 120 (6): 500–518. doi:10.4169/amer.math.monthly.120.06.500. JSTOR 10.4169/amer.math.monthly.120.06.500. S2CID 17645996.
  2. "MATLAB TUTORIAL for the First Course. Part 1.3: Secant Methods".

See also

False position method

