Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
Muller's method proceeds according to a third-order recurrence relation, similar to the second-order recurrence relation of the secant method. Whereas the secant method constructs a line through the two points on the graph of f corresponding to the last two iterative approximations and then uses the line's root as the next approximation, Muller's method constructs a parabola through the three points corresponding to the last three iterative approximations and then uses a root of that parabola as the next approximation.
Muller's method uses three initial approximations of the root, x0, x1 and x2, and determines the next approximation x3 by considering the intersection of the x-axis with the parabola through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)).
Consider the quadratic polynomial
$$y = a(x - x_2)^2 + b(x - x_2) + c \qquad (1)$$
that passes through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)). Define the differences
$$h_0 = x_1 - x_0, \qquad h_1 = x_2 - x_1$$
and
$$\delta_0 = \frac{f(x_1) - f(x_0)}{h_0}, \qquad \delta_1 = \frac{f(x_2) - f(x_1)}{h_1}.$$
Substituting each of the three points (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)) into equation (1) and solving simultaneously for a, b and c gives
$$a = \frac{\delta_1 - \delta_0}{h_1 + h_0}, \qquad b = a h_1 + \delta_1, \qquad c = f(x_2).$$
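These expressions can be sanity-checked numerically: the parabola (1) built from them should reproduce f at all three points. A small sketch (the sample function and points are arbitrary):

f = lambda x: x ** 3 - x - 2           # any sample function
x0, x1, x2 = 1.0, 1.5, 2.0
h0, h1 = x1 - x0, x2 - x1
d0, d1 = (f(x1) - f(x0)) / h0, (f(x2) - f(x1)) / h1
a = (d1 - d0) / (h1 + h0)
b = a * h1 + d1
c = f(x2)
y = lambda x: a * (x - x2) ** 2 + b * (x - x2) + c
for x in (x0, x1, x2):
    print(abs(y(x) - f(x)))            # each difference is 0 up to rounding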
The quadratic formula is then applied to (1) to determine x3 as
$$x_3 = x_2 + \frac{-2c}{b \pm \sqrt{b^2 - 4ac}}.$$
The sign preceding the radical term is chosen to match the sign of b, which makes the denominator as large as possible in magnitude and hence the next iterate the root of (1) closest to x2, giving
$$x_3 = x_2 - \frac{2c}{b + \operatorname{sgn}(b)\,\sqrt{b^2 - 4ac}}.$$
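As a minimal sketch of one iteration of these formulas (the function name muller_step and its interface are illustrative, not part of the method's standard presentation):

from cmath import sqrt  # complex square root: the discriminant b**2 - 4*a*c may be negative

def muller_step(f, x0, x1, x2):
    # One Muller iteration: returns x3 computed from the last three iterates.
    h0, h1 = x1 - x0, x2 - x1
    d0, d1 = (f(x1) - f(x0)) / h0, (f(x2) - f(x1)) / h1
    a = (d1 - d0) / (h1 + h0)
    b = a * h1 + d1
    c = f(x2)
    radical = sqrt(b * b - 4 * a * c)
    # Choose the sign matching b, i.e. the denominator of larger magnitude.
    denominator = b + radical if abs(b + radical) >= abs(b - radical) else b - radical
    return x2 - 2 * c / denominator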
Once x3 is determined, the process is repeated. Note that due to the radical expression in the denominator, iterates can be complex even when the previous iterates are all real. This is in contrast with other root-finding algorithms like the secant method, Sidi's generalized secant method or Newton's method, whose iterates will remain real if one starts with real numbers. Having complex iterates can be an advantage (if one is looking for complex roots) or a disadvantage (if it is known that all roots are real), depending on the problem.
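For instance, applying the hypothetical muller_step above to f(x) = x^2 + 1, whose roots ±i are not real, with the purely real starting values 0, 1 and 2 produces a complex iterate at once; indeed, since f is itself quadratic, the fitted parabola coincides with f and the step lands exactly on a root:

print(muller_step(lambda x: x * x + 1, 0, 1, 2))  # 1j, a complex root reached from real starting values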
For well-behaved functions, the order of convergence of Muller's method is approximately 1.839 (exactly, the tribonacci constant). This can be compared with approximately 1.618 (exactly, the golden ratio) for the secant method and with exactly 2 for Newton's method. Per iteration, then, the secant method makes less progress than Muller's method and Newton's method makes more.
More precisely, if ξ denotes a simple root of f (so f(ξ) = 0 and f'(ξ) ≠ 0), f is three times continuously differentiable, and the initial guesses x0, x1 and x2 are taken sufficiently close to ξ, then the iterates satisfy
$$\lim_{k \to \infty} \frac{|x_{k+1} - \xi|}{|x_k - \xi|^{\mu}} = \left| \frac{f'''(\xi)}{6 f'(\xi)} \right|^{(\mu - 1)/2},$$
where μ ≈ 1.84 is the positive solution of $x^3 - x^2 - x - 1 = 0$, the defining equation for the tribonacci constant.
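This rate can be observed numerically: once the iterates are close to ξ, the ratio of successive error logarithms approaches μ. A rough sketch, reusing the hypothetical muller_step above on f(x) = x^3 - 2, whose simple real root is ξ = 2^(1/3):

import math

f = lambda x: x ** 3 - 2
xi = 2 ** (1 / 3)                      # the simple real root
xs = [1.0, 1.5, 2.0]                   # arbitrary nearby starting values
for _ in range(4):
    xs.append(muller_step(f, *xs[-3:]))
errors = [abs(x - xi) for x in xs]
for e0, e1 in zip(errors, errors[1:]):
    if 0 < e0 < 1e-2 and e1 > 0:       # only once the iterates are close
        print(math.log(e1) / math.log(e0))  # values drift around mu ~ 1.84

The early ratios are noisy; the estimate is only meaningful once the errors are small.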
Muller's method fits a parabola, i.e. a second-order polynomial, through the last three obtained points $(x_{k-1}, f(x_{k-1}))$, $(x_{k-2}, f(x_{k-2}))$ and $(x_{k-3}, f(x_{k-3}))$ in each iteration. One can generalize this and fit a polynomial $p_{k,m}(x)$ of degree m through the last m + 1 points in the kth iteration. Our parabola $y_k$ is written as $p_{k,2}$ in this notation. The degree m must be 1 or larger. The next approximation $x_k$ is now one of the roots of $p_{k,m}$, i.e. one of the solutions of $p_{k,m}(x) = 0$. Taking m = 1 we obtain the secant method, whereas m = 2 gives Muller's method.
Muller calculated that the sequence $\{x_k\}$ generated this way converges to the root ξ with an order $\mu_m$, where $\mu_m$ is the positive solution of
$$x^{m+1} = x^m + x^{m-1} + \cdots + x + 1.$$
As m approaches infinity, the positive solution of this equation approaches 2. For m > 2, however, the method is much harder to apply than for m = 1 or m = 2, because it is much harder to determine the roots of a polynomial of degree 3 or higher. Another problem is that there seems to be no prescription for which of the roots of $p_{k,m}$ to pick as the next approximation $x_k$ when m > 2.
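The growth of $\mu_m$ towards 2 is easy to check numerically. A sketch (the helper name is ours): the polynomial $x^{m+1} - x^m - \cdots - x - 1$ is negative at x = 1 and positive at x = 2, so bisection on [1, 2] locates $\mu_m$:

def mu(m, steps=60):
    # Positive root of x**(m+1) = x**m + ... + x + 1, located by bisection.
    p = lambda x: x ** (m + 1) - sum(x ** i for i in range(m + 1))
    lo, hi = 1.0, 2.0                  # p(1) = -m < 0 and p(2) = 1 > 0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if p(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

for m in (1, 2, 3, 10):
    print(m, mu(m))  # 1.618... (golden ratio), 1.839... (tribonacci), 1.927..., 1.999...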
These difficulties are overcome by Sidi's generalized secant method, which also employs the polynomial $p_{k,m}$. Instead of trying to solve $p_{k,m}(x) = 0$, this method calculates the next approximation $x_k$ with the aid of the derivative of $p_{k,m}$ at $x_{k-1}$.
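A sketch of this idea for m = 2 (the function name is ours): fit the same parabola as in Muller's method through the last three points, but take a Newton-like step using the parabola's derivative at the newest point, which is simply the coefficient b from above:

def sidi_step(f, x0, x1, x2):
    # One step of Sidi's method with m = 2: Newton-like update using the
    # derivative of the interpolating parabola at the newest point x2.
    h0, h1 = x1 - x0, x2 - x1
    d0, d1 = (f(x1) - f(x0)) / h0, (f(x2) - f(x1)) / h1
    a = (d1 - d0) / (h1 + h0)          # same coefficients as in Muller's method
    b = a * h1 + d1                    # the parabola's derivative at x2
    return x2 - f(x2) / b

No square root appears, so real starting values yield real iterates, and no choice among several roots has to be made.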
Below, Muller's method is implemented in the Python programming language. It takes as parameters the three initial estimates of the root, the desired number of decimal places of accuracy, and the maximum number of iterations. The program is then applied to find a root of the function $f(x) = x^2 - 612$.
from cmath import sqrt  # Use the complex sqrt as we may generate complex numbers


def func(x):
    return (x ** 2) - 612


def muller(x0, x1, x2, decimal_places, maximum_iterations):
    iteration_counter = 0
    iterates = [x0, x1, x2]
    solution_found = False
    while not solution_found and iteration_counter < maximum_iterations:
        final_index = len(iterates) - 1
        # Differences h0, h1 and divided differences delta0, delta1
        h0 = iterates[final_index - 1] - iterates[final_index - 2]
        h1 = iterates[final_index] - iterates[final_index - 1]
        f_x0 = func(iterates[final_index - 2])
        f_x1 = func(iterates[final_index - 1])
        f_x2 = func(iterates[final_index])
        delta0 = (f_x1 - f_x0) / h0
        delta1 = (f_x2 - f_x1) / h1
        # Coefficients of the interpolating parabola, equation (1)
        coeff_a = (delta1 - delta0) / (h1 + h0)
        coeff_b = coeff_a * h1 + delta1
        coeff_c = f_x2
        sqrt_delta = sqrt(pow(coeff_b, 2) - 4 * coeff_a * coeff_c)
        denominators = [coeff_b - sqrt_delta, coeff_b + sqrt_delta]
        # Take the higher-magnitude denominator
        next_iterate = iterates[final_index] - (2 * coeff_c) / max(denominators, key=abs)
        iterates.append(next_iterate)
        solution_found = abs(func(next_iterate)) < pow(10, -decimal_places)
        iteration_counter = iteration_counter + 1
    if solution_found:
        print("Solution found: {}".format(next_iterate))
    else:
        print("No solution found.")


muller(10, 20, 30, 9, 20)  # Solution found: (24.73863375370596+0j)