In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p(x). One of the most useful properties of this method is that, on extensive empirical evidence, it is very close to being a "sure-fire" method: it is almost guaranteed to converge to some root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known that are guaranteed to find all roots (see Root-finding algorithm § Roots of polynomials) or all real roots (see Real-root isolation).
This method is named in honour of the French mathematician Edmond Laguerre.
The algorithm of the Laguerre method to find one root of a polynomial p(x) of degree n is:

1. Choose an initial guess $x_0$.
2. For $k = 0, 1, 2, \ldots$:
   - If $p(x_k)$ is very small, exit the loop.
   - Calculate $G = \dfrac{p'(x_k)}{p(x_k)}$.
   - Calculate $H = G^2 - \dfrac{p''(x_k)}{p(x_k)}$.
   - Calculate $a = \dfrac{n}{G \pm \sqrt{(n-1)\left(nH - G^2\right)}}$, where the sign is chosen to give the denominator with the larger absolute value, to avoid loss of significance as iteration proceeds.
   - Set $x_{k+1} = x_k - a$.
3. Repeat until $a$ is small enough or until the maximum number of iterations has been reached.
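The loop above translates directly into code. The following is a minimal sketch in Python (the function name `laguerre_root` and its parameters are illustrative, not from any particular library); it evaluates $p$, $p'$ and $p''$ together by simultaneous Horner schemes and works in complex arithmetic throughout, since the iterates may leave the real line.

```python
import cmath

def laguerre_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Return one (possibly complex) root of the polynomial whose
    coefficients are listed highest-degree first."""
    n = len(coeffs) - 1                      # degree of p
    x = complex(x0)
    for _ in range(max_iter):
        # Evaluate p(x), p'(x) and p''(x) by simultaneous Horner schemes.
        p = dp = d2p = 0j
        for c in coeffs:
            d2p = d2p * x + 2 * dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:                     # x is already (numerically) a root
            return x
        G = dp / p
        H = G * G - d2p / p
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign that gives the denominator of larger magnitude.
        denom = G + s if abs(G + s) >= abs(G - s) else G - s
        a = n / denom
        x -= a
        if abs(a) < tol:                     # step small enough: converged
            return x
    return x                                 # best estimate after max_iter steps
```

For example, `laguerre_root([1, 0, 0, -2], 1.0)` converges in a few iterations to the real cube root of 2, approximately 1.2599.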
If a root has been found, the corresponding linear factor can be removed from p. This deflation step reduces the degree of the polynomial by one, so that eventually, approximations for all roots of p can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors. This error is least if the roots are found in the order of increasing magnitude.
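A sketch of this deflation loop, under the same conventions and reusing the `laguerre_root` sketch above (again, illustrative names rather than a library API): each root found is divided out by synthetic division, and the final linear factor is solved directly. In practice one would also "polish" each root against the original polynomial to counter the deflation error mentioned above.

```python
def find_all_roots(coeffs, x0=0.0):
    """Approximate all roots of the polynomial (coefficients listed
    highest-degree first) by Laguerre iteration plus deflation."""
    roots = []
    c = [complex(a) for a in coeffs]
    while len(c) > 2:
        r = laguerre_root(c, x0)             # one root of the deflated polynomial
        roots.append(r)
        # Synthetic division of c by (x - r); the remainder is dropped.
        q = [c[0]]
        for a in c[1:-1]:
            q.append(a + r * q[-1])
        c = q                                # quotient, one degree lower
    roots.append(-c[1] / c[0])               # root of the final linear factor
    return roots
```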
The fundamental theorem of algebra states that every nth degree polynomial $p$ can be written in the form

$$p(x) = C(x - x_1)(x - x_2) \cdots (x - x_n),$$

so that $x_1, x_2, \ldots, x_n$ are the roots of the polynomial. If we take the natural logarithm of both sides, we find that

$$\ln |p(x)| = \ln |C| + \ln |x - x_1| + \ln |x - x_2| + \cdots + \ln |x - x_n|.$$

Denote the logarithmic derivative by

$$G = \frac{\mathrm{d}}{\mathrm{d}x} \ln |p(x)| = \frac{1}{x - x_1} + \frac{1}{x - x_2} + \cdots + \frac{1}{x - x_n} = \frac{p'(x)}{p(x)},$$

and the negated second derivative by

$$H = -\frac{\mathrm{d}^2}{\mathrm{d}x^2} \ln |p(x)| = \frac{1}{(x - x_1)^2} + \frac{1}{(x - x_2)^2} + \cdots + \frac{1}{(x - x_n)^2} = \left( \frac{p'(x)}{p(x)} \right)^2 - \frac{p''(x)}{p(x)}.$$
We then make what Acton (1970) calls a 'drastic set of assumptions': that the root we are looking for, say $x_1$, is a short distance, $a$, away from our guess $x$, and that all the other roots are clustered together at some further distance $b$. If we denote these distances by

$$a = x - x_1$$

and

$$b \approx x - x_i, \qquad i = 2, 3, \ldots, n,$$

or exactly,

$$\frac{n - 1}{b} = \sum_{i=2}^{n} \frac{1}{x - x_i},$$

then our equation for $G$ may be written as

$$G = \frac{1}{a} + \frac{n - 1}{b},$$

and the expression for $H$ becomes

$$H = \frac{1}{a^2} + \frac{n - 1}{b^2}.$$
Solving these equations for $a$, we find that

$$a = \frac{n}{G \pm \sqrt{(n - 1)\left(nH - G^2\right)}},$$

where in this case the square root of the (possibly) complex number is chosen to produce the largest absolute value of the denominator and so make $|a|$ as small as possible; equivalently, it satisfies

$$\operatorname{Re}\left(\overline{G}\,\sqrt{(n - 1)\left(nH - G^2\right)}\right) \ge 0,$$

where $\operatorname{Re}$ denotes the real part of a complex number and $\overline{G}$ is the complex conjugate of $G$; or

$$a = \frac{n}{G \left(1 + \sqrt{(n - 1)\left(\frac{nH}{G^2} - 1\right)}\right)},$$

where the square root of the complex number is chosen to have a non-negative real part.
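As a concrete check of these formulas (a worked example, not from the source): take $p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6$ and the guess $x = 0$. A single Laguerre step should land near the nearest root, 1.

```python
import cmath

x, n = 0.0, 3
p, dp, d2p = -6.0, 11.0, -12.0                # p(0), p'(0), p''(0)
G = dp / p                                    # G ≈ -1.833
H = G * G - d2p / p                           # H ≈ 1.361
s = cmath.sqrt((n - 1) * (n * H - G * G))     # ≈ 1.202
denom = max(G + s, G - s, key=abs)            # larger-magnitude denominator
a = n / denom                                 # a ≈ -0.988
print(x - a)                                  # ≈ 0.988, close to the root at 1
```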
For small values of $p(x)$, this formula differs from the offset of the third-order Halley's method by an error of $O\!\left(p(x)^3\right)$, so convergence close to a root will be cubic as well.
Even if the 'drastic set of assumptions' does not work well for some particular polynomial p(x), it can be transformed into a related polynomial r for which the assumptions are viable, e.g. by first shifting the origin towards a suitable complex number w, giving a second polynomial q(x) = p(x − w), chosen so as to give distinct roots clearly distinct magnitudes if necessary (which it will be if some roots are complex conjugates).
After that, a third polynomial r can be obtained from q(x) by repeatedly applying the root-squaring transformation from Graeffe's method, enough times to make the smaller roots significantly smaller than the largest root (and so clustered comparatively nearer to zero). The approximate root given by Graeffe's method can then be used to start the new iteration for Laguerre's method on r. An approximate root for p(x) may then be obtained straightforwardly from the one found for r.
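A minimal sketch of one Graeffe root-squaring step (the helper name `graeffe_step` is illustrative): if p has roots $r_i$, then $p(x)\,p(-x)$ contains only even powers of $x$, and substituting $y = x^2$ yields a polynomial whose roots are $r_i^2$.

```python
import numpy as np

def graeffe_step(coeffs):
    """One root-squaring step: the roots r_i of p become r_i**2.
    Coefficients are listed highest-degree first."""
    c = np.asarray(coeffs, dtype=complex)
    n = len(c) - 1                            # degree of p
    # p(-x): flip the sign of the odd-degree coefficients.
    signs = (-1.0) ** (np.arange(len(c))[::-1] % 2)
    prod = np.convolve(c, c * signs)          # p(x) * p(-x); odd powers vanish
    return prod[::2] * (-1) ** n              # substitute y = x**2, fix the sign
```

For instance, one step applied to $x^2 - 3x + 2$ (roots 1 and 2) yields $x^2 - 5x + 4$ (roots 1 and 4).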
If we make the even more extreme assumption that the terms in $G$ corresponding to the roots $x_2, x_3, \ldots, x_n$ are negligibly small compared to the term corresponding to the root $x_1$, this leads to Newton's method.
If $x$ is a simple root of the polynomial $p(x)$, then Laguerre's method converges cubically whenever the initial guess $x_0$ is close enough to the root $x$. On the other hand, when $x$ is a multiple root, convergence is merely linear, with the penalty of calculating values for the polynomial and its first and second derivatives at each stage of the iteration.
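This difference is easy to observe numerically. The sketch below (illustrative and self-contained) traces the error per iteration for a simple root of $x^3 - 2$ and for the double root of $(x - 1)^2(x - 3) = x^3 - 5x^2 + 7x - 3$; the first error sequence collapses cubically, while the second shrinks by a roughly constant factor per step.

```python
import cmath

def laguerre_iterates(coeffs, x, steps=8):
    """Return the successive Laguerre iterates (coefficients highest first)."""
    n, out = len(coeffs) - 1, []
    for _ in range(steps):
        p = dp = d2p = 0j
        for c in coeffs:                      # simultaneous Horner evaluation
            d2p, dp, p = d2p * x + 2 * dp, dp * x + p, p * x + c
        if p == 0:
            break
        G = dp / p
        H = G * G - d2p / p
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        x -= n / max(G + s, G - s, key=abs)   # larger-magnitude denominator
        out.append(x)
    return out

for coeffs, start, target in [([1, 0, 0, -2], 1.0, 2 ** (1 / 3)),   # simple root
                              ([1, -5, 7, -3], 0.0, 1.0)]:          # double root
    print([abs(x - target) for x in laguerre_iterates(coeffs, start)])
```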
A major advantage of Laguerre's method is that it is almost guaranteed to converge to some root of the polynomial no matter where the initial approximation is chosen. This is in contrast to other methods such as the Newton–Raphson method and Steffensen's method, which notoriously fail to converge for poorly chosen initial guesses. Laguerre's method may even converge to a complex root of the polynomial, because the radicand of the square root in the formula for the correction given above may be negative; this is manageable so long as complex numbers can be conveniently accommodated in the calculation. This may be considered an advantage or a liability depending on the application in which the method is used.
Empirical evidence shows that convergence failure is extremely rare, making this a good candidate for a general-purpose polynomial root-finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as a default, and prefer better understood methods such as the Jenkins–Traub algorithm, for which more solid theory has been developed and whose limits are known.
The algorithm is fairly simple to use, compared to other "sure-fire" methods, and simple enough for hand calculation, aided by a pocket calculator, if a computer is not available. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to get high accuracy.