Householder's method


In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each of these methods is characterized by the number d, which is known as the order of the method. The algorithm is iterative and has order of convergence d + 1.


These methods are named after the American mathematician Alston Scott Householder.

Method

Householder's method is a numerical algorithm for solving the equation f(x) = 0. In this case, the function f has to be a function of one real variable. The method consists of a sequence of iterations

$$x_{n+1} = x_n + d\,\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)},$$

beginning with an initial guess x0.[1]
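For illustration, here is a minimal sketch of the iteration in Python, assuming the SymPy library for the derivatives of 1/f; the name householder_step and the use of the cubic from the example section below are illustrative choices, not part of the source.

```python
# Minimal sketch of Householder's method of order d, assuming SymPy.
import sympy as sp

def householder_step(f, x, d):
    """One update x -> x + d * (1/f)^(d-1)(x) / (1/f)^(d)(x), symbolically."""
    g = 1 / f
    return sp.simplify(x + d * sp.diff(g, x, d - 1) / sp.diff(g, x, d))

x = sp.symbols('x')
f = x**3 + 6*x**2 + 10*x - 1            # cubic from the example section below

step = sp.lambdify(x, householder_step(f, x, 3))   # order d = 3
xn = 0.0
for _ in range(3):
    xn = step(xn)
print(xn)                                # ~0.09455148154232659, a root of f
```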

If f is a d + 1 times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates xn satisfy[citation needed]

$$|x_{n+1} - a| \le K\,|x_n - a|^{d+1}, \quad \text{for some } K > 0.$$

This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order d + 1 or better. Furthermore, when close enough to a, it commonly is the case that

$$x_{n+1} - a = c\,(x_n - a)^{d+1} + O\!\left((x_n - a)^{d+2}\right), \quad \text{for some } c \neq 0.$$

In particular, the order of convergence is then exactly d + 1.

Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large d. The Ostrowski index expresses the error reduction per function evaluation instead of per iteration.[2]

Motivation

First approach

Suppose f is analytic in a neighborhood of a and f(a) = 0. Then f has a Taylor series at a whose constant term is zero. Because this constant term is zero, the function f(x) / (x − a) has a Taylor series at a and, when f ′(a) ≠ 0, its constant term is not zero. Because that constant term is not zero, it follows that the reciprocal (x − a) / f(x) has a Taylor series at a, which we will write as

$$\frac{x-a}{f(x)} = \sum_{k=0}^{\infty} c_k (x-a)^k,$$

where the constant term c0 is not zero. Using that Taylor series we can write

$$\frac{1}{f(x)} = \frac{(x-a)/f(x)}{x-a} = \sum_{k=0}^{\infty} c_k (x-a)^{k-1}.$$

When we compute its d-th derivative, we note that the terms for k = 1, ..., d conveniently vanish:

$$\left(\frac{1}{f}\right)^{(d)}(x) = \frac{(-1)^d\, d!\, c_0}{(x-a)^{d+1}}\left(1 + O\!\left((x-a)^{d+1}\right)\right),$$

using big O notation. We thus get that the correction term which we add to x = xn to obtain a value xn+1 that is closer to a is

$$d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)} = (a - x)\,\frac{1 + O\!\left((x-a)^{d}\right)}{1 + O\!\left((x-a)^{d+1}\right)}.$$

If a is the zero of f that is closest to x, then the second factor goes to 1 as d goes to infinity, and x plus the correction term goes to a.
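This behavior can be checked numerically. The following sketch assumes SymPy; the test point x0 = 1/5 and the cubic from the example section below are arbitrary illustrative choices.

```python
# Illustration, assuming SymPy: the correction term
# d*(1/f)^(d-1)(x)/(1/f)^(d)(x) approaches a - x as the order d grows.
import sympy as sp

x = sp.symbols('x')
f = x**3 + 6*x**2 + 10*x - 1     # cubic from the example section below
a = sp.real_roots(f)[0]          # its only real root, the zero nearest to x0
x0 = sp.Rational(1, 5)           # arbitrary test point

for d in (1, 2, 4, 8):
    corr = d * sp.diff(1/f, x, d - 1) / sp.diff(1/f, x, d)
    print(d, sp.N(corr.subs(x, x0), 20))
print('a - x0 =', sp.N(a - x0, 20))
```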

Second approach

Suppose x = a is a simple root. Then near x = a, the function 1/f(x) is meromorphic. Suppose we have the Taylor expansion

$$\frac{1}{f(x)} = \sum_{d=0}^{\infty} a_d\,(x - b)^d$$

around a point b that is closer to a than it is to any other zero of f. By König's theorem, we have

$$a - b = \lim_{d \to \infty} \frac{a_{d-1}}{a_d}.$$

Since $(1/f)^{(d)}(b) = d!\,a_d$, this ratio $a_{d-1}/a_d$ is exactly the Householder correction term $d\,(1/f)^{(d-1)}(b)\,/\,(1/f)^{(d)}(b)$.

These suggest that Householder's iteration might be a good convergent iteration. The actual proof of the convergence is also based on these ideas.
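This limit can again be observed numerically. The sketch below assumes SymPy; the expansion point b = 1 and the cubic from the example section below are illustrative choices.

```python
# Illustration of the König-type limit, assuming SymPy: the coefficient
# ratios a_{d-1}/a_d of the Taylor series of 1/f at b converge to a - b,
# where a is the zero of f nearest to b.
import sympy as sp

x, h = sp.symbols('x h')
f = x**3 + 6*x**2 + 10*x - 1     # cubic from the example section below
b = sp.Integer(1)                # arbitrary expansion point
a = sp.real_roots(f)[0]          # the zero of f nearest to b

s = sp.series((1/f).subs(x, b + h), h, 0, 13).removeO()
c = [s.coeff(h, k) for k in range(13)]
for d in (3, 6, 9, 12):
    print(d, sp.N(c[d - 1] / c[d], 15))
print('a - b =', sp.N(a - b, 15))
```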

The methods of lower order

Householder's method of order 1 is just Newton's method, since

$$x_{n+1} = x_n + 1 \cdot \frac{(1/f)(x_n)}{(1/f)^{(1)}(x_n)} = x_n + \frac{1}{f(x_n)} \left(\frac{-f'(x_n)}{f(x_n)^2}\right)^{-1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

For Householder's method of order 2 one gets Halley's method, since the identities

$$\left(\frac{1}{f}\right)'(x) = -\frac{f'(x)}{f(x)^2} \quad\text{and}\quad \left(\frac{1}{f}\right)''(x) = -\frac{f''(x)}{f(x)^2} + 2\,\frac{f'(x)^2}{f(x)^3}$$

result in

$$x_{n+1} = x_n + 2\,\frac{(1/f)'(x_n)}{(1/f)''(x_n)} = x_n - \frac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)} = x_n + h_n\,\frac{1}{1 + \frac{1}{2}\,(f''/f')(x_n)\,h_n}.$$

In the last expression, $h_n = -f(x_n)/f'(x_n)$ is the update of the Newton iteration at the point $x_n$; this form makes visible where the difference from the simple Newton's method lies.

The third order method is obtained from the identity for the third order derivative of 1/f,

$$\left(\frac{1}{f}\right)'''(x) = -\frac{f'''(x)}{f(x)^2} + 6\,\frac{f'(x)\,f''(x)}{f(x)^3} - 6\,\frac{f'(x)^3}{f(x)^4},$$

and has the formula

$$x_{n+1} = x_n + 3\,\frac{(1/f)''(x_n)}{(1/f)'''(x_n)} = x_n - \frac{6\,f(x_n)\,f'(x_n)^2 - 3\,f(x_n)^2\,f''(x_n)}{6\,f'(x_n)^3 - 6\,f(x_n)\,f'(x_n)\,f''(x_n) + f(x_n)^2\,f'''(x_n)},$$

and so on.
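Written out as update rules, the three lowest-order methods look as follows. This is a plain-Python sketch; the cubic from the example section below stands in for f and is an illustrative choice.

```python
# The order-1 (Newton), order-2 (Halley) and order-3 updates written out
# explicitly, using the example cubic for illustration.
f    = lambda x: x**3 + 6*x**2 + 10*x - 1
fp   = lambda x: 3*x**2 + 12*x + 10      # f'
fpp  = lambda x: 6*x + 12                # f''
fppp = lambda x: 6                       # f'''

def newton(x):   # d = 1
    return x - f(x) / fp(x)

def halley(x):   # d = 2
    return x - 2*f(x)*fp(x) / (2*fp(x)**2 - f(x)*fpp(x))

def order3(x):   # d = 3
    F, F1, F2, F3 = f(x), fp(x), fpp(x), fppp(x)
    return x - (6*F*F1**2 - 3*F**2*F2) / (6*F1**3 - 6*F*F1*F2 + F**2*F3)

for step in (newton, halley, order3):
    print(step.__name__, step(0.0))      # one step from x0 = 0
```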

Example

The first problem solved by Newton with the Newton-Raphson-Simpson method was the polynomial equation $y^3 - 2y - 5 = 0$. He observed that there should be a solution close to 2. Replacing y = x + 2 transforms the equation into

$$0 = f(x) = -1 + 10x + 6x^2 + x^3.$$

The Taylor series of the reciprocal function starts with

$$\frac{1}{f(x)} = -1 - 10x - 106x^2 - 1121x^3 - 11856x^4 - 125392x^5 - \cdots$$

The result of applying Householder's methods of various orders at x = 0 is also obtained by dividing neighboring coefficients of this power series. For the first orders one gets the following values after just one iteration step. For example, in the case of the third order,

$$x_1 = 0 + 3\,\frac{(1/f)''(0)}{(1/f)'''(0)} = 3 \cdot \frac{2!\,(-106)}{3!\,(-1121)} = \frac{106}{1121} = 0.09455842\ldots$$

d     x1
1     0.100000000000000000000000000000000
2     0.094339622641509433962264150943396
3     0.094558429973238180196253345227475
4     0.094551282051282051282051282051282
5     0.094551486538216154140615031261962
6     0.094551481438752142436492263099118
7     0.094551481543746895938379484125812
8     0.094551481542336756233561913325371
9     0.094551481542324837086869382419375
10    0.094551481542326678478801765822985

As one can see, each order d yields slightly more than d correct decimal places after this single step. The first one hundred digits of the correct solution are 0.0945514815423265914823865405793029638573061056282391803041285290453121899834836671462672817771577578.
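The table can be reproduced by computing the reciprocal series and dividing neighboring coefficients. The following sketch assumes SymPy and is an illustrative reconstruction, not the computation used for the original table.

```python
# Sketch, assuming SymPy: one Householder step of order d from x = 0 equals
# the ratio a_{d-1}/a_d of neighboring Taylor coefficients of 1/f.
import sympy as sp

x = sp.symbols('x')
f = x**3 + 6*x**2 + 10*x - 1
s = sp.series(1/f, x, 0, 11).removeO()
a = [s.coeff(x, k) for k in range(11)]   # -1, -10, -106, -1121, -11856, ...

for d in range(1, 11):
    print(d, sp.N(a[d - 1] / a[d], 33))  # matches the x1 column above
```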

Let us calculate the iterates for the methods of lowest order. For the cubic above, the derivatives are

$$f'(x) = 10 + 12x + 3x^2, \qquad f''(x) = 12 + 6x, \qquad f'''(x) = 6,$$

and the following relations are used:

1st order: $x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$

2nd order: $x_{n+1} = x_n - \dfrac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)}$

3rd order: $x_{n+1} = x_n - \dfrac{6\,f(x_n)\,f'(x_n)^2 - 3\,f(x_n)^2\,f''(x_n)}{6\,f'(x_n)^3 - 6\,f(x_n)\,f'(x_n)\,f''(x_n) + f(x_n)^2\,f'''(x_n)}$
x     1st (Newton)                            2nd (Halley)                            3rd order                               4th order
x1    0.100000000000000000000000000000000     0.094339622641509433962264150943395     0.094558429973238180196253345227475     0.094551282051282051282051282051282
x2    0.094568121104185218165627782724844     0.094551481540164214717107966227500     0.094551481542326591482567319958483
x3    0.094551481698199302883823703544266     0.094551481542326591482386540579303     0.094551481542326591482386540579303
x4    0.094551481542326591496064847153714     0.094551481542326591482386540579303     0.094551481542326591482386540579303
x5    0.094551481542326591482386540579303
x6    0.094551481542326591482386540579303
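The Newton and Halley columns can be reproduced with 33-digit arithmetic, for instance with mpmath (an assumed dependency; the sketch is illustrative):

```python
# Sketch reproducing the Newton and Halley columns above, assuming mpmath
# for the 33-significant-digit arithmetic shown in the table.
from mpmath import mp, mpf

mp.dps = 33
f   = lambda x: x**3 + 6*x**2 + 10*x - 1
fp  = lambda x: 3*x**2 + 12*x + 10
fpp = lambda x: 6*x + 12

newton = lambda x: x - f(x)/fp(x)
halley = lambda x: x - 2*f(x)*fp(x)/(2*fp(x)**2 - f(x)*fpp(x))

for name, step, n in (("Newton", newton, 6), ("Halley", halley, 4)):
    xn = mpf(0)
    for i in range(1, n + 1):
        xn = step(xn)
        print(f"{name} x{i} = {xn}")
```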


Derivation

An exact derivation of Householder's methods starts from the Padé approximation of order d + 1 of the function, where the approximant with linear numerator is chosen. Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator.

The Padé approximation has the form

$$f(x_n + h) \approx \frac{a_0 + h}{b_0 + b_1 h + \cdots + b_{d-1} h^{d-1}}.$$

The rational function has a zero at $h = -a_0$.

Just as the Taylor polynomial of degree d has d + 1 coefficients that depend on the function f, the Padé approximation also has d + 1 coefficients dependent on f and its derivatives. More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add up to the order of the approximant. Therefore, since the numerator is linear, the denominator has to have degree d − 1.

One could determine the Padé approximant starting from the Taylor polynomial of f using Euclid's algorithm. However, starting from the Taylor polynomial of 1/f is shorter and leads directly to the given formula. Since

$$\left(\frac{1}{f}\right)(x_n + h) = \left(\frac{1}{f}\right)(x_n) + \left(\frac{1}{f}\right)'(x_n)\,h + \cdots + \left(\frac{1}{f}\right)^{(d-1)}(x_n)\,\frac{h^{d-1}}{(d-1)!} + \left(\frac{1}{f}\right)^{(d)}(x_n)\,\frac{h^{d}}{d!} + O\!\left(h^{d+1}\right)$$

has to be equal to the inverse of the desired rational function, we get after multiplying by $a_0 + h$ and comparing coefficients in the power $h^d$ the equation

$$0 = b_d = a_0\,\frac{(1/f)^{(d)}(x_n)}{d!} + \frac{(1/f)^{(d-1)}(x_n)}{(d-1)!}.$$

Now, solving the last equation for the zero $h = -a_0$ of the numerator results in

$$h = -a_0 = \frac{\dfrac{(1/f)^{(d-1)}(x_n)}{(d-1)!}}{\dfrac{(1/f)^{(d)}(x_n)}{d!}} = d\,\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}.$$

This implies the iteration formula

$$x_{n+1} = x_n + d\,\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}.$$
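The derivation can be verified symbolically. The sketch below assumes SymPy and uses the example cubic as a stand-in for f, with an arbitrary expansion point x0 = 1/10.

```python
# Symbolic check, assuming SymPy: the coefficient of h^d in
# (1/f)(x0 + h) * (a0 + h) vanishes exactly for the a0 of the derivation,
# so the numerator zero x0 - a0 reproduces the Householder update.
import sympy as sp

x, h, a0 = sp.symbols('x h a0')
f = x**3 + 6*x**2 + 10*x - 1                # stands in for a general f
d, x0 = 3, sp.Rational(1, 10)               # arbitrary order and point

ser = sp.series((1/f).subs(x, x0 + h), h, 0, d + 1).removeO()
cd = sp.expand(ser * (a0 + h)).coeff(h, d)  # this is b_d; it must vanish
a0_sol = sp.solve(cd, a0)[0]

update  = x0 - a0_sol                       # zero of the numerator a0 + h
formula = x0 + d * (sp.diff(1/f, x, d - 1) / sp.diff(1/f, x, d)).subs(x, x0)
print(sp.simplify(update - formula))        # -> 0
```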

Relation to Newton's method

Householder's method applied to the real-valued function f(x) is the same as applying Newton's method to find the zeros of the function

$$g(x) = \left|\left(1/f\right)^{(d-1)}(x)\right|^{-1/d}.$$

In particular, d = 1 gives Newton's method unmodified and d = 2 gives Halley's method.
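The equivalence can be confirmed numerically. The sketch below assumes SymPy, with the example cubic standing in for f, d = 3, and a test point near the root where (1/f)'' is negative, so that |u| can be written as −u.

```python
# Numeric check, assuming SymPy: one Newton step on g = |(1/f)^(d-1)|^(-1/d)
# equals one Householder step of order d on f. The example cubic stands in
# for f; near the test point (1/f)'' < 0, hence |u| = -u below.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 + 6*x**2 + 10*x - 1
d = 3

u = sp.diff(1/f, x, d - 1)                  # (1/f)^(d-1)
g = (-u)**sp.Rational(-1, d)                # |u|**(-1/d), since u < 0 here
newton_on_g = x - g / sp.diff(g, x)
householder = x + d * u / sp.diff(1/f, x, d)

x0 = 0.05                                   # arbitrary point near the root
print(newton_on_g.subs(x, x0).evalf())      # both print the same value
print(householder.subs(x, x0).evalf())
```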

Related Research Articles

<span class="mw-page-title-main">Newton's method</span> Algorithm for finding zeros of functions

In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function. The most basic version starts with a real-valued function f, its derivative f, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then

In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann.

<span class="mw-page-title-main">Riemann zeta function</span> Analytic function in mathematics

The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter ζ (zeta), is a mathematical function of a complex variable defined as for , and its analytic continuation elsewhere.

<span class="mw-page-title-main">Taylor series</span> Mathematical approximation of a function

In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros. For functions from the real numbers to real numbers or from the complex numbers to the complex numbers, these are expressed either as floating-point numbers without error bounds or as floating-point values together with error bounds. The latter, approximations with error bounds, are equivalent to small isolating intervals for real roots or disks for complex roots.

<span class="mw-page-title-main">Runge–Kutta methods</span> Family of implicit and explicit iterative methods

In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.

In mathematics, a Dirichlet series is any series of the form where s is complex, and is a complex sequence. It is a special case of general Dirichlet series.

In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.

In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.

<span class="mw-page-title-main">Bisection method</span> Algorithm for finding a zero of a function

In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.

In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.

In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.

<span class="mw-page-title-main">Gauss–Newton algorithm</span> Mathematical algorithm

The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.

Methods of computing square roots are algorithms for approximating the non-negative square root of a positive real number . Since all square roots of natural numbers, other than of perfect squares, are irrational, square roots can usually only be computed to some finite precision: these methods typically construct a series of increasingly accurate approximations.

In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.

Simple rational approximation (SRA) is a subset of interpolating methods using rational functions. Especially, SRA interpolates a given function with a specific rational function whose poles and zeros are simple, which means that there is no multiplicity in poles and zeros. Sometimes, it only implies simple poles.

In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. Edmond Halley was an English mathematician and astronomer who introduced the method now called by his name.

The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders".

In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s.

References

  1. Householder, Alston Scott (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill. p. 169. ISBN 0-07-030465-3.
  2. Ostrowski, A. M. (1966). Solution of Equations and Systems of Equations. Pure and Applied Mathematics. Vol. 9 (2nd ed.). New York: Academic Press.