Bairstow's method

In numerical analysis, Bairstow's method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book Applied Aerodynamics by Leonard Bairstow.[1] The algorithm finds the roots in complex conjugate pairs using only real arithmetic.

See root-finding algorithm for other algorithms.

Description of the method

Bairstow's approach is to use Newton's method to adjust the coefficients $u$ and $v$ in the quadratic $x^2 + ux + v$ until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.

Long division of the polynomial to be solved,

$$P(x) = \sum_{i=0}^{n} a_i x^i,$$

by $x^2 + ux + v$ yields a quotient $Q(x) = \sum_{i=0}^{n-2} b_i x^i$ and a remainder $cx + d$ such that

$$P(x) = (x^2 + ux + v)\left(\sum_{i=0}^{n-2} b_i x^i\right) + (cx + d).$$

A second division of $Q(x)$ by $x^2 + ux + v$ is performed to yield a quotient $R(x) = \sum_{i=0}^{n-4} f_i x^i$ and remainder $gx + h$ with

$$Q(x) = (x^2 + ux + v)\left(\sum_{i=0}^{n-4} f_i x^i\right) + (gx + h).$$

The variables $c$, $d$, $g$, $h$, and the $\{b_i\}$, $\{f_i\}$ are functions of $u$ and $v$. They can be found recursively as follows:

$$b_n = b_{n-1} = 0, \qquad b_i = a_{i+2} - u\,b_{i+1} - v\,b_{i+2} \quad (i = n-2, \ldots, 0),$$
$$c = a_1 - u\,b_0 - v\,b_1, \qquad d = a_0 - v\,b_0,$$
$$f_n = f_{n-1} = 0, \qquad f_i = b_{i+2} - u\,f_{i+1} - v\,f_{i+2} \quad (i = n-2, \ldots, 0),$$
$$g = b_1 - u\,f_0 - v\,f_1, \qquad h = b_0 - v\,f_0.$$

The quadratic evenly divides the polynomial when

$$c(u, v) = d(u, v) = 0.$$

Values of $u$ and $v$ for which this occurs can be discovered by picking starting values and iterating Newton's method in two dimensions,

$$\begin{bmatrix} u \\ v \end{bmatrix} := \begin{bmatrix} u \\ v \end{bmatrix} - \frac{1}{vg^2 + h(h - ug)} \begin{bmatrix} -h & g \\ -gv & gu - h \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix},$$

until convergence occurs. This method of finding the zeroes of polynomials can thus be easily implemented with a programming language or even a spreadsheet.
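
As an illustration, the recurrences and the Newton step above translate directly into a short program. The following Python sketch is one possible implementation; the function name bairstow, the ascending-order coefficient convention, and the tolerance and iteration-cap parameters are choices made here, not prescribed by the method:

    import math

    def bairstow(a, u, v, tol=1e-12, max_iter=100):
        """Refine a quadratic factor x^2 + u*x + v of the polynomial
        a[0] + a[1]*x + ... + a[n]*x^n by Bairstow's method.
        `a` lists the coefficients in ascending order of degree."""
        n = len(a) - 1
        for _ in range(max_iter):
            # First division: P(x) = (x^2 + u*x + v)*Q(x) + (c*x + d),
            # with b[n] = b[n-1] = 0.
            b = [0.0] * (n + 1)
            for i in range(n - 2, -1, -1):
                b[i] = a[i + 2] - u * b[i + 1] - v * b[i + 2]
            c = a[1] - u * b[0] - v * b[1]
            d = a[0] - v * b[0]
            # Second division: Q(x) = (x^2 + u*x + v)*R(x) + (g*x + h),
            # the same recurrence with the b's in place of the a's.
            f = [0.0] * (n + 1)
            for i in range(n - 2, -1, -1):
                f[i] = b[i + 2] - u * f[i + 1] - v * f[i + 2]
            g = b[1] - u * f[0] - v * f[1]
            h = b[0] - v * f[0]
            # Two-dimensional Newton step on c(u, v) = d(u, v) = 0.
            det = v * g * g + h * (h - u * g)
            du = (-h * c + g * d) / det
            dv = (-g * v * c + (g * u - h) * d) / det
            u, v = u - du, v - dv
            if math.hypot(du, dv) < tol:  # step length, as tabulated below
                break
        return u, v

After convergence, the coefficients $b_0, \ldots, b_{n-2}$ computed in the last pass are exactly the deflated quotient $Q(x)$, so the same routine can be applied again to the quotient until only a linear or quadratic factor remains.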

Example

The task is to determine a pair of roots of the polynomial

$$f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6.$$

As first quadratic polynomial one may choose the normalized polynomial formed from the leading three coefficients of $f(x)$,

$$x^2 + ux + v = x^2 + \tfrac{11}{6}x - \tfrac{33}{6}.$$

The iteration then produces the table

Iteration steps of Bairstow's method
Nr  u               v                step length     roots
 0  1.833333333333  −5.500000000000  5.579008780071  −0.916666666667 ± 2.517990821623
 1  2.979026068546  −0.039896784438  2.048558558641  −1.489513034273 ± 1.502845921479
 2  3.635306053091   1.900693009946  1.799922838287  −1.817653026545 ± 1.184554563945
 3  3.064938039761   0.193530875538  1.256481376254  −1.532469019881 ± 1.467968126819
 4  3.461834191232   1.385679731101  0.428931413521  −1.730917095616 ± 1.269013105052
 5  3.326244386565   0.978742927192  0.022431883898  −1.663122193282 ± 1.336874153612
 6  3.333340909351   1.000022701147  0.000023931927  −1.666670454676 ± 1.333329555414
 7  3.333333333340   1.000000000020  0.000000000021  −1.666666666670 ± 1.333333333330
 8  3.333333333333   1.000000000000  0.000000000000  −1.666666666667 ± 1.333333333333

After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence.
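
Assuming the bairstow routine sketched in the description above, this run can be reproduced as follows; the starting values are the normalized leading coefficients, and the results match the last row of the table:

    # f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6, ascending order.
    a = [6, 11, -33, -33, 11, 6]
    # Starting quadratic from the leading three coefficients: u = 11/6, v = -33/6.
    u, v = bairstow(a, u=11/6, v=-33/6)
    print(u, v)  # approximately 3.333333333333 and 1.000000000000
    # x^2 + (10/3)x + 1 has the roots -3 and -1/3.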

Performance

Bairstow's algorithm inherits the local quadratic convergence of Newton's method, except in the case of quadratic factors of multiplicity higher than 1, when convergence to that factor is linear. A particular kind of instability is observed when the polynomial has odd degree and only one real root. Quadratic factors that have a small value at this real root tend to diverge to infinity.

[Figures: basins of attraction of Bairstow's method for the polynomials $x^5 - 1$, $x^6 - x$, and $6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6$ (the example above).]

The images represent pairs $(s, t)$. Points in the upper half plane $t > 0$ correspond to a quadratic factor with complex conjugate roots $s \pm it$, that is, $x^2 - 2sx + (s^2 + t^2)$. Points in the lower half plane $t < 0$ correspond to quadratic factors with real roots $s \pm t$, that is, $x^2 - 2sx + (s^2 - t^2)$; so in general $(u, v) = (-2s,\, s^2 \pm t^2)$, with the plus sign in the upper half plane. Points are colored according to the final point of the Bairstow iteration; black points indicate divergent behavior.

The first image demonstrates the single real root case. The second indicates that the divergent behavior can be remedied by introducing an additional real root, at the cost of slowing the convergence. For odd-degree polynomials, one can also first find a real root using Newton's method and/or an interval-shrinking method, so that after deflation a better-behaved even-degree polynomial remains. The third image corresponds to the example above.

References

  1. Bairstow, Leonard (1920). "Appendix: The Solution of Algebraic Equations with Numerical Coefficients in the Case where Several Pairs of Complex Roots exist". Applied Aerodynamics. London: Longmans, Green and Company. pp. 551–560.