In numerical analysis, Bairstow's method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book Applied Aerodynamics by Leonard Bairstow.[1] The algorithm finds the roots in complex conjugate pairs using only real arithmetic.
See root-finding algorithm for other algorithms.
Bairstow's approach is to use Newton's method to adjust the coefficients u and v in the quadratic $x^2 + ux + v$ until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.
Long division of the polynomial to be solved,

$$P(x) = \sum_{i=0}^{n} a_i x^i,$$

by $x^2 + ux + v$ yields a quotient $Q(x) = \sum_{i=0}^{n-2} b_i x^i$ and a remainder $cx + d$ such that

$$P(x) = (x^2 + ux + v)\,Q(x) + (cx + d).$$

A second division of $Q(x)$ by $x^2 + ux + v$ is performed to yield a quotient $R(x) = \sum_{i=0}^{n-4} f_i x^i$ and remainder $gx + h$ with

$$Q(x) = (x^2 + ux + v)\,R(x) + (gx + h).$$

The variables $c$, $d$, $g$, $h$ and the $\{b_i\}$, $\{f_i\}$ are functions of $u$ and $v$. They can be found recursively as follows:

$$b_n = b_{n-1} = 0, \qquad b_i = a_{i+2} - u\,b_{i+1} - v\,b_{i+2}, \quad i = n-2, \ldots, 0,$$
$$c = a_1 - u\,b_0 - v\,b_1, \qquad d = a_0 - v\,b_0,$$
$$f_n = f_{n-1} = 0, \qquad f_i = b_{i+2} - u\,f_{i+1} - v\,f_{i+2}, \quad i = n-2, \ldots, 0,$$
$$g = b_1 - u\,f_0 - v\,f_1, \qquad h = b_0 - v\,f_0.$$
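These recurrences translate directly into code. The following Python sketch is illustrative rather than part of the original presentation; the name bairstow_divisions and the convention that a[i] holds the coefficient of $x^i$ are assumptions made here.

```python
def bairstow_divisions(a, u, v):
    """Illustrative sketch (not from the original text): divide P(x),
    with a[i] the coefficient of x^i, twice by x^2 + u*x + v.

    Returns (b, c, d, f, g, h): b are the quotient coefficients of the
    first division with remainder c*x + d; f are the quotient
    coefficients of the second division with remainder g*x + h.
    """
    n = len(a) - 1                      # degree of P
    b = [0.0] * (n + 1)                 # enforces b[n] = b[n-1] = 0
    for i in range(n - 2, -1, -1):
        b[i] = a[i + 2] - u * b[i + 1] - v * b[i + 2]
    c = a[1] - u * b[0] - v * b[1]
    d = a[0] - v * b[0]

    f = [0.0] * (n + 1)                 # enforces f[n] = f[n-1] = 0
    for i in range(n - 2, -1, -1):
        f[i] = b[i + 2] - u * f[i + 1] - v * f[i + 2]
    g = b[1] - u * f[0] - v * f[1]
    h = b[0] - v * f[0]
    return b, c, d, f, g, h
```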
The quadratic $x^2 + ux + v$ evenly divides the polynomial when

$$c = d = 0.$$

Values of $u$ and $v$ for which this occurs can be discovered by picking starting values and iterating Newton's method in two dimensions (the second division supplies $g$ and $h$, through which the partial derivatives of $c$ and $d$ with respect to $u$ and $v$ can be expressed),

$$\begin{bmatrix} u \\ v \end{bmatrix} := \begin{bmatrix} u \\ v \end{bmatrix} - \begin{bmatrix} \dfrac{\partial c}{\partial u} & \dfrac{\partial c}{\partial v} \\[4pt] \dfrac{\partial d}{\partial u} & \dfrac{\partial d}{\partial v} \end{bmatrix}^{-1} \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix} - \frac{1}{v g^2 + h(h - u g)} \begin{bmatrix} -h & g \\ -g v & g u - h \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix},$$

until convergence occurs. This method for finding the zeroes of polynomials can thus be implemented easily in a programming language or even a spreadsheet.
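A minimal sketch of the resulting two-dimensional Newton iteration, building on the bairstow_divisions helper sketched above (the function name, tolerance, and iteration cap are again assumptions, not part of the original text):

```python
import math

def bairstow_factor(a, u, v, tol=1e-12, max_iter=100):
    """Illustrative sketch: refine (u, v) so that x^2 + u*x + v divides
    the polynomial with coefficients a (a[i] multiplies x^i)."""
    for _ in range(max_iter):
        _, c, d, _, g, h = bairstow_divisions(a, u, v)
        denom = v * g * g + h * (h - u * g)   # determinant of the Jacobian
        if denom == 0.0:
            raise ZeroDivisionError("singular Jacobian; try other start values")
        # Newton correction from the closed-form 2x2 inverse Jacobian above
        du = (-h * c + g * d) / denom
        dv = (-g * v * c + (g * u - h) * d) / denom
        u, v = u - du, v - dv
        if math.hypot(du, dv) < tol:          # size of the Newton step
            return u, v
    return u, v
```

Using the closed-form 2×2 inverse avoids a general linear solve; in practice one would also guard against a near-zero denominator by restarting from different values of (u, v).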
The task is to determine a pair of roots of the polynomial

$$f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6.$$

As first quadratic polynomial one may choose the normalized polynomial formed from the leading three coefficients of $f(x)$,

$$x^2 + ux + v = x^2 + \tfrac{11}{6}x - \tfrac{33}{6},$$

that is, $u_0 = 1.8333\ldots$ and $v_0 = -5.5$.
The iteration then produces the table
Iteration | u | v | step length | roots of x^2 + ux + v |
---|---|---|---|---|
0 | 1.833333333333 | −5.500000000000 | 5.579008780071 | −0.916666666667±2.517990821623 |
1 | 2.979026068546 | −0.039896784438 | 2.048558558641 | −1.489513034273±1.502845921479 |
2 | 3.635306053091 | 1.900693009946 | 1.799922838287 | −1.817653026545±1.184554563945 |
3 | 3.064938039761 | 0.193530875538 | 1.256481376254 | −1.532469019881±1.467968126819 |
4 | 3.461834191232 | 1.385679731101 | 0.428931413521 | −1.730917095616±1.269013105052 |
5 | 3.326244386565 | 0.978742927192 | 0.022431883898 | −1.663122193282±1.336874153612 |
6 | 3.333340909351 | 1.000022701147 | 0.000023931927 | −1.666670454676±1.333329555414 |
7 | 3.333333333340 | 1.000000000020 | 0.000000000021 | −1.666666666670±1.333333333330 |
8 | 3.333333333333 | 1.000000000000 | 0.000000000000 | −1.666666666667±1.333333333333 |
After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence.
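As a check, running the sketches above on this example should reproduce the limit of the table (bairstow_divisions and bairstow_factor are the hypothetical helpers introduced earlier, and math was imported there):

```python
# f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6, with a[i] the coefficient of x^i
a = [6.0, 11.0, -33.0, -33.0, 11.0, 6.0]
u, v = bairstow_factor(a, u=11.0 / 6.0, v=-33.0 / 6.0)   # start from the leading coefficients
disc = math.sqrt(u * u / 4.0 - v)
print(u, v)                          # approx. 3.3333... and 1.0000...
print(-u / 2 + disc, -u / 2 - disc)  # approx. -1/3 and -3
```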
Bairstow's algorithm inherits the local quadratic convergence of Newton's method, except in the case of quadratic factors of multiplicity higher than 1, when convergence to that factor is linear. A particular kind of instability is observed when the polynomial has odd degree and only one real root. Quadratic factors that have a small value at this real root tend to diverge to infinity.
The images represent pairs $(s, t)$. Points in the upper half plane $t > 0$ correspond to a quadratic factor with a complex conjugate pair of roots $s \pm it$, that is $x^2 - 2sx + (s^2 + t^2)$. Points in the lower half plane $t < 0$ correspond to quadratic factors with real roots $s \pm t$, that is $x^2 - 2sx + (s^2 - t^2)$; so in general $(u, v) = (-2s,\, s^2 \pm t^2)$. Points are colored according to the final point of the Bairstow iteration; black points indicate divergent behavior.
The first image demonstrates the single real root case. The second indicates that one can remedy the divergent behavior by introducing an additional real root, at the cost of slowing down the speed of convergence. For odd-degree polynomials one can also first find a real root using Newton's method and/or an interval-shrinking method, so that after deflation a better-behaved polynomial of even degree remains. The third image corresponds to the example above.