Descartes' rule of signs

In mathematics, Descartes' rule of signs, first described by René Descartes in his work La Géométrie, is a technique for obtaining information about the number of positive real roots of a polynomial. It asserts that the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients), and that the difference between these two numbers is always even. In particular, if the number of sign changes is zero or one, then there are exactly zero or one positive roots, respectively.

By a homographic transformation of the variable, one may use Descartes' rule of signs to obtain similar information about the number of roots in any interval. This is the basic idea of Budan's theorem and the Budan–Fourier theorem. By repeatedly dividing an interval in two, one eventually obtains a list of disjoint intervals that together contain all the real roots of the polynomial, each containing exactly one real root. Descartes' rule of signs and homographic transformations of the variable are, nowadays, the basis of the fastest algorithms for computer computation of real roots of polynomials (see Real-root isolation).
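To make the transformation concrete, here is a minimal sketch in Python with sympy (an illustrative aside, not part of the original article; the helper names and the sample interval are ad-hoc choices). The substitution x → (a + bx)/(1 + x) maps the positive half-line onto the interval (a, b), so the Descartes bound for the transformed polynomial bounds the number of roots in (a, b).

```python
import sympy as sp

x = sp.symbols('x')

def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, skipping zeros."""
    signs = [sp.sign(c) for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def descartes_bound_on_interval(p, a, b):
    """Descartes bound on the number of roots of p in (a, b): the true
    count equals this value or is smaller by an even number."""
    n = sp.degree(p, x)
    # q(x) = (1 + x)^n * p((a + b*x)/(1 + x)) clears denominators and
    # sends roots of p in (a, b) to positive roots of q.
    q = sp.expand(sp.cancel((1 + x)**n * p.subs(x, (a + b*x) / (1 + x))))
    return sign_changes(sp.Poly(q, x).all_coeffs())

p = x**3 + x**2 - x - 1                      # roots: -1 (double) and 1
print(descartes_bound_on_interval(p, 0, 2))  # 1, so exactly one root in (0, 2)
```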

Descartes himself used the transformation x → −x in order to use his rule to obtain information about the number of negative roots.

Descartes' rule of signs

Positive roots

The rule states that if the terms of a single-variable polynomial with real coefficients are ordered by descending variable exponent, then the number of positive roots of the polynomial is either equal to the number of sign differences between consecutive nonzero coefficients, or is less than it by an even number. Multiple roots of the same value are counted separately.
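In computational terms the bound is a few lines of code. The following Python sketch is an illustration with a hypothetical helper name, taking the coefficients listed by descending exponent:

```python
def positive_root_bound(coeffs):
    """Number of sign changes between consecutive nonzero coefficients;
    by Descartes' rule, the number of positive roots equals this value
    or is smaller than it by an even number."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# x^3 + x^2 - x - 1: one sign change, hence exactly one positive root.
print(positive_root_bound([1, 1, -1, -1]))  # 1
```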

Negative roots

As a corollary of the rule, the number of negative roots is the number of sign changes after multiplying the coefficients of odd-power terms by −1, or fewer than it by an even number. This procedure is equivalent to substituting the negation of the variable for the variable itself. For example, to find the number of negative roots of f(x) = ax³ + bx² + cx + d, we equivalently ask how many positive roots there are for −x in

g(x) = f(−x) = −ax³ + bx² − cx + d.

Using Descartes' rule of signs on g(x) gives the number of positive roots of g, and since g(x) = f(−x), this is the number of positive roots of f(−x), which is the same as the number of negative roots of f.
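A corresponding sketch for negative roots (again illustrative, not from the article) flips the sign of every odd-power coefficient, which amounts to forming f(−x), and then reuses the positive-root count:

```python
def negative_root_bound(coeffs):
    """Descartes bound on negative roots: count the sign changes of
    f(-x), obtained by negating every odd-power coefficient.
    `coeffs` are listed by descending exponent."""
    n = len(coeffs) - 1  # degree of the polynomial
    flipped = [c if (n - i) % 2 == 0 else -c for i, c in enumerate(coeffs)]
    signs = [1 if c > 0 else -1 for c in flipped if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# f(x) = x^3 + x^2 - x - 1 gives f(-x) = -x^3 + x^2 + x - 1,
# which has two sign changes.
print(negative_root_bound([1, 1, -1, -1]))  # 2
```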

Example: real roots

The polynomial

f(x) = x³ + x² − x − 1

has one sign change between the second and third terms (the sequence of pairs of successive signs is ++, +−, −−). Therefore it has exactly one positive root. Note that the sign of the leading coefficient needs to be considered. To find the number of negative roots, change the signs of the coefficients of the terms with odd exponents, i.e., apply Descartes' rule of signs to the polynomial f(−x), to obtain a second polynomial

g(x) = f(−x) = −x³ + x² + x − 1.

This polynomial has two sign changes (the sequence of pairs of successive signs is −+, ++, +−), meaning that this second polynomial has two or zero positive roots; thus the original polynomial has two or zero negative roots.

In fact, the factorization of the first polynomial is

f(x) = (x + 1)²(x − 1),

so the roots are −1 (twice) and +1 (once).

The factorization of the second polynomial is

g(x) = f(−x) = −(x − 1)²(x + 1),

so here the roots are +1 (twice) and −1 (once), the negation of the roots of the original polynomial.
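These counts are easy to confirm numerically (an illustrative check, assuming numpy is available):

```python
import numpy as np

# Roots of x^3 + x^2 - x - 1 = (x + 1)^2 (x - 1): -1 twice and +1 once,
# matching the one positive and two negative roots found above.
print(np.roots([1, 1, -1, -1]))  # approximately [-1., -1., 1.]
```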

Nonreal roots

Any nth-degree polynomial has exactly n roots in the complex plane, if counted according to multiplicity. So if f(x) is a polynomial which does not have a root at 0 (which can be determined by inspection), then the minimum number of nonreal roots is equal to

n − (p + q),

where p denotes the maximum number of positive roots, q denotes the maximum number of negative roots (both of which can be found using Descartes' rule of signs), and n denotes the degree of the equation.
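As a small sketch (an illustrative helper, not from the article), the lower bound n − (p + q) can be computed directly from the coefficient list:

```python
def min_nonreal_roots(coeffs):
    """Lower bound n - (p + q) on the number of nonreal roots, where
    p and q are the Descartes bounds on positive and negative roots.
    Assumes the constant term is nonzero, so 0 is not a root."""
    def changes(cs):
        signs = [1 if c > 0 else -1 for c in cs if c != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    n = len(coeffs) - 1
    p = changes(coeffs)                         # bound on positive roots
    q = changes([c if (n - i) % 2 == 0 else -c  # bound on negative roots,
                 for i, c in enumerate(coeffs)])  # via f(-x)
    return n - (p + q)

print(min_nonreal_roots([1, 1, -1, -1]))  # 0: all three roots may be real
```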

Example: zero coefficients, nonreal roots

The polynomial

f(x) = x³ − 1

has one sign change, so the maximum number of positive real roots is 1. From

f(−x) = −x³ − 1

we can tell that the polynomial has no negative real roots. So the minimum number of nonreal roots is

3 − (1 + 0) = 2.

Since nonreal roots of a polynomial with real coefficients must occur in conjugate pairs, we can see that x³ − 1 has exactly 2 nonreal roots and 1 real (and positive) root.
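The conjugate pair is visible numerically (an illustrative check with numpy):

```python
import numpy as np

# x^3 - 1: one real root at 1 and one conjugate pair of nonreal roots.
print(np.roots([1, 0, 0, -1]))
# approximately [-0.5+0.866j, -0.5-0.866j, 1.+0.j]
```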

Special case

Only multiples of 2 are subtracted from the maximal number of positive roots because the polynomial may have nonreal roots, and these always come in conjugate pairs since the rule applies to polynomials whose coefficients are real. Thus if the polynomial is known to have all real roots, this rule allows one to find the exact number of positive and negative roots. Since it is easy to determine the multiplicity of zero as a root, the sign of all roots can be determined in this case.
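For instance (an illustrative example, not from the original article), x³ − 7x + 6 = (x − 1)(x − 2)(x + 3) has three real nonzero roots, so both Descartes bounds must be attained exactly:

```python
def changes(cs):
    """Sign changes between consecutive nonzero coefficients."""
    signs = [1 if c > 0 else -1 for c in cs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# All roots of x^3 - 7x + 6 are real and nonzero, so the bounds are exact.
print(changes([1, 0, -7, 6]))   # 2 -> exactly two positive roots (1 and 2)
print(changes([-1, 0, 7, 6]))   # 1 -> exactly one negative root (-3)
```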

Generalizations

If the real polynomial P has k real positive roots counted with multiplicity, then for every a > 0 there are at least k changes of sign in the sequence of coefficients of the Taylor series of the function e^(ax)P(x). For sufficiently large a, there are exactly k such changes of sign. [1] [2]
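A small sympy experiment illustrates the statement (an illustrative sketch, not from the cited sources; the truncation order 30 is an ad-hoc choice large enough for these values of a, since the theorem concerns the full Taylor series):

```python
import sympy as sp

x = sp.symbols('x')

def taylor_sign_changes(expr, order):
    """Sign changes among the first `order` Taylor coefficients at 0."""
    poly = sp.Poly(sp.series(expr, x, 0, order).removeO(), x)
    signs = [1 if c > 0 else -1 for c in poly.all_coeffs() if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

P = (x - 1)**2 * (x + 2)   # k = 2 positive roots, counted with multiplicity
for a in (1, 5, 20):
    print(a, taylor_sign_changes(sp.exp(a*x) * P, 30))  # prints 2 each time
```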

In the 1970s Askold Georgevich Khovanskiǐ developed the theory of fewnomials that generalises Descartes' rule. [3] The rule of signs can be thought of as stating that the number of real roots of a polynomial is dependent on the polynomial's complexity, and that this complexity is proportional to the number of monomials it has, not its degree. Khovanskiǐ showed that this holds true not just for polynomials but for algebraic combinations of many transcendental functions, the so-called Pfaffian functions.

Notes

  1. D. R. Curtiss, Recent extensions of Descartes' rule of signs, Annals of Mathematics, Vol. 19, No. 4, 1918, pp. 251–278.
  2. Vladimir P. Kostov, A mapping defined by the Schur–Szegő composition, Comptes Rendus Acad. Bulg. Sci., tome 63, No. 7, 2010, pp. 943–952.
  3. Khovanskiǐ, A. G. (1991). Fewnomials. Translations of Mathematical Monographs. Translated from the Russian by Smilka Zdravkovska. Providence, RI: American Mathematical Society. p. 88. ISBN 0-8218-4547-0. Zbl 0728.12002.

This article incorporates material from Descartes' rule of signs on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
