Newton polygon

[Figure: Construction of the Newton polygon of the polynomial P(X) = 1 + 5X + (1/5)X^2 + 35X^3 + 25X^5 + 625X^6 with respect to the 5-adic valuation.]

In mathematics, the Newton polygon is a tool for understanding the behaviour of polynomials over local fields.

In the original case, the local field of interest was the field of formal Laurent series in the indeterminate X, i.e. the field of fractions of the formal power series ring

K[[X]],

over K, where K was the real number or complex number field. This is still of considerable utility with respect to Puiseux expansions. The Newton polygon is an effective device for understanding the leading terms

aX^r

of the power series expansion solutions to equations

P(F(X)) = 0

where P is a polynomial with coefficients in the polynomial ring K[X]; the solutions F are thus implicitly defined algebraic functions. The exponents r here are certain rational numbers, depending on the branch chosen, and the solutions themselves are power series in

K[[Y]]

with Y = X^{1/d} for a denominator d corresponding to the branch. The Newton polygon gives an effective, algorithmic approach to calculating d.
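
For instance, for the equation F^2 − X^3 = 0 the two solutions are F(X) = ±X^{3/2}, with leading exponent r = 3/2: the Newton polygon of F^2 − X^3 (built from the points (0, 3) and (2, 0), as defined in the next section) is a single segment of slope −3/2, and the denominator of that slope recovers d = 2, so both solutions are genuine power series in Y = X^{1/2}.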

After the introduction of the p-adic numbers, it was shown that the Newton polygon is just as useful in questions of ramification for local fields, and hence in algebraic number theory. Newton polygons have also been useful in the study of elliptic curves.

Definition

A priori, given a polynomial over a field, the behaviour of the roots (assuming it has roots) will be unknown. Newton polygons provide one technique for the study of the behaviour of the roots.

Let K be a local field with discrete valuation v_K and let

    f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 ∈ K[x]

with a_0 a_n ≠ 0. Then the Newton polygon of f is defined to be the lower convex hull of the set of points

    P_i = (i, v_K(a_i)),

ignoring the points with a_i = 0 (whose valuation is infinite). Restated geometrically, plot all of these points P_i on the xy-plane, with indices increasing from left to right (P_0 is the leftmost point, P_n the rightmost). Then, starting at P_0, draw a ray straight down parallel with the y-axis, and rotate this ray counter-clockwise until it hits a point P_{k_1} (not necessarily P_1). Break the ray there. Now draw a second ray from P_{k_1} straight down parallel with the y-axis, and rotate it counter-clockwise until it hits a point P_{k_2}. Continue until the process reaches the point P_n; the resulting polygon (containing the points P_0, P_{k_1}, P_{k_2}, ..., P_{k_m}, P_n) is the Newton polygon.

Another, perhaps more intuitive, way to view this process is the following: consider a rubber band surrounding all the points P_0, ..., P_n. Stretch the band upwards, so that it is caught on its lower side by some of the points (which act like nails partially hammered into the xy-plane). The vertices of the Newton polygon are exactly those points.

For a neat diagram of this, see Ch. 6, §3 of Local Fields by J. W. S. Cassels (LMS Student Texts 3, CUP, 1986), p. 99 of the 1986 paperback edition.
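
To make the construction concrete, here is a minimal Python sketch of the procedure described above. It implements the lower half of Andrew's monotone-chain convex hull rather than the literal rotating ray (the two are equivalent); the helper names (cross, newton_polygon, segments) are illustrative, not from any particular library.

    from fractions import Fraction

    def cross(o, a, b):
        # 2D cross product of vectors OA and OB; positive means a
        # counter-clockwise turn at a.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def newton_polygon(points):
        # Lower convex hull of the points (i, v(a_i)); points with a_i = 0
        # are simply omitted from the input.
        pts = sorted(points)
        hull = []
        for p in pts:
            # Pop the last vertex while it lies on or above the chord to p,
            # so that only lower-convex turns survive.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    def segments(hull):
        # (slope, x-length) for each edge of the polygon.
        return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
                for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

    # The 5-adic example from the figure at the top of the article:
    pts = [(0, 0), (1, 1), (2, -1), (3, 1), (5, 2), (6, 4)]
    print(newton_polygon(pts))            # [(0, 0), (2, -1), (5, 2), (6, 4)]
    print(segments(newton_polygon(pts)))  # slopes -1/2, 1, 2 with x-lengths 2, 3, 1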

History

Newton polygons are named after Isaac Newton, who first described them and some of their uses in correspondence from the year 1676 addressed to Henry Oldenburg.[1]

Applications

A Newton polygon is sometimes a special case of a Newton polytope, and can be used to construct asymptotic solutions of two-variable polynomial equations like

    3x^2 y^3 − x y^2 + 2x^2 y^2 − x^3 y = 0.

[Figure: The Newton polygon for P(x, y) = 3x^2 y^3 − x y^2 + 2x^2 y^2 − x^3 y, with positive monomials in red and negative monomials in cyan. Faces are labelled with the limiting terms they correspond to.]
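
As a standard dominant-balance example: for y^2 − x^3 − x^2 = 0, the monomials correspond to the points (0, 2), (3, 0) and (2, 0). The edge joining (0, 2) to (2, 0) balances y^2 against x^2 and gives the two branches y ≈ ±x as x → 0, while the edge joining (0, 2) to (3, 0) balances y^2 against x^3 and gives y ≈ ±x^{3/2} as x → ∞.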

Another application of the Newton polygon comes from the following result:

Let

    μ_1 < μ_2 < ... < μ_r

be the slopes of the line segments of the Newton polygon of f(x) (as defined above), arranged in increasing order, and let

    λ_1, λ_2, ..., λ_r

be the corresponding lengths of the line segments projected onto the x-axis (i.e., if a segment stretches between the points P_i and P_j, then its length is j − i). Then for each integer k, f(x) has exactly λ_k roots with valuation −μ_k, counted with multiplicity in an algebraic closure of K.
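
As a worked example, take the polynomial from the figure at the top of the article, P(X) = 1 + 5X + (1/5)X^2 + 35X^3 + 25X^5 + 625X^6, with the 5-adic valuation. The relevant points are (0, 0), (1, 1), (2, −1), (3, 1), (5, 2), (6, 4), and the lower convex hull has segments of slope −1/2 (x-length 2), slope 1 (x-length 3) and slope 2 (x-length 1). Hence P has exactly two roots of valuation 1/2, three roots of valuation −1 and one root of valuation −2 in an algebraic closure of Q_5. As a consistency check, the root valuations sum to 2·(1/2) + 3·(−1) + 1·(−2) = −4, which equals v_5(a_0/a_6) = v_5(1/625), the valuation of the product of the roots.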

Symmetric function explanation

In the context of a valuation, we are given certain information in the form of the valuations of elementary symmetric functions of the roots of a polynomial, and require information on the valuations of the actual roots, in an algebraic closure. This has aspects both of ramification theory and singularity theory. The valid inferences possible are to the valuations of power sums, by means of Newton's identities.
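
Concretely, Newton's identities

    p_1 = e_1,    p_2 = e_1 p_1 − 2 e_2,    ...,    p_k = e_1 p_{k−1} − e_2 p_{k−2} + ... + (−1)^{k−1} k e_k

express the power sums p_k of the roots in terms of the elementary symmetric functions e_k, so bounds on the valuations of the e_k (read off from the coefficients) translate into bounds on the valuations of the p_k.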

References

1. Egbert Brieskorn and Horst Knörrer (1986). Plane Algebraic Curves, pp. 370–383.