
In mathematics, the **Newton polygon** is a tool for understanding the behaviour of polynomials over local fields.

In the original case, the local field of interest was the field of formal Laurent series in the indeterminate *X*, i.e. the field of fractions of the formal power series ring

*K*[[X]],

over *K*, where *K* was the real or complex number field. This is still of considerable utility with respect to Puiseux expansions. The Newton polygon is an effective device for understanding the leading terms

*aX*^{r}

of the power series expansion solutions to equations

*P*(*F*(*X*)) = 0

where *P* is a polynomial with coefficients in *K*[*X*], the polynomial ring; that is, implicitly defined algebraic functions. The exponents *r* here are certain rational numbers, depending on the branch chosen; and the solutions themselves are power series in

*K*[[Y]]

with *Y* = *X*^{1/d} for a denominator *d* corresponding to the branch. The Newton polygon gives an effective, algorithmic approach to calculating *d*.
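As a small worked example (my own, not from the original text), take *P*(*F*) = *F*^{2} − *X*^{3}. Writing *P* = Σ *a*_{i}(*X*)*F*^{i}, the relevant points (*i*, ord_{X} *a*_{i}) are (0, 3) and (2, 0), and the Newton polygon is the single segment joining them:

```latex
\text{slope} = \frac{0-3}{2-0} = -\tfrac{3}{2}
\quad\Longrightarrow\quad
F(X) = \pm X^{3/2} = \pm Y^{3}, \qquad Y = X^{1/2}.
```

Here the leading exponent is *r* = 3/2, and both solution branches are power series in *Y* = *X*^{1/2}, so the denominator is *d* = 2.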

After the introduction of the p-adic numbers, it was shown that the Newton polygon is just as useful in questions of ramification for local fields, and hence in algebraic number theory. Newton polygons have also been useful in the study of elliptic curves.

A priori, given a polynomial over a field, the behaviour of the roots (assuming it has roots) will be unknown. Newton polygons provide one technique for the study of the behaviour of the roots.

Let *K* be a local field with discrete valuation *v*_{K} and let

*f*(*x*) = *a*_{n}*x*^{n} + ⋯ + *a*_{1}*x* + *a*_{0} ∈ *K*[*x*],

with *a*_{0}*a*_{n} ≠ 0. Then the Newton polygon of *f* is defined to be the lower convex hull of the set of points

*P*_{i} = (*i*, *v*_{K}(*a*_{i})),

ignoring the points with *a*_{i} = 0. Restated geometrically, plot all of these points *P*_{i} on the *xy*-plane, and suppose that the points' indices increase from left to right (*P*_{0} is the leftmost point, *P*_{n} the rightmost). Then, starting at *P*_{0}, draw a ray straight down parallel to the *y*-axis, and rotate this ray counter-clockwise until it hits a point *P*_{k1} (not necessarily *P*_{1}). Break the ray here. Now draw a second ray from *P*_{k1} straight down parallel to the *y*-axis, and rotate it counter-clockwise until it hits a point *P*_{k2}. Continue until the process reaches *P*_{n}; the resulting polygon (with vertices *P*_{0}, *P*_{k1}, *P*_{k2}, ..., *P*_{km}, *P*_{n}) is the Newton polygon.
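The ray-rotation procedure above is exactly a lower convex hull scan over the points *P*_{i} = (*i*, *v*(*a*_{i})). A minimal sketch in Python (the function name `newton_polygon` and the convention of passing `None` for zero coefficients are my own):

```python
def newton_polygon(vals):
    """Vertices of the Newton polygon, left to right.

    vals[i] is the valuation v(a_i) of the coefficient a_i of x^i;
    pass None for a_i = 0 (those points are ignored).
    """
    pts = [(i, v) for i, v in enumerate(vals) if v is not None]
    hull = []
    for x, y in pts:
        # Pop the previous vertex while it lies on or above the segment
        # from its predecessor to the new point, so that the slopes of
        # the retained segments are strictly increasing (lower hull).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y - y1) <= (x - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull

# x^2 + 5x + 125 over Q_5: valuations (3, 1, 0) -> every point is a vertex.
print(newton_polygon([3, 1, 0]))  # [(0, 3), (1, 1), (2, 0)]
```

In the rubber-band picture below, the points popped inside the loop are exactly the nails the stretched band does not touch.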

Another, perhaps more intuitive, way to view this process is the following: consider a rubber band surrounding all the points *P*_{0}, ..., *P*_{n}. Stretch the band upwards, so that it is caught on its lower side by some of the points (the points act like nails, partially hammered into the *xy*-plane). The vertices of the Newton polygon are exactly those points.

For a neat diagram of this, see Chapter 6, §3 of *Local Fields* by J. W. S. Cassels (LMS Student Texts 3, CUP 1986); it appears on p. 99 of the 1986 paperback edition.

Newton polygons are named after Isaac Newton, who first described them and some of their uses in correspondence from the year 1676 addressed to Henry Oldenburg.^{[1]}

A Newton polygon is sometimes a special case of a Newton polytope, and can be used to construct asymptotic solutions of polynomial equations in two variables.

Another application of the Newton polygon comes from the following result:

Let

μ_{1}, μ_{2}, ..., μ_{r}

be the slopes of the line segments of the Newton polygon of *f*(*x*) (as defined above), arranged in increasing order, and let

λ_{1}, λ_{2}, ..., λ_{r}

be the corresponding lengths of the line segments projected onto the *x*-axis (i.e. if a line segment stretches between the points (*i*, *v*(*a*_{i})) and (*j*, *v*(*a*_{j})) with *i* < *j*, then its length is *j* − *i*). Then for each integer *k* with 1 ≤ *k* ≤ *r*, *f*(*x*) has exactly λ_{k} roots with valuation −μ_{k}, counted with multiplicity in an algebraic closure of *K* with the valuation extended.
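A quick way to tabulate the slopes μ_{k} and lengths λ_{k} is to walk the segments of the lower hull. The sketch below (the function name `slopes_and_lengths` is my own) applies this to *f*(*x*) = *x*^{2} + 5*x* + 125 over **Q**_{5}, whose roots must therefore have 5-adic valuations 2 and 1:

```python
from fractions import Fraction

def slopes_and_lengths(vals):
    """(slope, horizontal length) for each Newton polygon segment,
    given vals[i] = v(a_i) (None for a_i = 0), slopes in increasing order."""
    pts = [(i, v) for i, v in enumerate(vals) if v is not None]
    hull = []  # lower convex hull of the points (i, v(a_i))
    for x, y in pts:
        while len(hull) >= 2 and (
            (hull[-1][0] - hull[-2][0]) * (y - hull[-2][1])
            <= (x - hull[-2][0]) * (hull[-1][1] - hull[-2][1])
        ):
            hull.pop()
        hull.append((x, y))
    # Exact rational slope and projected length of each segment.
    return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# f(x) = x^2 + 5x + 125 over Q_5: v(a_0) = 3, v(a_1) = 1, v(a_2) = 0.
# Slopes -2 and -1, each of length 1: one root of valuation 2, one of valuation 1.
print(slopes_and_lengths([3, 1, 0]))
```

As a sanity check, the two roots of *x*^{2} + 5*x* + 125 multiply to 125 (total valuation 3 = 2 + 1) and sum to −5 (valuation 1 = min(2, 1)), consistent with the theorem.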

In the context of a valuation, we are given certain information in the form of the valuations of elementary symmetric functions of the roots of a polynomial, and require information on the valuations of the actual roots, in an algebraic closure. This has aspects both of ramification theory and singularity theory. The valid inferences possible are to the valuations of power sums, by means of Newton's identities.


- ↑ Egbert Brieskorn, Horst Knörrer (1986). *Plane Algebraic Curves*, pp. 370–383.

- Goss, David (1996). *Basic Structures of Function Field Arithmetic*. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), **35**. Berlin, New York: Springer-Verlag. doi:10.1007/978-3-642-61480-4. ISBN 978-3-540-61087-8. MR 1423131.
- Gouvêa, Fernando (1993). *p-adic Numbers: An Introduction*. Springer-Verlag, p. 199.


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
