Multiplicity (mathematics)

In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset. For example, the number of times a given polynomial has a root at a given point is the multiplicity of that root.

The notion of multiplicity is important to be able to count correctly without specifying exceptions (for example, double roots counted twice). Hence the expression, "counted with multiplicity".

If multiplicity is ignored, this may be emphasized by counting the number of distinct elements, as in "the number of distinct roots". However, whenever a set (as opposed to multiset) is formed, multiplicity is automatically ignored, without requiring use of the term "distinct".

Multiplicity of a prime factor

In prime factorization, the multiplicity of a prime factor p is its p-adic valuation. For example, the prime factorization of the integer 60 is

60 = 2 × 2 × 3 × 5,

the multiplicity of the prime factor 2 is 2, while the multiplicity of each of the prime factors 3 and 5 is 1. Thus, 60 has four prime factors allowing for multiplicities, but only three distinct prime factors.
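As a minimal sketch in plain Python (the function prime_multiplicities below is illustrative, not from any library), the multiplicity of each prime factor can be found by repeated division:

```python
def prime_multiplicities(n):
    """Return {prime: multiplicity} for the factorization of an integer n >= 2."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:          # divide out p as many times as possible
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                      # whatever remains is itself a prime factor
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_multiplicities(60))                 # {2: 2, 3: 1, 5: 1}
print(sum(prime_multiplicities(60).values()))   # 4 prime factors counted with multiplicity
print(len(prime_multiplicities(60)))            # 3 distinct prime factors
```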

Multiplicity of a root of a polynomial

Let K be a field and p(x) be a polynomial in one variable with coefficients in K. An element a ∈ K is a root of multiplicity k of p(x) if there is a polynomial s(x) such that s(a) ≠ 0 and p(x) = (x − a)^k s(x). If k = 1, then a is called a simple root. If k ≥ 2, then a is called a multiple root.

For instance, the polynomial p(x) = x^3 + 2x^2 − 7x + 4 has 1 and −4 as roots, and can be written as p(x) = (x − 1)^2 (x + 4). This means that 1 is a root of multiplicity 2, and −4 is a simple root (of multiplicity 1). The multiplicity of a root is the number of occurrences of this root in the complete factorization of the polynomial, by means of the fundamental theorem of algebra.
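As a short sketch, assuming SymPy is available, its roots function reports each root of this polynomial together with its multiplicity:

```python
from sympy import symbols, roots, factor

x = symbols('x')
p = x**3 + 2*x**2 - 7*x + 4

print(roots(p, x))   # {1: 2, -4: 1}: 1 is a double root, -4 is a simple root
print(factor(p))     # (x - 1)**2*(x + 4)
```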

If a is a root of multiplicity k of a polynomial, then it is a root of multiplicity k − 1 of the derivative of that polynomial, unless the characteristic of the underlying field is a divisor of k, in which case a is a root of multiplicity at least k of the derivative.
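A quick SymPy check of this statement over a field of characteristic 0, with an illustrative polynomial of our own choosing:

```python
from sympy import symbols, diff, roots

x = symbols('x')
p = (x - 1)**3 * (x + 4)     # 1 is a root of multiplicity 3, -4 a simple root

print(roots(p, x))           # {1: 3, -4: 1}
print(roots(diff(p, x), x))  # 1 now has multiplicity 2; -4 is no longer a root
```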

The discriminant of a polynomial is zero if and only if the polynomial has a multiple root.

Behavior of a polynomial function near a multiple root

Figure: Graph of x^3 + 2x^2 − 7x + 4 with a simple root (multiplicity 1) at x = −4 and a root of multiplicity 2 at x = 1. The graph crosses the x-axis at the simple root. It is tangent to the x-axis at the multiple root and does not cross it, since the multiplicity is even.

The graph of a polynomial function f touches the x-axis at the real roots of the polynomial. The graph is tangent to it at the multiple roots of f and not tangent at the simple roots. The graph crosses the x-axis at roots of odd multiplicity and does not cross it at roots of even multiplicity.

A non-zero polynomial function is everywhere non-negative if and only if all its roots have even multiplicity and there exists an x0 such that f(x0) > 0.
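A rough numerical sanity check with NumPy; the sample polynomial (x + 4)^2 (x − 1)^2, whose roots all have even multiplicity, and the sampling grid are chosen purely for illustration:

```python
import numpy as np

xs = np.linspace(-10, 10, 2001)
vals = (xs + 4)**2 * (xs - 1)**2   # all roots have even multiplicity
print(vals.min() >= 0)             # True: the sampled values are never negative
```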

Multiplicity of a solution of a nonlinear system of equations

For an equation f(x) = 0 with a single-variable solution x∗, the multiplicity is k if

f(x∗) = f′(x∗) = ⋯ = f^(k−1)(x∗) = 0

and

f^(k)(x∗) ≠ 0.

In other words, the differential functional ∂_j, defined as the derivative (1/j!) d^j/dx^j of a function at x∗, vanishes at f for j up to k − 1. Those differential functionals ∂_0, ∂_1, …, ∂_{k−1} span a vector space, called the Macaulay dual space at x∗, [1] and its dimension is the multiplicity of x∗ as a zero of f.
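A hedged sketch of this criterion using SymPy: count how many successive derivatives vanish at the solution. The example function sin(x) − x and the variable names are ours:

```python
from sympy import symbols, sin, diff

x = symbols('x')
f = sin(x) - x      # has a zero at x* = 0
x_star = 0

k = 0
while diff(f, x, k).subs(x, x_star) == 0:   # f, f', f'', ... vanish at x*
    k += 1
print(k)   # 3: sin(x) - x = -x**3/6 + ..., so 0 is a zero of multiplicity 3
```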

Let f(x) = 0 be a system of m equations in n variables with a solution x∗, where f is a mapping from R^n to R^m or from C^n to C^m. There is also a Macaulay dual space of differential functionals at x∗ in which every functional vanishes at f. The dimension of this Macaulay dual space is the multiplicity of the solution x∗ of the equation f(x) = 0. The Macaulay dual space forms the multiplicity structure of the system at the solution. [2] [3]

For example, a solution x∗ of a system of two equations f(x) = 0 in two variables is of multiplicity 3 when its Macaulay dual space at x∗ is of dimension 3; here ∂_{ij} denotes the differential functional (1/(i! j!)) ∂^{i+j}/(∂x1^i ∂x2^j) applied to a function at the point x∗.

The multiplicity is always finite if the solution is isolated, is perturbation invariant in the sense that a k-fold solution becomes a cluster of solutions with a combined multiplicity k under perturbation in complex spaces, and is identical to the intersection multiplicity on polynomial systems.
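The perturbation-invariance claim can be illustrated numerically in a single variable with NumPy; the example equation x^3 = 0 and the perturbation size are ours:

```python
import numpy as np

print(np.roots([1, 0, 0, 0]))       # x**3 = 0: a triple root at 0
print(np.roots([1, 0, 0, -1e-9]))   # x**3 = 1e-9: a cluster of three simple roots
                                    # of modulus about 1e-3, combined multiplicity 3
```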

Intersection multiplicity

In algebraic geometry, the intersection of two sub-varieties of an algebraic variety is a finite union of irreducible varieties. To each component of such an intersection is attached an intersection multiplicity. This notion is local in the sense that it may be defined by looking at what occurs in a neighborhood of any generic point of this component. It follows that, without loss of generality, we may consider, in order to define the intersection multiplicity, the intersection of two affine varieties (sub-varieties of an affine space).

Thus, given two affine varieties V1 and V2, consider an irreducible component W of the intersection of V1 and V2. Let d be the dimension of W, and P be any generic point of W. The intersection of W with d hyperplanes in general position passing through P has an irreducible component that is reduced to the single point P. Therefore, the local ring at this component of the coordinate ring of the intersection has only one prime ideal, and is therefore an Artinian ring. This ring is thus a finite dimensional vector space over the ground field. Its dimension is the intersection multiplicity of V1 and V2 at W.

This definition allows us to state Bézout's theorem and its generalizations precisely.

This definition generalizes the multiplicity of a root of a polynomial in the following way. The roots of a polynomial f are points on the affine line, which are the components of the algebraic set defined by the polynomial. The coordinate ring of this affine set is R = K[X]/⟨f⟩, where K is an algebraically closed field containing the coefficients of f. If f(X) = ∏_{i=1}^{k} (X − α_i)^{m_i} is the factorization of f, then the local ring of R at the prime ideal ⟨X − α_i⟩ is K[X]/⟨(X − α_i)^{m_i}⟩. This is a vector space over K, whose dimension is the multiplicity m_i of the root α_i.
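A small SymPy sketch of this correspondence for the earlier example f = (x − 1)^2 (x + 4): modulo (x − 1)^2, every polynomial reduces to a remainder of degree at most 1, so the local ring at the root 1 is a 2-dimensional vector space, matching the multiplicity 2. The sample polynomials are illustrative:

```python
from sympy import symbols, rem, expand

x = symbols('x')
g = (x - 1)**2                     # local factor at the double root 1

for p in [x**5 + 3, (x + 2)**4, x**7 - x]:
    r = expand(rem(p, g, x))       # remainder has degree < 2: spanned by {1, x - 1}
    print(p, '->', r)
```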

This definition of intersection multiplicity, which is essentially due to Jean-Pierre Serre in his book Local Algebra, works only for the set theoretic components (also called isolated components) of the intersection, not for the embedded components. Theories have been developed for handling the embedded case (see Intersection theory for details).

In complex analysis

Let z0 be a root of a holomorphic function f, and let n be the least positive integer such that the nth derivative of f evaluated at z0 differs from zero. Then the power series of f about z0 begins with the nth term, and f is said to have a root of multiplicity (or "order") n. If n = 1, the root is called a simple root. [4]
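Assuming SymPy is available, the order of a zero can be read off from the first non-vanishing term of the power series; the example z − sin z is ours:

```python
from sympy import symbols, sin, series

z = symbols('z')
f = z - sin(z)
print(series(f, z, 0, 8))   # z**3/6 - z**5/120 + z**7/5040 + O(z**8): zero of order 3 at 0
```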

We can also define the multiplicity of the zeroes and poles of a meromorphic function. If we have a meromorphic function f = g/h, take the Taylor expansions of g and h about a point z0 and find the first non-zero term in each (denote the orders of these terms m and n respectively). If m = n, then the point has a non-zero value. If m > n, then the point is a zero of multiplicity m − n. If m < n, then the point has a pole of multiplicity n − m.
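A short SymPy illustration with a meromorphic function of our own choosing, f = sin(z)^3 / z^2: the numerator has order m = 3 and the denominator order n = 2 at z0 = 0, so f has a zero of multiplicity m − n = 1 there:

```python
from sympy import symbols, sin, series

z = symbols('z')
g = sin(z)**3                    # first non-zero term z**3, so m = 3
h = z**2                         # first non-zero term z**2, so n = 2
print(series(g, z, 0, 6))        # z**3 - z**5/2 + O(z**6)
print(series(g / h, z, 0, 4))    # z - z**3/2 + O(z**4): a simple zero at 0
```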

References

  1. D.J. Bates, A.J. Sommese, J.D. Hauenstein and C.W. Wampler (2013). Numerically Solving Polynomial Systems with Bertini. SIAM. pp. 186–187.
  2. B.H. Dayton, T.-Y. Li and Z. Zeng (2011). "Multiple zeros of nonlinear systems". Mathematics of Computation. 80 (276): 2143–2168. arXiv: 2103.05738 . doi:10.1090/s0025-5718-2011-02462-2. S2CID   9867417.
  3. Macaulay, F.S. (1916). The Algebraic Theory of Modular Systems. Cambridge Univ. Press 1994, reprint of 1916 original.
  4. Krantz, S. G. (1999). Handbook of Complex Variables. Boston: Birkhäuser. p. 70.