Complete homogeneous symmetric polynomial

In mathematics, specifically in algebraic combinatorics and commutative algebra, the complete homogeneous symmetric polynomials are a specific kind of symmetric polynomials. Every symmetric polynomial can be expressed as a polynomial expression in complete homogeneous symmetric polynomials.

Definition

The complete homogeneous symmetric polynomial of degree k in n variables X1, ..., Xn, written hk for k = 0, 1, 2, ..., is the sum of all monomials of total degree k in the variables. Formally,

h_k(X_1, X_2, \dots, X_n) = \sum_{1 \le i_1 \le i_2 \le \cdots \le i_k \le n} X_{i_1} X_{i_2} \cdots X_{i_k}.

The formula can also be written as:

h_k(X_1, X_2, \dots, X_n) = \sum_{l_1 + l_2 + \cdots + l_n = k,\; l_i \ge 0} X_1^{l_1} X_2^{l_2} \cdots X_n^{l_n}.

Indeed, l_p is just the multiplicity of p in the sequence (i_1, ..., i_k).

The first few of these polynomials are

h_0(X_1, \dots, X_n) = 1,
h_1(X_1, \dots, X_n) = \sum_{1 \le i \le n} X_i,
h_2(X_1, \dots, X_n) = \sum_{1 \le i \le j \le n} X_i X_j,
h_3(X_1, \dots, X_n) = \sum_{1 \le i \le j \le k \le n} X_i X_j X_k.

Thus, for each nonnegative integer k, there exists exactly one complete homogeneous symmetric polynomial of degree k in n variables.

Another way of rewriting the definition is to take the summation over all sequences (i_1, ..., i_k), without the ordering condition i_p ≤ i_{p+1}:

h_k(X_1, X_2, \dots, X_n) = \sum_{1 \le i_1, i_2, \dots, i_k \le n} \frac{m_1! \, m_2! \cdots m_n!}{k!} X_{i_1} X_{i_2} \cdots X_{i_k},

here m_p is the multiplicity of the number p in the sequence (i_1, ..., i_k).

For example,

h_2(X_1, X_2) = \frac{2!}{2!} X_1^2 + \frac{1! \, 1!}{2!} (X_1 X_2 + X_2 X_1) + \frac{2!}{2!} X_2^2 = X_1^2 + X_1 X_2 + X_2^2.

All integral linear combinations of products of the complete homogeneous symmetric polynomials form a commutative ring.
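The definition as a sum over weakly increasing index sequences can be checked numerically; the following Python sketch (the helper name h and the sample values are illustrative, not standard notation) evaluates h_k at concrete values:

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at the
    values xs = (X_1, ..., X_n): the sum of all monomials of total
    degree k, i.e. products over weakly increasing index sequences."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# h_2(X_1, X_2) = X_1^2 + X_1 X_2 + X_2^2; at (X_1, X_2) = (2, 3)
# this is 4 + 6 + 9 = 19.
print(h(2, (2, 3)))  # 19
print(h(0, (2, 3)))  # 1  (the empty product)
```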

Examples

The following lists the n basic (as explained below) complete homogeneous symmetric polynomials for the first three positive values of n.

For n = 1:

h_1(X_1) = X_1.

For n = 2:

h_1(X_1, X_2) = X_1 + X_2,
h_2(X_1, X_2) = X_1^2 + X_1 X_2 + X_2^2.

For n = 3:

h_1(X_1, X_2, X_3) = X_1 + X_2 + X_3,
h_2(X_1, X_2, X_3) = X_1^2 + X_2^2 + X_3^2 + X_1 X_2 + X_1 X_3 + X_2 X_3,
h_3(X_1, X_2, X_3) = X_1^3 + X_2^3 + X_3^3 + X_1^2 X_2 + X_1^2 X_3 + X_2^2 X_1 + X_2^2 X_3 + X_3^2 X_1 + X_3^2 X_2 + X_1 X_2 X_3.

Properties

Generating function

The complete homogeneous symmetric polynomials are characterized by the following identity of formal power series in t:

\sum_{k=0}^{\infty} h_k(X_1, \dots, X_n) t^k = \prod_{i=1}^n \sum_{j=0}^{\infty} (X_i t)^j = \prod_{i=1}^n \frac{1}{1 - X_i t}

(this is called the generating function, or generating series, for the complete homogeneous symmetric polynomials). Here each fraction in the final expression is the usual way to represent the formal geometric series that is a factor in the middle expression. The identity can be justified by considering how the product of those geometric series is formed: each factor in the product is obtained by multiplying together one term chosen from each geometric series, and every monomial in the variables Xi is obtained for exactly one such choice of terms, and comes multiplied by a power of t equal to the degree of the monomial.
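The generating-function identity can be verified coefficient by coefficient at concrete values: multiply the truncated geometric series 1/(1 − X_i t) together and compare with h_k computed directly. A small Python sketch (function names and sample values are illustrative):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """h_k evaluated at the values xs: sum of all degree-k monomials."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def geom_product(xs, order):
    """Coefficients of prod_i 1/(1 - X_i t), truncated at t^order."""
    series = [1] + [0] * order                    # the constant series 1
    for x in xs:
        geo = [x**j for j in range(order + 1)]    # 1/(1 - x t) = sum_j x^j t^j
        series = [sum(series[a] * geo[k - a] for a in range(k + 1))
                  for k in range(order + 1)]      # truncated convolution
    return series

xs = (2, 3, 5)
coeffs = geom_product(xs, 4)
print(coeffs == [h(k, xs) for k in range(5)])  # True
```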

The formula above can be seen as a special case of the MacMahon master theorem. The right hand side can be interpreted as 1/det(1 − tM), where M = diag(X_1, ..., X_n) is the diagonal matrix with the X_i on its diagonal and t is a formal variable. On the left hand side, one can identify the complete homogeneous symmetric polynomials as special cases of the multinomial coefficient that appears in the MacMahon expression.

Performing some standard computations, we can also write the generating function as

\sum_{k=0}^{\infty} h_k(X_1, \dots, X_n) t^k = \exp\left( \sum_{j=1}^{\infty} \frac{(X_1^j + \cdots + X_n^j) t^j}{j} \right),

which is the power series expansion of the plethystic exponential of (X_1 + X_2 + \cdots + X_n) t (and note that X_1^j + \cdots + X_n^j is precisely the j-th power sum symmetric polynomial p_j).
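The exponential form of the generating function can also be checked numerically: exponentiate the truncated series of power sums divided by j, using exact rational arithmetic, and compare with h_k. A Python sketch (the series-exponential recursion comes from E' = A'E; names and sample values are illustrative):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """h_k evaluated at the values xs: sum of all degree-k monomials."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def series_exp(a, order):
    """exp of a power series a (a[0] must be 0), truncated at t^order,
    via the recursion k*e_k = sum_{j=1}^{k} j*a_j*e_{k-j} from E' = A'E."""
    e = [Fraction(1)] + [Fraction(0)] * order
    for k in range(1, order + 1):
        e[k] = sum(j * a[j] * e[k - j] for j in range(1, k + 1)) / Fraction(k)
    return e

xs = (2, 3, 5)   # arbitrary sample values for X_1, X_2, X_3
order = 4
# a(t) = sum_{j>=1} p_j t^j / j, with p_j the j-th power sum of xs
a = [Fraction(0)] + [Fraction(sum(x**j for x in xs), j)
                     for j in range(1, order + 1)]
coeffs = series_exp(a, order)
print(coeffs == [h(k, xs) for k in range(order + 1)])  # True
```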

Relation with the elementary symmetric polynomials

There is a fundamental relation between the elementary symmetric polynomials and the complete homogeneous ones:

\sum_{i=0}^{m} (-1)^i e_i(X_1, \dots, X_n) h_{m-i}(X_1, \dots, X_n) = 0,

which is valid for all m > 0, and any number of variables n. The easiest way to see that it holds is from an identity of formal power series in t for the elementary symmetric polynomials, analogous to the one given above for the complete homogeneous ones, which can also be written in terms of plethystic exponentials as:

\sum_{k=0}^{n} e_k(X_1, \dots, X_n) t^k = \prod_{i=1}^n (1 + X_i t) = \operatorname{PE}\left[ -(X_1 + \cdots + X_n)(-t) \right]

(this is actually an identity of polynomials in t, because beyond e_n(X_1, ..., X_n) the elementary symmetric polynomials become zero). Multiplying this by the generating function for the complete homogeneous symmetric polynomials, one obtains the constant series 1 (equivalently, plethystic exponentials satisfy the usual properties of an exponential), and the relation between the elementary and complete homogeneous polynomials follows from comparing coefficients of t^m. A somewhat more direct way to understand that relation is to consider the contributions in the summation involving a fixed monomial X^α of degree m. For any subset S of the variables appearing with nonzero exponent in the monomial, there is a contribution involving the product X_S of those variables as term from e_s(X_1, ..., X_n), where s = #S, and the monomial X^α/X_S from h_{m−s}(X_1, ..., X_n); this contribution has coefficient (−1)^s. The relation then follows from the fact that

\sum_{s=0}^{l} (-1)^s \binom{l}{s} = (1 - 1)^l = 0

by the binomial formula, where l (with 1 ≤ l ≤ m) denotes the number of distinct variables occurring (with nonzero exponent) in X^α. Since e_0(X_1, ..., X_n) and h_0(X_1, ..., X_n) are both equal to 1, one can isolate from the relation either the first or the last terms of the summation. The former gives a sequence of equations:

h_1(X_1, ..., X_n) = e_1(X_1, ..., X_n),
h_2(X_1, ..., X_n) = h_1(X_1, ..., X_n) e_1(X_1, ..., X_n) − e_2(X_1, ..., X_n),
h_3(X_1, ..., X_n) = h_2(X_1, ..., X_n) e_1(X_1, ..., X_n) − h_1(X_1, ..., X_n) e_2(X_1, ..., X_n) + e_3(X_1, ..., X_n),

and so on, that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials; the latter gives the set of equations

e_1(X_1, ..., X_n) = h_1(X_1, ..., X_n),
e_2(X_1, ..., X_n) = h_1(X_1, ..., X_n) e_1(X_1, ..., X_n) − h_2(X_1, ..., X_n),
e_3(X_1, ..., X_n) = h_1(X_1, ..., X_n) e_2(X_1, ..., X_n) − h_2(X_1, ..., X_n) e_1(X_1, ..., X_n) + h_3(X_1, ..., X_n),

and so forth, that allows one to do the inverse. The first n elementary and complete homogeneous symmetric polynomials play perfectly similar roles in these relations, even though the former polynomials become zero beyond degree n, whereas the latter do not. This phenomenon can be understood in the setting of the ring of symmetric functions. It has a ring automorphism that interchanges the sequences of the n elementary and first n complete homogeneous symmetric functions.
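The fundamental relation between the elementary and complete homogeneous polynomials can be verified numerically at sample values; a short Python sketch (helper names e, h and the values are illustrative):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous h_k: monomials over weakly increasing indices."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def e(k, xs):
    """Elementary e_k: products of k distinct variables (0 for k > n)."""
    return sum(prod(c) for c in combinations(xs, k))

xs = (2, 3, 5, 7)
# sum_{i=0}^{m} (-1)^i e_i h_{m-i} should vanish for every m > 0
checks = [sum((-1)**i * e(i, xs) * h(m - i, xs) for i in range(m + 1))
          for m in range(1, 7)]
print(checks)  # [0, 0, 0, 0, 0, 0]
```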

The set of complete homogeneous symmetric polynomials of degree 1 to n in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring

\mathbb{Z}\left[ h_1(X_1, \dots, X_n), \dots, h_n(X_1, \dots, X_n) \right].

This can be formulated by saying that

h_1(X_1, \dots, X_n), \dots, h_n(X_1, \dots, X_n)

form a transcendence basis of the ring of symmetric polynomials in X_1, ..., X_n with integral coefficients (as is also true for the elementary symmetric polynomials). The same is true with the ring of integers replaced by any other commutative ring. These statements follow from analogous statements for the elementary symmetric polynomials, due to the indicated possibility of expressing either kind of symmetric polynomial in terms of the other kind.

Relation with the Stirling numbers

The evaluation at integers of complete homogeneous polynomials and elementary symmetric polynomials is related to Stirling numbers:

h_n(1, 2, \dots, k) = \left\{ {n + k \atop k} \right\}, \qquad e_n(1, 2, \dots, k) = \left[ {k + 1 \atop k + 1 - n} \right].
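These evaluations can be checked against the standard Stirling recurrences; a Python sketch (the second-kind brace numbers satisfy S(n,k) = k·S(n−1,k) + S(n−1,k−1), the unsigned first-kind bracket numbers c(n,k) = (n−1)·c(n−1,k) + c(n−1,k−1); helper names are illustrative):

```python
from functools import lru_cache
from itertools import combinations, combinations_with_replacement
from math import prod

def h(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def e(k, xs):
    return sum(prod(c) for c in combinations(xs, k))

@lru_cache(None)
def stirling2(n, k):   # Stirling numbers of the second kind {n atop k}
    if n == k: return 1
    if k == 0 or k > n: return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(None)
def stirling1(n, k):   # unsigned Stirling numbers of the first kind [n atop k]
    if n == k: return 1
    if k == 0 or k > n: return 0
    return (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

k = 4
xs = tuple(range(1, k + 1))   # the values (1, 2, 3, 4)
h_ok = [h(n, xs) == stirling2(n + k, k) for n in range(5)]
e_ok = [e(n, xs) == stirling1(k + 1, k + 1 - n) for n in range(k + 1)]
print(all(h_ok) and all(e_ok))  # True
```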

Relation with the monomial symmetric polynomials

The polynomial hk(X1, ..., Xn) is also the sum of all distinct monomial symmetric polynomials of degree k in X1, ..., Xn, for instance

h_3(X_1, X_2, X_3) = m_{(3)}(X_1, X_2, X_3) + m_{(2,1)}(X_1, X_2, X_3) + m_{(1,1,1)}(X_1, X_2, X_3)
= (X_1^3 + X_2^3 + X_3^3) + (X_1^2 X_2 + X_1^2 X_3 + X_2^2 X_1 + X_2^2 X_3 + X_3^2 X_1 + X_3^2 X_2) + X_1 X_2 X_3.

Relation with power sums

Newton's identities for homogeneous symmetric polynomials give the simple recursive formula

k \, h_k = \sum_{i=1}^{k} h_{k-i} \, p_i,

where h_0 = 1 and p_k is the k-th power sum symmetric polynomial: p_k(X_1, \dots, X_n) = \sum_{i=1}^n X_i^k, as above.

For small k we have

h_1 = p_1,
2 h_2 = h_1 p_1 + p_2,
3 h_3 = h_2 p_1 + h_1 p_2 + p_3.
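The recursion from Newton's identities is easy to verify at sample values; a Python sketch (helper names and values are illustrative):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous h_k evaluated at the values xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def p(k, xs):
    """Power sum p_k = sum_i X_i^k."""
    return sum(x**k for x in xs)

xs = (2, 3, 5)
# Newton's identity: k * h_k = sum_{i=1}^{k} h_{k-i} * p_i
newton_ok = [k * h(k, xs) == sum(h(k - i, xs) * p(i, xs)
                                 for i in range(1, k + 1))
             for k in range(1, 6)]
print(all(newton_ok))  # True
```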

Relation with symmetric tensors

Consider an n-dimensional vector space V and a linear operator M : V → V with eigenvalues X1, X2, ..., Xn. Denote by Sym^k(V) its k-th symmetric tensor power and by M^{Sym(k)} the induced operator Sym^k(V) → Sym^k(V).

Proposition:

\operatorname{tr}_{\operatorname{Sym}^k(V)} \left( M^{\operatorname{Sym}(k)} \right) = h_k(X_1, X_2, \dots, X_n).

The proof is easy: consider an eigenbasis e_i for M. A basis of Sym^k(V) can be indexed by sequences i_1 ≤ i_2 ≤ ... ≤ i_k; indeed, consider the symmetrizations of

e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_k}.

All such vectors are eigenvectors for M^{Sym(k)} with eigenvalues

X_{i_1} X_{i_2} \cdots X_{i_k},

hence this proposition is true.

Similarly one can express elementary symmetric polynomials via traces over antisymmetric tensor powers. Both expressions are subsumed in expressions of Schur polynomials as traces over Schur functors, which can be seen as the Weyl character formula for GL(V).
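For small k the trace of the induced operator can be expressed through traces of powers of M (the power sums of its eigenvalues), via Newton's identities: tr Sym^2(M) = (p_1^2 + p_2)/2 and tr Sym^3(M) = (p_1^3 + 3 p_1 p_2 + 2 p_3)/6 with p_j = tr M^j. A Python sketch checking this against h_k of the eigenvalues, for an upper-triangular matrix whose eigenvalues can be read off the diagonal (the matrix and helper names are illustrative):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous h_k evaluated at the values xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

# Upper-triangular M with eigenvalues 2 and 3 (read off the diagonal).
M = [[2, 1], [0, 3]]
p1 = tr(M)
p2 = tr(matmul(M, M))
p3 = tr(matmul(M, matmul(M, M)))

print(h(2, (2, 3)) == (p1**2 + p2) // 2)                # True
print(h(3, (2, 3)) == (p1**3 + 3*p1*p2 + 2*p3) // 6)    # True
```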

Complete homogeneous symmetric polynomial with variables shifted by 1

If we replace the variables X_i by 1 + X_i for 1 ≤ i ≤ n, the symmetric polynomial h_k(1 + X_1, ..., 1 + X_n) can be written as a linear combination of the h_j(X_1, ..., X_n), for 0 ≤ j ≤ k:

h_k(1 + X_1, \dots, 1 + X_n) = \sum_{j=0}^{k} \binom{n + k - 1}{k - j} h_j(X_1, \dots, X_n).

The proof, as found in Lemma 3.5 of [1], relies on the combinatorial properties of increasing k-tuples (i_1, ..., i_k) where 1 ≤ i_1 ≤ ... ≤ i_k ≤ n.
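The shift identity can be checked numerically at sample values; a Python sketch (helper name and values are illustrative):

```python
from itertools import combinations_with_replacement
from math import comb, prod

def h(k, xs):
    """Complete homogeneous h_k evaluated at the values xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# h_k(1+X_1, ..., 1+X_n) = sum_{j=0}^{k} C(n+k-1, k-j) * h_j(X_1, ..., X_n)
xs = (2, 3, 5)
n = len(xs)
shifted = tuple(1 + x for x in xs)
shift_ok = [h(k, shifted) == sum(comb(n + k - 1, k - j) * h(j, xs)
                                 for j in range(k + 1))
            for k in range(5)]
print(all(shift_ok))  # True
```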

References

  1. Gomezllata Marmolejo, Esteban (2022). The norm of a canonical isomorphism of determinant line bundles (Thesis). University of Oxford.