In algebra and in particular in algebraic combinatorics, the ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group.
The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative.
The study of symmetric functions is based on that of symmetric polynomials. In a polynomial ring in some finite set of indeterminates, a polynomial is called symmetric if it stays the same whenever the indeterminates are permuted in any way. More formally, there is an action by ring automorphisms of the symmetric group Sn on the polynomial ring in n indeterminates, where a permutation acts on a polynomial by simultaneously substituting each of the indeterminates for another according to the permutation used. The invariants for this action form the subring of symmetric polynomials. If the indeterminates are X1, ..., Xn, then examples of such symmetric polynomials are
$$X_1 + X_2 + \cdots + X_n$$
and
$$X_1 X_2 \cdots X_n.$$
A somewhat more complicated example is $X_1^3X_2X_3 + X_1X_2^3X_3 + X_1X_2X_3^3 + X_1^3X_2X_4 + X_1X_2^3X_4 + X_1X_2X_4^3 + \cdots$, where the summation goes on to include all products of the third power of some variable and two other variables. There are many specific kinds of symmetric polynomials, such as elementary symmetric polynomials, power sum symmetric polynomials, monomial symmetric polynomials, complete homogeneous symmetric polynomials, and Schur polynomials.
Most relations between symmetric polynomials do not depend on the number n of indeterminates, other than that some polynomials in the relation might require n to be large enough in order to be defined. For instance Newton's identity for the third power sum polynomial p3 leads to
$$p_3(X_1,\ldots,X_n) = e_1(X_1,\ldots,X_n)^3 - 3\,e_2(X_1,\ldots,X_n)\,e_1(X_1,\ldots,X_n) + 3\,e_3(X_1,\ldots,X_n),$$
where the ek denote elementary symmetric polynomials; this formula is valid for all natural numbers n, and the only notable dependency on n is that ek(X1,...,Xn) = 0 whenever n < k. One would like to write this as an identity
$$p_3 = e_1^3 - 3\,e_2 e_1 + 3\,e_3$$
that does not depend on n at all, and this can be done in the ring of symmetric functions. In that ring there are nonzero elements ek for all integers k ≥ 1, and any element of the ring can be given by a polynomial expression in the elements ek.
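For example, in two indeterminates, where e3(X1,X2) = 0, the identity specializes to the familiar expansion
$$X_1^3 + X_2^3 = (X_1 + X_2)^3 - 3\,X_1X_2\,(X_1 + X_2),$$
which can be checked directly by expanding the right-hand side.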
A ring of symmetric functions can be defined over any commutative ring R, and will be denoted ΛR; the basic case is for R = Z. The ring ΛR is in fact a graded R-algebra. There are two main constructions for it; the first one given below can be found in (Stanley, 1999), and the second is essentially the one given in (Macdonald, 1979).
The easiest (though somewhat heavy) construction starts with the ring of formal power series R[[X1,X2,...]] over R in infinitely (countably) many indeterminates; the elements of this power series ring are formal infinite sums of terms, each of which consists of a coefficient from R multiplied by a monomial, where each monomial is a product of finitely many finite powers of indeterminates. One defines ΛR as its subring consisting of those power series S that satisfy
1. S is invariant under any permutation of the indeterminates, and
2. the degrees of the monomials occurring in S are bounded.
Note that because of the second condition, power series are used here only to allow infinitely many terms of a fixed degree, rather than to sum terms of all possible degrees. Allowing this is necessary because an element that contains for instance a term X1 should also contain a term Xi for every i > 1 in order to be symmetric. Unlike the whole power series ring, the subring ΛR is graded by the total degree of monomials: due to condition 2, every element of ΛR is a finite sum of homogeneous elements of ΛR (which are themselves infinite sums of terms of equal degree). For every k ≥ 0, the element ek ∈ ΛR is defined as the formal sum of all products of k distinct indeterminates, which is clearly homogeneous of degree k.
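For example, the power sum $p_2 = X_1^2 + X_2^2 + X_3^2 + \cdots$ satisfies both conditions and is a homogeneous element of ΛR of degree 2, whereas the symmetric power series $\sum_{k \ge 1} e_k = (X_1 + X_2 + \cdots) + (X_1X_2 + X_1X_3 + \cdots) + \cdots$ violates condition 2, since its monomials have unbounded degree, and therefore does not lie in ΛR.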
Another construction of ΛR takes somewhat longer to describe, but better indicates the relationship with the rings R[X1,...,Xn]Sn of symmetric polynomials in n indeterminates. For every n there is a surjective ring homomorphism ρn from the analogous ring R[X1,...,Xn+1]Sn+1 with one more indeterminate onto R[X1,...,Xn]Sn, defined by setting the last indeterminate Xn+1 to 0. Although ρn has a non-trivial kernel, the nonzero elements of that kernel have degree at least n + 1 (they are multiples of X1X2...Xn+1). This means that the restriction of ρn to elements of degree at most n is a bijective linear map, and ρn(ek(X1,...,Xn+1)) = ek(X1,...,Xn) for all k ≤ n. The inverse of this restriction can be extended uniquely to a ring homomorphism φn from R[X1,...,Xn]Sn to R[X1,...,Xn+1]Sn+1, as follows for instance from the fundamental theorem of symmetric polynomials. Since the images φn(ek(X1,...,Xn)) = ek(X1,...,Xn+1) for k = 1,...,n are still algebraically independent over R, the homomorphism φn is injective and can be viewed as a (somewhat unusual) inclusion of rings; applying φn to a polynomial amounts to adding all monomials containing the new indeterminate obtained by symmetry from monomials already present. The ring ΛR is then the "union" (direct limit) of all these rings subject to these inclusions. Since all φn are compatible with the grading by total degree of the rings involved, ΛR obtains the structure of a graded ring.
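For example, with n = 2 the inclusion φ2 sends the symmetric polynomial X1X2 = e2(X1,X2) to e2(X1,X2,X3) = X1X2 + X1X3 + X2X3, and ρ2 recovers e2(X1,X2) from it by setting X3 to 0.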
This construction differs slightly from the one in (Macdonald, 1979). That construction only uses the surjective morphisms ρn without mentioning the injective morphisms φn: it constructs the homogeneous components of ΛR separately, and equips their direct sum with a ring structure using the ρn. It is also observed that the result can be described as an inverse limit in the category of graded rings. That description however somewhat obscures an important property typical for a direct limit of injective morphisms, namely that every individual element (symmetric function) is already faithfully represented in some object used in the limit construction, here a ring R[X1,...,Xd]Sd. It suffices to take for d the degree of the symmetric function, since the part in degree d of that ring is mapped isomorphically to rings with more indeterminates by φn for all n ≥ d. This implies that for studying relations between individual elements, there is no fundamental difference between symmetric polynomials and symmetric functions.
The name "symmetric function" for elements of ΛR is a misnomer: in neither construction are the elements functions, and in fact, unlike symmetric polynomials, no function of independent variables can be associated to such elements (for instance e1 would be the sum of all infinitely many variables, which is not defined unless restrictions are imposed on the variables). However the name is traditional and well established; it can be found both in (Macdonald, 1979), which says (footnote on p. 12)
The elements of Λ (unlike those of Λn) are no longer polynomials: they are formal infinite sums of monomials. We have therefore reverted to the older terminology of symmetric functions.
(here Λn denotes the ring of symmetric polynomials in n indeterminates), and also in (Stanley, 1999).
To define a symmetric function one must either indicate directly a power series as in the first construction, or give a symmetric polynomial in n indeterminates for every natural number n in a way compatible with the second construction. An expression in an unspecified number of indeterminates may do both, for instance
$$e_2 = \sum_{i<j} X_i X_j$$
can be taken as the definition of an elementary symmetric function if the number of indeterminates is infinite, or as the definition of an elementary symmetric polynomial in any finite number of indeterminates. Symmetric polynomials for the same symmetric function should be compatible with the homomorphisms ρn (decreasing the number of indeterminates is obtained by setting some of them to zero, so that the coefficient of any monomial in the remaining indeterminates is unchanged), and their degree should remain bounded. (An example of a family of symmetric polynomials that fails both conditions is $\prod_{i=1}^{n} X_i$; the family $\prod_{i=1}^{n} (X_i + 1)$ fails only the second condition.) Any symmetric polynomial in n indeterminates can be used to construct a compatible family of symmetric polynomials, using the homomorphisms ρi for i < n to decrease the number of indeterminates, and φi for i ≥ n to increase the number of indeterminates (which amounts to adding all monomials in new indeterminates obtained by symmetry from monomials already present).
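The compatibility condition can be tested mechanically in small cases; the following illustrative sympy sketch (with an ad hoc helper e2 for the elementary symmetric polynomial; it is not taken from the article's sources) checks that the family e2 is compatible with setting the last indeterminate to zero, while the product family of the parenthetical example above is not.

```python
# Illustrative check of the compatibility condition described above: a family
# of symmetric polynomials can define a symmetric function only if setting the
# last indeterminate to zero (the map rho_n) recovers the member with one
# indeterminate fewer, and the degrees stay bounded.
from sympy import symbols, expand, Mul

X = symbols('X1:5')  # the indeterminates X1, X2, X3, X4

def e2(xs):
    # elementary symmetric polynomial e_2 in the given indeterminates
    return sum(xs[i] * xs[j] for i in range(len(xs)) for j in range(i + 1, len(xs)))

# e_2 is compatible: setting X4 = 0 in e_2(X1,...,X4) gives e_2(X1,X2,X3).
assert expand(e2(X).subs(X[3], 0) - e2(X[:3])) == 0

# The family X1*X2*...*Xn fails: setting X4 = 0 gives 0, not X1*X2*X3.
assert expand(Mul(*X).subs(X[3], 0) - Mul(*X[:3])) != 0

# The family (X1+1)*...*(Xn+1) is compatible with rho_n ...
assert expand(Mul(*[x + 1 for x in X]).subs(X[3], 0)
              - Mul(*[x + 1 for x in X[:3]])) == 0
# ... but its degree grows with n, so it still fails to define a symmetric function.
```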
The following are fundamental examples of symmetric functions: the monomial symmetric functions mλ (one for each integer partition λ), the elementary symmetric functions ek, the power sum symmetric functions pk, the complete homogeneous symmetric functions hk, and the Schur functions sλ.
There is no power sum symmetric function p0: although it is possible (and in some contexts natural) to define $p_0(X_1,\ldots,X_n) = n$ as a symmetric polynomial in n variables, these values are not compatible with the morphisms ρn. The "discriminant" $\prod_{i<j}(X_i - X_j)^2$ is another example of an expression giving a symmetric polynomial for all n, but not defining any symmetric function. The expressions defining Schur polynomials as a quotient of alternating polynomials are somewhat similar to that for the discriminant, but the polynomials sλ(X1,...,Xn) turn out to be compatible for varying n, and therefore do define a symmetric function.
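For instance, setting X3 = 0 in the discriminant of three indeterminates gives $(X_1 - X_2)^2\,X_1^2\,X_2^2$ rather than the discriminant $(X_1 - X_2)^2$ of two indeterminates, so this family of symmetric polynomials is not compatible with the morphism ρ2.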
For any symmetric function P, the corresponding symmetric polynomials in n indeterminates for any natural number n may be designated by P(X1,...,Xn). The second definition of the ring of symmetric functions implies the following fundamental principle:

If P and Q are symmetric functions of degree d, then one has the identity P = Q of symmetric functions if and only if one has the identity P(X1,...,Xd) = Q(X1,...,Xd) of symmetric polynomials in d indeterminates; in that case one has in fact P(X1,...,Xn) = Q(X1,...,Xn) for any number n of indeterminates.
This is because one can always reduce the number of variables by substituting zero for some variables, and one can increase the number of variables by applying the homomorphisms φn; the definition of those homomorphisms assures that φn(P(X1,...,Xn)) = P(X1,...,Xn+1) (and similarly for Q) whenever n ≥ d. See a proof of Newton's identities for an effective application of this principle.
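For example, the identity $p_2 = e_1^2 - 2e_2$ relates symmetric functions of degree 2, so by this principle it holds in ΛR as soon as it holds in two indeterminates, where it reduces to the elementary computation
$$X_1^2 + X_2^2 = (X_1 + X_2)^2 - 2\,X_1X_2.$$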
The ring of symmetric functions is a convenient tool for writing identities between symmetric polynomials that are independent of the number of indeterminates: in ΛR there is no such number, yet by the above principle any identity in ΛR automatically gives identities in the rings of symmetric polynomials over R in any number of indeterminates. Some fundamental identities are
$$\sum_{i=0}^{k} (-1)^i\, e_i\, h_{k-i} = 0 \qquad \text{for all } k > 0,$$
which shows a symmetry between elementary and complete homogeneous symmetric functions; these relations are explained under complete homogeneous symmetric polynomial.
and
$$k\,e_k = \sum_{i=1}^{k} (-1)^{i-1}\, e_{k-i}\, p_i \qquad \text{for all } k > 0,$$
the Newton identities, which also have a variant for complete homogeneous symmetric functions:
$$k\,h_k = \sum_{i=1}^{k} h_{k-i}\, p_i \qquad \text{for all } k > 0.$$
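The case k = 3 of these identities can be checked mechanically in three indeterminates, which by the fundamental principle above suffices to establish it in ΛR. The following illustrative sympy sketch (with ad hoc helper functions e, h and p, not taken from the article's sources) performs this verification.

```python
# Verify, for k = 3 and three indeterminates:
#   h3 - e1*h2 + e2*h1 - e3 = 0            (symmetry between e and h),
#   3*e3 = e2*p1 - e1*p2 + p3              (Newton's identity),
#   3*h3 = h2*p1 + h1*p2 + p3              (variant for the h's).
from itertools import combinations, combinations_with_replacement
from sympy import symbols, expand, Mul

X = symbols('X1:4')  # X1, X2, X3

def e(k):  # elementary symmetric polynomial e_k(X1, X2, X3)
    return sum(Mul(*c) for c in combinations(X, k))

def h(k):  # complete homogeneous symmetric polynomial h_k(X1, X2, X3)
    return sum(Mul(*c) for c in combinations_with_replacement(X, k))

def p(k):  # power sum symmetric polynomial p_k(X1, X2, X3)
    return sum(x**k for x in X)

assert expand(h(3) - e(1)*h(2) + e(2)*h(1) - e(3)) == 0
assert expand(3*e(3) - (e(2)*p(1) - e(1)*p(2) + p(3))) == 0
assert expand(3*h(3) - (h(2)*p(1) + h(1)*p(2) + p(3))) == 0
```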
Important properties of ΛR include the following.

1. The set of monomial symmetric functions mλ, parametrized by the partitions λ, forms a basis of ΛR as a graded R-module; the homogeneous part of degree d has as basis the mλ for which λ is a partition of d.
2. ΛR is isomorphic as a graded R-algebra to a polynomial ring R[Y1, Y2, ...] in infinitely many variables, where Yi is given degree i for all i > 0; an isomorphism is obtained by sending Yi to the elementary symmetric function ei ∈ ΛR for every i. In other words, the elementary symmetric functions form a set of free polynomial generators of ΛR.
3. There is an involutory automorphism ω of ΛR that interchanges the elementary symmetric function ek and the complete homogeneous symmetric function hk for every k.
Property 2 is the essence of the fundamental theorem of symmetric polynomials. It immediately implies some other properties:

- The subring of ΛR generated by its elements of degree at most n is isomorphic to the ring of symmetric polynomials over R in n indeterminates.
- The Hilbert–Poincaré series of ΛR is $\prod_{i \ge 1} \frac{1}{1 - t^i}$, the generating function of the integer partitions.
- For every n > 0, the R-module formed by the homogeneous part of ΛR of degree n, modulo its intersection with the subring generated by the elements of degree strictly less than n, is free of rank 1, and the image of en is a generator of it.
- Every family of symmetric functions (fi)i>0, in which fi is homogeneous of degree i and gives a generator of the free R-module of the previous point, forms an alternative set of free polynomial generators of ΛR.
This final point applies in particular to the family (hi)i>0 of complete homogeneous symmetric functions. If R contains the field of rational numbers, it applies also to the family (pi)i>0 of power sum symmetric functions. This explains why the first n elements of each of these families define sets of symmetric polynomials in n variables that are free polynomial generators of that ring of symmetric polynomials.
The fact that the complete homogeneous symmetric functions form a set of free polynomial generators of ΛR already shows the existence of an automorphism ω sending the elementary symmetric functions to the complete homogeneous ones, as mentioned in property 3. The fact that ω is an involution of ΛR follows from the symmetry between elementary and complete homogeneous symmetric functions expressed by the first set of relations given above.
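In degree 2, for instance, ω applied to the relation $h_2 = e_1^2 - e_2$ gives $e_2 = h_1^2 - h_2$ (using $h_1 = e_1$), and applying ω once more recovers the original relation, in accordance with ω being an involution.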
The ring of symmetric functions ΛZ is the Exp ring of the integers Z. It is also a lambda-ring in a natural fashion; in fact it is the universal lambda-ring in one generator.
The first definition of ΛR as a subring of R[[X1,X2,...]] allows the generating functions of several sequences of symmetric functions to be elegantly expressed. In contrast to the relations mentioned earlier, which are internal to ΛR, these expressions involve operations taking place in R[[X1,X2,...;t]] but outside its subring ΛR[[t]], so they are meaningful only if symmetric functions are viewed as formal power series in indeterminates Xi. We shall write "(X)" after the symmetric functions to stress this interpretation.
The generating function for the elementary symmetric functions is
$$E(t) = \sum_{k \ge 0} e_k(X)\,t^k = \prod_{i \ge 1} (1 + X_i t).$$
Similarly one has for the complete homogeneous symmetric functions
$$H(t) = \sum_{k \ge 0} h_k(X)\,t^k = \prod_{i \ge 1} \sum_{k \ge 0} (X_i t)^k = \prod_{i \ge 1} \frac{1}{1 - X_i t}.$$
The obvious fact that $E(-t)\,H(t) = 1 = E(t)\,H(-t)$ explains the symmetry between elementary and complete homogeneous symmetric functions. The generating function for the power sum symmetric functions can be expressed as
$$P(t) = \sum_{k > 0} p_k(X)\,t^k = \sum_{k > 0} \sum_{i \ge 1} (X_i t)^k = \sum_{i \ge 1} \frac{X_i t}{1 - X_i t} = \frac{t\,E'(-t)}{E(-t)} = \frac{t\,H'(t)}{H(t)}.$$
((Macdonald, 1979) defines P(t) as $\sum_{k>0} p_k(X)\,t^{k-1}$, and its expressions therefore lack a factor of t with respect to those given here). The two final expressions, involving the formal derivatives of the generating functions E(t) and H(t), imply Newton's identities and their variants for the complete homogeneous symmetric functions. These expressions are sometimes written as
$$P(t) = -t\,\frac{d}{dt}\log\bigl(E(-t)\bigr) = t\,\frac{d}{dt}\log\bigl(H(t)\bigr),$$
which amounts to the same, but requires that R contain the rational numbers, so that the logarithm of power series with constant term 1 is defined (by $\log(1+S) = \sum_{i \ge 1} \tfrac{(-1)^{i-1}}{i} S^i$ for a power series S without constant term).
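For example, rewriting $P(t) = t\,H'(t)/H(t)$ as $t\,H'(t) = P(t)\,H(t)$ and comparing the coefficients of $t^k$ on both sides gives
$$k\,h_k = \sum_{i=1}^{k} p_i\,h_{k-i} \qquad \text{for all } k > 0,$$
which is precisely the variant of Newton's identities for the complete homogeneous symmetric functions stated above.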
Let ΛR be the ring of symmetric functions and A a commutative R-algebra with unit element. An algebra homomorphism ΛR → A is called a specialization. [1]
Example: evaluating a symmetric function at (X1,...,Xn,0,0,...), that is, setting the indeterminates Xi with i > n to zero, defines a specialization ΛR → R[X1,...,Xn]Sn onto the ring of symmetric polynomials in n indeterminates.
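For instance, the specialization that sets Xi = 1 for i ≤ n and Xi = 0 for i > n sends ek to the binomial coefficient $\binom{n}{k}$ and hk to $\binom{n+k-1}{k}$, since these count the k-element subsets, respectively the k-element multisets, of an n-element set.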