Orthogonal polynomials

In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product.

The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials, and the Legendre polynomials as special cases.

The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto.

Definition for 1-variable case for a real measure

Given any non-decreasing function α on the real numbers, we can define the Lebesgue–Stieltjes integral

∫ f(x) dα(x)

of a function f. If this integral is finite for all polynomials f, we can define an inner product on pairs of polynomials f and g by

⟨f, g⟩ = ∫ f(x) g(x) dα(x).

This operation is a positive semidefinite inner product on the vector space of all polynomials, and is positive definite if the function α has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero.

Then the sequence (Pn)∞n=0 of orthogonal polynomials is defined by the relations

deg Pn = n,   ⟨Pm, Pn⟩ = 0 for m ≠ n.

In other words, the sequence is obtained from the sequence of monomials 1, x, x2, … by the Gram–Schmidt process with respect to this inner product.
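As a minimal sketch of this construction (assuming sympy is available), the code below runs Gram–Schmidt on the monomials with the inner product ⟨f, g⟩ = ∫ f g dx over [−1, 1], i.e. for dα(x) = dx; up to scaling this reproduces the Legendre polynomials. The function names are illustrative, not from the source.

```python
# Gram-Schmidt on the monomials 1, x, x^2, ... with the inner product
# <f, g> = integral of f*g over [-1, 1] (the case dα(x) = dx).
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # The Lebesgue-Stieltjes integral reduces to an ordinary integral here.
    return sp.integrate(f * g, (x, -1, 1))

def gram_schmidt(n):
    ps = []
    for k in range(n + 1):
        p = x**k
        for q in ps:  # subtract the projection onto each earlier polynomial
            p -= inner(p, q) / inner(q, q) * q
        ps.append(sp.expand(p))
    return ps

print(gram_schmidt(3))  # [1, x, x**2 - 1/3, x**3 - 3*x/5]
```

The results are the monic Legendre polynomials, since the weight here is the Legendre weight W(x) = 1.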

Usually the sequence is required to be orthonormal, namely,

⟨Pn, Pn⟩ = 1;

however, other normalisations are sometimes used.

Absolutely continuous case

Sometimes we have

dα(x) = W(x) dx,

where W is a non-negative function with support on some interval [x1, x2] in the real line (where x1 = −∞ and x2 = ∞ are allowed). Such a W is called a weight function. [1] Then the inner product is given by

⟨f, g⟩ = ∫ f(x) g(x) W(x) dx,

with the integral taken over [x1, x2].
However, there are many examples of orthogonal polynomials where the measure dα(x) has points with non-zero measure (points where the function α is discontinuous), so it cannot be given by a weight function W as above.
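As a numeric illustration of weighted orthogonality (a sketch, assuming numpy is available): the Chebyshev polynomials Tn are orthogonal on [−1, 1] for the weight W(x) = 1/√(1 − x²), and Gauss–Chebyshev quadrature builds that weight into its nodes and weights, so the weighted inner product becomes a plain weighted sum.

```python
# Weighted inner product <f, g> = integral of f(x) g(x) W(x) dx on [-1, 1]
# with the Chebyshev weight W(x) = 1 / sqrt(1 - x^2), approximated by
# Gauss-Chebyshev quadrature (which absorbs W into its weights).
import numpy as np

nodes, weights = np.polynomial.chebyshev.chebgauss(32)

def inner(f, g):
    return np.sum(weights * f(nodes) * g(nodes))

T1 = lambda t: t               # Chebyshev T_1
T2 = lambda t: 2 * t**2 - 1    # Chebyshev T_2

print(inner(T1, T2))           # ~0: different orders are orthogonal
print(inner(T2, T2))           # ~pi/2, the known normalization <T_n, T_n>
```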

Examples of orthogonal polynomials

The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval. These include the classical orthogonal polynomials: the Hermite polynomials, the Laguerre polynomials, and the Jacobi polynomials, the last including the Gegenbauer, Chebyshev, and Legendre polynomials as special cases.

Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence. The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials.
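The finite-support phenomenon mentioned above can be seen directly (a minimal sketch; the support set and function names are chosen here for illustration): with a discrete measure supported on three points, any polynomial vanishing on all three points has norm zero, so only three linearly independent orthogonal polynomials exist.

```python
# Discrete inner product <f, g> = sum over x in {0, 1, 2} of f(x) g(x)
# (counting measure on a three-point support).
import numpy as np

support = np.array([0.0, 1.0, 2.0])

def inner(f, g):
    return np.sum(f(support) * g(support))

# x(x-1)(x-2) vanishes at every support point, so its "norm" is zero:
# the inner product is only positive semidefinite, and the orthogonal
# family stops at degree 2.
cubic = lambda t: t * (t - 1) * (t - 2)
print(inner(cubic, cubic))  # 0.0
```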

Meixner classified all the orthogonal Sheffer sequences: there are only the Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek polynomials. In some sense the Krawtchouk polynomials should be on this list too, but they form a finite sequence. These six families correspond to the NEF-QVFs (natural exponential families with quadratic variance functions) and are martingale polynomials for certain Lévy processes.

Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations.

One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials.

There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk.

The orthogonality of Hermite polynomials of different orders is exploited in generalized frequency division multiplexing (GFDM), where more than one symbol can be carried in each grid point of the time-frequency lattice. [2]

Properties

Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.

Relation to moments

The orthogonal polynomials Pn can be expressed in terms of the moments

mn = ∫ xⁿ dα(x)

as follows:

Pn(x) = cn det(Mn),

where Mn is the (n + 1) × (n + 1) matrix whose i-th row, for i = 0, …, n − 1, is (mi, mi+1, …, mi+n), whose last row is (1, x, …, xⁿ), and where the constants cn are arbitrary (depending on the normalization of Pn).
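The moment determinant can be checked symbolically (a sketch, assuming sympy; the function names are illustrative). Taking moments for dα(x) = dx on [−1, 1], the degree-2 determinant should be proportional to the Legendre polynomial x² − 1/3.

```python
# Moment-determinant construction of P_n for dα(x) = dx on [-1, 1].
import sympy as sp

x = sp.symbols('x')

def moment(k):
    return sp.integrate(x**k, (x, -1, 1))

def P(n):
    # (n+1) x (n+1) matrix: n rows of moments, last row 1, x, ..., x^n
    rows = [[moment(i + j) for j in range(n + 1)] for i in range(n)]
    rows.append([x**j for j in range(n + 1)])
    return sp.expand(sp.Matrix(rows).det())

print(P(2))  # 4*x**2/3 - 4/9, proportional to x**2 - 1/3
```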

This comes directly from applying the Gram–Schmidt process to the monomials, requiring each polynomial to be orthogonal to the previous ones. For example, orthogonality with P0 prescribes that P1 must have the form

P1(x) = c1(m0x − m1),

which can be seen to be consistent with the previously given determinant expression.

Recurrence relation

The polynomials Pn satisfy a recurrence relation of the form

Pn(x) = (Anx + Bn) Pn−1(x) + Cn Pn−2(x),

where An is not 0. The converse is also true; see Favard's theorem.
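A concrete instance of such a recurrence (a sketch, assuming numpy): for the Chebyshev polynomials Tn the coefficients are An = 2, Bn = 0, Cn = −1, giving Tn(x) = 2x Tn−1(x) − Tn−2(x).

```python
# Evaluate the Chebyshev polynomial T_n via its three-term recurrence
# T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x), with T_0 = 1 and T_1 = x.
import numpy as np

def chebyshev_T(n, xs):
    p_prev, p = np.ones_like(xs), xs  # T_0 and T_1
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        p_prev, p = p, 2 * xs * p - p_prev
    return p

xs = np.linspace(-1, 1, 5)
# Sanity check against the identity T_n(cos t) = cos(n t):
print(np.allclose(chebyshev_T(4, xs), np.cos(4 * np.arccos(xs))))  # True
```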

Christoffel–Darboux formula
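The identity can be stated as follows (a standard form, written here for an orthonormal sequence p0, p1, … with kn denoting the leading coefficient of pn; these symbols are introduced here, not taken from the source):

```latex
\sum_{j=0}^{n} p_j(x)\, p_j(y)
  = \frac{k_n}{k_{n+1}} \cdot
    \frac{p_{n+1}(x)\, p_n(y) - p_n(x)\, p_{n+1}(y)}{x - y}.
```

Letting y → x gives the confluent form, with the right-hand side becoming (kn/kn+1)(p′n+1(x) pn(x) − p′n(x) pn+1(x)).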

Zeros

If the measure dα is supported on an interval [a, b], all the zeros of Pn lie in [a, b]. Moreover, the zeros have the following interlacing property: if m < n, there is a zero of Pn between any two zeros of Pm. Electrostatic interpretations of the zeros can be given.
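Both properties can be illustrated numerically (a sketch, assuming numpy; the helper name is illustrative): the zeros of the Legendre polynomials P3 and P4 all lie in (−1, 1), and each zero of P3 lies between consecutive zeros of P4.

```python
# Zeros of Legendre polynomials: containment in (-1, 1) and interlacing.
import numpy as np
from numpy.polynomial import legendre

def legendre_zeros(n):
    coeffs = [0] * n + [1]  # P_n expressed in the Legendre basis
    return np.sort(legendre.legroots(coeffs))

z3, z4 = legendre_zeros(3), legendre_zeros(4)
# Each zero of P_3 is bracketed by consecutive zeros of P_4:
interlaced = all(z4[i] < z3[i] < z4[i + 1] for i in range(3))
print(interlaced)                                    # True
print(np.all(np.abs(np.concatenate([z3, z4])) < 1))  # True: all zeros in (-1, 1)
```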

Combinatorial interpretation

From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials. [3]

Other types of orthogonal polynomials

Multivariate orthogonal polynomials

The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1.

Multiple orthogonal polynomials

Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures.

Sobolev orthogonal polynomials

These are orthogonal polynomials with respect to a Sobolev inner product, i.e. an inner product involving derivatives. Including derivatives has significant consequences: in general, Sobolev orthogonal polynomials no longer share some of the nice features of the classical orthogonal polynomials.

Orthogonal polynomials with matrices

In orthogonal polynomials with matrices, either the coefficients are matrices or the indeterminate is a matrix.

References

  1. Demo of orthonormal polynomials obtained for different weight functions
  2. Catak, E.; Durak-Ata, L. (2017). "An efficient transceiver design for superimposed waveforms with orthogonal polynomials". 2017 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). pp. 1–5. doi:10.1109/BlackSeaCom.2017.8277657. ISBN 978-1-5090-5049-9. S2CID 22592277.
  3. Viennot, Xavier (2017). "The Art of Bijective Combinatorics, Part IV, Combinatorial theory of orthogonal polynomials and continued fractions". Chennai: IMSc.