Orthogonal polynomials

In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product.

The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials, and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials and the Legendre polynomials as special cases.

The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto.

Definition for 1-variable case for a real measure

Given any non-decreasing function α on the real numbers, we can define the Lebesgue–Stieltjes integral

\[ \int f(x) \, d\alpha(x) \]

of a function f. If this integral is finite for all polynomials f, we can define an inner product on pairs of polynomials f and g by

\[ \langle f, g \rangle = \int f(x) \, g(x) \, d\alpha(x). \]

This operation is a positive semidefinite inner product on the vector space of all polynomials, and is positive definite if the function α has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero.

Then the sequence \( (P_n)_{n=0}^{\infty} \) of orthogonal polynomials is defined by the relations

\[ \deg P_n = n, \qquad \langle P_m, P_n \rangle = 0 \quad \text{for } m \neq n. \]

In other words, the sequence is obtained from the sequence of monomials 1, x, x2, … by the Gram–Schmidt process with respect to this inner product.

Usually the sequence is required to be orthonormal, namely,

\[ \langle P_n, P_n \rangle = 1; \]

however, other normalisations are sometimes used.
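
To make the construction concrete, here is a minimal sketch of the Gram–Schmidt process on the monomials, assuming Python with NumPy and taking the measure dα(x) = dx on [−1, 1] for illustration (the names `inner` and `gram_schmidt` are illustrative, not library functions):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Inner product <f, g> = ∫_{-1}^{1} f(x) g(x) dx, i.e. dα(x) = dx on [-1, 1].
def inner(f, g):
    F = (f * g).integ()          # an antiderivative of f*g
    return F(1.0) - F(-1.0)

def gram_schmidt(n_max):
    """Orthonormalize the monomials 1, x, x^2, ... with respect to `inner`."""
    basis = []
    for n in range(n_max + 1):
        p = Polynomial.basis(n)  # the monomial x^n
        for q in basis:          # subtract the projection onto each earlier polynomial
            p = p - inner(p, q) * q
        basis.append(p / np.sqrt(inner(p, p)))  # normalize so that <p, p> = 1
    return basis

for n, p in enumerate(gram_schmidt(3)):
    print(n, np.round(p.coef, 6))
```

For this choice of measure the output is the Legendre polynomials rescaled to unit norm; for example, the degree-1 polynomial is √(3/2) x ≈ 1.224745 x.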

Absolutely continuous case

Sometimes we have

\[ d\alpha(x) = W(x) \, dx, \]

where

\[ W : [x_1, x_2] \to \mathbb{R} \]

is a non-negative function with support on some interval [x1, x2] in the real line (where x1 = −∞ and x2 = ∞ are allowed). Such a W is called a weight function. [1] Then the inner product is given by

\[ \langle f, g \rangle = \int_{x_1}^{x_2} f(x) \, g(x) \, W(x) \, dx. \]

However, there are many examples of orthogonal polynomials where the measure dα(x) has points with non-zero measure where the function α is discontinuous, so it cannot be given by a weight function W as above.
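
For instance, the weight W(x) = e^(−x²) on (−∞, ∞) yields the physicists' Hermite polynomials. A minimal numerical check of the weighted inner product, assuming Python with NumPy and SciPy:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite
from scipy.integrate import quad

# Weighted inner product <f, g> = ∫ f(x) g(x) W(x) dx with W(x) = exp(-x^2).
def inner(f, g):
    value, _ = quad(lambda x: f(x) * g(x) * np.exp(-x**2), -np.inf, np.inf)
    return value

H2 = Hermite.basis(2)   # physicists' Hermite polynomial H_2(x) = 4x^2 - 2
H3 = Hermite.basis(3)   # physicists' Hermite polynomial H_3(x) = 8x^3 - 12x

print(inner(H2, H3))    # ≈ 0: distinct Hermite polynomials are orthogonal
print(inner(H2, H2))    # > 0: the squared norm of H_2
```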

Examples of orthogonal polynomials

The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval. This includes the classical orthogonal polynomials: the Hermite, Laguerre, and Jacobi polynomials, together with their special cases, the Gegenbauer, Chebyshev, and Legendre polynomials.

Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence. The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials.

Meixner classified all the orthogonal Sheffer sequences: there are only the Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek polynomials. In some sense the Krawtchouk polynomials should be on this list too, but they form a finite sequence. These six families correspond to the natural exponential families with quadratic variance functions (NEF-QVFs) and are martingale polynomials for certain Lévy processes.

Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations.

One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials.

There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk.

The orthogonality between Hermite polynomials of different orders is exploited in the generalized frequency division multiplexing (GFDM) structure, where more than one symbol can be carried in each grid point of the time-frequency lattice. [2]

Properties

Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.

Relation to moments

The orthogonal polynomials P_n can be expressed in terms of the moments

\[ m_n = \int x^n \, d\alpha(x) \]

as follows:

\[
P_n(x) = c_n \det \begin{pmatrix}
m_0 & m_1 & m_2 & \cdots & m_n \\
m_1 & m_2 & m_3 & \cdots & m_{n+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_{n-1} & m_n & m_{n+1} & \cdots & m_{2n-1} \\
1 & x & x^2 & \cdots & x^n
\end{pmatrix},
\]

where the constants c_n are arbitrary (depending on the normalization of P_n).

This comes directly from applying the Gram–Schmidt process to the monomials, imposing each polynomial to be orthogonal to the previous ones. For example, orthogonality with P_0 prescribes that P_1 must have the form

\[ P_1(x) = c_1 \left( x - \frac{m_1}{m_0} \right), \]

which is consistent with the determinant expression given above.
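
A small symbolic check of the determinant expression, assuming Python with SymPy and using the measure dα(x) = dx on [−1, 1], whose moments are m_k = 2/(k + 1) for even k and 0 for odd k:

```python
from sympy import Matrix, Rational, symbols, simplify

x = symbols('x')

# Moments m_k = ∫_{-1}^{1} x^k dx of the measure dα(x) = dx on [-1, 1].
def moment(k):
    return Rational(2, k + 1) if k % 2 == 0 else Rational(0)

def P(n):
    """Determinant formula: n rows of moments, then the row 1, x, ..., x^n."""
    rows = [[moment(i + j) for j in range(n + 1)] for i in range(n)]
    rows.append([x**j for j in range(n + 1)])
    return Matrix(rows).det()

print(simplify(P(2)))   # 4*x**2/3 - 4/9, i.e. (4/9)*(3*x**2 - 1)
```

Up to the arbitrary constant c_2, the result (4/9)(3x² − 1) is the Legendre polynomial P_2(x) = (3x² − 1)/2, as expected for this measure.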

Recurrence relation

The polynomials P_n satisfy a recurrence relation of the form

\[ P_n(x) = (A_n x + B_n) \, P_{n-1}(x) + C_n \, P_{n-2}(x), \]

where A_n ≠ 0. The converse is also true; see Favard's theorem.
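
For example, for the Chebyshev polynomials of the first kind the recurrence reads T_n(x) = 2x T_{n−1}(x) − T_{n−2}(x), so A_n = 2, B_n = 0, and C_n = −1. A minimal sketch, assuming Python with NumPy:

```python
from numpy.polynomial import Polynomial

# Chebyshev polynomials of the first kind via the three-term recurrence
# T_n(x) = 2x * T_{n-1}(x) - T_{n-2}(x)   (A_n = 2, B_n = 0, C_n = -1).
def chebyshev(n_max):
    x = Polynomial([0, 1])
    T = [Polynomial([1]), x]            # T_0(x) = 1, T_1(x) = x
    for n in range(2, n_max + 1):
        T.append(2 * x * T[-1] - T[-2])
    return T[: n_max + 1]

for n, t in enumerate(chebyshev(4)):
    print(n, t.coef)                    # e.g. T_2 has coefficients [-1, 0, 2]
```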

Christoffel–Darboux formula
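
For an orthonormal sequence p_0, p_1, p_2, …, with k_n denoting the leading coefficient of p_n, a standard form of the identity (stated here for the orthonormal normalization) is

\[ \sum_{j=0}^{n} p_j(x) \, p_j(y) = \frac{k_n}{k_{n+1}} \cdot \frac{p_{n+1}(x) \, p_n(y) - p_n(x) \, p_{n+1}(y)}{x - y}. \]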

Zeros

If the measure dα is supported on an interval [a, b], all the zeros of P_n lie in [a, b]. Moreover, the zeros have the following interlacing property: if m < n, there is a zero of P_n between any two zeros of P_m. Electrostatic interpretations of the zeros can be given.
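
Both properties are easy to observe numerically; a minimal sketch for the Legendre polynomials P_3 and P_4 (orthogonal for dα(x) = dx on [−1, 1]), assuming Python with NumPy:

```python
import numpy as np
from numpy.polynomial import legendre

# Zeros of the Legendre polynomials P_3 and P_4 (orthogonal for dx on [-1, 1]).
z3 = np.sort(legendre.legroots([0, 0, 0, 1]))      # roots of P_3
z4 = np.sort(legendre.legroots([0, 0, 0, 0, 1]))   # roots of P_4

print(z3)  # [-0.7746, 0, 0.7746] -- all zeros lie in [-1, 1]
print(z4)  # [-0.8611, -0.3400, 0.3400, 0.8611] -- a zero of P_4 lies
           # between any two zeros of P_3 (interlacing)
```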

Combinatorial interpretation

From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials. [3]

Other types of orthogonal polynomials

Multivariate orthogonal polynomials

The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1.

Multiple orthogonal polynomials

Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures.

Sobolev orthogonal polynomials

These are orthogonal polynomials with respect to a Sobolev inner product, i.e. an inner product involving derivatives of the functions. Including derivatives has major consequences: in general, the resulting polynomials no longer share some of the nice features of the classical orthogonal polynomials.

Orthogonal polynomials with matrices

Orthogonal polynomials with matrices come in two popular variants: either the coefficients are matrices, or the indeterminate is a matrix.

Quantum polynomials

Quantum polynomials or q-polynomials are the q-analogs of orthogonal polynomials.

References

  1. Demo of orthonormal polynomials obtained for different weight functions.
  2. Catak, E.; Durak-Ata, L. (2017). "An efficient transceiver design for superimposed waveforms with orthogonal polynomials". 2017 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). pp. 1–5. doi:10.1109/BlackSeaCom.2017.8277657. ISBN 978-1-5090-5049-9. S2CID 22592277.
  3. Viennot, Xavier (2017). "The Art of Bijective Combinatorics, Part IV: Combinatorial theory of orthogonal polynomials and continued fractions". Chennai: IMSc.