Discrete orthogonal polynomials

In mathematics, a sequence of discrete orthogonal polynomials is a sequence of polynomials that are pairwise orthogonal with respect to a discrete measure. Examples include the discrete Chebyshev polynomials, Charlier polynomials, Krawtchouk polynomials, Meixner polynomials, dual Hahn polynomials, Hahn polynomials, and Racah polynomials.

If the measure has finite support, say on $N$ points, then the corresponding sequence of discrete orthogonal polynomials has only a finite number of elements: any polynomial of degree $N$ or higher can vanish at every support point and thus have zero norm, so the inner product is degenerate beyond degree $N - 1$. The Racah polynomials give an example of this.

Definition

Consider a discrete measure $\mu$ on some set $S = \{s_0, s_1, \dots\}$ with weight function $\omega$.

A family of orthogonal polynomials $\{p_n(x)\}_n$ is called discrete if the polynomials are orthogonal with respect to $\omega$ (resp. $\mu$), i.e.,

$$\sum_{x \in S} p_n(x)\, p_m(x)\, \omega(x) = \kappa_n \delta_{nm},$$

where $\delta_{nm}$ is the Kronecker delta and $\kappa_n$ is a positive constant. [1]
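For a finite support this orthogonality relation can be checked directly. The following minimal sketch (an illustration added here, not part of the original article; the function name `discrete_orthogonal_polynomials` and the choice of weight are assumptions) constructs such a family by Gram–Schmidt orthogonalization of the monomials under the discrete inner product. With the uniform weight $\omega(x) = 1$ on $S = \{0, 1, \dots, 9\}$, the result is, up to normalization, the discrete Chebyshev polynomials mentioned above.

```python
import numpy as np

def discrete_orthogonal_polynomials(support, weight, num_polys):
    """Gram-Schmidt on the monomials 1, x, x^2, ... with respect to the
    discrete inner product  <p, q> = sum over x in S of p(x) q(x) w(x).

    Each polynomial is represented by its vector of values on the support,
    which determines it uniquely as long as num_polys <= len(support)."""
    x = np.asarray(support, dtype=float)
    w = np.asarray(weight, dtype=float)
    basis = []
    for n in range(num_polys):
        p = x ** n                      # start from the monomial x^n
        for q in basis:                 # remove components along p_0, ..., p_{n-1}
            p = p - (np.sum(p * q * w) / np.sum(q * q * w)) * q
        basis.append(p)
    return basis

# Uniform weight on S = {0, ..., 9}: discrete Chebyshev polynomials up to scaling.
S = np.arange(10)
w = np.ones(10)
polys = discrete_orthogonal_polynomials(S, w, 4)

# The Gram matrix should be diagonal, with kappa_n on the diagonal.
gram = np.array([[np.sum(p * q * w) for q in polys] for p in polys])
print(np.round(gram, 6))
```

The off-diagonal entries of the printed Gram matrix vanish up to roundoff, which is exactly the Kronecker-delta relation above; asking for more polynomials than there are support points would eventually produce a zero vector, reflecting the finiteness noted earlier.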

Remark

Any discrete measure is of the form

$$\mu = \sum_i a_i \delta_{s_i},$$

so one can define a weight function by $\omega(s_i) = a_i$.
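
As a concrete illustration (an example added here, beyond the original text): the Charlier polynomials are discrete orthogonal polynomials for the Poisson measure with parameter $a > 0$,

$$\mu = \sum_{k=0}^{\infty} e^{-a} \frac{a^k}{k!}\, \delta_k,$$

so that $s_k = k$ and the weight function is $\omega(k) = e^{-a} a^k / k!$.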

References

  1. Arvesú, J.; Coussement, J.; Van Assche, W. (2003). "Some discrete multiple orthogonal polynomials". Journal of Computational and Applied Mathematics. 153: 19–45.