The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions:

The Chebyshev polynomials of the first kind are defined by:
$$T_n(\cos\theta) = \cos(n\theta).$$

Similarly, the Chebyshev polynomials of the second kind are defined by:
$$U_n(\cos\theta)\,\sin\theta = \sin\big((n+1)\theta\big).$$

That these expressions define polynomials in $\cos\theta$ may not be obvious at first sight but follows by rewriting $\cos(n\theta)$ and $\sin\big((n+1)\theta\big)$ using de Moivre's formula or by using the angle sum formulas for $\cos(\alpha+\beta)$ and $\sin(\alpha+\beta)$ repeatedly. For example, the double angle formulas, which follow directly from the angle sum formulas, may be used to obtain $T_2(\cos\theta) = \cos 2\theta = 2\cos^2\theta - 1$ and $U_1(\cos\theta)\sin\theta = \sin 2\theta = 2\cos\theta\sin\theta$, which are respectively a polynomial in $\cos\theta$ and a polynomial in $\cos\theta$ multiplied by $\sin\theta$. Hence $T_2(x) = 2x^2 - 1$ and $U_1(x) = 2x$.
An important and convenient property of the Tn(x) is that they are orthogonal with respect to the following inner product:
$$\langle f, g\rangle = \int_{-1}^{1} f(x)\,g(x)\,\frac{dx}{\sqrt{1-x^2}},$$
and Un(x) are orthogonal with respect to another, analogous inner product, given below.
The Chebyshev polynomials Tn are polynomials with the largest possible leading coefficient whose absolute value on the interval [−1, 1] is bounded by 1. They are also the "extremal" polynomials for many other properties. [1]
In 1952, Cornelius Lanczos showed that the Chebyshev polynomials are important in approximation theory for the solution of linear systems; [2] the roots of Tn(x), which are also called Chebyshev nodes , are used as matching points for optimizing polynomial interpolation. The resulting interpolation polynomial minimizes the problem of Runge's phenomenon and provides an approximation that is close to the best polynomial approximation to a continuous function under the maximum norm, also called the "minimax" criterion. This approximation leads directly to the method of Clenshaw–Curtis quadrature.
These polynomials were named after Pafnuty Chebyshev. [3] The letter T is used because of the alternative transliterations of the name Chebyshev as Tchebycheff, Tchebyshev (French) or Tschebyschow (German).
The Chebyshev polynomials of the first kind are obtained from the recurrence relation:
$$T_0(x) = 1, \qquad T_1(x) = x, \qquad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).$$
The recurrence also allows one to represent them explicitly as the determinant of a tridiagonal matrix of size $k \times k$:
$$T_k(x) = \det\begin{pmatrix} x & 1 & & \\ 1 & 2x & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 2x \end{pmatrix}.$$
The ordinary generating function for Tn is:
$$\sum_{n=0}^{\infty} T_n(x)\,t^n = \frac{1 - tx}{1 - 2tx + t^2}.$$
There are several other generating functions for the Chebyshev polynomials; the exponential generating function is:
$$\sum_{n=0}^{\infty} T_n(x)\,\frac{t^n}{n!} = e^{tx}\cosh\!\left(t\sqrt{x^2 - 1}\right).$$
The generating function relevant for 2-dimensional potential theory and multipole expansion is:
$$\sum_{n=1}^{\infty} T_n(x)\,\frac{t^n}{n} = \ln\frac{1}{\sqrt{1 - 2tx + t^2}}.$$
The Chebyshev polynomials of the second kind are defined by the recurrence relation:
$$U_0(x) = 1, \qquad U_1(x) = 2x, \qquad U_{n+1}(x) = 2x\,U_n(x) - U_{n-1}(x).$$
Notice that the two sets of recurrence relations are identical, except for $T_1(x) = x$ vs. $U_1(x) = 2x$. The ordinary generating function for Un is:
$$\sum_{n=0}^{\infty} U_n(x)\,t^n = \frac{1}{1 - 2tx + t^2},$$
and the exponential generating function is:
$$\sum_{n=0}^{\infty} U_n(x)\,\frac{t^n}{n!} = e^{tx}\left(\cosh\!\left(t\sqrt{x^2 - 1}\right) + \frac{x}{\sqrt{x^2 - 1}}\sinh\!\left(t\sqrt{x^2 - 1}\right)\right).$$
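As a brief numerical illustration (a minimal sketch assuming NumPy; the helper names are illustrative, not a standard API), the two recurrences can be checked directly against the trigonometric definitions:

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_n(x) via T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}."""
    x = np.asarray(x, dtype=float)
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def chebyshev_U(n, x):
    """Evaluate U_n(x) via U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}."""
    x = np.asarray(x, dtype=float)
    u_prev, u = np.ones_like(x), 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

theta = np.linspace(0.1, 3.0, 7)
x, n = np.cos(theta), 5
print(np.allclose(chebyshev_T(n, x), np.cos(n * theta)))                        # T_n(cos θ) = cos nθ
print(np.allclose(chebyshev_U(n, x) * np.sin(theta), np.sin((n + 1) * theta)))  # U_n(cos θ) sin θ = sin (n+1)θ
```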
As described in the introduction, the Chebyshev polynomials of the first kind can be defined as the unique polynomials satisfying:
$$T_n(\cos\theta) = \cos(n\theta),$$
or, in other words, as the unique polynomials satisfying:
$$T_n(x) = \cos(n\arccos x) \quad \text{for } -1 \le x \le 1,$$
for n = 0, 1, 2, 3, ….
The polynomials of the second kind satisfy:
$$U_{n-1}(\cos\theta)\,\sin\theta = \sin(n\theta),$$
or
$$U_n(\cos\theta) = \frac{\sin\big((n+1)\theta\big)}{\sin\theta},$$
which is structurally quite similar to the Dirichlet kernel Dn(x):
$$D_n(x) = \frac{\sin\big((2n+1)\tfrac{x}{2}\big)}{\sin\tfrac{x}{2}} = U_{2n}\!\left(\cos\tfrac{x}{2}\right).$$
(The Dirichlet kernel, in fact, coincides with what is now known as the Chebyshev polynomial of the fourth kind.)
An equivalent way to state this is via exponentiation of a complex number: given a complex number z = a + bi with absolute value of one:
$$z^n = T_n(a) + i\,b\,U_{n-1}(a).$$
Chebyshev polynomials can be defined in this form when studying trigonometric polynomials. [4]
That cos nx is an nth-degree polynomial in cos x can be seen by observing that cos nx is the real part of one side of de Moivre's formula:
$$\cos(nx) + i\sin(nx) = (\cos x + i\sin x)^n.$$
The real part of the other side is a polynomial in cos x and sin x, in which all powers of sin x are even and thus replaceable through the identity cos²x + sin²x = 1. By the same reasoning, sin nx is the imaginary part of the polynomial, in which all powers of sin x are odd and thus, if one factor of sin x is factored out, the remaining factors can be replaced to create an (n−1)st-degree polynomial in cos x.
Chebyshev polynomials can also be characterized by the following theorem: [5]
If $F_n(x)$ is a family of monic polynomials with coefficients in a field of characteristic $0$ such that $\deg F_n(x) = n$ and $F_m\big(F_n(x)\big) = F_n\big(F_m(x)\big)$ for all $m$ and $n$, then, up to a simple change of variables, either $F_n(x) = x^n$ for all $n$ or $F_n(x) = 2\,T_n(x/2)$ for all $n$.
The Chebyshev polynomials can also be defined as the solutions to the Pell equation:
$$T_n(x)^2 - \left(x^2 - 1\right)U_{n-1}(x)^2 = 1$$
in a ring R[x]. [6] Thus, they can be generated by the standard technique for Pell equations of taking powers of a fundamental solution:
$$T_n(x) + U_{n-1}(x)\sqrt{x^2 - 1} = \left(x + \sqrt{x^2 - 1}\right)^n.$$
The Chebyshev polynomials of the first and second kinds correspond to a complementary pair of Lucas sequences Ṽn(P, Q) and Ũn(P, Q) with parameters P = 2x and Q = 1:
$$\tilde U_n(2x, 1) = U_{n-1}(x), \qquad \tilde V_n(2x, 1) = 2\,T_n(x).$$
It follows that they also satisfy a pair of mutual recurrence equations: [7]
$$T_{n+1}(x) = x\,T_n(x) - \left(1 - x^2\right)U_{n-1}(x),$$
$$U_{n+1}(x) = x\,U_n(x) + T_{n+1}(x).$$
The second of these may be rearranged using the recurrence definition for the Chebyshev polynomials of the second kind to give:
$$T_n(x) = \frac{1}{2}\big(U_n(x) - U_{n-2}(x)\big).$$
Using this formula iteratively gives the sum formula:
$$U_n(x) = \begin{cases} 2\big(T_n(x) + T_{n-2}(x) + \cdots + T_1(x)\big) & \text{if } n \text{ is odd}, \\[4pt] 2\big(T_n(x) + T_{n-2}(x) + \cdots + T_2(x)\big) + 1 & \text{if } n \text{ is even}, \end{cases}$$
while replacing $U_n(x)$ and $U_{n-2}(x)$ using the derivative formula for $T_n(x)$ gives the recurrence relationship for the derivative of $T_n$:
$$2\,T_n(x) = \frac{1}{n+1}\frac{d}{dx}T_{n+1}(x) - \frac{1}{n-1}\frac{d}{dx}T_{n-1}(x), \qquad n = 2, 3, \ldots$$
This relationship is used in the Chebyshev spectral method of solving differential equations.
Turán's inequalities for the Chebyshev polynomials are: [8]
$$T_n(x)^2 - T_{n-1}(x)\,T_{n+1}(x) = 1 - x^2 > 0 \quad \text{for } -1 < x < 1,$$
$$U_n(x)^2 - U_{n-1}(x)\,U_{n+1}(x) = 1 > 0.$$
The integral relations are [7] : 187(47)(48) [9]
$$\int_{-1}^{1}\frac{T_n(y)}{(y - x)\sqrt{1 - y^2}}\,dy = \pi\,U_{n-1}(x),$$
$$\int_{-1}^{1}\frac{U_{n-1}(y)\sqrt{1 - y^2}}{y - x}\,dy = -\pi\,T_n(x),$$
where the integrals are considered as principal value.
Different approaches to defining Chebyshev polynomials lead to different explicit expressions. The trigonometric definition gives an explicit formula as follows:
$$T_n(x) = \begin{cases} \cos(n\arccos x) & \text{if } -1 \le x \le 1, \\ \cosh(n\,\operatorname{arcosh} x) & \text{if } x \ge 1, \\ (-1)^n \cosh\!\big(n\,\operatorname{arcosh}(-x)\big) & \text{if } x \le -1. \end{cases}$$
From this trigonometric form, the recurrence definition can be recovered by computing directly that the base cases hold:
$$T_0(\cos\theta) = \cos(0\cdot\theta) = 1 \quad \text{and} \quad T_1(\cos\theta) = \cos\theta,$$
and that the product-to-sum identity holds:
$$2\cos(n\theta)\cos\theta = \cos\big((n+1)\theta\big) + \cos\big((n-1)\theta\big).$$
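The piecewise form above also evaluates T_n outside [−1, 1]; a minimal sketch (assuming NumPy, with a helper name of our choosing):

```python
import numpy as np

def cheb_T_explicit(n, x):
    """T_n(x) via cos(n arccos x) on [-1, 1] and cosh(n arcosh |x|) outside."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    inside = np.abs(x) <= 1
    out[inside] = np.cos(n * np.arccos(x[inside]))
    out[x > 1] = np.cosh(n * np.arccosh(x[x > 1]))
    out[x < -1] = (-1) ** n * np.cosh(n * np.arccosh(-x[x < -1]))
    return out

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(cheb_T_explicit(4, x))
print(8 * x**4 - 8 * x**2 + 1)   # same values from the polynomial form of T_4
```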
Using the complex number exponentiation definition of the Chebyshev polynomial, one can derive the following expression:
$$T_n(x) = \frac{1}{2}\left[\left(x - \sqrt{x^2 - 1}\right)^n + \left(x + \sqrt{x^2 - 1}\right)^n\right].$$
The two are equivalent because $\left(x + \sqrt{x^2 - 1}\right)\left(x - \sqrt{x^2 - 1}\right) = 1$.
An explicit form of the Chebyshev polynomial in terms of monomials xk follows from de Moivre's formula:
$$T_n(\cos\theta) = \operatorname{Re}\big(\cos(n\theta) + i\sin(n\theta)\big) = \operatorname{Re}\big((\cos\theta + i\sin\theta)^n\big),$$
where Re denotes the real part of a complex number. Expanding the formula, one gets:
$$(\cos\theta + i\sin\theta)^n = \sum_{j=0}^{n}\binom{n}{j}\,i^j \sin^j\theta\,\cos^{n-j}\theta.$$
The real part of the expression is obtained from summands corresponding to even indices. Noting $i^{2j} = (-1)^j$ and $\sin^{2j}\theta = \left(1 - \cos^2\theta\right)^j$, one gets the explicit formula:
$$\cos(n\theta) = \sum_{j=0}^{\lfloor n/2\rfloor}\binom{n}{2j}\left(\cos^2\theta - 1\right)^j \cos^{n-2j}\theta,$$
which in turn means that:
$$T_n(x) = \sum_{j=0}^{\lfloor n/2\rfloor}\binom{n}{2j}\left(x^2 - 1\right)^j x^{n-2j}.$$
This can be written as a 2F1 hypergeometric function:
$$T_n(x) = {}_2F_1\!\left(-n,\, n;\, \tfrac{1}{2};\, \tfrac{1 - x}{2}\right),$$
with inverse: [10] [11]
$$x^n = 2^{1-n}\mathop{{\sum}'}_{\substack{j=0 \\ n - j \text{ even}}}^{n}\binom{n}{\tfrac{n-j}{2}}\,T_j(x),$$
where the prime at the summation symbol indicates that the contribution of j = 0 needs to be halved if it appears.
A related expression for Tn as a sum of monomials with binomial coefficients and powers of two is:
$$T_n(x) = \frac{n}{2}\sum_{j=0}^{\lfloor n/2\rfloor}\frac{(-1)^j}{n - j}\binom{n - j}{j}(2x)^{n-2j}, \qquad n \ge 1.$$
Similarly, Un can be expressed in terms of hypergeometric functions:
$$U_n(x) = \frac{\sin\big((n+1)\arccos x\big)}{\sin(\arccos x)} = (n + 1)\,{}_2F_1\!\left(-n,\, n + 2;\, \tfrac{3}{2};\, \tfrac{1 - x}{2}\right).$$
The Chebyshev polynomials of either kind satisfy:
$$T_n(-x) = (-1)^n\,T_n(x), \qquad U_n(-x) = (-1)^n\,U_n(x).$$
That is, Chebyshev polynomials of even order have even symmetry and therefore contain only even powers of x. Chebyshev polynomials of odd order have odd symmetry and therefore contain only odd powers of x.
A Chebyshev polynomial of either kind with degree n has n different simple roots, called Chebyshev roots, in the interval [−1, 1]. The roots of the Chebyshev polynomial of the first kind are sometimes called Chebyshev nodes because they are used as nodes in polynomial interpolation. Using the trigonometric definition and the fact that:
$$\cos\left(\frac{\pi}{2}(2k + 1)\right) = 0,$$
one can show that the roots of Tn are:
$$x_k = \cos\left(\frac{\pi(2k + 1)}{2n}\right), \qquad k = 0, \ldots, n - 1.$$
Similarly, the roots of Un are:
$$x_k = \cos\left(\frac{k\pi}{n + 1}\right), \qquad k = 1, \ldots, n.$$
The extrema of Tn on the interval −1 ≤ x ≤ 1 are located at:
$$x_k = \cos\left(\frac{k\pi}{n}\right), \qquad k = 0, \ldots, n.$$
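A short sketch (assuming NumPy) of the root and extremum formulas just given:

```python
import numpy as np

n = 6
roots_T = np.cos(np.pi * (2 * np.arange(n) + 1) / (2 * n))   # zeros of T_n (Chebyshev nodes)
roots_U = np.cos(np.pi * np.arange(1, n + 1) / (n + 1))      # zeros of U_n
extrema_T = np.cos(np.pi * np.arange(n + 1) / n)             # extrema of T_n on [-1, 1]

# T_n vanishes at its roots and alternates between +1 and -1 at its extrema:
print(np.allclose(np.cos(n * np.arccos(roots_T)), 0.0))
print(np.cos(n * np.arccos(extrema_T)).round(12))
```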
One unique property of the Chebyshev polynomials of the first kind is that on the interval −1 ≤ x ≤ 1 all of the extrema have values that are either −1 or 1. Thus these polynomials have only two finite critical values, the defining property of Shabat polynomials. Both the first and second kinds of Chebyshev polynomial have extrema at the endpoints, given by:
$$T_n(1) = 1, \qquad T_n(-1) = (-1)^n,$$
$$U_n(1) = n + 1, \qquad U_n(-1) = (-1)^n\,(n + 1).$$
The extrema of $T_n(x)$ on the interval $-1 \le x \le 1$ where $n > 0$ are located at $n + 1$ values of $x$. They are $\pm 1$, or $\cos\left(\frac{2\pi k}{d}\right)$ where $d > 2$, $d \mid 2n$, $0 < k < d/2$, and $k$ and $d$ are relatively prime numbers.
Specifically, [12] [13] when $n$ is even:
When $n$ is odd:
This result has been generalized to solutions of , [13] and to and for Chebyshev polynomials of the third and fourth kinds, respectively. [14]
The derivatives of the polynomials can be less than straightforward. By differentiating the polynomials in their trigonometric forms, it can be shown that:
$$\frac{dT_n}{dx} = n\,U_{n-1}(x),$$
$$\frac{dU_n}{dx} = \frac{(n + 1)\,T_{n+1}(x) - x\,U_n(x)}{x^2 - 1},$$
$$\frac{d^2 T_n}{dx^2} = n\,\frac{n\,T_n(x) - x\,U_{n-1}(x)}{x^2 - 1}.$$
The last two formulas can be numerically troublesome due to the division by zero (0/0 indeterminate form, specifically) at x = 1 and x = −1. By L'Hôpital's rule:
$$\left.\frac{d^2 T_n}{dx^2}\right|_{x=1} = \frac{n^4 - n^2}{3}, \qquad \left.\frac{d^2 T_n}{dx^2}\right|_{x=-1} = (-1)^n\,\frac{n^4 - n^2}{3}.$$
More generally,
$$\left.\frac{d^p T_n}{dx^p}\right|_{x=\pm 1} = (\pm 1)^{n+p}\prod_{k=0}^{p-1}\frac{n^2 - k^2}{2k + 1},$$
which is of great use in the numerical solution of eigenvalue problems.
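These derivative identities are easy to spot-check numerically; a minimal sketch assuming NumPy and its numpy.polynomial.chebyshev module:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n = 7
Tn = Chebyshev.basis(n)                        # T_n in the Chebyshev basis
dTn = Tn.deriv()

theta = np.linspace(0.2, 3.0, 9)
x = np.cos(theta)
U_nm1 = np.sin(n * theta) / np.sin(theta)      # U_{n-1}(cos θ) = sin(nθ)/sin θ

print(np.allclose(dTn(x), n * U_nm1))                     # d/dx T_n = n U_{n-1}
print(np.isclose(dTn(1.0), n**2))                         # T_n'(1) = n^2 (the p = 1 case above)
print(np.isclose(Tn.deriv(2)(1.0), (n**4 - n**2) / 3))    # T_n''(1) = (n^4 - n^2)/3
```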
Also, we have: where the prime at the summation symbols means that the term contributed by k = 0 is to be halved, if it appears.
Concerning integration, the first derivative of the Tn implies that:
$$\int U_n(x)\,dx = \frac{T_{n+1}(x)}{n + 1},$$
and the recurrence relation for the first kind polynomials involving derivatives establishes that for n ≥ 2:
$$\int T_n(x)\,dx = \frac{n\,T_{n+1}(x)}{n^2 - 1} - \frac{x\,T_n(x)}{n - 1}.$$
The last formula can be further manipulated to express the integral of Tn as a function of Chebyshev polynomials of the first kind only:
$$\int T_n(x)\,dx = \frac{1}{2}\left(\frac{T_{n+1}(x)}{n + 1} - \frac{T_{n-1}(x)}{n - 1}\right).$$
Furthermore, we have:
$$\int_{-1}^{1} T_n(x)\,dx = \begin{cases} \dfrac{(-1)^n + 1}{1 - n^2} & \text{if } n \ne 1, \\[6pt] 0 & \text{if } n = 1. \end{cases}$$
The Chebyshev polynomials of the first kind satisfy the relation:
$$T_m(x)\,T_n(x) = \tfrac{1}{2}\big(T_{m+n}(x) + T_{|m-n|}(x)\big),$$
which is easily proved from the product-to-sum formula for the cosine:
$$2\cos\alpha\cos\beta = \cos(\alpha + \beta) + \cos(\alpha - \beta).$$
For n = 1 this results in the already known recurrence formula, just arranged differently, and with n = 2 it forms the recurrence relation for all even or all odd indexed Chebyshev polynomials (depending on the parity of the lowest m) which implies the evenness or oddness of these polynomials. Three more useful formulas for evaluating Chebyshev polynomials can be concluded from this product expansion:
$$T_{2n}(x) = 2\,T_n(x)^2 - 1,$$
$$T_{2n+1}(x) = 2\,T_{n+1}(x)\,T_n(x) - x,$$
$$T_{2n-1}(x) = 2\,T_{n-1}(x)\,T_n(x) - x.$$
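The doubling formulas above give a fast way to evaluate T_n at a point by index halving, analogous to binary exponentiation; a minimal sketch (the function name is ours, not standard):

```python
import math

def cheb_T_pair(n, x):
    """Return (T_n(x), T_{n+1}(x)) using the index-doubling identities above."""
    if n == 0:
        return 1.0, x
    m, r = divmod(n, 2)
    tm, tm1 = cheb_T_pair(m, x)          # T_m, T_{m+1}
    t2m = 2.0 * tm * tm - 1.0            # T_{2m}   = 2 T_m^2 - 1
    t2m1 = 2.0 * tm1 * tm - x            # T_{2m+1} = 2 T_{m+1} T_m - x
    if r == 0:
        return t2m, t2m1
    t2m2 = 2.0 * tm1 * tm1 - 1.0         # T_{2m+2} = 2 T_{m+1}^2 - 1
    return t2m1, t2m2

n, x = 100, 0.3
print(cheb_T_pair(n, x)[0], math.cos(n * math.acos(x)))   # the two values agree
```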
The polynomials of the second kind satisfy the similar relation:
$$T_m(x)\,U_n(x) = \tfrac{1}{2}\big(U_{m+n}(x) + U_{n-m}(x)\big)$$
(with the definition U−1 ≡ 0 by convention). They also satisfy:
$$U_m(x)\,U_n(x) = \sum_{k=0}^{n} U_{m-n+2k}(x)$$
for m ≥ n. For n = 2 this recurrence reduces to:
$$U_{m+2}(x) = U_2(x)\,U_m(x) - U_m(x) - U_{m-2}(x) = U_m(x)\big(U_2(x) - 1\big) - U_{m-2}(x),$$
which establishes the evenness or oddness of the even or odd indexed Chebyshev polynomials of the second kind depending on whether m starts with 2 or 3.
The trigonometric definitions of Tn and Un imply the composition or nesting properties: [15]
$$T_{mn}(x) = T_m\big(T_n(x)\big),$$
$$U_{mn-1}(x) = U_{m-1}\big(T_n(x)\big)\,U_{n-1}(x).$$
For Tmn the order of composition may be reversed, making the family of polynomial functions Tn a commutative semigroup under composition.
Since Tm(x) is divisible by x if m is odd, it follows that Tmn(x) is divisible by Tn(x) if m is odd. Furthermore, Umn−1(x) is divisible by Un−1(x), and in the case that m is even, divisible by Tn(x)Un−1(x).
Both Tn and Un form a sequence of orthogonal polynomials. The polynomials of the first kind Tn are orthogonal with respect to the weight:
$$\frac{1}{\sqrt{1 - x^2}}$$
on the interval [−1, 1], i.e. we have:
$$\int_{-1}^{1} T_n(x)\,T_m(x)\,\frac{dx}{\sqrt{1 - x^2}} = \begin{cases} 0 & \text{if } n \ne m, \\ \pi & \text{if } n = m = 0, \\ \dfrac{\pi}{2} & \text{if } n = m \ne 0. \end{cases}$$
This can be proven by letting x = cos θ and using the defining identity Tn(cos θ) = cos(nθ).
Similarly, the polynomials of the second kind Un are orthogonal with respect to the weight:
$$\sqrt{1 - x^2}$$
on the interval [−1, 1], i.e. we have:
$$\int_{-1}^{1} U_n(x)\,U_m(x)\,\sqrt{1 - x^2}\,dx = \begin{cases} 0 & \text{if } n \ne m, \\ \dfrac{\pi}{2} & \text{if } n = m. \end{cases}$$
(The measure $\sqrt{1 - x^2}\,dx$ is, to within a normalizing constant, the Wigner semicircle distribution.)
These orthogonality properties follow from the fact that the Chebyshev polynomials solve the Chebyshev differential equations:
$$(1 - x^2)\,y'' - x\,y' + n^2\,y = 0,$$
$$(1 - x^2)\,y'' - 3x\,y' + n(n + 2)\,y = 0,$$
which are Sturm–Liouville differential equations. It is a general feature of such differential equations that there is a distinguished orthonormal set of solutions. (Another way to define the Chebyshev polynomials is as the solutions to those equations.)
The Tn also satisfy a discrete orthogonality condition:
$$\sum_{k=0}^{N-1} T_i(x_k)\,T_j(x_k) = \begin{cases} 0 & \text{if } i \ne j, \\ N & \text{if } i = j = 0, \\ \dfrac{N}{2} & \text{if } i = j \ne 0, \end{cases}$$
where N is any integer greater than max(i, j), [9] and the xk are the N Chebyshev nodes (see above) of TN (x):
$$x_k = \cos\left(\frac{\pi(2k + 1)}{2N}\right), \qquad k = 0, \ldots, N - 1.$$
For the polynomials of the second kind and any integer N > i + j with the same Chebyshev nodes xk, there are similar sums:
$$\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k)\left(1 - x_k^2\right) = \begin{cases} 0 & \text{if } i \ne j, \\ \dfrac{N}{2} & \text{if } i = j, \end{cases}$$
and without the weight function:
$$\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k) = \begin{cases} 0 & \text{if } i \not\equiv j \pmod{2}, \\ N\big(1 + \min(i, j)\big) & \text{if } i \equiv j \pmod{2}. \end{cases}$$
For any integer N > i + j, based on the N zeros of UN (x):
$$y_k = \cos\left(\frac{k\pi}{N + 1}\right), \qquad k = 1, \ldots, N,$$
one can get the sum:
$$\sum_{k=1}^{N} U_i(y_k)\,U_j(y_k)\left(1 - y_k^2\right) = \begin{cases} 0 & \text{if } i \ne j, \\ \dfrac{N + 1}{2} & \text{if } i = j, \end{cases}$$
and again without the weight function:
$$\sum_{k=1}^{N} U_i(y_k)\,U_j(y_k) = \begin{cases} 0 & \text{if } i \not\equiv j \pmod{2}, \\ \big(\min(i, j) + 1\big)\big(N - \max(i, j)\big) & \text{if } i \equiv j \pmod{2}. \end{cases}$$
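A quick numerical check of the first-kind discrete orthogonality sum over the Chebyshev nodes (a sketch assuming NumPy):

```python
import numpy as np

N = 12
xk = np.cos(np.pi * (2 * np.arange(N) + 1) / (2 * N))   # the N zeros of T_N

def T(i, x):
    return np.cos(i * np.arccos(x))

for i, j in [(0, 0), (3, 3), (2, 5), (4, 4)]:
    s = np.sum(T(i, xk) * T(j, xk))
    print(i, j, round(s, 10))   # N if i = j = 0, N/2 if i = j != 0, 0 otherwise
```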
For any given n ≥ 1, among the polynomials of degree n with leading coefficient 1 (monic polynomials):
$$f(x) = \frac{1}{2^{n-1}}\,T_n(x)$$
is the one of which the maximal absolute value on the interval [−1, 1] is minimal.
This maximal absolute value is:
$$\frac{1}{2^{n-1}},$$
and |f(x)| reaches this maximum exactly n + 1 times at:
$$x = \cos\frac{k\pi}{n}, \qquad 0 \le k \le n.$$
Let's assume that wn(x) is a polynomial of degree n with leading coefficient 1 with maximal absolute value on the interval [−1, 1] less than $\frac{1}{2^{n-1}}$.
Define
$$f_n(x) = \frac{1}{2^{n-1}}\,T_n(x) - w_n(x).$$
Because at the extreme points of Tn we have $|w_n(x)| < \left|\frac{1}{2^{n-1}}\,T_n(x)\right|$, it follows that:
$$f_n(x) > 0 \quad \text{for } x = \cos\frac{2k\pi}{n}, \qquad f_n(x) < 0 \quad \text{for } x = \cos\frac{(2k + 1)\pi}{n}.$$
From the intermediate value theorem, fn(x) has at least n roots. However, this is impossible, as fn(x) is a polynomial of degree n − 1, so the fundamental theorem of algebra implies it has at most n − 1 roots.
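A quick numerical illustration of this minimality (a sketch assuming NumPy; the competing monic polynomial x^n is an arbitrary choice):

```python
import numpy as np

n = 6
x = np.linspace(-1, 1, 20001)

monic_cheb = np.cos(n * np.arccos(x)) / 2 ** (n - 1)   # 2^{1-n} T_n(x), a monic polynomial
plain_monomial = x ** n                                 # another monic polynomial of degree n

print(np.max(np.abs(monic_cheb)), 2.0 ** (1 - n))       # both are 1/32 = 0.03125
print(np.max(np.abs(plain_monomial)))                   # 1.0, much larger
```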
By the equioscillation theorem, among all the polynomials of degree ≤ n, the polynomial f minimizes ‖ f ‖∞ on [−1, 1] if and only if there are n + 2 points −1 ≤ x0 < x1 < ⋯ < xn + 1 ≤ 1 such that | f(xi)| = ‖ f ‖∞.
Of course, the null polynomial on the interval [−1, 1] can be approximated by itself and minimizes the ∞-norm.
Above, however, | f | reaches its maximum only n + 1 times because we are searching for the best polynomial of degree n ≥ 1 (therefore the theorem invoked previously cannot be used).
The Chebyshev polynomials are a special case of the ultraspherical or Gegenbauer polynomials $C_n^{(\lambda)}(x)$, which themselves are a special case of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$:
$$T_n(x) = \frac{n}{2}\lim_{\lambda\to 0}\frac{C_n^{(\lambda)}(x)}{\lambda} \quad (n \ge 1), \qquad T_n(x) = \frac{P_n^{(-1/2,\,-1/2)}(x)}{P_n^{(-1/2,\,-1/2)}(1)},$$
$$U_n(x) = C_n^{(1)}(x), \qquad U_n(x) = (n + 1)\,\frac{P_n^{(1/2,\,1/2)}(x)}{P_n^{(1/2,\,1/2)}(1)}.$$
Chebyshev polynomials are also a special case of Dickson polynomials:
$$D_n\!\left(2x\alpha,\,\alpha^2\right) = 2\,\alpha^n\,T_n(x), \qquad E_n\!\left(2x\alpha,\,\alpha^2\right) = \alpha^n\,U_n(x).$$
In particular, when $\alpha = 1$, they are related by $D_n(2x, 1) = 2\,T_n(x)$ and $E_n(2x, 1) = U_n(x)$.
The curves given by y = Tn(x), or equivalently, by the parametric equations y = Tn(cos θ) = cos nθ, x = cos θ, are a special case of Lissajous curves with frequency ratio equal to n.
Similar to the formula:
$$T_n(\cos\theta) = \cos(n\theta),$$
we have the analogous formula:
$$T_{2n+1}(\sin\theta) = (-1)^n\,\sin\big((2n + 1)\theta\big).$$
For x ≠ 0:
$$T_n\!\left(\frac{x + x^{-1}}{2}\right) = \frac{x^n + x^{-n}}{2}$$
and:
$$U_{n-1}\!\left(\frac{x + x^{-1}}{2}\right) = \frac{x^n - x^{-n}}{x - x^{-1}},$$
which follows from the fact that this holds by definition for $x = e^{i\theta}$.
There are relations between Legendre polynomials and Chebyshev polynomials, for example:
$$\sum_{k=0}^{n} P_k(x)\,P_{n-k}(x) = U_n(x).$$
Such identities can be proven using generating functions and discrete convolution.
The first few Chebyshev polynomials of the first kind are (OEIS: A028297):
$$T_0(x) = 1$$
$$T_1(x) = x$$
$$T_2(x) = 2x^2 - 1$$
$$T_3(x) = 4x^3 - 3x$$
$$T_4(x) = 8x^4 - 8x^2 + 1$$
$$T_5(x) = 16x^5 - 20x^3 + 5x$$
$$T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1$$
The first few Chebyshev polynomials of the second kind are (OEIS: A053117):
$$U_0(x) = 1$$
$$U_1(x) = 2x$$
$$U_2(x) = 4x^2 - 1$$
$$U_3(x) = 8x^3 - 4x$$
$$U_4(x) = 16x^4 - 12x^2 + 1$$
$$U_5(x) = 32x^5 - 32x^3 + 6x$$
$$U_6(x) = 64x^6 - 80x^4 + 24x^2 - 1$$
In the appropriate Sobolev space, the set of Chebyshev polynomials forms an orthonormal basis, so that a function in the same space can, on −1 ≤ x ≤ 1, be expressed via the expansion: [16]
$$f(x) = \sum_{n=0}^{\infty} a_n\,T_n(x).$$
Furthermore, as mentioned previously, the Chebyshev polynomials form an orthogonal basis which (among other things) implies that the coefficients an can be determined easily through the application of an inner product. This sum is called a Chebyshev series or a Chebyshev expansion.
Since a Chebyshev series is related to a Fourier cosine series through a change of variables, all of the theorems, identities, etc. that apply to Fourier series have a Chebyshev counterpart. [16] These attributes include completeness of the orthogonal system, convergence of the Chebyshev series for piecewise smooth and continuous functions, and convergence at a jump discontinuity to the average of the left and right limits.
The abundance of the theorems and identities inherited from Fourier series makes the Chebyshev polynomials important tools in numerical analysis; for example they are the most popular general purpose basis functions used in the spectral method, [16] often in favor of trigonometric series due to generally faster convergence for continuous functions (Gibbs' phenomenon is still a problem).
Consider the Chebyshev expansion of log(1 + x). One can express:
$$\log(1 + x) = \sum_{n=0}^{\infty} a_n\,T_n(x).$$
One can find the coefficients an either through the application of an inner product or by the discrete orthogonality condition. For the inner product:
$$\int_{-1}^{1}\frac{T_m(x)\,\log(1 + x)}{\sqrt{1 - x^2}}\,dx = \sum_{n=0}^{\infty} a_n\int_{-1}^{1}\frac{T_n(x)\,T_m(x)}{\sqrt{1 - x^2}}\,dx,$$
which gives:
$$a_n = \begin{cases} -\log 2 & \text{if } n = 0, \\[4pt] \dfrac{2\,(-1)^{n+1}}{n} & \text{if } n > 0. \end{cases}$$
Alternatively, when the inner product of the function being approximated cannot be evaluated, the discrete orthogonality condition gives an often useful result for approximate coefficients:
$$a_n \approx \frac{2 - \delta_{0n}}{N}\sum_{k=0}^{N-1} f(x_k)\,T_n(x_k),$$
where δij is the Kronecker delta function and the xk are the N Gauss–Chebyshev zeros of TN (x):
$$x_k = \cos\left(\frac{\pi(2k + 1)}{2N}\right).$$
For any N, these approximate coefficients provide an exact approximation to the function at xk with a controlled error between those points. The exact coefficients are obtained with N = ∞, thus representing the function exactly at all points in [−1,1]. The rate of convergence depends on the function and its smoothness.
This allows us to compute the approximate coefficients an very efficiently through the discrete cosine transform:
$$a_n \approx \frac{2 - \delta_{0n}}{N}\sum_{k=0}^{N-1} f\!\left(\cos\left(\frac{\pi\,(k + \tfrac12)}{N}\right)\right)\cos\left(\frac{\pi\,n\,(k + \tfrac12)}{N}\right).$$
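For instance, the coefficients of the log(1 + x) expansion above can be computed with a type-II DCT and compared against the closed form; a minimal sketch assuming SciPy (scipy.fft.dct with type=2), with the rescaling needed to match the sum shown above:

```python
import numpy as np
from scipy.fft import dct

N = 1024
k = np.arange(N)
xk = np.cos(np.pi * (2 * k + 1) / (2 * N))     # Gauss–Chebyshev nodes (zeros of T_N)

c = dct(np.log1p(xk), type=2) / N              # scipy's unnormalized DCT-II, rescaled by 1/N
c[0] /= 2                                      # a_0 carries an extra factor 1/2

n = np.arange(1, 8)
exact = 2 * (-1) ** (n + 1) / n                # exact coefficients: a_n = 2(-1)^{n+1}/n, a_0 = -log 2
print(abs(c[0] - (-np.log(2))))                # small aliasing error
print(np.max(np.abs(c[1:8] - exact)))          # shrinks as N grows
```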
To provide another example:
$$\sqrt{1 - x^2} = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{T_{2n}(x)}{4n^2 - 1}.$$
The partial sums of:
$$f(x) = \sum_{n=0}^{\infty} a_n\,T_n(x)$$
are very useful in the approximation of various functions and in the solution of differential equations (see spectral method). Two common methods for determining the coefficients an are through the use of the inner product as in Galerkin's method and through the use of collocation which is related to interpolation.
As an interpolant, the N coefficients of the (N − 1)st partial sum are usually obtained on the Chebyshev–Gauss–Lobatto [17] points (or Lobatto grid), which results in minimum error and avoids Runge's phenomenon associated with a uniform grid. This collection of points corresponds to the extrema of the highest order polynomial in the sum, plus the endpoints, and is given by:
$$x_k = -\cos\left(\frac{k\pi}{N - 1}\right), \qquad k = 0, 1, \ldots, N - 1.$$
An arbitrary polynomial of degree N can be written in terms of the Chebyshev polynomials of the first kind. [9] Such a polynomial p(x) is of the form:
$$p(x) = \sum_{n=0}^{N} a_n\,T_n(x).$$
Polynomials in Chebyshev form can be evaluated using the Clenshaw algorithm.
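A compact sketch of Clenshaw evaluation for a first-kind Chebyshev series (the function name is ours, not standard):

```python
def clenshaw_chebyshev(a, x):
    """Evaluate p(x) = sum_{n=0}^{N} a[n] * T_n(x) with Clenshaw's recurrence."""
    b1, b2 = 0.0, 0.0                     # b_{k+1}, b_{k+2}
    for ak in reversed(a[1:]):
        b1, b2 = 2 * x * b1 - b2 + ak, b1
    return a[0] + x * b1 - b2

# p(x) = T_0 + 2 T_1 + 3 T_2 = 6x^2 + 2x - 2
a, x = [1.0, 2.0, 3.0], 0.4
print(clenshaw_chebyshev(a, x), 6 * x**2 + 2 * x - 2)
```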
Polynomials denoted $C_n(x)$ and $S_n(x)$ closely related to Chebyshev polynomials are sometimes used. They are defined by: [18]
$$C_n(x) = 2\,T_n\!\left(\frac{x}{2}\right), \qquad S_n(x) = U_n\!\left(\frac{x}{2}\right),$$
and satisfy:
$$C_n(x) = S_n(x) - S_{n-2}(x).$$
A. F. Horadam called the polynomials $C_n$ Vieta–Lucas polynomials and the polynomials $S_n$ Vieta–Fibonacci polynomials. [19] Lists of both sets of polynomials are given in Viète's Opera Mathematica, Chapter IX, Theorems VI and VII. [20] The Vieta–Lucas and Vieta–Fibonacci polynomials of real argument are, up to a power of $i$ and a shift of index in the case of the latter, equal to Lucas and Fibonacci polynomials Ln and Fn of imaginary argument.
Shifted Chebyshev polynomials of the first and second kinds are related to the Chebyshev polynomials by: [18]
$$T_n^*(x) = T_n(2x - 1), \qquad U_n^*(x) = U_n(2x - 1).$$
When the argument of the Chebyshev polynomial satisfies 2x − 1 ∈ [−1, 1] the argument of the shifted Chebyshev polynomial satisfies x ∈ [0, 1]. Similarly, one can define shifted polynomials for generic intervals [a, b].
Around 1990 the terms "third-kind" and "fourth-kind" came into use in connection with Chebyshev polynomials, although the polynomials denoted by these terms had an earlier development under the name airfoil polynomials. According to J. C. Mason and G. H. Elliott, the terminology "third-kind" and "fourth-kind" is due to Walter Gautschi, "in consultation with colleagues in the field of orthogonal polynomials." [21] The Chebyshev polynomials of the third kind are defined as:
$$V_n(x) = \frac{\cos\left(\left(n + \tfrac12\right)\theta\right)}{\cos\left(\tfrac{\theta}{2}\right)},$$
and the Chebyshev polynomials of the fourth kind are defined as:
$$W_n(x) = \frac{\sin\left(\left(n + \tfrac12\right)\theta\right)}{\sin\left(\tfrac{\theta}{2}\right)},$$
where $\theta = \arccos x$. [21] [22] In the airfoil literature $V_n(x)$ and $W_n(x)$ are denoted $t_n(x)$ and $u_n(x)$. The polynomial families $T_n(x)$, $U_n(x)$, $V_n(x)$, and $W_n(x)$ are orthogonal with respect to the weights:
$$\left(1 - x^2\right)^{-1/2}, \qquad \left(1 - x^2\right)^{1/2}, \qquad (1 - x)^{-1/2}(1 + x)^{1/2}, \qquad (1 + x)^{-1/2}(1 - x)^{1/2},$$
respectively, and are proportional to Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ with: [22]
$$(\alpha, \beta) = \left(-\tfrac12, -\tfrac12\right), \quad \left(\tfrac12, \tfrac12\right), \quad \left(-\tfrac12, \tfrac12\right), \quad \left(\tfrac12, -\tfrac12\right),$$
respectively.
All four families satisfy the recurrence
$$p_n(x) = 2x\,p_{n-1}(x) - p_{n-2}(x)$$
with $p_0(x) = 1$, where $p_n = T_n$, $U_n$, $V_n$, or $W_n$, but they differ according to whether $p_1(x)$ equals $x$, $2x$, $2x - 1$, or $2x + 1$. [21]
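A minimal sketch (assuming NumPy) of this shared recurrence, with the families differing only in $p_1$:

```python
import numpy as np

def cheb_family(kind, n, x):
    """T_n, U_n, V_n or W_n via p_0 = 1, p_1 as below, p_k = 2x p_{k-1} - p_{k-2}."""
    x = np.asarray(x, dtype=float)
    p_prev = np.ones_like(x)
    p = {"T": x, "U": 2 * x, "V": 2 * x - 1, "W": 2 * x + 1}[kind]
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        p_prev, p = p, 2 * x * p - p_prev
    return p

theta = np.linspace(0.3, 2.8, 5)
x, n = np.cos(theta), 4
print(np.allclose(cheb_family("V", n, x), np.cos((n + 0.5) * theta) / np.cos(theta / 2)))
print(np.allclose(cheb_family("W", n, x), np.sin((n + 0.5) * theta) / np.sin(theta / 2)))
```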
Some applications rely on Chebyshev polynomials but may be unable to accommodate the lack of a root at zero, which rules out the use of standard Chebyshev polynomials for these kinds of applications. Even order Chebyshev filter designs using equally terminated passive networks are an example of this. [23] However, even order Chebyshev polynomials may be modified to move the lowest roots down to zero while still maintaining the desirable Chebyshev equi-ripple effect. Such modified polynomials contain two roots at zero, and may be referred to as even order modified Chebyshev polynomials. Even order modified Chebyshev polynomials may be created from the Chebyshev nodes in the same manner as standard Chebyshev polynomials.
where
In the case of even order modified Chebyshev polynomials, the even order modified Chebyshev nodes are used to construct the even order modified Chebyshev polynomials.
where
For example, the 4th order Chebyshev polynomial from the example above is , which by inspection contains no roots of zero. Creating the polynomial from the even order modified Chebyshev nodes creates a 4th order even order modified Chebyshev polynomial of , which by inspection contains two roots at zero, and may be used in applications requiring roots at zero.