Equioscillation theorem

In mathematics, the equioscillation theorem concerns the approximation of continuous functions using polynomials when the merit function is the maximum difference (uniform norm). Its discovery is attributed to Chebyshev. [1]

Statement

Let f be a continuous function from [a, b] to R. Among all the polynomials of degree ≤ n, the polynomial g minimizes the uniform norm of the difference ‖f − g‖_∞ if and only if there are n + 2 points a ≤ x_0 < x_1 < ⋯ < x_{n+1} ≤ b such that f(x_i) − g(x_i) = σ (−1)^i ‖f − g‖_∞, where σ is either −1 or +1. [1] [2]
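
As a concrete check of the criterion, the short Python sketch below works through a toy case of our own choosing (it is not taken from the cited sources): for f(x) = x² on [−1, 1] and the candidate g(x) = 1/2, the error attains its maximal absolute value 1/2 with alternating signs at the n + 2 = 3 points −1, 0 and 1, so g is the best approximation among polynomials of degree at most n = 1.

```python
import numpy as np

# Minimal numerical illustration of the equioscillation criterion
# (toy example of our own, not from the cited sources).
# f(x) = x**2 on [-1, 1]; candidate g(x) = 1/2, degree n = 1.

f = lambda x: x**2
g = lambda x: np.full_like(x, 0.5)

xs = np.linspace(-1.0, 1.0, 10001)
err = f(xs) - g(xs)
E = np.max(np.abs(err))                 # uniform norm of the error: 0.5

pts = np.array([-1.0, 0.0, 1.0])        # candidate equioscillation points
vals = f(pts) - g(pts)                  # [ 0.5, -0.5,  0.5]

print(E)                                # 0.5
print(vals)                             # alternating signs, each of magnitude E
print(np.allclose(np.abs(vals), E))     # True: n + 2 = 3 alternation points
```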

Variants

The equioscillation theorem is also valid when polynomials are replaced by rational functions: among all rational functions whose numerator has degree ≤ n and denominator has degree ≤ m, the rational function g = p/q, with p and q being relatively prime polynomials of degree n − ν and m − μ, minimizes the uniform norm of the difference ‖f − g‖_∞ if and only if there are m + n + 2 − min{μ, ν} points a ≤ x_0 < x_1 < ⋯ < x_{m+n+1−min{μ,ν}} ≤ b such that f(x_i) − g(x_i) = σ (−1)^i ‖f − g‖_∞, where σ is either −1 or +1. [1]
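
For concreteness, the following worked count (degrees chosen purely for illustration, not taken from the sources) shows how the defects ν and μ enter the required number of alternation points:

```latex
% Alternation count m + n + 2 - \min\{\mu,\nu\} for numerator degree bound
% n = 3 and denominator degree bound m = 2 (illustrative values only).
\begin{align*}
  \text{no defect } (\nu = \mu = 0):\quad
    & m + n + 2 - \min\{\mu,\nu\} = 2 + 3 + 2 - 0 = 7 \text{ points},\\
  \text{both degrees deficient by one } (\nu = \mu = 1):\quad
    & m + n + 2 - \min\{\mu,\nu\} = 2 + 3 + 2 - 1 = 6 \text{ points}.
\end{align*}
```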

Algorithms

Several minimax approximation algorithms are available, the most common being the Remez algorithm.
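
The sketch below is a minimal single-point-exchange version of the Remez iteration for the polynomial case, written to illustrate the idea rather than to be used as-is; the variable names, grid resolution and stopping tolerance are our own choices, and production implementations normally use a multi-point exchange with more careful location of the error extrema.

```python
import numpy as np

def remez_poly(f, a, b, n, iters=200, grid=4001, tol=1e-12):
    """Minimal sketch of the Remez exchange for the best degree-n polynomial
    approximation of f on [a, b] in the uniform norm.  Single-point exchange
    with a dense-grid extremum search; illustrative, not production code."""
    k = np.arange(n + 2)
    # Initial reference: n + 2 Chebyshev-type points mapped to [a, b].
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (n + 1)))
    xs = np.linspace(a, b, grid)
    for _ in range(iters):
        # Solve p(x_i) + (-1)^i E = f(x_i) for the coefficients of p and the
        # leveled error E on the current reference.
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(x))
        c, E = sol[:-1], sol[-1]
        err = f(xs) - np.polyval(c[::-1], xs)
        j = np.argmax(np.abs(err))
        if np.abs(err[j]) - abs(E) < tol:        # error is already leveled
            break
        # Exchange the worst grid point into the reference, keeping the signs
        # of the error alternating across the reference points.
        ref_err = f(x) - np.polyval(c[::-1], x)
        xstar, estar = xs[j], err[j]
        if xstar < x[0]:
            x = x if np.sign(estar) == np.sign(ref_err[0]) else np.roll(x, 1)
            x[0] = xstar
        elif xstar > x[-1]:
            x = x if np.sign(estar) == np.sign(ref_err[-1]) else np.roll(x, -1)
            x[-1] = xstar
        else:
            i = np.clip(np.searchsorted(x, xstar), 1, n + 1)
            if np.sign(estar) == np.sign(ref_err[i - 1]):
                x[i - 1] = xstar
            else:
                x[i] = xstar
    return c, abs(E)

# Example: best cubic approximation to exp on [0, 1].
coeffs, level = remez_poly(np.exp, 0.0, 1.0, 3)
```

At convergence the leveled reference error |E| equals the minimax error, and the error curve of the returned polynomial equioscillates on the final reference, which is exactly the certificate described by the theorem.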

References

  1. Golomb, Michael (1962). Lectures on Theory of Approximation.
  2. "Notes on how to prove Chebyshev's equioscillation theorem" (PDF). Archived from the original (PDF) on 2 July 2011. Retrieved 2022-04-22.