Remez inequality

In mathematics, the Remez inequality, discovered by the Soviet mathematician Evgeny Yakovlevich Remez (Remez 1936), gives a bound on the sup norms of certain polynomials, the bound being attained by the Chebyshev polynomials.

The inequality

Let σ be an arbitrary fixed positive number. Define the class of polynomials π_n(σ) to be those polynomials p of degree n for which

$$|p(x)| \le 1$$

on some set of measure ≥ 2 contained in the closed interval [−1, 1+σ]. Then the Remez inequality states that

$$\sup_{p \in \pi_n(\sigma)} \|p\|_\infty = \|T_n\|_\infty,$$

where T_n(x) is the Chebyshev polynomial of degree n, and the supremum norm is taken over the interval [−1, 1+σ].

Observe that T_n is increasing on [1, +∞); hence

$$\|T_n\|_\infty = T_n(1+\sigma).$$
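Since the extremal polynomial is explicit, the equality is easy to probe numerically. The following Python sketch (an illustration only; the degree n, the value of σ, and the random competitor are arbitrary choices, not part of the original statement) evaluates T_n(1+σ) and checks that a competitor from π_n(σ) stays below it:

```python
# Numerical illustration of the Remez inequality: among polynomials of
# degree n with |p| <= 1 on a set of measure 2 inside [-1, 1+sigma],
# the Chebyshev polynomial T_n maximizes the sup norm, which equals
# T_n(1+sigma).  n and sigma are arbitrary illustrative choices.
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import chebyshev as C

n, sigma = 8, 0.3
x_full = np.linspace(-1.0, 1.0 + sigma, 20001)  # grid on [-1, 1+sigma]
x_unit = np.linspace(-1.0, 1.0, 20001)          # a set of measure 2

Tn = C.Chebyshev.basis(n)                       # Chebyshev polynomial T_n
print("T_n(1+sigma)       :", Tn(1.0 + sigma))
print("sup |T_n| on grid  :", np.abs(Tn(x_full)).max())   # same value

# A competitor from pi_n(sigma): a random degree-n polynomial rescaled
# so that |p| <= 1 on [-1, 1]; its sup norm cannot exceed T_n's.
rng = np.random.default_rng(0)
p = Polynomial(rng.standard_normal(n + 1))
p = p / np.abs(p(x_unit)).max()
print("competitor sup norm:", np.abs(p(x_full)).max())
```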

The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary: if J ⊂ ℝ is a finite interval and E ⊂ J is an arbitrary measurable set, then

$$\max_{x \in J} |p(x)| \le \left( \frac{4\,\mu(J)}{\mu(E)} \right)^{n} \sup_{x \in E} |p(x)| \qquad (*)$$

for any polynomial p of degree n; here μ denotes the Lebesgue measure.
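As a sanity check of (*) (purely illustrative; the polynomial, J, and E below are arbitrary choices), one can compare both sides on a grid:

```python
# Check the corollary (*) for one concrete polynomial, with J = [-1, 1]
# (so mu(J) = 2) and E = [-1, 0] (so mu(E) = 1); both choices arbitrary.
import numpy as np
from numpy.polynomial import Polynomial

n = 6
rng = np.random.default_rng(1)
p = Polynomial(rng.standard_normal(n + 1))      # random degree-n polynomial

J = np.linspace(-1.0, 1.0, 20001)
E = np.linspace(-1.0, 0.0, 10001)
lhs = np.abs(p(J)).max()
rhs = (4 * 2.0 / 1.0) ** n * np.abs(p(E)).max() # factor (4 mu(J)/mu(E))**n
print(lhs <= rhs, lhs, rhs)                     # holds with a lot of room
```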

Extensions: Nazarov–Turán lemma

Inequalities similar to (*) have been proved for different classes of functions, and are known as Remez-type inequalities. One important example is Nazarov's inequality for exponential sums (Nazarov 1993):

Nazarov's inequality. Let

$$p(x) = \sum_{k=1}^{n} a_k e^{\lambda_k x}$$

be an exponential sum (with arbitrary λ_k ∈ ℂ), and let J ⊂ ℝ be a finite interval and E ⊂ J an arbitrary measurable set. Then

$$\max_{x \in J} |p(x)| \le e^{\max_k |\Re \lambda_k|\, \mu(J)} \left( \frac{C\,\mu(J)}{\mu(E)} \right)^{n-1} \sup_{x \in E} |p(x)|,$$

where C > 0 is a numerical constant.
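Since the constant C is not specified, the inequality cannot be verified sharply; still, a quick computation (with illustrative values throughout) shows how the two sides compare for a random exponential sum:

```python
# Compare max_J |p| / sup_E |p| with the Nazarov-type factor
# e^{max_k |Re lambda_k| mu(J)} * (mu(J)/mu(E))**(n-1), i.e. the bound
# with C = 1.  The theorem only guarantees the bound for some C > 0.
import numpy as np

n = 4
rng = np.random.default_rng(2)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # coefficients
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # frequencies

def p(x):
    # p(x) = sum_k a_k exp(lambda_k x) evaluated on a real grid x
    return np.exp(np.outer(x, lam)) @ a

J = np.linspace(0.0, 1.0, 20001)   # mu(J) = 1
E = J[J <= 0.25]                   # subinterval with mu(E) = 1/4
ratio = np.abs(p(J)).max() / np.abs(p(E)).max()
factor = np.exp(np.abs(lam.real).max()) * (1.0 / 0.25) ** (n - 1)
print(ratio, factor)
```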

In the special case when the λ_k are purely imaginary with integer imaginary parts, and the subset E is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma.
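Written out, this special case (with λ_k = i m_k, m_k ∈ ℤ, so the exponential prefactor equals 1; this merely specializes the display above, with the same unspecified constant C) reads:

```latex
% Turán's lemma: Nazarov's inequality for purely imaginary integer
% frequencies and an interval E; since Re(lambda_k) = 0, the factor
% e^{max_k |Re lambda_k| mu(J)} reduces to 1.
\[
  p(x) = \sum_{k=1}^{n} a_k e^{i m_k x},\ m_k \in \mathbb{Z}
  \quad\Longrightarrow\quad
  \max_{x \in J} |p(x)|
    \le \left( \frac{C\,\mu(J)}{\mu(E)} \right)^{n-1}
        \sup_{x \in E} |p(x)|.
\]
```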

This inequality also extends to L^p(𝕋), 0 ≤ p ≤ 2, in the following way:

$$\|p\|_{L^p(\mathbb{T})} \le e^{A(n-1)\,\mu(\mathbb{T} \setminus E)}\, \|p\|_{L^p(E)}$$

for some A > 0 independent of p, E, and n. When

$$\mu(E) \ge 1 - \frac{\log n}{n},$$

a similar inequality holds for p > 2. For p = ∞ there is an extension to multidimensional polynomials.

Proof: Applying Nazarov's lemma to the sub-level set E = E_λ = {x : |p(x)| ≤ λ}, λ > 0, leads to

$$\max_{x \in J} |p(x)| \le e^{\max_k |\Re \lambda_k|\, \mu(J)} \left( \frac{C\,\mu(J)}{\mu(E_\lambda)} \right)^{n-1} \sup_{x \in E_\lambda} |p(x)| \le e^{\max_k |\Re \lambda_k|\, \mu(J)} \left( \frac{C\,\mu(J)}{\mu(E_\lambda)} \right)^{n-1} \lambda;$$

thus

$$\mu(E_\lambda) \le C\,\mu(J) \left( \frac{\lambda\, e^{\max_k |\Re \lambda_k|\, \mu(J)}}{\max_{x \in J} |p(x)|} \right)^{\frac{1}{n-1}}.$$

Now fix a set E and choose λ such that μ(E_λ) ≤ ½ μ(E), that is,

$$\lambda = \left( \frac{\mu(E)}{2C\,\mu(J)} \right)^{n-1} e^{-\max_k |\Re \lambda_k|\, \mu(J)} \max_{x \in J} |p(x)|.$$

Note that this implies:

1. μ(E ∖ E_λ) ≥ ½ μ(E);
2. |p(x)| > λ for all x ∈ E ∖ E_λ.

Now

$$\int_{E} |p(x)|^p \, dx \ge \int_{E \setminus E_\lambda} |p(x)|^p \, dx \ge \lambda^p \, \frac{1}{2}\mu(E) = \frac{1}{2}\mu(E) \left( \frac{\mu(E)}{2C\,\mu(J)} \right)^{p(n-1)} e^{-p \max_k |\Re \lambda_k|\, \mu(J)} \max_{x \in J} |p(x)|^p,$$

which completes the proof.
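For convenience, here is the one-line substitution (spelled out here; it is implicit in the proof above) confirming that this choice of λ does give μ(E_λ) ≤ ½ μ(E):

```latex
% Substitute the chosen lambda into the sub-level-set bound:
% the exponential factor and max_J |p| cancel, leaving mu(E)/2.
\[
  \mu(E_\lambda)
  \le C\mu(J)
      \left( \left( \frac{\mu(E)}{2C\mu(J)} \right)^{n-1} \right)^{\frac{1}{n-1}}
  = C\mu(J)\,\frac{\mu(E)}{2C\mu(J)}
  = \frac{\mu(E)}{2}.
\]
```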

Pólya inequality

One of the corollaries of the Remez inequality is the Pólya inequality, which was proved by George Pólya (Pólya 1928), and states that the Lebesgue measure of a sub-level set of a polynomial p of degree n is bounded in terms of the leading coefficient LC(p) as follows:

$$\mu\left\{ x \in \mathbb{R} : |p(x)| \le a \right\} \le 4 \left( \frac{a}{2\,\mathrm{LC}(p)} \right)^{\frac{1}{n}}, \qquad a > 0.$$
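As with the Remez inequality itself, the bound is attained by Chebyshev polynomials, which makes a numerical check easy (an illustration; n and a are arbitrary choices): for p = T_n and a = 1 both sides equal 2.

```python
# Polya inequality at the extremal case p = T_n, a = 1: the sub-level
# set {|T_n| <= 1} is exactly [-1, 1] (measure 2), LC(T_n) = 2**(n-1),
# and the bound 4*(1/(2*2**(n-1)))**(1/n) also equals 2.
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import chebyshev as C

n, a = 7, 1.0
Tn = C.Chebyshev.basis(n)

x = np.linspace(-3.0, 3.0, 600001)            # wide grid containing the set
measure = np.mean(np.abs(Tn(x)) <= a) * 6.0   # grid estimate of the measure
lc = Tn.convert(kind=Polynomial).coef[-1]     # leading coefficient 2**(n-1)
bound = 4 * (a / (2 * lc)) ** (1 / n)
print(measure, bound)                         # both approximately 2
```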

References

- Remez, E. Ya. (1936). "Sur une propriété des polynômes de Tchebyscheff". Comm. Inst. Sci. Kharkow 13: 93–95.
- Nazarov, F. (1993). "Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type". Algebra i Analiz 5 (4): 3–66.
- Pólya, G. (1928). "Beitrag zur Verallgemeinerung des Verzerrungssatzes auf mehrfach zusammenhängende Gebiete". Sitzungsberichte der Preußischen Akademie der Wissenschaften: 280–282.