Helly's selection theorem

In mathematics, Helly's selection theorem (also called the Helly selection principle) states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence. In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions. It is named for the Austrian mathematician Eduard Helly. A more general version of the theorem asserts compactness of the space BVloc of functions locally of bounded total variation that are uniformly bounded at a point.

The theorem has applications throughout mathematical analysis. In probability theory, the result implies compactness of a tight family of measures.

Statement of the theorem

Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of increasing functions mapping a real interval $I$ into the real line $\mathbb{R}$, and suppose that it is uniformly bounded: there are $a, b \in \mathbb{R}$ such that $a \le f_n \le b$ for every $n \in \mathbb{N}$. Then the sequence $(f_n)_{n \in \mathbb{N}}$ admits a pointwise convergent subsequence.
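
For example, the functions $f_n(x) = x^n$ on $I = [0, 1]$ are increasing and uniformly bounded, with $0 \le f_n \le 1$. Here the full sequence already converges pointwise,
$$ \lim_{n \to \infty} x^n = \begin{cases} 0, & 0 \le x < 1, \\ 1, & x = 1, \end{cases} $$
which illustrates that the pointwise limit is again increasing but need not be continuous, and that the convergence need not be uniform.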

Proof

Step 1. An increasing function $f$ on an interval $I$ has at most countably many points of discontinuity.

Let $A = \{\, x \in I : f \text{ is discontinuous at } x \,\}$, i.e. the set of discontinuities of $f$. Since $f$ is increasing, any $x$ in $A$ satisfies $f(x^-) \le f(x^+)$, where $f(x^-) = \lim_{t \to x^-} f(t)$ and $f(x^+) = \lim_{t \to x^+} f(t)$; hence, by discontinuity, $f(x^-) < f(x^+)$. Since the set of rational numbers $\mathbb{Q}$ is dense in $\mathbb{R}$, the set $(f(x^-), f(x^+)) \cap \mathbb{Q}$ is non-empty. Thus, by the axiom of choice, there is a mapping $s$ from $A$ to $\mathbb{Q}$ with $s(x) \in (f(x^-), f(x^+))$ for every $x \in A$.

It is sufficient to show that $s$ is injective, which implies that $A$ has cardinality no larger than that of $\mathbb{Q}$, which is countable. Suppose $x_1, x_2 \in A$ with $x_1 < x_2$. Since $f$ is increasing, $f(x_1^+) \le f(x_2^-)$, and by the construction of $s$,
$$ s(x_1) < f(x_1^+) \le f(x_2^-) < s(x_2), $$
so $s(x_1) < s(x_2)$. Thus $s$ is injective.
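
As a concrete illustration of Step 1, the increasing function $f(x) = \lfloor x \rfloor$ on $\mathbb{R}$ is discontinuous exactly at the integers, a countable set; at an integer $x$ the jump interval is $(f(x^-), f(x^+)) = (x - 1, x)$, and one may take, for instance, $s(x) = x - \tfrac{1}{2}$.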

Step 2. Inductive construction of a subsequence converging at the discontinuities and the rationals.

Let $A = (\mathbb{Q} \cap I) \cup \bigcup_{n \in \mathbb{N}} A_n$, where $A_n$ is the set of discontinuities of $f_n$. Then $A$ is countable (a countable union of countable sets, by Step 1), and it can be enumerated as $A = \{ a_n : n \in \mathbb{N} \}$.

By the uniform boundedness of $(f_n)_{n \in \mathbb{N}}$ and the Bolzano–Weierstrass theorem, there is a subsequence $(f^{(1)}_n)_{n \in \mathbb{N}}$ such that $(f^{(1)}_n(a_1))_{n \in \mathbb{N}}$ converges. Suppose a subsequence $(f^{(k)}_n)_{n \in \mathbb{N}}$ has been chosen such that $(f^{(k)}_n(a_i))_{n \in \mathbb{N}}$ converges for $i = 1, \dots, k$. Then, again by uniform boundedness, there is a subsequence $(f^{(k+1)}_n)_{n \in \mathbb{N}}$ of $(f^{(k)}_n)_{n \in \mathbb{N}}$ such that $(f^{(k+1)}_n(a_{k+1}))_{n \in \mathbb{N}}$ converges; thus $(f^{(k+1)}_n(a_i))_{n \in \mathbb{N}}$ converges for $i = 1, \dots, k+1$.

Let $g_k = f^{(k)}_k$ (the diagonal sequence); then $(g_k)_{k \in \mathbb{N}}$ is a subsequence of $(f_n)_{n \in \mathbb{N}}$ that converges pointwise on $A$.
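
To see that the diagonal sequence converges on $A$: for each fixed $i$ and every $k \ge i$, the function $g_k = f^{(k)}_k$ is a term of the sequence $(f^{(i)}_n)_{n \in \mathbb{N}}$, because each stage of the construction refines the previous one, and these terms occur in increasing order. Hence $(g_k(a_i))_{k \ge i}$ is a subsequence of the convergent sequence $(f^{(i)}_n(a_i))_{n \in \mathbb{N}}$, so $\lim_{k \to \infty} g_k(a_i)$ exists for every $i \in \mathbb{N}$.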

Step 3. The sequence $(g_k)$ converges on $I$ except possibly on an at most countable set.

Let
$$ h_k(x) = \sup \{\, g_k(a) : a \in A,\ a \le x \,\}, \qquad x \in I . $$
Then $h_k(a) = g_k(a)$ for $a \in A$ (because $g_k$ is increasing), and $h_k$ is increasing. Let
$$ h(x) = \limsup_{k \to \infty} h_k(x), \qquad x \in I . $$
Then $h$ is increasing, since suprema and upper limits of increasing functions are increasing, and $h(a) = \lim_{k \to \infty} g_k(a)$ for $a \in A$ by Step 2. By Step 1, $h$ has at most countably many discontinuities.

We will show that $g_k$ converges at every point of continuity of $h$. Let $x$ be a point of continuity of $h$ that is interior to $I$ (the at most two endpoints of $I$ can be added to the countable exceptional set treated in Step 4), and let $q, r \in A$ with $q < x < r$; such $q, r$ exist because $A \supseteq \mathbb{Q} \cap I$ is dense in $I$. Since each $g_k$ is increasing, $g_k(q) \le g_k(x) \le g_k(r)$, hence
$$ \lim_{k \to \infty} g_k(q) \le \liminf_{k \to \infty} g_k(x) \le \limsup_{k \to \infty} g_k(x) \le \lim_{k \to \infty} g_k(r). $$

Thus,
$$ h(q) \le \liminf_{k \to \infty} g_k(x) \le \limsup_{k \to \infty} g_k(x) \le h(r). $$

Since $h$ is continuous at $x$, by taking the limits $q \to x^-$ and $r \to x^+$ along $A$, we have
$$ h(x) \le \liminf_{k \to \infty} g_k(x) \le \limsup_{k \to \infty} g_k(x) \le h(x), $$
thus $\lim_{k \to \infty} g_k(x)$ exists and equals $h(x)$.

Step 4. Choosing a subsequence of $(g_k)$ that converges pointwise on $I$.

This can be done with a diagonal process similar to Step 2.
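
In more detail, the final extraction can be sketched as follows. By Steps 1 and 3, the set $B \subseteq I$ of points at which $(g_k)$ has not yet been shown to converge (the discontinuities of $h$, together with at most two endpoints of $I$) is at most countable; enumerate it as $B = \{ b_1, b_2, \dots \}$. Exactly as in Step 2, uniform boundedness and the Bolzano–Weierstrass theorem yield nested subsequences $(g^{(m)}_k)_{k \in \mathbb{N}}$ of $(g_k)_{k \in \mathbb{N}}$ such that $(g^{(m)}_k(b_i))_{k \in \mathbb{N}}$ converges for $i = 1, \dots, m$. The diagonal sequence $(g^{(k)}_k)_{k \in \mathbb{N}}$ is then a subsequence of $(g_k)_{k \in \mathbb{N}}$, so it still converges at every continuity point of $h$, and it also converges at every point of $B$; hence it converges pointwise on all of $I$.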


With the above steps we have constructed a subsequence of $(f_n)_{n \in \mathbb{N}}$ that converges pointwise on $I$.

Generalisation to BVloc

Let $U$ be an open subset of the real line and let $f_n \colon U \to \mathbb{R}$, $n \in \mathbb{N}$, be a sequence of functions. Suppose that $(f_n)$ has uniformly bounded total variation on any $W$ that is compactly embedded in $U$. That is, for all sets $W \subseteq U$ with compact closure $\bar{W} \subseteq U$,
$$ \sup_{n \in \mathbb{N}} \left( \left\| f_n \right\|_{L^1(W)} + \left\| \frac{\mathrm{d} f_n}{\mathrm{d} t} \right\|_{L^1(W)} \right) < +\infty, $$
where the derivative is taken in the sense of tempered distributions.

Then, there exists a subsequence $(f_{n_k})_{k \in \mathbb{N}}$ of $(f_n)$ and a function $f \colon U \to \mathbb{R}$, locally of bounded variation, such that $f_{n_k}$ converges to $f$ pointwise and locally in $L^1$, that is,
$$ \lim_{k \to \infty} \int_W \bigl| f_{n_k}(x) - f(x) \bigr| \, \mathrm{d} x = 0 \quad \text{for all } W \text{ compactly embedded in } U, $$
and such that, for all $W$ compactly embedded in $U$,
$$ \left\| \frac{\mathrm{d} f}{\mathrm{d} t} \right\|_{M(W)} \le \liminf_{k \to \infty} \left\| \frac{\mathrm{d} f_{n_k}}{\mathrm{d} t} \right\|_{M(W)}, $$
where $\| \cdot \|_{M(W)}$ denotes the total variation norm of a measure on $W$. [1]:132 [1]:122
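
To see how the classical statement fits into this framework (sketched here for an open interval), suppose the $f_n \colon (a, b) \to \mathbb{R}$ are increasing with $a_0 \le f_n \le b_0$ for all $n$. On every $W = (c, d)$ compactly embedded in $(a, b)$ one has
$$ \| f_n \|_{L^1(W)} \le (d - c) \max(|a_0|, |b_0|) \qquad \text{and} \qquad \Bigl\| \frac{\mathrm{d} f_n}{\mathrm{d} t} \Bigr\|_{M(W)} = f_n(d^-) - f_n(c^+) \le b_0 - a_0, $$
so the uniform bound in the hypothesis holds, and the theorem again yields a subsequence converging pointwise (and locally in $L^1$) to a function of locally bounded variation.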

Further generalizations

There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces, is due to Barbu and Precupanu:

Let X be a reflexive, separable Hilbert space and let E be a closed, convex subset of X. Let $\Delta \colon X \to [0, +\infty)$ be positive-definite and homogeneous of degree one. Suppose that $z_n$ is a uniformly bounded sequence in $BV([0, T]; X)$ with $z_n(t) \in E$ for all $n \in \mathbb{N}$ and $t \in [0, T]$. Then there exists a subsequence $z_{n_k}$ and functions $\delta, z \in BV([0, T]; X)$ such that

See also

References

  1. Ambrosio, Luigi; Fusco, Nicola; Pallara, Diego (2000). Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press. doi:10.1093/oso/9780198502456.001.0001. ISBN 9780198502456.