In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.
Uniform integrability is an extension to the notion of a family of functions being dominated in $L_1$, which is central in dominated convergence. Several textbooks on real analysis and measure theory use the following definition: [1] [2]
Definition A: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and to each $\varepsilon>0$ there corresponds a $\delta>0$ such that

$$\int_E |f|\,d\mu<\varepsilon$$

whenever $f\in\Phi$ and $\mu(E)<\delta$.
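As an illustrative sketch (the spike family $f_n = n\,\mathbf{1}_{(0,1/n)}$ and the helper name below are our own choices, not from the source), the $\varepsilon$–$\delta$ condition of Definition A fails for a family that concentrates its mass on sets of shrinking measure:

```python
from fractions import Fraction

# Illustration: on [0,1] with Lebesgue measure, the family f_n = n * 1_(0,1/n)
# is bounded in L^1 (each ||f_n||_1 = 1) but NOT uniformly integrable:
# the sets (0, 1/n) have measure 1/n -> 0, yet each carries integral 1,
# so no single delta can work in Definition A.

def mass_on_interval(n: int, delta: Fraction) -> Fraction:
    """Closed form of the integral of f_n = n * 1_(0,1/n) over E = (0, delta)."""
    return n * min(delta, Fraction(1, n))

# Each f_n integrates to 1 over the whole space:
assert mass_on_interval(10, Fraction(1)) == 1

# For ANY delta > 0, a member with n > 1/delta puts its full mass
# on a set of measure < delta, so the integrals are not uniformly small:
delta = Fraction(1, 1000)
n = 2000                                  # any n > 1/delta
assert Fraction(1, n) < delta             # the support (0,1/n) has measure < delta
assert mass_on_interval(n, delta) == 1    # yet the integral over it is still 1
```

The same family reappears below when uniform integrability is related to $L^1$ convergence.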
Definition A is rather restrictive for infinite measure spaces. A more general definition [3] of uniform integrability that works well in general measure spaces was introduced by G. A. Hunt.
Definition H: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if and only if

$$\inf_{g\in L^1_+(\mu)}\ \sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\,d\mu=0,$$

where $L^1_+(\mu)=\{g\in L^1(\mu): g\geq 0\}$.
Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.
The following result [4] provides another notion equivalent to Hunt's. This equivalence is sometimes given as the definition of uniform integrability.
Theorem 1: If $(X,\mathfrak{M},\mu)$ is a (positive) finite measure space, then a set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if

$$\inf_{a\geq 0}\ \sup_{f\in\Phi}\int_{\{|f|>a\}}|f|\,d\mu=0.$$

If in addition $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, then uniform integrability is equivalent to either of the following conditions:

1. $\inf_{a>0}\ \sup_{f\in\Phi}\int\big(|f|-a\big)^{+}\,d\mu=0$.
2. $\inf_{a>0}\ \sup_{f\in\Phi}\int\big(|f|-a\wedge|f|\big)\,d\mu=0$.
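As a hedged numeric sketch (the dominating function $g(x)=x^{-1/2}$ and the helper name below are our own example, not from the source), a family dominated by a single integrable function satisfies the tail condition of Theorem 1, since $\sup_{f}\int_{\{|f|>a\}}|f|\,d\mu \le \int_{\{g>a\}} g\,d\mu \to 0$:

```python
from fractions import Fraction

# On (0,1] with Lebesgue measure, let g(x) = x^(-1/2), which is in L^1
# (its integral over (0,1] is 2). For any family with |f| <= g, the tail
# integrals in Theorem 1 are bounded by
#   ∫_{g > a} g dμ = ∫_0^{1/a^2} x^(-1/2) dx = 2/a.

def tail_bound(a: int) -> Fraction:
    """Exact value of ∫_{g > a} g dμ for g(x) = x^(-1/2), integer a >= 1."""
    return Fraction(2, a)

# The bound decreases to 0 as a grows, which is exactly the condition
# inf_{a>0} sup_f ∫_{|f|>a} |f| dμ = 0 of Theorem 1:
assert tail_bound(1) == 2
assert tail_bound(1000) == Fraction(1, 500)
assert all(tail_bound(a + 1) < tail_bound(a) for a in range(1, 100))
```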
When the underlying space is $\sigma$-finite, Hunt's definition is equivalent to the following:
Theorem 2: Let $(X,\mathfrak{M},\mu)$ be a $\sigma$-finite measure space, and $h\in L^1(\mu)$ be such that $h>0$ almost everywhere. A set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and for any $\varepsilon>0$, there exists $\delta>0$ such that

$$\sup_{f\in\Phi}\int_A|f|\,d\mu<\varepsilon$$

whenever $\int_A h\,d\mu<\delta$.
A consequence of Theorems 1 and 2 is the equivalence of Definitions A and H for finite measures. Indeed, the statement in Definition A is obtained by taking $h\equiv 1$ in Theorem 2.
In the theory of probability, Definition A or the statement of Theorem 1 are often presented as definitions of uniform integrability using the expectation notation for random variables, [5] [6] [7] that is,

1. A class $\mathcal{C}$ of random variables is called uniformly integrable if

$$\lim_{K\to\infty}\sup_{X\in\mathcal{C}}\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)=0,$$

or alternatively

2. A class $\mathcal{C}$ of random variables is called uniformly integrable (UI) if for every $\varepsilon>0$ there exists $K\in[0,\infty)$ such that $\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)\leq\varepsilon$ for all $X\in\mathcal{C}$, where $I_{\{|X|\geq K\}}$ is the indicator function of the event $\{|X|\geq K\}$.
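As an illustrative sketch of the expectation form above (the two-point family $X_n$ and the helper name are our own example, not from the source): for $X_n$ taking value $n$ with probability $1/n$ and $0$ otherwise, $\operatorname{E}\big(|X_n|\,I_{\{|X_n|\geq K\}}\big)=1$ whenever $n\geq K$, so no finite cutoff $K$ works:

```python
from fractions import Fraction

# Classic non-UI class: X_n = n with probability 1/n, and 0 otherwise.
# E|X_n| = 1 for every n, but E[|X_n| I_{|X_n| >= K}] = n * (1/n) = 1
# whenever n >= K, so the supremum over the class never drops below 1.

def truncated_mean(n: int, K: int) -> Fraction:
    """E[ |X_n| * I_{|X_n| >= K} ], computed in closed form."""
    return n * Fraction(1, n) if n >= K else Fraction(0)

K = 1000
assert truncated_mean(500, K) == 0      # small members vanish after truncation
assert truncated_mean(10**6, K) == 1    # but some member always carries mass 1
# Hence sup_n E[|X_n| I_{|X_n| >= K}] = 1 for every K: the class is not UI.
```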
Another concept associated with uniform integrability is that of tightness. In this article tightness is taken in a more general setting.
Definition: Suppose $(X,\mathfrak{M},\mu)$ is a measure space. Let $\mathcal{C}\subset\mathfrak{M}$ be a collection of sets of finite measure. A family $\Phi\subset L^1(\mu)$ is tight with respect to $\mathcal{C}$ if

$$\inf_{C\in\mathcal{C}}\ \sup_{f\in\Phi}\int_{X\setminus C}|f|\,d\mu=0.$$

A family tight with respect to $\mathcal{C}=\{A\in\mathfrak{M}: \mu(A)<\infty\}$ is just said to be tight.
When the underlying space $X$ is a metric space equipped with the Borel $\sigma$-algebra, $\mu$ is a regular measure, and $\mathcal{C}$ is the collection of all compact subsets of $X$, the notion of $\mathcal{C}$-tightness discussed above coincides with the well-known concept of tightness used in the analysis of regular measures in metric spaces.
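A hedged sketch of a family that is $L^1$-bounded but not tight (the translating indicators $f_n=\mathbf{1}_{[n,n+1]}$ and the restriction to candidate sets of the form $[-M,M]$ are our own illustration; the definition takes the infimum over all sets of finite measure):

```python
# Translating indicators f_n = 1_{[n, n+1]} on R with Lebesgue measure:
# every ||f_n||_1 = 1 and ∫_A |f_n| dμ <= μ(A), yet the family is not tight,
# because unit mass escapes every fixed set C = [-M, M] of finite measure.

def mass_outside(n: int, M: int) -> int:
    """∫_{R \\ [-M, M]} |f_n| dμ for f_n = 1_{[n, n+1]}, integers n >= 0, M >= 1."""
    # [n, n+1] lies outside [-M, M] (up to a null set) iff n >= M;
    # otherwise n + 1 <= M and the interval lies entirely inside.
    return 1 if n >= M else 0

M = 10**6
assert mass_outside(0, M) == 0    # early members sit inside C
assert mass_outside(M, M) == 1    # but unit mass escapes any fixed C
# So sup_n ∫_{X \ C} |f_n| dμ = 1 for every such C: the family is not tight.
```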
For $\sigma$-finite measure spaces, it can be shown that if a family $\Phi\subset L^1(\mu)$ is uniformly integrable, then $\Phi$ is tight. This is captured by the following result, which is often used as a definition of uniform integrability in the analysis literature:
Theorem 3: Suppose $(X,\mathfrak{M},\mu)$ is a $\sigma$-finite measure space. A family $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if

1. $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$;
2. for any $\varepsilon>0$ there exists $\delta>0$ such that $\sup_{f\in\Phi}\int_A|f|\,d\mu<\varepsilon$ whenever $\mu(A)<\delta$;
3. $\Phi$ is tight.

When $\mu(X)<\infty$, condition 3 is redundant (see Theorem 1 above).
There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory, and which does not require random variables to have a finite integral. [8]
Definition: Suppose $(\Omega,\mathcal{F},P)$ is a probability space. A class $\mathcal{C}$ of random variables is uniformly absolutely continuous with respect to $P$ if for any $\varepsilon>0$, there is $\delta>0$ such that $\operatorname{E}\big(|X|\,I_A\big)<\varepsilon$ for all $X\in\mathcal{C}$ whenever $P(A)<\delta$.
It is equivalent to uniform integrability if the measure is finite and has no atoms.
The term "uniform absolute continuity" is not standard,[citation needed] but is used by some authors. [9] [10]
The following results apply to the probabilistic definition. [11]
In the following we use the probabilistic framework; the results remain valid regardless of the finiteness of the measure, provided one adds the boundedness condition on the chosen subset of $L^1(\mu)$.
A family of random variables $\{X_\alpha\}_{\alpha\in\mathrm{A}}$ is uniformly integrable if and only if [16] there exists a random variable $X$ such that $\operatorname{E}|X|<\infty$ and $|X_\alpha|\preceq_{\mathrm{icx}}|X|$ for all $\alpha\in\mathrm{A}$, where $\preceq_{\mathrm{icx}}$ denotes the increasing convex stochastic order defined by $Y\preceq_{\mathrm{icx}}Z$ if $\operatorname{E}[\phi(Y)]\leq\operatorname{E}[\phi(Z)]$ for all nondecreasing convex real functions $\phi$.
A sequence $\{X_n\}$ converges to $X$ in the $L^1$ norm if and only if it converges in measure to $X$ and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if it is uniformly integrable. [17] This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
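A hedged closed-form sketch (the shrinking-spike sequence is our own illustration, not from the source): $X_n = n\,\mathbf{1}_{(0,1/n)}$ under the uniform probability on $[0,1]$ converges to $0$ in probability, yet $\operatorname{E}|X_n|=1$ for every $n$, so there is no $L^1$ convergence, and by the equivalence just stated the sequence cannot be uniformly integrable:

```python
from fractions import Fraction

# X_n = n * 1_(0, 1/n) under the uniform probability measure on [0,1]:
# P(|X_n| > eps) = 1/n -> 0, so X_n -> 0 in probability,
# but E|X_n| = n * (1/n) = 1 for all n, so X_n does NOT converge to 0 in L^1.
# By the Vitali-type equivalence above, {X_n} is therefore not uniformly integrable.

def prob_exceeds(n: int) -> Fraction:
    """P(|X_n| > eps) for any fixed 0 < eps < n (the spike has width 1/n)."""
    return Fraction(1, n)

def mean_abs(n: int) -> Fraction:
    """E|X_n| = n * P(X_n = n), in closed form."""
    return n * Fraction(1, n)

assert prob_exceeds(10**6) == Fraction(1, 10**6)     # -> 0: convergence in probability
assert all(mean_abs(n) == 1 for n in range(1, 200))  # E|X_n| stuck at 1: no L^1 limit
```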