Uniform integrability


In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.


Measure-theoretic definition

Uniform integrability is an extension of the notion of a family of functions being dominated in $L^1$, which is central in dominated convergence. Several textbooks on real analysis and measure theory use the following definition: [1] [2]

Definition A: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and to each $\varepsilon>0$ there corresponds a $\delta>0$ such that

$$\int_E|f|\,d\mu<\varepsilon$$

whenever $f\in\Phi$ and $\mu(E)<\delta$.
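
As a simple illustration of Definition A (a worked example added for concreteness, not taken from the cited texts): every finite set $\{f_1,\dots,f_n\}\subset L^1(\mu)$ is uniformly integrable. Each $\|f_i\|_{L^1(\mu)}$ is finite, and by the absolute continuity of the integral there is, for a given $\varepsilon>0$, a $\delta_i>0$ with

$$\int_E|f_i|\,d\mu<\varepsilon \quad\text{whenever }\mu(E)<\delta_i;$$

taking $\delta=\min(\delta_1,\dots,\delta_n)$ gives the uniform choice required in Definition A.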

Definition A is rather restrictive for infinite measure spaces. A more general definition [3] of uniform integrability that works well in general measure spaces was introduced by G. A. Hunt.

Definition H: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if and only if

$$\inf_{g\in L^1_+(\mu)}\sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\,d\mu=0,$$

where $L^1_+(\mu)=\{g\in L^1(\mu):g\geq 0\}$.
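
To see how Hunt's criterion works in the simplest case (an illustration added here, with the dominating function $g$ as the only assumed ingredient): if the family $\Phi$ is dominated by a single $g\in L^1_+(\mu)$, that is, $|f|\leq g$ $\mu$-almost everywhere for every $f\in\Phi$, then each set $\{|f|>g\}$ is $\mu$-null, so

$$\sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\,d\mu=0$$

and the infimum in Definition H is attained at $g$. Hence every dominated family is uniformly integrable, which makes precise the link to dominated convergence mentioned above.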


Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.

The following result [4] provides another equivalent notion to Hunt's. This equivalence is sometimes given as the definition of uniform integrability.

Theorem 1: If $(X,\mathfrak{M},\mu)$ is a (positive) finite measure space, then a set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if

$$\inf_{a\geq 0}\sup_{f\in\Phi}\int_{\{|f|\geq a\}}|f|\,d\mu=0.$$

Moreover, uniform integrability is then equivalent to either of the following conditions:

1. $\inf_{a\geq 0}\sup_{f\in\Phi}\int_X(|f|-a)^{+}\,d\mu=0$.

2. $\inf_{g\in L^1_+(\mu)}\sup_{f\in\Phi}\int_X(|f|-g)^{+}\,d\mu=0$.

When the underlying space is $\sigma$-finite, Hunt's definition is equivalent to the following:

Theorem 2: Let $(X,\mathfrak{M},\mu)$ be a $\sigma$-finite measure space, and let $h\in L^1(\mu)$ be such that $h>0$ almost everywhere. A set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and for any $\varepsilon>0$ there exists $\delta>0$ such that

$$\sup_{f\in\Phi}\int_A|f|\,d\mu<\varepsilon$$

whenever $\int_A h\,d\mu<\delta$.

A consequence of Theorems 1 and 2 is that the equivalence of Definitions A and H for finite measures follows. Indeed, the statement in Definition A is obtained by taking $h\equiv 1$ in Theorem 2.

Probability definition

In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the notation of expectation of random variables, [5] [6] [7] that is,

1. A class $\mathcal{C}$ of random variables is called uniformly integrable if there exists a finite $M$ such that $\operatorname{E}(|X|)\leq M$ for every $X\in\mathcal{C}$, and for every $\varepsilon>0$ there exists $\delta>0$ such that $\operatorname{E}\bigl(|X|\,\mathbf{1}_A\bigr)\leq\varepsilon$ for every $X\in\mathcal{C}$ and every event $A$ with $\operatorname{P}(A)\leq\delta$;

or alternatively

2. A class $\mathcal{C}$ of random variables is called uniformly integrable (UI) if for every $\varepsilon>0$ there exists $K\in[0,\infty)$ such that $\operatorname{E}\bigl(|X|\,\mathbf{1}_{\{|X|\geq K\}}\bigr)\leq\varepsilon$ for all $X\in\mathcal{C}$, where $\mathbf{1}_{\{|X|\geq K\}}$ is the indicator function
$$\mathbf{1}_{\{|X|\geq K\}}=\begin{cases}1&\text{if }|X|\geq K,\\ 0&\text{if }|X|<K.\end{cases}$$
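
A commonly used sufficient condition, stated here as an illustration of Definition 2 (a standard fact, not part of the definitions above): if $\sup_{X\in\mathcal{C}}\operatorname{E}\bigl(|X|^p\bigr)=M<\infty$ for some $p>1$, then $\mathcal{C}$ is uniformly integrable, since on the event $\{|X|\geq K\}$ one has $|X|\leq|X|^p/K^{p-1}$, so that

$$\sup_{X\in\mathcal{C}}\operatorname{E}\bigl(|X|\,\mathbf{1}_{\{|X|\geq K\}}\bigr)\leq\frac{M}{K^{p-1}}\xrightarrow[K\to\infty]{}0.$$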

Tightness and uniform integrability

One consequence of uniform integrability of a class $\mathcal{C}$ of random variables is that the family of laws or distributions $\{\operatorname{P}\circ|X|^{-1}:X\in\mathcal{C}\}$ is tight. That is, for each $\varepsilon>0$, there exists $a>0$ such that

$$\operatorname{P}(|X|>a)\leq\varepsilon$$

for all $X\in\mathcal{C}$. [8]
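
A short argument for this (a sketch added here, using only Markov's inequality): choosing $K$ with $\sup_{X\in\mathcal{C}}\operatorname{E}\bigl(|X|\,\mathbf{1}_{\{|X|\geq K\}}\bigr)\leq 1$ gives $\operatorname{E}|X|\leq K+1$ for every $X\in\mathcal{C}$, and then

$$\operatorname{P}(|X|>a)\leq\frac{\operatorname{E}|X|}{a}\leq\frac{K+1}{a},$$

which is at most $\varepsilon$ once $a\geq(K+1)/\varepsilon$.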

This, however, does not mean that the family of measures $\mathcal{V}_{\mathcal{C}}:=\bigl\{\mu_X:A\mapsto\int_A|X|\,d\operatorname{P},\ X\in\mathcal{C}\bigr\}$ is tight. (In any case, tightness would require a topology on $\Omega$ in order to be defined.)

Uniform absolute continuity

There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory and which does not require random variables to have a finite integral. [9]

Definition: Suppose $(\Omega,\mathcal{F},\operatorname{P})$ is a probability space. A class $\mathcal{C}$ of random variables is uniformly absolutely continuous with respect to $\operatorname{P}$ if for any $\varepsilon>0$, there is $\delta>0$ such that $\operatorname{E}\bigl(|X|\,\mathbf{1}_A\bigr)<\varepsilon$ for every $X\in\mathcal{C}$ whenever $\operatorname{P}(A)<\delta$.

It is equivalent to uniform integrability if the measure is finite and has no atoms.

The term "uniform absolute continuity" is not standard,[ citation needed ] but is used by some authors. [10] [11]

The following results apply to the probabilistic definition. [12]

Figure: A non-UI sequence of random variables. The area under the strip is always equal to 1, but $X_n\to 0$ pointwise.
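
The caption describes the standard example, written out here for concreteness: on $([0,1],\mathcal{B},\lambda)$ with Lebesgue measure, let $X_n=n\,\mathbf{1}_{(0,1/n)}$. Then $X_n\to 0$ pointwise (and in probability), but for every $K$ and every $n\geq K$,

$$\operatorname{E}\bigl(X_n\,\mathbf{1}_{\{X_n\geq K\}}\bigr)=\operatorname{E}(X_n)=n\cdot\tfrac{1}{n}=1,$$

so $\sup_n\operatorname{E}\bigl(X_n\,\mathbf{1}_{\{X_n\geq K\}}\bigr)=1$ for every $K$ and the sequence is not uniformly integrable.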

Relevant theorems

In the following we use the probabilistic framework; the statements carry over regardless of the finiteness of the measure by adding the boundedness condition on the chosen subset of $L^1(\mu)$.

Relation to convergence of random variables

A sequence $(X_n)_{n\in\mathbb{N}}$ converges to $X$ in the $L^1$ norm if and only if it converges in measure to $X$ and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if it is uniformly integrable. [17] This is a generalization of Lebesgue's dominated convergence theorem; see Vitali convergence theorem.
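
As a small numerical illustration of the same example (a sketch added here, not from the cited sources; the variable names are arbitrary), the following Python snippet estimates $\operatorname{P}(X_n>0)$ and $\operatorname{E}|X_n|$ for $X_n=n\,\mathbf{1}_{\{U<1/n\}}$ with $U$ uniform on $(0,1)$. The first estimate tends to $0$, reflecting convergence to $0$ in probability, while the second stays near $1$, reflecting the failure of convergence in mean and hence of uniform integrability.

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(size=1_000_000)       # samples of U ~ Uniform(0, 1)

    for n in (1, 10, 100, 1000):
        x_n = n * (u < 1.0 / n)           # X_n = n * 1{U < 1/n}
        prob_positive = np.mean(x_n > 0)  # estimates P(X_n > 0) = 1/n -> 0
        mean_abs = np.mean(np.abs(x_n))   # estimates E|X_n| = 1 for every n
        print(n, prob_positive, mean_abs)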

Citations

  1. Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). Singapore: McGraw–Hill Book Co. p. 133. ISBN 0-07-054234-1.
  2. Royden, H. L. & Fitzpatrick, P. M. (2010). Real Analysis (4th ed.). Boston: Prentice Hall. p. 93. ISBN 978-0-13-143747-0.
  3. Hunt, G. A. (1966). Martingales et Processus de Markov. Paris: Dunod. p. 254.
  4. Klenke, A. (2008). Probability Theory: A Comprehensive Course. Berlin: Springer Verlag. pp. 134–137. ISBN 978-1-84800-047-6.
  5. Williams, David (1997). Probability with Martingales (Repr. ed.). Cambridge: Cambridge Univ. Press. pp. 126–132. ISBN 978-0-521-40605-5.
  6. Gut, Allan (2005). Probability: A Graduate Course. Springer. pp. 214–218. ISBN 0-387-22833-0.
  7. Bass, Richard F. (2011). Stochastic Processes. Cambridge: Cambridge University Press. pp. 356–357. ISBN 978-1-107-00800-7.
  8. Gut 2005, p. 236.
  9. Bass 2011, p. 356.
  10. Benedetto, J. J. (1976). Real Variable and Integration. Stuttgart: B. G. Teubner. p. 89. ISBN 3-519-02209-5.
  11. Burrill, C. W. (1972). Measure, Integration, and Probability. McGraw-Hill. p. 180. ISBN 0-07-009223-0.
  12. Gut 2005, pp. 215–216.
  13. Dunford, Nelson (1938). "Uniformity in linear spaces". Transactions of the American Mathematical Society. 44 (2): 305–356. doi:10.1090/S0002-9947-1938-1501971-X. ISSN 0002-9947.
  14. Dunford, Nelson (1939). "A mean ergodic theorem". Duke Mathematical Journal. 5 (3): 635–646. doi:10.1215/S0012-7094-39-00552-1. ISSN 0012-7094.
  15. Meyer, P. A. (1966). Probability and Potentials. Blaisdell Publishing Co., N.Y. p. 19, Theorem T22.
  16. de la Vallée Poussin, C. (1915). "Sur l'intégrale de Lebesgue". Transactions of the American Mathematical Society. 16 (4): 435–501. doi:10.2307/1988879. hdl:10338.dmlcz/127627. JSTOR 1988879.
  17. Bogachev, Vladimir I. (2007). "The spaces Lp and spaces of measures". Measure Theory, Volume I. Berlin Heidelberg: Springer-Verlag. p. 268. doi:10.1007/978-3-540-34514-5_4. ISBN 978-3-540-34513-8.
