In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure.
Consider the unit square S = [0, 1] × [0, 1] in the Euclidean plane ℝ². Consider the probability measure μ defined on S by the restriction of two-dimensional Lebesgue measure λ² to S. That is, the probability of an event E ⊆ S is simply the area of E. We assume E is a measurable subset of S.
Consider a one-dimensional subset of S such as the line segment Lₓ = {x} × [0, 1]. Lₓ has μ-measure zero; every subset of Lₓ is a μ-null set; since the Lebesgue measure space is a complete measure space, E ⊆ Lₓ implies μ(E) = 0.
While true, this is somewhat unsatisfying. It would be nice to say that μ "restricted to" Lₓ is the one-dimensional Lebesgue measure λ¹, rather than the zero measure. The probability of a "two-dimensional" event E could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" E ∩ Lₓ: more formally, if μₓ denotes one-dimensional Lebesgue measure on Lₓ, then μ(E) = ∫₀¹ μₓ(E ∩ Lₓ) dx for any "nice" E ⊆ S. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces.
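The slicing identity above can be checked numerically. A minimal sketch, assuming an illustrative event E = {(x, y) ∈ S : y ≤ x} (the triangle below the diagonal, with area 1/2); the function names and grid size are arbitrary choices, not part of the theorem:

```python
# Numerically check mu(E) = ∫_0^1 mu_x(E ∩ L_x) dx for the event
# E = {(x, y) in [0,1]^2 : y <= x}, whose two-dimensional area is 1/2.
# Here mu_x is one-dimensional Lebesgue measure on the slice L_x = {x} x [0,1],
# so mu_x(E ∩ L_x) is the length of {y in [0,1] : y <= x}, which is x.

def slice_measure(x: float) -> float:
    """Length of the vertical slice of E at abscissa x."""
    return x  # {y in [0,1] : y <= x} has length x

def disintegrated_measure(n: int = 10_000) -> float:
    """Midpoint-rule approximation of ∫_0^1 mu_x(E ∩ L_x) dx."""
    h = 1.0 / n
    return sum(slice_measure((i + 0.5) * h) for i in range(n)) * h

print(disintegrated_measure())  # close to the two-dimensional area mu(E) = 0.5
```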
(Hereafter, P(X) will denote the collection of Borel probability measures on a topological space (X, T).) The assumptions of the theorem are as follows:

- Let Y and X be two Radon spaces (i.e. topological spaces such that every Borel probability measure on them is inner regular, e.g. separable metric spaces on which every probability measure is a Radon measure).
- Let μ ∈ P(Y), let π : Y → X be a Borel-measurable function, and let ν ∈ P(X) be the pushforward measure ν = π∗(μ) = μ ∘ π⁻¹.
The conclusion of the theorem: There exists a ν-almost everywhere uniquely determined family of probability measures {μₓ : x ∈ X} ⊆ P(Y), which provides a "disintegration" of μ into {μₓ : x ∈ X}, such that:

- the function x ↦ μₓ is Borel measurable, in the sense that x ↦ μₓ(B) is a Borel-measurable function for each Borel-measurable set B ⊆ Y;
- μₓ "lives on" the fiber π⁻¹(x): for ν-almost all x ∈ X, μₓ(Y ∖ π⁻¹(x)) = 0, and so μₓ(E) = μₓ(E ∩ π⁻¹(x));
- for every Borel-measurable function f : Y → [0, ∞], ∫_Y f(y) dμ(y) = ∫_X ∫_{π⁻¹(x)} f(y) dμₓ(y) dν(x). In particular, for any event E ⊆ Y, taking f to be the indicator function of E, μ(E) = ∫_X μₓ(E) dν(x).
The original example was a special case of the problem of product spaces, to which the disintegration theorem applies.
When Y is written as a Cartesian product Y = X₁ × X₂ and πᵢ : Y → Xᵢ is the natural projection, then each fibre π₁⁻¹(x₁) can be canonically identified with X₂ and there exists a Borel family of probability measures {μ_{x₁} : x₁ ∈ X₁} in P(X₂) (which is μ₁-almost everywhere uniquely determined, where μ₁ = (π₁)∗(μ)) such that μ = ∫_{X₁} μ_{x₁} μ₁(dx₁); in particular, ∫_{X₁ × X₂} f(x₁, x₂) μ(dx₁, dx₂) = ∫_{X₁} ( ∫_{X₂} f(x₁, x₂) μ(dx₂ | x₁) ) μ₁(dx₁) and μ(A × B) = ∫_A μ(B | x₁) μ₁(dx₁), where μ(· | x₁) denotes μ_{x₁}.
The relation to conditional expectation is given by the identities E(f | π₁)(x₁) = ∫_{X₂} f(x₁, x₂) μ(dx₂ | x₁) and μ(A × B | π₁)(x₁) = 1_A(x₁) · μ(B | x₁).
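On a finite product space these identities reduce to elementary conditional probability, which makes them easy to verify directly. A small illustration (the joint probability table and test function below are arbitrary examples, not part of the theorem):

```python
# Disintegrate a discrete joint distribution mu on X1 x X2 into its
# marginal mu1 on X1 and conditional measures mu(. | x1) on X2, then
# check the tower property: integrating E(f | pi_1) against mu1
# recovers the unconditional expectation of f.

joint = {  # mu({(x1, x2)}): an arbitrary probability table
    ("a", 0): 0.1, ("a", 1): 0.3,
    ("b", 0): 0.2, ("b", 1): 0.4,
}

# Marginal mu1 = pushforward of mu under the projection pi_1
mu1 = {}
for (x1, _), p in joint.items():
    mu1[x1] = mu1.get(x1, 0.0) + p

# Conditional measures mu(x2 | x1) = mu({(x1, x2)}) / mu1({x1})
cond = {(x1, x2): p / mu1[x1] for (x1, x2), p in joint.items()}

def f(x1, x2):
    return x2 + (1 if x1 == "a" else 0)  # an arbitrary test function

# Conditional expectation E(f | pi_1) as a function of x1
cond_exp = {
    x1: sum(cond[(y1, x2)] * f(y1, x2)
            for (y1, x2) in joint if y1 == x1)
    for x1 in mu1
}

# Tower property: ∫ E(f | pi_1) dmu1 = ∫ f dmu
lhs = sum(mu1[x1] * cond_exp[x1] for x1 in mu1)
rhs = sum(p * f(x1, x2) for (x1, x2), p in joint.items())
print(lhs, rhs)  # equal
```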
The disintegration theorem can also be seen as justifying the use of a "restricted" measure in vector calculus. For instance, in Stokes' theorem as applied to a vector field flowing through a compact surface Σ ⊂ ℝ³, it is implicit that the "correct" measure on Σ is the disintegration of three-dimensional Lebesgue measure λ³ on Σ, and that the disintegration of this measure on ∂Σ is the same as the disintegration of λ³ on ∂Σ. [2]
The disintegration theorem can be applied to give a rigorous treatment of conditional probability distributions in statistics, while avoiding purely abstract formulations of conditional probability. [3] The theorem is related to the Borel–Kolmogorov paradox, for example.
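In the absolutely continuous case, the disintegration of a joint law along one coordinate is given by the familiar conditional densities f(y | x) = f(x, y) / f_X(x). A discretized sketch of this (the joint density, the event, and the grid size are illustrative assumptions):

```python
# For a joint density f(x, y) on [0,1]^2, the disintegration of its law
# along the first coordinate has conditional densities f(x, y) / f_X(x),
# where f_X is the marginal density. Verify on a grid that
# mu(E) = ∫ mu_x(E_x) f_X(x) dx for the event E = [0, 0.5] x [0, 0.5].

def f(x, y):
    return x + y  # a valid density: integrates to 1 over the unit square

n = 400
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]  # midpoint grid on [0, 1]

def marginal(x):
    """f_X(x) = ∫_0^1 f(x, y) dy, approximated by the midpoint rule."""
    return sum(f(x, y) * h for y in xs)

# Left side: mu(E) computed directly from the joint density
direct = sum(f(x, y) * h * h
             for x in xs if x < 0.5
             for y in xs if y < 0.5)

# Right side: integrate the conditional slice masses mu_x(E_x)
# against the marginal law of the first coordinate
via_slices = 0.0
for x in xs:
    if x < 0.5:
        fx = marginal(x)
        cond_mass = sum(f(x, y) / fx * h for y in xs if y < 0.5)
        via_slices += cond_mass * fx * h

print(direct, via_slices)  # both approximate mu(E) = 1/8
```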