In probability theory and statistics, the law of the unconscious statistician, or LOTUS, is a theorem which expresses the expected value of a function g(X) of a random variable X in terms of g and the probability distribution of X.
The form of the law depends on the type of random variable X in question. If the distribution of X is discrete and one knows its probability mass function pX, then the expected value of g(X) is

\operatorname{E}[g(X)] = \sum_x g(x)\, p_X(x),

where the sum is over all possible values x of X. If instead the distribution of X is continuous with probability density function fX, then the expected value of g(X) is

\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx.
Both of these special cases can be expressed in terms of the cumulative probability distribution function FX of X, with the expected value of g(X) now given by the Lebesgue–Stieltjes integral

\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\, dF_X(x).
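As a numerical illustration (not part of the statement above; the distribution, density, and function g below are arbitrary choices), the following Python sketch evaluates both formulas: the discrete sum for a small hypothetical probability mass function, and the continuous integral for a standard normal density computed with SciPy.

```python
# Numerical check of LOTUS (illustrative sketch; the distributions and g are
# arbitrary choices, not prescribed by the theorem).
import numpy as np
from scipy import integrate, stats

g = lambda x: x ** 2

# Discrete case: E[g(X)] = sum over x of g(x) * p_X(x).
xs = np.array([-1.0, 0.0, 2.0])          # possible values of X
ps = np.array([0.2, 0.5, 0.3])           # probability mass function p_X
lotus_discrete = np.sum(g(xs) * ps)

# The same expectation computed from the distribution of Y = g(X) directly.
ys, idx = np.unique(g(xs), return_inverse=True)
pys = np.bincount(idx, weights=ps)       # pmf of g(X)
direct_discrete = np.sum(ys * pys)

# Continuous case: E[g(X)] = integral of g(x) * f_X(x) dx, with X ~ N(0, 1).
f = stats.norm(0, 1).pdf
lotus_continuous, _ = integrate.quad(lambda x: g(x) * f(x), -np.inf, np.inf)

print(lotus_discrete, direct_discrete)   # both 1.4
print(lotus_continuous)                  # approximately 1.0, the variance of N(0, 1)
```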
In even greater generality, X could be a random element in any measurable space, in which case the law is given in terms of measure theory and the Lebesgue integral. In this setting, there is no need to restrict the context to probability measures, and the law becomes a general theorem of mathematical analysis on Lebesgue integration relative to a pushforward measure.
This proposition is sometimes known as the law of the unconscious statistician because of a purported tendency to think of the identity as the very definition of the expected value, rather than (more formally) as a consequence of its true definition. [1] The name is sometimes attributed to Sheldon Ross' textbook Introduction to Probability Models, although he removed the reference in later editions. [2] Many statistics textbooks do present the result as the definition of expected value. [3]
A similar property holds for joint distributions, or equivalently, for random vectors. For discrete random variables X and Y, a function of two variables g, and joint probability mass function pX,Y: [4]

\operatorname{E}[g(X, Y)] = \sum_y \sum_x g(x, y)\, p_{X,Y}(x, y).

In the absolutely continuous case, with fX,Y being the joint probability density function,

\operatorname{E}[g(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y)\, f_{X,Y}(x, y)\, dx\, dy.
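For instance, the double sum can be evaluated directly; the sketch below uses a hypothetical joint probability mass function and g(x, y) = xy, chosen only for illustration.

```python
# Bivariate LOTUS on a hypothetical joint pmf p_{X,Y} (values chosen for
# illustration only): E[g(X, Y)] = sum over (x, y) of g(x, y) * p_{X,Y}(x, y).
import numpy as np

x_vals = np.array([0.0, 1.0])             # support of X
y_vals = np.array([1.0, 2.0, 3.0])        # support of Y
p_xy = np.array([[0.1, 0.2, 0.1],         # p_{X,Y}(x_i, y_j), rows indexed by x
                 [0.3, 0.2, 0.1]])
assert np.isclose(p_xy.sum(), 1.0)

g = lambda x, y: x * y
X, Y = np.meshgrid(x_vals, y_vals, indexing="ij")
expectation = np.sum(g(X, Y) * p_xy)
print(expectation)                         # 1*1*0.3 + 1*2*0.2 + 1*3*0.1 = 1.0
```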
A number of special cases are given here. In the simplest case, where the random variable X takes on countably many values (so that its distribution is discrete), the proof is particularly simple, and holds without modification if X is a discrete random vector or even a discrete random element.
The case of a continuous random variable is more subtle, since a proof in full generality requires delicate forms of the change-of-variables formula for integration. However, in the framework of measure theory, the discrete case generalizes straightforwardly to general (not necessarily discrete) random elements, and the case of a continuous random variable then follows as a special case by means of the Radon–Nikodym theorem.
Suppose that X is a random variable which takes on only finitely or countably many different values x1, x2, ..., with probabilities p1, p2, .... Then for any function g of these values, the random variable g(X) has values g(x1), g(x2), ..., although some of these may coincide with each other. For example, this is the case if X can take on both values 1 and −1 and g(x) = x^2.
Let y1, y2, ... enumerate the possible distinct values of g(X), and for each i let Ii denote the collection of all j with g(xj) = yi. Then, according to the definition of expected value, there is

\operatorname{E}[g(X)] = \sum_i y_i \, \operatorname{P}\bigl(g(X) = y_i\bigr).

Since a value yi can be the image of multiple distinct values xj, it holds that

\operatorname{P}\bigl(g(X) = y_i\bigr) = \sum_{j \in I_i} p_j.

Then the expected value can be rewritten as

\operatorname{E}[g(X)] = \sum_i y_i \sum_{j \in I_i} p_j = \sum_i \sum_{j \in I_i} g(x_j)\, p_j = \sum_j g(x_j)\, p_j.
This equality relates the average of the values yi of g(X), weighted by their own probabilities, to the average of the values g(xj), weighted by the probabilities of the underlying values xj of X.
If X takes on only finitely many possible values, the above is fully rigorous. However, if X takes on countably many values, the last equality given does not always hold, as seen by the Riemann series theorem. Because of this, it is necessary to assume the absolute convergence of the sums in question. [5]
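The regrouping used in this argument can be mirrored numerically. In the sketch below (the values and probabilities are illustrative), the expectation is computed once by enumerating the distinct values yi of g(X) and once by summing directly over the xj; the two orderings agree.

```python
# Illustration of the regrouping in the discrete proof: summing y_i * P(g(X)=y_i)
# over distinct outputs y_i equals summing g(x_j) * p_j over the inputs x_j.
from collections import defaultdict

xs = [-1.0, 0.0, 1.0, 2.0]                # values of X (illustrative)
ps = [0.1, 0.4, 0.3, 0.2]                 # their probabilities p_j
g = lambda x: x ** 2                      # note g(-1) == g(1): outputs coincide

# Grouped form: enumerate distinct y_i and accumulate P(g(X) = y_i).
prob_of_y = defaultdict(float)
for x, p in zip(xs, ps):
    prob_of_y[g(x)] += p
grouped = sum(y * q for y, q in prob_of_y.items())

# LOTUS form: sum over the inputs directly.
direct = sum(g(x) * p for x, p in zip(xs, ps))

print(grouped, direct)                    # both 1*(0.1 + 0.3) + 4*0.2 = 1.2
```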
Suppose that X is a random variable whose distribution has a continuous density f. If g is a general function, then the probability that g(X) is valued in a set of real numbers K equals the probability that X is valued in g−1(K), which is given by

\operatorname{P}\bigl(g(X) \in K\bigr) = \int_{g^{-1}(K)} f(x)\, dx.
Under various conditions on g, the change-of-variables formula for integration can be applied to relate this to an integral over K, and hence to identify the density of g(X) in terms of the density of X. In the simplest case, if g is differentiable with nowhere-vanishing derivative, then the above integral can be written as

\int_{K} f\bigl(g^{-1}(y)\bigr)\, \bigl|(g^{-1})'(y)\bigr|\, dy,

thereby identifying g(X) as possessing the density f(g−1(y)) |(g−1)′(y)|. The expected value of g(X) is then identified as

\operatorname{E}[g(X)] = \int y\, f\bigl(g^{-1}(y)\bigr)\, \bigl|(g^{-1})'(y)\bigr|\, dy = \int g(x)\, f(x)\, dx,

where the second equality follows by another use of the change-of-variables formula for integration. This shows that the expected value of g(X) is encoded entirely by the function g and the density f of X. [6]
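For a particular monotone g, this identification can be checked numerically. The sketch below uses the illustrative choices X ~ Exp(1) and g(x) = 2x + 1, so that g−1(y) = (y − 1)/2, and compares the expectation computed from the transformed density against the expectation computed from the density of X.

```python
# Check of the transformed-density identity for a monotone g (illustrative
# choices: X ~ Exp(1), g(x) = 2x + 1, so g^{-1}(y) = (y - 1)/2).
import numpy as np
from scipy import integrate, stats

f = stats.expon.pdf                        # density of X ~ Exp(1)
g = lambda x: 2 * x + 1
g_inv = lambda y: (y - 1) / 2
g_inv_prime = 0.5                          # |(g^{-1})'(y)| is constant here

# Density of Y = g(X) obtained by change of variables.
f_Y = lambda y: f(g_inv(y)) * g_inv_prime

# E[g(X)] computed from the density of Y, and by LOTUS from the density of X.
via_density_of_Y, _ = integrate.quad(lambda y: y * f_Y(y), 1, np.inf)
via_lotus, _ = integrate.quad(lambda x: g(x) * f(x), 0, np.inf)
print(via_density_of_Y, via_lotus)         # both 3.0 (= 2 * E[X] + 1)
```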
The assumption that g is differentiable with nonvanishing derivative, which is necessary for applying the usual change-of-variables formula, excludes many typical cases, such as g(x) = x^2. The result still holds true in these broader settings, although the proof requires more sophisticated results from mathematical analysis such as Sard's theorem and the coarea formula. In even greater generality, using the Lebesgue theory as below, it can be found that the identity

\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\, f(x)\, dx

holds true whenever X has a density f (which does not have to be continuous) and whenever g is a measurable function for which g(X) has finite expected value. (Every continuous function is measurable.) Furthermore, without modification to the proof, this holds even if X is a random vector (with density) and g is a multivariable function; the integral is then taken over the multi-dimensional range of values of X.
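As an illustration of this broader setting, the identity can be checked numerically for a g that is neither invertible nor everywhere differentiable; the sketch below uses g(x) = |x| with X standard normal, for which the exact value is √(2/π).

```python
# The identity E[g(X)] = integral of g(x) f(x) dx for a g that is neither
# invertible nor differentiable everywhere: g(x) = |x| with X standard normal.
# The exact value is sqrt(2/pi) ~= 0.7979.
import numpy as np
from scipy import integrate, stats

f = stats.norm.pdf
g = np.abs

lotus_value, _ = integrate.quad(lambda x: g(x) * f(x), -np.inf, np.inf)

# Monte Carlo estimate of E[g(X)] for comparison.
rng = np.random.default_rng(0)
mc_value = g(rng.standard_normal(500_000)).mean()

print(lotus_value, mc_value, np.sqrt(2 / np.pi))
```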
An abstract and general form of the result is available using the framework of measure theory and the Lebesgue integral. Here, the setting is that of a measure space (Ω, μ) and a measurable map X from Ω to a measurable space Ω′. The theorem then says that for any measurable function g on Ω′ which is valued in real numbers (or even the extended real number line), there is

\int_{\Omega'} g \, d(X_\sharp \mu) = \int_{\Omega} (g \circ X) \, d\mu
(interpreted as saying, in particular, that either side of the equality exists if the other side exists). Here X♯μ denotes the pushforward measure on Ω′. The 'discrete case' given above is the special case arising when X takes on only countably many values and μ is a probability measure. In fact, the discrete case (although without the restriction to probability measures) is the first step in proving the general measure-theoretic formulation, as the general version follows therefrom by an application of the monotone convergence theorem. [7] Without any major changes, the result can also be formulated in the setting of outer measures. [8]
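A compressed version of that standard argument can be sketched as follows (an outline, not a full proof taken from the source):

```latex
% Proof sketch (standard argument).
% Step 1: indicator functions. For a measurable set A in Omega',
\[
  \int_{\Omega'} \mathbf{1}_A \, d(X_\sharp\mu)
  = (X_\sharp\mu)(A)
  = \mu\!\left(X^{-1}(A)\right)
  = \int_{\Omega} (\mathbf{1}_A \circ X) \, d\mu .
\]
% Step 2: by linearity, the identity extends to simple functions
% g = sum_i a_i 1_{A_i}.
% Step 3: for measurable g >= 0, choose simple g_n increasing pointwise to g
% and apply the monotone convergence theorem to both sides.
% Step 4: for general g, write g = g^+ - g^- and subtract, provided one side
% (equivalently, the other) is finite.
```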
If μ is a σ-finite measure, the theory of the Radon–Nikodym derivative is applicable. In the special case that the measure X♯μ is absolutely continuous relative to some background σ-finite measure ν on Ω′, there is a real-valued function fX on Ω′ representing the Radon–Nikodym derivative of the two measures, and then

\int_{\Omega} (g \circ X) \, d\mu = \int_{\Omega'} g \, d(X_\sharp \mu) = \int_{\Omega'} g \, f_X \, d\nu.
In the further special case that Ω′ is the real number line, as in the contexts discussed above, it is natural to take ν to be the Lebesgue measure, and this then recovers the 'continuous case' given above whenever μ is a probability measure. (In this special case, the condition of σ-finiteness is vacuous, since Lebesgue measure and every probability measure are trivially σ-finite.) [9]
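Concretely, under the assumptions just described (Ω′ the real line, ν the Lebesgue measure, μ a probability measure P), the chain of identities can be written out as follows; this is a sketch of the specialization, not an additional result:

```latex
% With nu = Lebesgue measure on the real line, mu = P a probability measure,
% and f_X = d(X_sharp P)/d(nu) the resulting density of X,
\[
  \operatorname{E}[g(X)]
  = \int_{\Omega} (g \circ X) \, dP
  = \int_{\mathbb{R}} g \, d(X_\sharp P)
  = \int_{\mathbb{R}} g(x)\, f_X(x) \, dx ,
\]
% where the middle equality is the abstract form of the law and the last one is
% the defining property of the Radon--Nikodym derivative f_X.
```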