In probability theory, concentration inequalities provide mathematical bounds on the probability of a random variable deviating from some value (typically, its expected value). The deviation, or any other function of the random variable, can be thought of as a secondary random variable. The simplest example of the concentration of such a secondary random variable is the CDF of the first random variable, which concentrates the probability to unity. If an analytic form of the CDF is available, this gives a concentration equality that provides the exact probability of concentration. It is precisely when the CDF is difficult to calculate, or when even the exact form of the first random variable is unknown, that the applicable concentration inequalities provide useful insight.
Another almost universal example concerns sums of independent random variables: the law of large numbers of classical probability theory states that, under mild conditions, such sums concentrate around their expectation with high probability. Such sums are the most basic examples of random variables concentrated around their mean.
Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.
The most basic of these bounds is Markov's inequality. Let $X$ be a random variable that is non-negative (almost surely). Then, for every constant $a > 0$,

$$\Pr(X \ge a) \le \frac{\operatorname{E}[X]}{a}.$$

Note the following extension to Markov's inequality: if $\Phi$ is a strictly increasing and non-negative function, then

$$\Pr(X \ge a) = \Pr(\Phi(X) \ge \Phi(a)) \le \frac{\operatorname{E}[\Phi(X)]}{\Phi(a)}.$$
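For illustration, Markov's bound can be checked against a Monte Carlo estimate. The following sketch (Python; the exponential distribution with mean 1 and the sample size are arbitrary choices for the illustration) compares the empirical tail probability with $\operatorname{E}[X]/a$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # non-negative, E[X] = 1

for a in [1, 2, 5, 10]:
    empirical = (x >= a).mean()  # Monte Carlo estimate of Pr(X >= a)
    markov = x.mean() / a        # Markov bound E[X] / a
    print(f"a={a:2d}  empirical={empirical:.4f}  Markov bound={markov:.4f}")
```

The bound is loose here, which is the price of assuming nothing beyond non-negativity and the mean.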
Chebyshev's inequality requires the following information on a random variable $X$: the expected value $\operatorname{E}[X]$ is finite, and the variance $\operatorname{Var}[X] = \operatorname{E}[(X - \operatorname{E}[X])^2]$ is finite. Then, for every constant $a > 0$,

$$\Pr(|X - \operatorname{E}[X]| \ge a) \le \frac{\operatorname{Var}[X]}{a^2},$$

or equivalently,

$$\Pr(|X - \operatorname{E}[X]| \ge a \cdot \operatorname{Std}[X]) \le \frac{1}{a^2},$$

where $\operatorname{Std}[X]$ is the standard deviation of $X$.
Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable $|X - \operatorname{E}[X]|$ with $\Phi(x) = x^2$.
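To see this reduction concretely, the following sketch (Python; the normal distribution is an arbitrary example) applies Markov's inequality to the non-negative variable $|X - \operatorname{E}[X]|^2$ and recovers the Chebyshev bound $\operatorname{Var}[X]/a^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=1_000_000)  # mean 3, standard deviation 2
dev2 = (x - x.mean()) ** 2                          # secondary variable |X - E[X]|^2

for a in [2.0, 4.0, 6.0]:
    empirical = (dev2 >= a**2).mean()  # = Pr(|X - E[X]| >= a)
    bound = dev2.mean() / a**2         # Markov on dev2: E[dev2]/a^2 = Var[X]/a^2
    print(f"a={a}  empirical={empirical:.4f}  Chebyshev bound={bound:.4f}")
```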
The Vysochanskij–Petunin inequality strengthens this for unimodal distributions. Let $X$ be a random variable with unimodal distribution, mean $\mu$ and finite, non-zero variance $\sigma^2$. Then, for any $\lambda > \sqrt{8/3} = 1.63\ldots$,

$$\Pr(|X - \mu| \ge \lambda\sigma) \le \frac{4}{9\lambda^2}.$$

(For a relatively elementary proof see e.g. [1].)
For a unimodal random variable $X$ and $r \ge 0$, the one-sided Vysochanskij–Petunin inequality [2] holds as follows:

$$\Pr(X - \operatorname{E}[X] \ge r) \le \begin{cases} \dfrac{4}{9}\,\dfrac{\operatorname{Var}(X)}{\operatorname{Var}(X) + r^2} & \text{for } r^2 \ge \dfrac{5}{3}\operatorname{Var}(X), \\[4pt] \dfrac{4}{3}\,\dfrac{\operatorname{Var}(X)}{\operatorname{Var}(X) + r^2} - \dfrac{1}{3} & \text{otherwise.} \end{cases}$$
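The improvement over Chebyshev can be seen numerically; a minimal sketch, assuming a standard normal (which is unimodal with mean 0 and variance 1):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(2_000_000)  # unimodal, mean 0, variance 1

for lam in [2.0, 3.0, 4.0]:         # requires lambda > sqrt(8/3) ~ 1.63
    empirical = (np.abs(x) >= lam).mean()
    chebyshev = 1 / lam**2          # Chebyshev bound
    vp = 4 / (9 * lam**2)           # Vysochanskij-Petunin bound
    print(f"lam={lam}  empirical={empirical:.5f}  Chebyshev={chebyshev:.4f}  V-P={vp:.4f}")
```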
In contrast to most commonly used concentration inequalities, the Paley–Zygmund inequality provides a lower bound on the deviation probability: for a non-negative random variable $Z$ with finite variance and any $0 \le \theta \le 1$,

$$\Pr(Z > \theta \operatorname{E}[Z]) \ge (1 - \theta)^2\, \frac{\operatorname{E}[Z]^2}{\operatorname{E}[Z^2]}.$$
The generic Chernoff bound [3] : 63–65 requires the moment generating function of $X$, defined as $M_X(t) := \operatorname{E}\left[e^{tX}\right]$. It always exists, but may be infinite. From Markov's inequality, for every $t > 0$:

$$\Pr(X \ge a) \le \frac{\operatorname{E}\left[e^{tX}\right]}{e^{ta}},$$

and for every $t < 0$:

$$\Pr(X \le a) \le \frac{\operatorname{E}\left[e^{tX}\right]}{e^{ta}}.$$
There are various Chernoff bounds for different distributions and different values of the parameter $t$. See [4] : 5–7 for a compilation of more concentration inequalities.
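For illustration, the generic bound can be minimized over $t$ numerically. The sketch below assumes a standard normal, for which $M_X(t) = e^{t^2/2}$; the optimal choice is $t = a$, giving $\Pr(X \ge a) \le e^{-a^2/2}$:

```python
import numpy as np

# Generic Chernoff bound for X ~ N(0,1): minimize exp(t^2/2 - t*a) over t > 0.
ts = np.linspace(0.01, 10.0, 2000)
for a in [1.0, 2.0, 3.0]:
    bounds = np.exp(ts**2 / 2 - ts * a)  # Markov bound for each candidate t
    t_star = ts[np.argmin(bounds)]
    print(f"a={a}: best bound={bounds.min():.5f}  "
          f"exp(-a^2/2)={np.exp(-a**2 / 2):.5f}  optimal t~{t_star:.2f}")
```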
Mill's inequality gives a concrete bound for the standard Gaussian case. Let $Z \sim N(0, 1)$. Then

$$\Pr(|Z| > t) \le \sqrt{\frac{2}{\pi}}\,\frac{e^{-t^2/2}}{t}.$$
Let $X_1, \dots, X_n$ be independent random variables such that, for all $i$: $a_i \le X_i \le b_i$ almost surely; write $c_i := b_i - a_i$ and suppose $c_i \le C$ for all $i$. Let $S_n$ be their sum, $E_n$ its expected value and $\operatorname{Var}_n$ its variance:

$$S_n := \sum_{i=1}^n X_i, \qquad E_n := \operatorname{E}[S_n] = \sum_{i=1}^n \operatorname{E}[X_i], \qquad \operatorname{Var}_n := \operatorname{Var}[S_n] = \sum_{i=1}^n \operatorname{Var}[X_i].$$
It is often interesting to bound the difference between the sum and its expected value. Several inequalities can be used; a numerical comparison of some of them appears in the sketch after this list.
1. Hoeffding's inequality says that:

$$\Pr[|S_n - E_n| > t] \le 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right).$$
2. The random variable $S_n - E_n$ is a special case of a martingale, and $S_0 - E_0 = 0$. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right).$$

This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales. See Fan et al. (2015). [5] Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n c_i^2}\right).$$
3. The sum function, $S_n = f(X_1, \dots, X_n)$, is a special case of a function of $n$ variables. This function changes in a bounded way: if variable $i$ is changed, the value of $f$ changes by at most $b_i - a_i = c_i$. Hence, McDiarmid's inequality can also be used and it yields a similar bound:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right).$$

This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way.
4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds $C$. It says that:

$$\Pr[|S_n - E_n| > t] \le 2\exp\left(-\frac{\operatorname{Var}_n}{C^2}\, h\!\left(\frac{Ct}{\operatorname{Var}_n}\right)\right), \qquad \text{where } h(u) = (1 + u)\log(1 + u) - u.$$
5. The first of Bernstein's inequalities says that:

$$\Pr[|S_n - E_n| > t] \le 2\exp\left(-\frac{t^2}{2(\operatorname{Var}_n + Ct/3)}\right).$$

This is a generalization of Hoeffding's since it exploits not only the almost-sure bound on the summands but also their variances.
6. Chernoff bounds have a particularly simple form in the case of sums of independent variables, since

$$\operatorname{E}\left[e^{t \cdot S_n}\right] = \prod_{i=1}^n \operatorname{E}\left[e^{t \cdot X_i}\right].$$
For example, [6] suppose the variables $X_i$ satisfy $X_i \ge \operatorname{E}(X_i) - a_i - M$ for $1 \le i \le n$. Then we have the lower tail inequality:

$$\Pr[S_n - E_n < -\lambda] \le \exp\left(-\frac{\lambda^2}{2\left(\operatorname{Var}_n + \sum_{i=1}^n a_i^2 + M\lambda/3\right)}\right).$$

If $X_i$ satisfies $X_i \le \operatorname{E}(X_i) + a_i + M$, we have the upper tail inequality:

$$\Pr[S_n - E_n > \lambda] \le \exp\left(-\frac{\lambda^2}{2\left(\operatorname{Var}_n + \sum_{i=1}^n a_i^2 + M\lambda/3\right)}\right).$$
If the $X_i$ are i.i.d. with $\operatorname{E}[X_i] = 0$, $|X_i| \le 1$, and $\sigma^2$ is the variance of $X_i$, a typical version of the Chernoff inequality is:

$$\Pr[|S_n| \ge k\sigma\sqrt{n}] \le 2e^{-k^2/4} \qquad \text{for } 0 \le k \le 2\sigma\sqrt{n}.$$
7. Similar bounds can be found in: Rademacher distribution#Bounds on sums
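As referenced above, the following sketch (Python; Bernoulli(0.1) summands are an arbitrary illustrative choice, giving $c_i = C = 1$ and a small variance) compares the Hoeffding and Bernstein bounds with empirical tail probabilities:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, trials = 200, 0.1, 200_000
s = rng.binomial(n, p, size=trials)  # each draw is a sum of n Bernoulli(p)
en, varn = n * p, n * p * (1 - p)    # E_n and Var_n

for t in [10, 20, 30]:
    empirical = (np.abs(s - en) > t).mean()
    hoeffding = 2 * np.exp(-2 * t**2 / n)                   # sum of c_i^2 = n
    bernstein = 2 * np.exp(-(t**2) / (2 * (varn + t / 3)))  # C = 1
    print(f"t={t}  empirical={empirical:.5f}  "
          f"Hoeffding={hoeffding:.5f}  Bernstein={bernstein:.5f}")
```

Because $p$ is far from $1/2$, the variance $\operatorname{Var}_n$ is much smaller than the range alone suggests, and Bernstein's bound beats Hoeffding's.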
The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function.
Suppose that $X_1, \dots, X_n$ and $X_1', \dots, X_n'$ are independent, with $X_i'$ and $X_i$ having the same distribution for all $i$.

Let $X = (X_1, \dots, X_n)$ and $X^{(i)} = (X_1, \dots, X_{i-1}, X_i', X_{i+1}, \dots, X_n)$. Then

$$\operatorname{Var}(f(X)) \le \frac{1}{2} \sum_{i=1}^n \operatorname{E}\left[\left(f(X) - f(X^{(i)})\right)^2\right].$$
A proof may be found in, e.g., [7].
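As an illustration, a Monte Carlo sketch (Python; the choice $f = \max$ and standard normal coordinates are arbitrary) estimates both sides of the inequality:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 10, 100_000

def f(v):
    return v.max(axis=-1)  # f(X) = max of the n coordinates

x = rng.standard_normal((trials, n))        # samples of X
x_prime = rng.standard_normal((trials, n))  # independent copies X_i'

fx = f(x)
bound = 0.0
for i in range(n):
    xi = x.copy()
    xi[:, i] = x_prime[:, i]                # X^(i): coordinate i resampled
    bound += 0.5 * np.mean((fx - f(xi)) ** 2)

print(f"Var(f(X)) ~ {fx.var():.4f}  <=  Efron-Stein bound ~ {bound:.4f}")
```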
The Bretagnolle–Huber–Carol inequality bounds the difference between a vector of multinomially distributed random variables and a vector of expected values. [8] [9] A simple proof appears in [10] (Appendix Section).
If a random vector $X = (X_1, \dots, X_k)$ is multinomially distributed with parameters $(p_1, \dots, p_k)$ and satisfies $X_1 + \cdots + X_k = n$, then

$$\Pr\left(\sum_{i=1}^k |X_i - n p_i| \ge 2\sqrt{n}\,\varepsilon\right) \le 2^k e^{-2\varepsilon^2}.$$
This inequality is used to bound the total variation distance.
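A quick numerical sketch (Python; the parameters $n = 1000$ and $p = (0.2, 0.3, 0.5)$ are arbitrary) compares the empirical probability with the bound:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 1000, np.array([0.2, 0.3, 0.5])
k, trials = len(p), 100_000
counts = rng.multinomial(n, p, size=trials)  # multinomial sample vectors
l1 = np.abs(counts - n * p).sum(axis=1)      # sum_i |X_i - n p_i|

for eps in [1.0, 1.5, 2.0]:
    empirical = (l1 >= 2 * np.sqrt(n) * eps).mean()
    bhc = min((2**k) * np.exp(-2 * eps**2), 1.0)
    print(f"eps={eps}  empirical={empirical:.5f}  BHC bound={bhc:.5f}")
```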
The Mason and van Zwet inequality [11] for multinomial random vectors concerns a slight modification of the classical chi-square statistic.
Let the random vector $(N_1, \dots, N_k)$ be multinomially distributed with parameters $n$ and $(p_1, \dots, p_k)$ such that $p_i > 0$ for $i < k$. Then for every $C > 0$ and $\delta > 0$ there exist constants $a, b, c > 0$ such that for all $n \ge 1$ and $\lambda, p_1, \dots, p_k$ satisfying $\lambda > C\sqrt{n \min_{i<k} p_i}$ and $\sum_{i=1}^{k-1} p_i \le 1 - \delta$, we have

$$\Pr\left(\sum_{i=1}^{k-1} \frac{(N_i - n p_i)^2}{n p_i} > \lambda\right) \le a\, e^{bk - c\lambda}.$$
The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the real and the empirical cumulative distribution function.
Given a natural number $n$, let $X_1, X_2, \dots, X_n$ be real-valued independent and identically distributed random variables with cumulative distribution function $F(\cdot)$. Let $F_n$ denote the associated empirical distribution function defined by

$$F_n(x) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}\{X_i \le x\}, \qquad x \in \mathbb{R}.$$

So $F(x)$ is the probability that a single random variable $X$ is smaller than $x$, and $F_n(x)$ is the fraction of the $n$ random variables that are smaller than $x$.
Then

$$\Pr\left(\sup_{x \in \mathbb{R}} \bigl(F_n(x) - F(x)\bigr) > \varepsilon\right) \le e^{-2n\varepsilon^2} \qquad \text{for every } \varepsilon \ge \sqrt{\frac{1}{2n}\ln 2}.$$
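For illustration, the following sketch (Python) checks the bound with uniform samples, for which $F(x) = x$ on $[0, 1]$ and the supremum is attained at the order statistics:

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials, eps = 500, 20_000, 0.05  # eps >= sqrt(ln(2)/(2n)) ~ 0.026 holds

samples = np.sort(rng.random((trials, n)), axis=1)  # Uniform(0,1), F(x) = x
fn = np.arange(1, n + 1) / n          # F_n at (just after) each order statistic
sup_dev = (fn - samples).max(axis=1)  # sup_x (F_n(x) - F(x))

empirical = (sup_dev > eps).mean()
dkw = np.exp(-2 * n * eps**2)
print(f"empirical={empirical:.5f}  DKW bound={dkw:.5f}")
```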
Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate, either on a specific value or on a range of values. A concrete example is that if you flip a fair coin $n$ times, the probability that any given number of heads appears will be less than $\frac{1}{\sqrt{n}}$. This idea can be greatly generalized. For example, a result of Rao and Yehudayoff [12] implies that for any $\varepsilon > 0$ there exists some $C > 0$ such that, for any $k$, the following is true for at least $2^n(1 - \varepsilon)$ values of $x \in \{\pm 1\}^n$:

$$\Pr(\langle x, Y \rangle = k) \le \frac{C}{\sqrt{n}},$$

where $Y$ is drawn uniformly from $\{\pm 1\}^n$.
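The coin-flipping example can be checked exactly, since the number of heads is binomial with its mode near $n/2$; a short sketch:

```python
from math import comb

for n in [16, 64, 256]:
    max_pmf = max(comb(n, k) for k in range(n + 1)) / 2**n  # largest point mass
    print(f"n={n:3d}  max_k Pr(heads = k) = {max_pmf:.4f} < 1/sqrt(n) = {n**-0.5:.4f}")
```

The maximal point probability is $\binom{n}{\lfloor n/2 \rfloor} 2^{-n} \approx \sqrt{2/(\pi n)}$, comfortably below $1/\sqrt{n}$.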
Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem [13] ) and graph theory. [14]
An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities. [15]