Sub-Gaussian distribution


In probability theory, a subgaussian distribution, the distribution of a subgaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a subgaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives subgaussian distributions their name.


Often in analysis, we divide an object (such as a random variable) into two parts, a central bulk and a distant tail, then analyze each separately. In probability, this division usually goes like "everything interesting happens near the center; the tail event is so rare that we may safely ignore it." Subgaussian distributions are worthy of study because the Gaussian distribution is well understood, and so we can give sharp bounds on the rarity of the tail event. Similarly, the subexponential distributions are also worthy of study.

Formally, the probability distribution of a random variable $X$ is called subgaussian if there is a positive constant $C$ such that for every $t \geq 0$,

$$\operatorname{P}(|X| \geq t) \leq 2\exp\left(-\frac{t^2}{C^2}\right).$$
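For instance, the standard normal distribution satisfies this definition with $C^2 = 2$; a minimal NumPy sketch comparing the empirical tail with the bound:

```python
import numpy as np

# Empirical check of the defining tail bound for the standard normal,
# which is subgaussian with C^2 = 2: P(|X| >= t) <= 2 exp(-t^2 / 2).
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
for t in [1.0, 2.0, 3.0]:
    print(f"t={t}  empirical={np.mean(np.abs(x) >= t):.4f}  bound={2*np.exp(-t**2/2):.4f}")
```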

There are many equivalent definitions. For example, a random variable $X$ is sub-Gaussian iff its tail is bounded from above (up to a constant) by the tail of a Gaussian:

$$\operatorname{P}(|X| \geq t) \leq c\,\operatorname{P}(|Z| \geq t) \quad \text{for all } t \geq 0,$$

where $c \geq 1$ is a constant and $Z \sim \mathcal{N}(0, \sigma^2)$ is a mean zero Gaussian random variable. [1] :Theorem 2.6

Definitions

Subgaussian norm

The subgaussian norm of $X$, denoted as $\|X\|_{\psi_2}$, is
$$\|X\|_{\psi_2} = \inf\left\{t > 0 : \operatorname{E}\left[\exp\left(X^2/t^2\right)\right] \leq 2\right\}.$$
In other words, it is the Orlicz norm of $X$ generated by the Orlicz function $\Phi(u) = e^{u^2} - 1$. By condition (2) below, subgaussian random variables can be characterized as those random variables with finite subgaussian norm.
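A minimal Monte Carlo sketch of this definition (the helper name `subgaussian_norm` is illustrative, not a standard API), computing the norm by bisection on the Orlicz condition:

```python
import numpy as np

def subgaussian_norm(samples, tol=1e-6):
    """Monte Carlo estimate of the smallest t > 0 with E[exp(X^2 / t^2)] <= 2."""
    x2 = np.asarray(samples, dtype=float) ** 2

    def orlicz_ok(t):
        return np.mean(np.exp(x2 / t**2)) <= 2.0

    lo, hi = 1e-6, 1.0
    while not orlicz_ok(hi):    # bracket the norm from above
        hi *= 2.0
    while hi - lo > tol:        # then bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if orlicz_ok(mid) else (mid, hi)
    return hi

rng = np.random.default_rng(0)
rademacher = rng.choice([-1.0, 1.0], size=100_000)
print(subgaussian_norm(rademacher))   # ~ 1 / sqrt(ln 2) ~ 1.2011 for a Rademacher variable
```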

Variance proxy

If there exists some $s^2$ such that $\operatorname{E}\left[e^{\lambda(X - \operatorname{E}[X])}\right] \leq e^{\lambda^2 s^2/2}$ for all $\lambda \in \mathbb{R}$, then $s^2$ is called a variance proxy, and the smallest such $s^2$ is called the optimal variance proxy and denoted by $\|X\|_{\mathrm{vp}}^2$.

Since $\operatorname{E}\left[e^{\lambda X}\right] = e^{\lambda^2\sigma^2/2}$ when $X \sim \mathcal{N}(0, \sigma^2)$ is Gaussian, we then have $\|X\|_{\mathrm{vp}}^2 = \sigma^2$, as it should.
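A minimal Monte Carlo sketch (the helper name `variance_proxy_estimate` is illustrative, not standard) estimating the optimal variance proxy by scanning $\lambda$:

```python
import numpy as np

def variance_proxy_estimate(samples, lambdas):
    """Estimate sup over lambda of 2 * ln E[exp(lambda * (X - EX))] / lambda^2."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                        # center the sample
    return max(2.0 * np.log(np.mean(np.exp(lam * x))) / lam**2 for lam in lambdas)

rng = np.random.default_rng(0)
lambdas = np.linspace(0.1, 3.0, 60)
print(variance_proxy_estimate(rng.choice([-1.0, 1.0], size=200_000), lambdas))  # ~ 1   (= Var)
print(variance_proxy_estimate(rng.uniform(-1.0, 1.0, size=200_000), lambdas))   # ~ 1/3 (= Var)
```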

Equivalent definitions

Let $X$ be a random variable and let $K_1, K_2, \ldots$ be positive constants. The following conditions are equivalent: (Proposition 2.5.2 [2])

  1. Tail probability bound: $\operatorname{P}(|X| \geq t) \leq 2\exp\left(-t^2/K_1^2\right)$ for all $t \geq 0$;
  2. Finite subgaussian norm: $\|X\|_{\psi_2} = K_2 < \infty$;
  3. Moment: $\operatorname{E}\left[|X|^p\right] \leq 2K_3^p\,\Gamma\left(\tfrac{p}{2} + 1\right)$ for all $p \geq 1$, where $\Gamma$ is the Gamma function;
  4. Moment: $\left(\operatorname{E}\left[|X|^p\right]\right)^{1/p} \leq K_4\sqrt{p}$ for all $p \geq 1$;
  5. Moment-generating function (of $X$), or variance proxy [3] [4]: $\operatorname{E}\left[e^{\lambda(X - \operatorname{E}[X])}\right] \leq e^{K_5^2\lambda^2/2}$ for all $\lambda \in \mathbb{R}$;
  6. Moment-generating function (of $X^2$): $\operatorname{E}\left[e^{\lambda^2 X^2}\right] \leq e^{K_6^2\lambda^2}$ for all $|\lambda| \leq 1/K_6$;
  7. Union bound: for some $c > 0$, $\operatorname{E}\left[\max\{|X_1|, \ldots, |X_n|\}\right] \leq c\sqrt{\ln n}$ for all $n > c$, where $X_1, \ldots, X_n$ are i.i.d. copies of $X$;
  8. Subexponential: $X^2$ has a subexponential distribution.

Furthermore, the constants $K_i$ are the same in the definitions (1) to (5), up to an absolute constant. So, for example, given a random variable satisfying (1) and (2), the minimal constants in the two definitions satisfy $K_2 \leq cK_1$ and $K_1 \leq c'K_2$, where $c, c'$ are absolute constants independent of the random variable.
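As a sanity check of condition (4) (a short sketch using only Python's standard library), the $L^p$ norms of a standard normal variable indeed grow like $\sqrt{p}$:

```python
import math

# Moment growth for X ~ N(0, 1): E|X|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi),
# so (E|X|^p)^(1/p) / sqrt(p) stays bounded, as condition (4) requires.
for p in [1, 2, 4, 8, 16, 32, 64]:
    log_moment = (p / 2) * math.log(2) + math.lgamma((p + 1) / 2) - 0.5 * math.log(math.pi)
    lp_norm = math.exp(log_moment / p)
    print(f"p={p:3d}  (E|X|^p)^(1/p)={lp_norm:6.3f}  ratio to sqrt(p)={lp_norm/math.sqrt(p):.3f}")
```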

Proof of equivalence

As an example, the first four definitions are equivalent by the proof below.

Proof. We show (1) $\Rightarrow$ (3) $\Rightarrow$ (2) $\Rightarrow$ (1). By the layer cake representation,
$$\operatorname{E}\left[|X|^p\right] = \int_0^\infty p\,t^{p-1}\operatorname{P}(|X| \geq t)\,dt \leq \int_0^\infty 2p\,t^{p-1}e^{-t^2/K_1^2}\,dt.$$

After a change of variables $u = t^2/K_1^2$, we find that
$$\operatorname{E}\left[|X|^p\right] \leq p K_1^p\int_0^\infty u^{p/2 - 1}e^{-u}\,du = pK_1^p\,\Gamma(p/2) = 2K_1^p\,\Gamma\left(\tfrac{p}{2} + 1\right).$$
By the Taylor series of the exponential and $\operatorname{E}\left[X^{2k}\right] \leq 2K_3^{2k}\,k!$,
$$\operatorname{E}\left[e^{X^2/t^2}\right] = 1 + \sum_{k \geq 1}\frac{\operatorname{E}\left[X^{2k}\right]}{k!\,t^{2k}} \leq 1 + 2\sum_{k \geq 1}\left(\frac{K_3^2}{t^2}\right)^k = 1 + \frac{2K_3^2/t^2}{1 - K_3^2/t^2},$$
which is less than or equal to $2$ for $t^2 \geq 3K_3^2$. Let $t = \sqrt{3}\,K_3$, then $\operatorname{E}\left[e^{X^2/t^2}\right] \leq 2$, so $\|X\|_{\psi_2} \leq \sqrt{3}\,K_3$.

By Markov's inequality,
$$\operatorname{P}(|X| \geq t) = \operatorname{P}\left(e^{X^2/\|X\|_{\psi_2}^2} \geq e^{t^2/\|X\|_{\psi_2}^2}\right) \leq \frac{\operatorname{E}\left[e^{X^2/\|X\|_{\psi_2}^2}\right]}{e^{t^2/\|X\|_{\psi_2}^2}} \leq 2e^{-t^2/\|X\|_{\psi_2}^2}.$$

From the proof, we can extract a cycle of three inequalities: $K_3 \leq K_1$, $K_2 \leq \sqrt{3}\,K_3$, and $K_1 \leq K_2$.

In particular, the constants provided by the definitions are the same up to a constant factor, so we can say that the definitions are equivalent up to a constant independent of $X$.

Similarly, because $\left(2\,\Gamma\left(\tfrac{p}{2}+1\right)\right)^{1/p} \asymp \sqrt{p}$ up to a positive multiplicative constant for all $p \geq 1$ (by the asymptotic formula for the Gamma function), the definitions (3) and (4) are also equivalent up to a constant.

Basic properties

Basic properties  * If $X$ is subgaussian and $c$ is a constant, then $cX$ and $X + c$ are subgaussian, with $\|cX\|_{\psi_2} = |c|\,\|X\|_{\psi_2}$ and $\|X + c\|_{\psi_2} \lesssim \|X\|_{\psi_2} + |c|$.

Here, $f \lesssim g$ means that $f \leq Cg$, where the positive constant $C$ is independent of $X$ and $c$.

Subgaussian deviation bound  If $X$ is subgaussian, then $\|X - \operatorname{E}[X]\|_{\psi_2} \lesssim \|X\|_{\psi_2}$.

Proof

By the triangle inequality, $\|X - \operatorname{E}[X]\|_{\psi_2} \leq \|X\|_{\psi_2} + \|\operatorname{E}[X]\|_{\psi_2}$. Now we have $\|\operatorname{E}[X]\|_{\psi_2} = |\operatorname{E}[X]|/\sqrt{\ln 2} \lesssim \operatorname{E}[|X|]$. By the equivalence of definitions (2) and (4) of subgaussianity, we have $\operatorname{E}[|X|] \lesssim \|X\|_{\psi_2}$.

Independent subgaussian sum bound  If $X, Y$ are subgaussian and independent, then $\|X + Y\|_{\mathrm{vp}}^2 \leq \|X\|_{\mathrm{vp}}^2 + \|Y\|_{\mathrm{vp}}^2$. Without independence, $\|X + Y\|_{\mathrm{vp}}^2 \leq \left(\|X\|_{\mathrm{vp}} + \|Y\|_{\mathrm{vp}}\right)^2$.

Proof

If independent, then (assuming without loss of generality that $X$ and $Y$ are centered) use that the cumulant generating function of independent random variables is additive. That is,
$$\ln\operatorname{E}\left[e^{\lambda(X+Y)}\right] = \ln\operatorname{E}\left[e^{\lambda X}\right] + \ln\operatorname{E}\left[e^{\lambda Y}\right] \leq \frac{\lambda^2}{2}\left(\|X\|_{\mathrm{vp}}^2 + \|Y\|_{\mathrm{vp}}^2\right).$$

If not independent, then by Hölder's inequality, for any $p, q > 1$ with $1/p + 1/q = 1$ we have
$$\operatorname{E}\left[e^{\lambda(X+Y)}\right] \leq \left(\operatorname{E}\left[e^{p\lambda X}\right]\right)^{1/p}\left(\operatorname{E}\left[e^{q\lambda Y}\right]\right)^{1/q} \leq \exp\left(\frac{\lambda^2}{2}\left(p\|X\|_{\mathrm{vp}}^2 + q\|Y\|_{\mathrm{vp}}^2\right)\right).$$
Solving the optimization problem $\min_{1/p + 1/q = 1}\left(p\|X\|_{\mathrm{vp}}^2 + q\|Y\|_{\mathrm{vp}}^2\right)$, we obtain the result $\|X + Y\|_{\mathrm{vp}}^2 \leq \left(\|X\|_{\mathrm{vp}} + \|Y\|_{\mathrm{vp}}\right)^2$.

Corollary  Linear sums of subgaussian random variables are subgaussian.
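For example (a minimal NumPy sketch), the tail of a sum of $n$ independent Rademacher variables stays below the bound $2\exp(-t^2/(2n))$ obtained by adding variance proxies:

```python
import numpy as np

# Sum of n independent Rademacher variables: each has variance proxy 1, so the sum
# has variance proxy n and tail P(|S| >= t) <= 2 exp(-t^2 / (2n)).
rng = np.random.default_rng(1)
n, trials = 100, 100_000
s = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)
for t in [10.0, 20.0, 30.0]:
    print(f"t={t:4.0f}  empirical={np.mean(np.abs(s) >= t):.2e}  bound={2*np.exp(-t**2/(2*n)):.2e}")
```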

Partial converse (Matoušek 2008, Lemma 2.4)  If $\operatorname{E}[X] = 0$, and $\operatorname{P}(|X| \geq t) \leq 2e^{-at^2}$ for all $t \geq 0$, then $\operatorname{E}\left[e^{\lambda X}\right] \leq e^{C\lambda^2}$ for all $\lambda \in \mathbb{R}$, where $C$ depends on $a$ only.

Proof

Let $F$ be the CDF of $X$, and assume $\lambda > 0$ (the case $\lambda < 0$ is symmetric). The proof splits the integral defining the MGF into two halves, one over $\lambda x \leq 1$ and one over $\lambda x > 1$, and bounds each one separately.

Since $e^u \leq 1 + u + u^2$ for $u \leq 1$, the first half is at most $1 + \lambda\operatorname{E}[X] + \lambda^2\operatorname{E}[X^2] = 1 + \lambda^2\operatorname{E}[X^2] \lesssim 1 + \lambda^2/a$. For the second half, upper bound it by a summation over the slices $\{k < \lambda x \leq k+1\}$, $k \geq 1$: on each slice $e^{\lambda x} \leq e^{k+1}$, while the tail bound gives the slice probability at most $2e^{-ak^2/\lambda^2}$, so the second half is at most $\sum_{k \geq 1} 2\,e^{k+1}e^{-ak^2/\lambda^2}$. When $\lambda^2 \leq a$, for any $k \geq 1$, $e^{k+1}e^{-ak^2/\lambda^2} \leq e^{k+1-k^2/2}\,e^{-a/(2\lambda^2)}$, so the summation is at most a constant multiple of $e^{-a/(2\lambda^2)} \leq 2\lambda^2/a$, and in this range $\operatorname{E}\left[e^{\lambda X}\right] \leq 1 + C'\lambda^2/a \leq e^{C'\lambda^2/a}$.

When $\lambda^2 > a$, by drawing out the curve of $x \mapsto \lambda x - ax^2$, and comparing the summation with the corresponding integral, we find that the summation is at most a constant multiple of $(\lambda/\sqrt{a})\,e^{\lambda^2/(4a)}$. Now verify that in both cases $\operatorname{E}\left[e^{\lambda X}\right] \leq e^{C\lambda^2}$, where $C$ depends on $a$ only.

Corollary (Matoušek 2008, Lemma 2.2)   $X_1, \ldots, X_n$ are independent random variables with the same upper subgaussian tail: $\operatorname{P}(|X_i| \geq t) \leq 2e^{-at^2}$ for all $t \geq 0$. Also, $\operatorname{E}[X_i] = 0$, then for any unit vector $v = (v_1, \ldots, v_n)$, the linear sum $\sum_i v_i X_i$ has a subgaussian tail:
$$\operatorname{P}\left(\left|\sum_i v_i X_i\right| \geq t\right) \leq 2e^{-Ct^2},$$
where $C$ depends only on $a$.


Concentration

Gaussian concentration inequality for Lipschitz functions (Tao 2012, Theorem 2.1.12.)  If $f: \mathbb{R}^n \to \mathbb{R}$ is $K$-Lipschitz, and $X$ is a standard gaussian vector, then $f(X)$ concentrates around its expectation at a rate
$$\operatorname{P}\left(f(X) - \operatorname{E}[f(X)] \geq t\right) \leq C\exp\left(-c\,t^2/K^2\right),$$
and similarly for the other tail, where $C, c > 0$ are absolute constants.
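A numerical illustration (a minimal NumPy sketch; the Euclidean norm serves as a concrete 1-Lipschitz function):

```python
import numpy as np

# The Euclidean norm is 1-Lipschitz, so ||g||_2 for a standard gaussian vector g
# concentrates around its expectation with O(1) fluctuations, regardless of dimension.
rng = np.random.default_rng(2)
for n in [10, 100, 1000]:
    g = rng.standard_normal((20_000, n))
    norms = np.linalg.norm(g, axis=1)
    print(f"n={n:5d}  mean={norms.mean():8.3f}  std={norms.std():.3f}")
# The mean grows like sqrt(n), while the standard deviation stays near 1/sqrt(2) ~ 0.71.
```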

Proof

By shifting and scaling, it suffices to prove the case where $K = 1$ and $\operatorname{E}[f(X)] = 0$.

Since every 1-Lipschitz function is uniformly approximable by 1-Lipschitz smooth functions (by convolving with a mollifier), it suffices to prove it for 1-Lipschitz smooth functions.

Now it remains to bound the cumulant generating function $\ln\operatorname{E}\left[e^{\lambda f(X)}\right]$; a bound of the form $C\lambda^2$ gives the subgaussian tail by the Chernoff bound.

To exploit the Lipschitzness, we introduce $Y$, an independent copy of $X$, then by Jensen's inequality (applied to the expectation over $Y$ inside the exponential),
$$\operatorname{E}\left[e^{\lambda f(X)}\right] = \operatorname{E}\left[e^{\lambda(f(X) - \operatorname{E}[f(Y)])}\right] \leq \operatorname{E}\left[e^{\lambda(f(X) - f(Y))}\right].$$

By the circular symmetry of gaussian variables, we introduce the path $X_\theta := Y\cos\theta + X\sin\theta$ for $\theta \in [0, \pi/2]$, so that $X_0 = Y$, $X_{\pi/2} = X$, and
$$f(X) - f(Y) = \int_0^{\pi/2}\frac{d}{d\theta}f(X_\theta)\,d\theta = \int_0^{\pi/2}\nabla f(X_\theta)\cdot X_\theta'\,d\theta.$$
This has the benefit that its derivative $X_\theta' = -Y\sin\theta + X\cos\theta$ is a standard gaussian vector independent of $X_\theta$.

Now take its expectation; by Jensen's inequality applied to the average $\frac{2}{\pi}\int_0^{\pi/2}(\cdot)\,d\theta$,
$$\operatorname{E}\left[e^{\lambda(f(X) - f(Y))}\right] \leq \frac{2}{\pi}\int_0^{\pi/2}\operatorname{E}\left[\exp\left(\frac{\pi\lambda}{2}\nabla f(X_\theta)\cdot X_\theta'\right)\right]d\theta.$$
The expectation within the integral is over the joint distribution of $(X_\theta, X_\theta')$, but since the joint distribution of $(X_\theta, X_\theta')$ is exactly the same as that of $(Y, X)$, we have
$$\operatorname{E}\left[\exp\left(\frac{\pi\lambda}{2}\nabla f(X_\theta)\cdot X_\theta'\right)\right] = \operatorname{E}\left[\exp\left(\frac{\pi\lambda}{2}\nabla f(Y)\cdot X\right)\right].$$

Conditional on $Y$, the quantity $\frac{\pi\lambda}{2}\nabla f(Y)\cdot X$ is normally distributed, with mean zero and variance $\frac{\pi^2\lambda^2}{4}|\nabla f(Y)|^2 \leq \frac{\pi^2\lambda^2}{4}$, so
$$\operatorname{E}\left[\exp\left(\frac{\pi\lambda}{2}\nabla f(Y)\cdot X\right)\,\Big|\,Y\right] \leq e^{\pi^2\lambda^2/8}.$$

Thus, we have $\operatorname{E}\left[e^{\lambda f(X)}\right] \leq e^{\pi^2\lambda^2/8}$ for all $\lambda$, and the Chernoff bound gives $\operatorname{P}(f(X) \geq t) \leq e^{-2t^2/\pi^2}$.


Strictly subgaussian

Expanding the cumulant generating function:
$$\frac{\lambda^2 s^2}{2} \geq \ln\operatorname{E}\left[e^{\lambda(X - \operatorname{E}[X])}\right] = \frac{\lambda^2\operatorname{Var}[X]}{2} + O(\lambda^3) \qquad (\lambda \to 0),$$
we find that $\|X\|_{\mathrm{vp}}^2 \geq \operatorname{Var}[X]$. At the edge of possibility, we define that a random variable $X$ satisfying $\|X\|_{\mathrm{vp}}^2 = \operatorname{Var}[X]$ is called strictly subgaussian.
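For example, the symmetric uniform distribution on $[-1, 1]$ has variance $1/3$ and moment-generating function $\sinh(t)/t$; a minimal NumPy check that the variance itself already serves as a variance proxy (so the distribution is strictly subgaussian):

```python
import numpy as np

# Strict subgaussianity of the symmetric uniform distribution on [-1, 1]:
# its MGF sinh(t)/t never exceeds exp(t^2 * Var / 2) with Var = 1/3.
t = np.linspace(1e-3, 20, 2000)
mgf = np.sinh(t) / t
bound = np.exp(t**2 * (1.0 / 3.0) / 2)
print(bool(np.all(mgf <= bound)))   # True
```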

Properties


Theorem. [5] Let $X$ be a subgaussian random variable with mean zero. If all zeros of its characteristic function are real, then $X$ is strictly subgaussian.

Corollary. If $X_1, \ldots, X_n$ are independent and strictly subgaussian, then any linear sum of them is strictly subgaussian.

Examples

By calculating the characteristic functions, we can show that some distributions are strictly subgaussian: symmetric uniform distribution, symmetric Bernoulli distribution.

Since a symmetric uniform distribution is strictly subgaussian, its convolution with itself is strictly subgaussian. That is, the symmetric triangular distribution is strictly subgaussian.

Since the symmetric Bernoulli distribution is strictly subgaussian, any symmetric Binomial distribution is strictly subgaussian.

Examples

Distribution | Subgaussian norm $\|X\|_{\psi_2}$ | Strictly subgaussian?
Gaussian distribution $\mathcal{N}(0, \sigma^2)$ | $\sqrt{8/3}\,\sigma$ | Yes
Mean-zero Bernoulli distribution (value $p$ with probability $q$, value $-q$ with probability $p$) | the unique positive solution to $q\,e^{p^2/t^2} + p\,e^{q^2/t^2} = 2$ | Iff $p = q = 1/2$
Symmetric Bernoulli distribution | $1/\sqrt{\ln 2}$ | Yes
Uniform distribution on $[-1, 1]$ | the unique positive solution to $\int_0^1 e^{x^2/t^2}\,dx = 2$, approximately $0.7727$ | Yes
Arbitrary distribution on an interval $[a, b]$ | at most $\max(|a|, |b|)/\sqrt{\ln 2}$ | Not in general

The optimal variance proxy is known for many standard probability distributions, including the beta, Bernoulli, Dirichlet [6] , Kumaraswamy, triangular [7] , truncated Gaussian, and truncated exponential. [8]

Bernoulli distribution

Let $p + q = 1$ be two positive numbers. Let $X$ be a centered Bernoulli random variable taking the value $p$ with probability $q$ and $-q$ with probability $p$, so that it has mean zero; then $\|X\|_{\mathrm{vp}}^2 = \frac{p - q}{2(\ln p - \ln q)}$. [5] Its subgaussian norm is $t$, where $t$ is the unique positive solution to $q\,e^{p^2/t^2} + p\,e^{q^2/t^2} = 2$.

Let $X$ be a random variable with symmetric Bernoulli distribution (or Rademacher distribution). That is, $X$ takes values $-1$ and $1$ with probabilities $1/2$ each. Since $X^2 = 1$, it follows that $\|X\|_{\psi_2} = \inf\left\{t > 0 : e^{1/t^2} \leq 2\right\} = \frac{1}{\sqrt{\ln 2}}$, and hence $X$ is a subgaussian random variable.
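A minimal numerical check (assuming NumPy) of the Rademacher facts above: the MGF satisfies $\cosh(t) \leq e^{t^2/2}$, and the subgaussian norm equals $1/\sqrt{\ln 2}$:

```python
import numpy as np

# Rademacher distribution: cosh(t) <= exp(t^2 / 2), so the variance 1 is a variance proxy.
t = np.linspace(-30, 30, 10_001)
print(bool(np.all(np.cosh(t) <= np.exp(t**2 / 2))))   # True
# Subgaussian norm: E[exp(X^2 / s^2)] = exp(1 / s^2) <= 2 iff s >= 1/sqrt(ln 2).
print(1 / np.sqrt(np.log(2)))                          # ~ 1.2011
```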

Bounded distributions

Figure: some commonly used bounded probability distributions, compared with the normal distribution.

Bounded distributions have no tail at all, so clearly they are subgaussian.

If $X$ is bounded within the interval $[a, b]$, Hoeffding's lemma states that $\operatorname{E}\left[e^{\lambda(X - \operatorname{E}[X])}\right] \leq \exp\left(\frac{\lambda^2(b-a)^2}{8}\right)$, i.e. $X$ has variance proxy $(b-a)^2/4$. Hoeffding's inequality is the Chernoff bound obtained using this fact.
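A Monte Carlo check of Hoeffding's lemma (a minimal NumPy sketch; Beta(2, 5) is just a convenient choice of distribution supported on $[a, b] = [0, 1]$):

```python
import numpy as np

# Hoeffding's lemma for a Beta(2, 5) variable supported on [0, 1]:
# E[exp(lambda * (X - E X))] <= exp(lambda^2 * (b - a)^2 / 8) with (b - a)^2 = 1.
rng = np.random.default_rng(3)
x = rng.beta(2.0, 5.0, size=500_000)
x = x - x.mean()
for lam in [1.0, 5.0, 10.0]:
    mgf = np.mean(np.exp(lam * x))
    bound = np.exp(lam**2 / 8)
    print(f"lambda={lam:4.1f}  MGF={mgf:.4f}  Hoeffding bound={bound:.4f}")
```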

Convolutions

Figure: density of a mixture of three normal distributions (μ = 5, 10, 15, σ = 2) with equal weights; each component is shown as a weighted density (each integrating to 1/3).

Since the sum of subgaussian random variables is still subgaussian, the convolution of subgaussian distributions is still subgaussian. In particular, any convolution of the normal distribution with any bounded distribution is subgaussian.

Mixtures

Given subgaussian distributions $X_1, X_2, \ldots, X_n$, we can construct an additive mixture $X$ as follows: first randomly pick a number $i \in \{1, \ldots, n\}$ according to probabilities $p_1, \ldots, p_n$ summing to one, then pick $X \sim X_i$.

Since
$$\operatorname{E}\left[e^{X^2/t^2}\right] = \sum_i p_i\operatorname{E}\left[e^{X_i^2/t^2}\right],$$
we have $\|X\|_{\psi_2} \leq \max_i \|X_i\|_{\psi_2}$, and so the mixture is subgaussian.

In particular, any gaussian mixture is subgaussian.

More generally, the mixture of infinitely many subgaussian distributions is also subgaussian, if the subgaussian norm has a finite supremum: $\|X\|_{\psi_2} \leq \sup_i \|X_i\|_{\psi_2} < \infty$.
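A sketch of this criterion for the Gaussian mixture from the figure above (components $\mathcal{N}(5, 4)$, $\mathcal{N}(10, 4)$, $\mathcal{N}(15, 4)$ with equal weights, assuming NumPy):

```python
import numpy as np

# Orlicz criterion E[exp(X^2 / t^2)] <= 2 for a three-component Gaussian mixture:
# it holds once t is comparable to the largest component mean plus a few sigma,
# so the mixture has a finite subgaussian norm.
rng = np.random.default_rng(4)
n = 500_000
means = rng.choice([5.0, 10.0, 15.0], size=n)     # pick a component uniformly at random
x = means + 2.0 * rng.standard_normal(n)          # then sample from that component
for t in [10.0, 15.0, 20.0, 25.0]:
    print(f"t={t:4.0f}  E[exp(X^2/t^2)] ~ {np.mean(np.exp(x**2 / t**2)):.3f}")
```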

Subgaussian random vectors

So far, we have discussed subgaussianity for real-valued random variables. We can also define subgaussianity for random vectors. The purpose of subgaussianity is to make the tails decay fast, so we generalize accordingly: a subgaussian random vector is a random vector where the tail decays fast.

Let $X$ be a random vector taking values in $\mathbb{R}^n$.

Define the subgaussian norm of $X$ by
$$\|X\|_{\psi_2} = \sup_{v \in S^{n-1}}\|\langle X, v\rangle\|_{\psi_2},$$
the supremum of the subgaussian norms of all one-dimensional marginals $\langle X, v\rangle$; the random vector $X$ is called subgaussian if this quantity is finite.

Theorem. (Theorem 3.4.6 [2] ) For any positive integer $n$, the random vector $X$ distributed uniformly on the sphere $\sqrt{n}\,S^{n-1}$ is subgaussian, with $\|X\|_{\psi_2} \leq C$ for an absolute constant $C$.

This is not so surprising, because as $n \to \infty$, the projection of $X$ to the first coordinate converges in distribution to the standard normal distribution.

Maximum inequalities

Proposition. If $X_1, \ldots, X_n$ are mean-zero subgaussians, with variance proxy $\sigma^2$, then for any $\delta \in (0, 1)$, we have $\max_i X_i \leq \sigma\sqrt{2\ln(n/\delta)}$ with probability $\geq 1 - \delta$.

Proof. By the Chernoff bound, $\operatorname{P}(X_i \geq t) \leq e^{-t^2/(2\sigma^2)}$ for each $i$. Now apply the union bound: $\operatorname{P}\left(\max_i X_i \geq t\right) \leq n\,e^{-t^2/(2\sigma^2)}$, which equals $\delta$ at $t = \sigma\sqrt{2\ln(n/\delta)}$.
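A numerical look at the $\sqrt{2\ln n}$ rate (a minimal NumPy sketch with standard Gaussian variables, i.e. $\sigma = 1$):

```python
import numpy as np

# The maximum of n independent standard Gaussians grows like sqrt(2 ln n),
# matching the union-bound rate above (with sigma = 1).
rng = np.random.default_rng(5)
for n in [10, 100, 1000, 10_000]:
    maxima = rng.standard_normal((1_000, n)).max(axis=1)
    print(f"n={n:6d}  E[max] ~ {maxima.mean():.3f}  sqrt(2 ln n) = {np.sqrt(2*np.log(n)):.3f}")
```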

Proposition. (Exercise 2.5.10 [2] ) If $X_1, \ldots, X_n$ are subgaussians, with $\|X_i\|_{\psi_2} \leq K$, then
$$\operatorname{E}\left[\max_i |X_i|\right] \leq CK\sqrt{\ln n}$$
for an absolute constant $C$ and all $n \geq 2$. Further, the bound is sharp, since when $X_1, \ldots, X_n$ are IID samples of $\mathcal{N}(0, 1)$ we have $\operatorname{E}\left[\max_i X_i\right] \geq c\sqrt{\ln n}$. [9]

[10]

Theorem. (over a finite set) If $X_1, \ldots, X_n$ are mean-zero subgaussian, with variance proxy $\sigma^2$, then
$$\operatorname{E}\left[\max_i X_i\right] \leq \sigma\sqrt{2\ln n}, \qquad \operatorname{P}\left(\max_i X_i > t\right) \leq n\,e^{-t^2/(2\sigma^2)},$$
$$\operatorname{E}\left[\max_i |X_i|\right] \leq \sigma\sqrt{2\ln(2n)}, \qquad \operatorname{P}\left(\max_i |X_i| > t\right) \leq 2n\,e^{-t^2/(2\sigma^2)}.$$
Theorem. (over a convex polytope) Fix a finite set of vectors $v_1, \ldots, v_m$. If $X$ is a random vector, such that each $\langle X, v_i\rangle$ is mean-zero subgaussian with variance proxy $\sigma^2$, then the above 4 inequalities hold, with $\max_{v \in \operatorname{conv}(v_1, \ldots, v_m)}\langle X, v\rangle$ replacing $\max_i X_i$ (and $m$ replacing $n$).

Here, $\operatorname{conv}(v_1, \ldots, v_m)$ is the convex polytope spanned by the vectors $v_1, \ldots, v_m$.

Theorem. (over a ball) If $X$ is a random vector in $\mathbb{R}^n$, such that $\langle X, v\rangle$ is mean-zero subgaussian with variance proxy $\sigma^2$ for all $v$ on the unit sphere $S^{n-1}$, then
$$\operatorname{E}\left[\max_{\|v\|_2 \leq 1}\langle X, v\rangle\right] = \operatorname{E}\left[\|X\|_2\right] \leq 4\sigma\sqrt{n}.$$
For any $\delta > 0$, with probability at least $1 - \delta$,
$$\max_{\|v\|_2 \leq 1}\langle X, v\rangle \leq 4\sigma\sqrt{n} + 2\sigma\sqrt{2\ln(1/\delta)}.$$

Inequalities

Theorem. (Theorem 2.6.1 [2] ) There exists a positive constant $C$ such that given any number of independent mean-zero subgaussian random variables $X_1, \ldots, X_n$,
$$\left\|\sum_{i=1}^n X_i\right\|_{\psi_2}^2 \leq C\sum_{i=1}^n \|X_i\|_{\psi_2}^2.$$

Theorem. (Hoeffding's inequality) (Theorem 2.6.3 [2] ) There exists a positive constant $c$ such that given any number of independent mean-zero subgaussian random variables $X_1, \ldots, X_n$, for every $t \geq 0$,
$$\operatorname{P}\left(\left|\sum_{i=1}^n X_i\right| \geq t\right) \leq 2\exp\left(-\frac{c\,t^2}{\sum_{i=1}^n \|X_i\|_{\psi_2}^2}\right).$$

Theorem. (Bernstein's inequality) (Theorem 2.8.1 [2] ) There exists a positive constant $c$ such that given any number of independent mean-zero subexponential random variables $X_1, \ldots, X_n$, for every $t \geq 0$,
$$\operatorname{P}\left(\left|\sum_{i=1}^n X_i\right| \geq t\right) \leq 2\exp\left(-c\min\left(\frac{t^2}{\sum_{i=1}^n \|X_i\|_{\psi_1}^2}, \frac{t}{\max_i \|X_i\|_{\psi_1}}\right)\right).$$

Theorem. (Khinchine inequality) (Exercise 2.6.5 [2] ) There exists a positive constant $C$ such that given any number of independent mean-zero, variance-one subgaussian random variables $X_1, \ldots, X_n$, any coefficients $a_1, \ldots, a_n \in \mathbb{R}$, and any $p \geq 2$,
$$\left(\sum_{i=1}^n a_i^2\right)^{1/2} \leq \left(\operatorname{E}\left[\left|\sum_{i=1}^n a_i X_i\right|^p\right]\right)^{1/p} \leq CK\sqrt{p}\left(\sum_{i=1}^n a_i^2\right)^{1/2}, \qquad K = \max_i \|X_i\|_{\psi_2}.$$
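A Monte Carlo illustration of Hoeffding's inequality (a minimal NumPy sketch with uniform $[-1, 1]$ summands, so each $(b_i - a_i)^2 = 4$):

```python
import numpy as np

# Hoeffding's inequality for a sum of n independent uniform [-1, 1] variables:
# P(|S| >= t) <= 2 exp(-2 t^2 / sum_i (b_i - a_i)^2) = 2 exp(-t^2 / (2n)).
rng = np.random.default_rng(6)
n, trials = 100, 100_000
s = rng.uniform(-1.0, 1.0, size=(trials, n)).sum(axis=1)
for t in [10.0, 15.0, 20.0]:
    print(f"t={t:4.0f}  empirical={np.mean(np.abs(s) >= t):.2e}"
          f"  Hoeffding bound={2*np.exp(-t**2/(2*n)):.2e}")
```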

Hanson-Wright inequality

The Hanson-Wright inequality states that if a random vector $X$ is subgaussian in a certain sense, then any quadratic form of this vector, $X^\mathsf{T} A X$, is also subgaussian/subexponential. Further, the upper bound on the tail of $X^\mathsf{T} A X$ is uniform: it depends on the matrix $A$ only through its Frobenius norm and operator norm.

A weak version of the following theorem was proved in (Hanson, Wright, 1971). [11] There are many extensions and variants. Much like the central limit theorem, the Hanson-Wright inequality is more a cluster of theorems with the same purpose, than a single theorem. The purpose is to take a subgaussian vector and uniformly bound its quadratic forms.

Theorem. [12] [13] There exists a constant $c > 0$, such that:

Let $n$ be a positive integer. Let $X_1, \ldots, X_n$ be independent random variables, such that each satisfies $\operatorname{E}[X_i] = 0$ and $\|X_i\|_{\psi_2} \leq K$. Combine them into a random vector $X = (X_1, \ldots, X_n)$. For any $n \times n$ matrix $A$, we have
$$\operatorname{P}\left(\left|X^\mathsf{T} A X - \operatorname{E}\left[X^\mathsf{T} A X\right]\right| > t\right) \leq 2\exp\left(-c\min\left(\frac{t^2}{K^4\|A\|_F^2}, \frac{t}{K^2\|A\|}\right)\right),$$
where $\|A\|_F = \left(\sum_{i,j}|A_{ij}|^2\right)^{1/2}$ is the Frobenius norm of the matrix, and $\|A\| = \max_{\|x\|_2 = 1}\|Ax\|_2$ is the operator norm of the matrix.

In words, the quadratic form has its tail uniformly bounded by an exponential, or a gaussian, whichever is larger.


In the statement of the theorem, the constant $c$ is an "absolute constant", meaning that it has no dependence on $n$, $K$, or $A$. It is a mathematical constant much like pi and e.
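A small numerical look at the scale of these fluctuations (a NumPy sketch; the Gaussian matrix $A$ and the Rademacher coordinates are arbitrary concrete choices):

```python
import numpy as np

# For a Rademacher vector X, E[X^T A X] = trace(A); the centered quadratic form
# fluctuates on the scale of the Frobenius norm of A, as Hanson-Wright predicts.
rng = np.random.default_rng(7)
n, trials = 50, 20_000
A = rng.standard_normal((n, n))
X = rng.choice([-1.0, 1.0], size=(trials, n))
quad = np.einsum('ti,ij,tj->t', X, A, X) - np.trace(A)
print("std of centered quadratic form:", quad.std())
print("Frobenius norm of A           :", np.linalg.norm(A, 'fro'))
print("operator norm of A            :", np.linalg.norm(A, 2))
```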

Consequences

Theorem (subgaussian concentration). [12] There exists a constant $c > 0$, such that:

Let $m, n$ be positive integers. Let $X_1, \ldots, X_n$ be independent random variables, such that each satisfies $\operatorname{E}[X_i] = 0$, $\operatorname{E}[X_i^2] = 1$, and $\|X_i\|_{\psi_2} \leq K$. Combine them into a random vector $X = (X_1, \ldots, X_n)$. For any $m \times n$ matrix $A$, we have
$$\operatorname{P}\left(\bigl|\,\|AX\|_2 - \|A\|_F\,\bigr| > t\right) \leq 2\exp\left(-\frac{c\,t^2}{K^4\|A\|^2}\right) \quad \text{for all } t \geq 0.$$
In words, the random vector $AX$ is concentrated on a spherical shell of radius $\|A\|_F$, such that $\|AX\|_2 - \|A\|_F$ is subgaussian, with subgaussian norm $\lesssim K^2\|A\|$.
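A quick numerical illustration of this concentration (a NumPy sketch; the Gaussian matrix $A$ and the Rademacher coordinates, which have mean zero, unit variance, and bounded subgaussian norm, are arbitrary concrete choices):

```python
import numpy as np

# ||A X||_2 for a vector X of independent Rademacher coordinates concentrates
# around the Frobenius norm of A, with fluctuations on the scale of the operator norm.
rng = np.random.default_rng(8)
m, n, trials = 200, 400, 20_000
A = rng.standard_normal((m, n)) / np.sqrt(n)
X = rng.choice([-1.0, 1.0], size=(trials, n))
norms = np.linalg.norm(X @ A.T, axis=1)
print("Frobenius norm of A:", np.linalg.norm(A, 'fro'))
print("mean of ||AX||_2   :", norms.mean())
print("std  of ||AX||_2   :", norms.std())
print("operator norm of A :", np.linalg.norm(A, 2))
```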


Notes

  1. Wainwright MJ. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge: Cambridge University Press; 2019. doi:10.1017/9781108627771, ISBN   9781108627771.
  2. Vershynin, R. (2018). High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge: Cambridge University Press.
  3. Kahane, J. (1960). "Propriétés locales des fonctions à séries de Fourier aléatoires". Studia Mathematica. 19: 1–25. doi:10.4064/sm-19-1-1-25.
  4. Buldygin, V. V.; Kozachenko, Yu. V. (1980). "Sub-Gaussian random variables". Ukrainian Mathematical Journal. 32 (6): 483–489. doi:10.1007/BF01087176.
  5. Bobkov, S. G.; Chistyakov, G. P.; Götze, F. (2023-08-03). "Strictly subgaussian probability distributions". arXiv:2308.01749 [math.PR].
  6. Marchal, Olivier; Arbel, Julyan (2017). "On the sub-Gaussianity of the Beta and Dirichlet distributions". Electronic Communications in Probability. 22. arXiv: 1705.00048 . doi:10.1214/17-ECP92.
  7. Arbel, Julyan; Marchal, Olivier; Nguyen, Hien D. (2020). "On strict sub-Gaussianity, optimal proxy variance and symmetry for bounded random variables". Esaim: Probability and Statistics. 24: 39–55. arXiv: 1901.09188 . doi:10.1051/ps/2019018.
  8. Barreto, Mathias; Marchal, Olivier; Arbel, Julyan (2024). "Optimal sub-Gaussian variance proxy for truncated Gaussian and exponential random variables". arXiv: 2403.08628 [math.ST].
  9. Kamath, Gautam. "Bounds on the expectation of the maximum of samples from a gaussian." (2015)
  10. "MIT 18.S997 | Spring 2015 | High-Dimensional Statistics, Chapter 1. Sub-Gaussian Random Variables" (PDF). MIT OpenCourseWare. Retrieved 2024-04-03.
  11. Hanson, D. L.; Wright, F. T. (1971). "A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables". The Annals of Mathematical Statistics. 42 (3): 1079–1083. doi: 10.1214/aoms/1177693335 . ISSN   0003-4851. JSTOR   2240253.
  12. Rudelson, Mark; Vershynin, Roman (January 2013). "Hanson-Wright inequality and sub-gaussian concentration". Electronic Communications in Probability. 18: 1–9. arXiv:1306.2872. doi:10.1214/ECP.v18-2865. ISSN 1083-589X.
  13. Vershynin, Roman (2018). "6. Quadratic Forms, Symmetrization, and Contraction". High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. 127–146. doi:10.1017/9781108231596.009. ISBN   978-1-108-41519-4.
