Gumbel distribution

Gumbel
Figure: probability density function
Figure: cumulative distribution function
Notation: $\text{Gumbel}(\mu, \beta)$
Parameters: $\mu$, location (real); $\beta > 0$, scale (real)
Support: $x \in \mathbb{R}$
PDF: $\frac{1}{\beta} e^{-(z + e^{-z})}$, where $z = \frac{x - \mu}{\beta}$
CDF: $e^{-e^{-(x - \mu)/\beta}}$
Mean: $\mu + \beta\gamma$, where $\gamma$ is the Euler–Mascheroni constant
Median: $\mu - \beta \ln(\ln 2)$
Mode: $\mu$
Variance: $\frac{\pi^2}{6} \beta^2$
Skewness: $\frac{12\sqrt{6}\,\zeta(3)}{\pi^3} \approx 1.14$
Excess kurtosis: $\frac{12}{5}$
Entropy: $\ln(\beta) + \gamma + 1$
MGF: $\Gamma(1 - \beta t)\, e^{\mu t}$
CF: $\Gamma(1 - i \beta t)\, e^{i \mu t}$

In probability theory and statistics, the Gumbel distribution (also known as the type-I generalized extreme value distribution) is used to model the distribution of the maximum (or the minimum) of a number of samples of various distributions.

This distribution might be used to represent the distribution of the maximum level of a river in a particular year if there was a list of maximum values for the past ten years. It is useful in predicting the chance that an extreme earthquake, flood or other natural disaster will occur. The potential applicability of the Gumbel distribution to represent the distribution of maxima relates to extreme value theory, which indicates that it is likely to be useful if the distribution of the underlying sample data is of the normal or exponential type. This article uses the Gumbel distribution to model the distribution of the maximum value. To model the minimum value, use the negative of the original values.

The Gumbel distribution is a particular case of the generalized extreme value distribution (also known as the Fisher–Tippett distribution). It is also known as the log-Weibull distribution and the double exponential distribution (a term that is alternatively sometimes used to refer to the Laplace distribution). It is related to the Gompertz distribution: when its density is first reflected about the origin and then restricted to the positive half line, a Gompertz function is obtained.

In the latent variable formulation of the multinomial logit model — common in discrete choice theory — the errors of the latent variables follow a Gumbel distribution. This is useful because the difference of two Gumbel-distributed random variables has a logistic distribution.
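
This relationship is easy to verify numerically; the sketch below (assuming NumPy and SciPy, with an arbitrary sample size and seed) compares the empirical distribution of the difference of two independent standard Gumbel variables with the standard logistic CDF:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000                     # sample size chosen only for illustration

    # Two independent standard Gumbel samples and their difference
    x = rng.gumbel(loc=0.0, scale=1.0, size=n)
    y = rng.gumbel(loc=0.0, scale=1.0, size=n)
    d = x - y

    # Compare the empirical CDF of the difference with the standard logistic CDF
    grid = np.linspace(-5.0, 5.0, 11)
    empirical = np.array([(d <= t).mean() for t in grid])
    theoretical = stats.logistic.cdf(grid)          # logistic with loc=0, scale=1
    print(np.max(np.abs(empirical - theoretical)))  # should be small (sampling noise only)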

The Gumbel distribution is named after Emil Julius Gumbel (1891–1966), based on his original papers describing the distribution. [1] [2]

Definitions

The cumulative distribution function of the Gumbel distribution is

$F(x;\mu,\beta) = e^{-e^{-(x-\mu)/\beta}}.$

Standard Gumbel distribution

The standard Gumbel distribution is the case where $\mu = 0$ and $\beta = 1$, with cumulative distribution function

$F(x) = e^{-e^{-x}}$

and probability density function

$f(x) = e^{-(x + e^{-x})}.$
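
The density follows from the cumulative distribution function by differentiation:

$f(x) = \frac{d}{dx} e^{-e^{-x}} = e^{-x}\, e^{-e^{-x}} = e^{-(x + e^{-x})}.$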

In this case the mode is 0, the median is $-\ln(\ln 2) \approx 0.3665$, the mean is $\gamma \approx 0.5772$ (the Euler–Mascheroni constant), and the standard deviation is $\pi/\sqrt{6} \approx 1.2825$.

The cumulants, for $n > 1$, are given by

$\kappa_n = (n - 1)!\,\zeta(n).$
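
These follow from the cumulant generating function of the standard Gumbel distribution, the logarithm of its moment generating function $\Gamma(1 - t)$:

$K(t) = \ln\Gamma(1 - t) = \gamma t + \sum_{n=2}^{\infty} \frac{\zeta(n)}{n}\, t^{n}, \qquad |t| < 1,$

so that $\kappa_1 = \gamma$ and $\kappa_n = (n-1)!\,\zeta(n)$ for $n \ge 2$.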

Properties

The mode is $\mu$, while the median is $\mu - \beta\ln(\ln 2)$ and the mean is given by

$\mathbb{E}[X] = \mu + \gamma\beta,$

where $\gamma$ is the Euler–Mascheroni constant.

The standard deviation $\sigma$ is $\beta\pi/\sqrt{6}$; hence $\beta = \sigma\sqrt{6}/\pi \approx 0.78\,\sigma$. [3]

At the mode, where $x = \mu$, the value of $F(x;\mu,\beta)$ becomes $e^{-1} \approx 0.37$, irrespective of the value of $\beta$.

If $X_1, \ldots, X_n$ are iid Gumbel random variables with parameters $(\mu, \beta)$, then $\max\{X_1, \ldots, X_n\}$ is also a Gumbel random variable, with parameters $(\mu + \beta\ln n,\ \beta)$.
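
As a quick numerical check of this property (a minimal sketch assuming NumPy; the parameter values, seed, and sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, beta, n = 2.0, 0.5, 8        # illustrative parameters
    trials = 200_000

    # Maximum of n iid Gumbel(mu, beta) variables, repeated over many trials
    maxima = rng.gumbel(loc=mu, scale=beta, size=(trials, n)).max(axis=1)

    # Theory: maxima ~ Gumbel(mu + beta*ln(n), beta)
    gamma = 0.5772156649             # Euler-Mascheroni constant
    print(maxima.mean(), mu + beta * np.log(n) + beta * gamma)  # means agree
    print(maxima.std(), beta * np.pi / np.sqrt(6))              # standard deviations agree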

If $X_1, X_2, \ldots$ are iid random variables such that $\max\{X_1, \ldots, X_k\} - \beta\ln k$ has the same distribution as $X_1$ for every natural number $k$, then $X_1$ is necessarily Gumbel distributed with scale parameter $\beta$ (actually it suffices to consider just two distinct values of $k > 1$ which are coprime).

Theory related to the generalized multivariate log-gamma distribution provides a multivariate version of the Gumbel distribution.

Occurrence and applications

Figure: distribution fitting with confidence band of a cumulative Gumbel distribution to maximum one-day October rainfalls.

Gumbel has shown that the maximum value (or last order statistic) in a sample of random variables following an exponential distribution minus the natural logarithm of the sample size [7] approaches the Gumbel distribution as the sample size increases. [8]

Concretely, let $\rho(x) = e^{-x}$ be the probability density of $x$ and $P(x) = 1 - e^{-x}$ its cumulative distribution. Then the maximum value out of $n$ realizations of $x$ is smaller than $X$ if and only if all realizations are smaller than $X$. So the cumulative distribution of the maximum value $\tilde{x}$ satisfies

$P(\tilde{x} - \ln n \le X) = P(X + \ln n)^n = \left(1 - \frac{e^{-X}}{n}\right)^{n},$

and, for large $n$, the right-hand side converges to $e^{-e^{-X}}$, the cumulative distribution function of the standard Gumbel distribution.
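
This convergence can be checked numerically; the following sketch (assuming NumPy, with arbitrary choices of $n$, seed, and number of repetitions) compares the empirical distribution of the shifted maximum with the standard Gumbel CDF:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100                          # exponential samples per maximum (illustrative)
    trials = 50_000

    # Maximum of n standard exponential variables, shifted by ln(n)
    shifted_max = rng.exponential(scale=1.0, size=(trials, n)).max(axis=1) - np.log(n)

    # Compare the empirical CDF with the standard Gumbel CDF exp(-exp(-x))
    for x in (-1.0, 0.0, 1.0, 2.0):
        print(x, (shifted_max <= x).mean(), np.exp(-np.exp(-x)))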

In hydrology, therefore, the Gumbel distribution is used to analyze such variables as monthly and annual maximum values of daily rainfall and river discharge volumes, [3] and also to describe droughts. [9]

Gumbel has also shown that the estimator r/(n+1) for the probability of an event, where r is the rank number of the observed value in the data series and n is the total number of observations, is an unbiased estimator of the cumulative probability around the mode of the distribution. Therefore, this estimator is often used as a plotting position.

In number theory, the Gumbel distribution approximates the number of terms in a random partition of an integer [10] as well as the trend-adjusted sizes of maximal prime gaps and maximal gaps between prime constellations. [11]

It appears in the coupon collector's problem.

Gumbel reparametrization tricks

In machine learning, the Gumbel distribution is sometimes employed to generate samples from the categorical distribution. This technique is called the "Gumbel-max trick" and is a special example of "reparametrization tricks". [12]

In detail, let $a_1, \ldots, a_n$ be nonnegative, and not all zero, and let $g_1, \ldots, g_n$ be independent samples of Gumbel(0, 1); then, by routine integration,

$\Pr\!\left(j = \arg\max_i \left(g_i + \ln a_i\right)\right) = \frac{a_j}{\sum_{i=1}^{n} a_i}.$

That is,

$\arg\max_i \left(g_i + \ln a_i\right) \sim \mathrm{Categorical}\!\left(\frac{a_1}{\sum_i a_i}, \ldots, \frac{a_n}{\sum_i a_i}\right).$

Equivalently, given any $x_1, \ldots, x_n \in \mathbb{R}$, we can sample from the corresponding Boltzmann (softmax) distribution by

$\Pr\!\left(j = \arg\max_i \left(g_i + x_i\right)\right) = \frac{e^{x_j}}{\sum_{i=1}^{n} e^{x_i}}.$
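
As a minimal sketch of the trick (assuming NumPy; the logit values, seed, and number of draws are arbitrary illustration choices), adding independent Gumbel(0, 1) noise to unnormalized log-probabilities and taking the argmax reproduces the softmax probabilities:

    import numpy as np

    rng = np.random.default_rng(3)
    logits = np.array([0.5, 1.0, -0.3, 2.0])        # arbitrary unnormalized log-probabilities
    target = np.exp(logits) / np.exp(logits).sum()  # the Boltzmann / softmax distribution

    def gumbel_max_sample(logits, rng):
        # Add independent Gumbel(0, 1) noise and take the argmax
        g = rng.gumbel(loc=0.0, scale=1.0, size=logits.shape)
        return np.argmax(logits + g)

    draws = np.array([gumbel_max_sample(logits, rng) for _ in range(100_000)])
    empirical = np.bincount(draws, minlength=len(logits)) / len(draws)
    print(target)
    print(empirical)   # should closely match the softmax probabilities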

Several related identities are derived in [13].

Random variate generation

Since the quantile function (inverse cumulative distribution function), $Q(p)$, of a Gumbel distribution is given by

$Q(p) = \mu - \beta\ln(-\ln p),$

the variate $X = \mu - \beta\ln(-\ln U)$ has a Gumbel distribution with parameters $\mu$ and $\beta$ when the random variate $U$ is drawn from the uniform distribution on the interval $(0, 1)$.
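
A minimal sketch of this inverse-transform sampler (assuming NumPy; the parameter values, seed, and sample size are arbitrary):

    import numpy as np

    rng = np.random.default_rng(4)
    mu, beta = 1.5, 2.0                      # illustrative location and scale
    u = rng.uniform(0.0, 1.0, size=100_000)

    # Inverse-CDF transform: Q(p) = mu - beta * ln(-ln(p))
    x = mu - beta * np.log(-np.log(u))

    gamma = 0.5772156649                          # Euler-Mascheroni constant
    print(x.mean(), mu + beta * gamma)            # mean should be close to mu + beta*gamma
    print(x.var(), (np.pi ** 2 / 6) * beta ** 2)  # variance should be close to pi^2 beta^2 / 6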

Probability paper

Figure: a piece of graph paper that incorporates the Gumbel distribution (Gumbel probability paper).

In pre-software times, probability paper was used to picture the Gumbel distribution (see illustration). The paper is based on linearization of the cumulative distribution function $F$:

$-\ln\!\left[-\ln F(x)\right] = \frac{x - \mu}{\beta}.$

In the paper the horizontal axis is constructed on a double-log scale, while the vertical axis is linear. By plotting the reduced variate $-\ln[-\ln F]$ on the horizontal axis of the paper and the $x$-variable on the vertical axis, the distribution is represented by a straight line with slope $\beta$ and intercept $\mu$. When distribution-fitting software like CumFreq became available, the task of plotting the distribution was made easier.
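
The same linearization gives a simple fitting recipe, sketched below under stated assumptions (NumPy, synthetic data, and the plotting position r/(n+1) discussed above): the ordered observations are plotted against the reduced variate and a straight-line fit returns estimates of $\beta$ (slope) and $\mu$ (intercept).

    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.gumbel(loc=10.0, scale=3.0, size=500)   # synthetic "annual maxima"

    # Plotting positions r/(n+1) for the ordered sample
    x_sorted = np.sort(data)
    n = len(x_sorted)
    F = np.arange(1, n + 1) / (n + 1)

    # Linearized coordinates: x = mu + beta * (-ln(-ln F))
    reduced = -np.log(-np.log(F))
    beta_hat, mu_hat = np.polyfit(reduced, x_sorted, deg=1)   # slope, intercept
    print(mu_hat, beta_hat)   # rough estimates, should be near 10 and 3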


References

  1. Gumbel, E.J. (1935), "Les valeurs extrêmes des distributions statistiques" (PDF), Annales de l'Institut Henri Poincaré, 5 (2): 115–158
  2. Gumbel E.J. (1941). "The return period of flood flows". The Annals of Mathematical Statistics, 12, 163–190.
  3. 1 2 Oosterbaan, R.J. (1994). "Chapter 6 Frequency and Regression Analysis" (PDF). In Ritzema, H.P. (ed.). Drainage Principles and Applications, Publication 16. Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp.  175–224. ISBN   90-70754-33-9.
  4. Willemse, W.J.; Kaas, R. (2007). "Rational reconstruction of frailty-based mortality models by a generalisation of Gompertz' law of mortality" (PDF). Insurance: Mathematics and Economics. 40 (3): 468. doi:10.1016/j.insmatheco.2006.07.003.
  5. Marques, F.; Coelho, C.; de Carvalho, M. (2015). "On the distribution of linear combinations of independent Gumbel random variables" (PDF). Statistics and Computing. 25 (3): 683–701. doi:10.1007/s11222-014-9453-5. S2CID 255067312.
  6. CumFreq, software for probability distribution fitting
  7. user49229, Gumbel distribution and exponential distribution
  8. Gumbel, E.J. (1954). Statistical theory of extreme values and some practical applications. Applied Mathematics Series. Vol. 33 (1st ed.). U.S. Department of Commerce, National Bureau of Standards. ASIN   B0007DSHG4.
  9. Burke, Eleanor J.; Perry, Richard H.J.; Brown, Simon J. (2010). "An extreme value analysis of UK drought and projections of change in the future". Journal of Hydrology. 388 (1–2): 131–143. Bibcode:2010JHyd..388..131B. doi:10.1016/j.jhydrol.2010.04.035.
  10. Erdös, Paul; Lehner, Joseph (1941). "The distribution of the number of summands in the partitions of a positive integer". Duke Mathematical Journal. 8 (2): 335. doi:10.1215/S0012-7094-41-00826-8.
  11. Kourbatov, A. (2013). "Maximal gaps between prime k-tuples: a statistical approach". Journal of Integer Sequences. 16. arXiv: 1301.2242 . Bibcode:2013arXiv1301.2242K. Article 13.5.2.
  12. Jang, Eric; Gu, Shixiang; Poole, Ben (April 2017). Categorical Reparameterization with Gumbel-Softmax. International Conference on Learning Representations (ICLR) 2017.
  13. Balog, Matej; Tripuraneni, Nilesh; Ghahramani, Zoubin; Weller, Adrian (2017-07-17). "Lost Relatives of the Gumbel Trick". International Conference on Machine Learning. PMLR: 371–379. arXiv: 1706.04161 .