Relationships among probability distributions

Relationships among some univariate probability distributions, illustrated with connected lines; dashed lines indicate an approximate relationship.

Relationships between univariate probability distributions in ProbOnto.

In probability theory and statistics, there are several relationships among probability distributions. These relations can be categorized into the following groups:


Special case of distribution parametrization

One distribution arises from another when a parameter is fixed or restricted. For example, an exponential distribution with rate λ is a gamma distribution with shape parameter 1, and a chi-squared distribution with k degrees of freedom is a gamma distribution with shape k/2 and scale 2.

Transform of a variable

Multiple of a random variable

Multiplying a random variable by any positive real constant yields a scaling of the original distribution. Some families are self-replicating, meaning that the scaling yields the same family of distributions, albeit with a different parameter: the normal, gamma, Cauchy, exponential, Erlang, Weibull, logistic, error, power-law, and Rayleigh distributions.

Example: if X has an exponential distribution with rate λ, then kX (for k > 0) has an exponential distribution with rate λ/k.
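This scaling property can be checked numerically. The following is a minimal sketch using NumPy; the seed and sample size are arbitrary illustration choices. Tripling an exponential variable with scale 1 should yield an exponential variable with scale 3, which we verify via the sample mean (the mean of an exponential distribution equals its scale parameter):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# X ~ Exponential(scale=1); Y = 3 * X should be Exponential(scale=3)
x = rng.exponential(scale=1.0, size=n)
y = 3.0 * x

# The mean of an exponential distribution equals its scale parameter
scaled_mean = y.mean()
```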

Linear function of a random variable

The affine transform aX + b yields a relocation and scaling of the original distribution. The following families are self-replicating: the normal, Cauchy, logistic, error, power, and Rayleigh distributions.

Example: if X is a normal (μ, σ²) random variable, then aX + b is a normal (aμ + b, a²σ²) random variable.
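A quick numerical sketch of the affine transform of a normal variable (NumPy; seed and sample size are arbitrary): applying 2X + 5 to a standard normal sample should give a sample with mean ≈ 5 and standard deviation ≈ 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X ~ Normal(0, 1); Y = 2X + 5 should be Normal(5, 2^2)
x = rng.normal(0.0, 1.0, size=n)
y = 2.0 * x + 5.0

y_mean = y.mean()  # expect approximately 5
y_std = y.std()    # expect approximately 2
```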

Reciprocal of a random variable

The reciprocal 1/X of a random variable X belongs to the same family of distributions as X in the following cases: the Cauchy distribution, the F distribution, and the log-logistic distribution.

Examples: if X is a standard Cauchy random variable, then 1/X is also standard Cauchy; if X follows an F(ν₁, ν₂) distribution, then 1/X follows an F(ν₂, ν₁) distribution.
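The Cauchy case can be checked numerically. Since the Cauchy distribution has no finite moments, this sketch (NumPy, arbitrary seed and sample size) compares sample quartiles instead: the standard Cauchy distribution has quartiles at −1 and +1, and so should the reciprocal sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# X standard Cauchy; 1/X should also be standard Cauchy.
# Moments of the Cauchy distribution do not exist, so we
# compare sample quartiles, which are -1 and +1 in theory.
x = rng.standard_cauchy(size=n)
inv = 1.0 / x
q1, q3 = np.quantile(inv, [0.25, 0.75])
```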

Other cases

Some distributions are invariant under a specific transformation.

Example: if X is a normal (μ, σ²) random variable, then exp(X) is a lognormal (μ, σ²) random variable.

Conversely, if X is a lognormal (μ, σ²) random variable, then log X is a normal (μ, σ²) random variable.
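The normal/lognormal correspondence can be illustrated numerically (NumPy sketch; μ, σ, seed, and sample size are arbitrary illustration choices): exponentiating a normal sample and taking logarithms should recover the original mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
mu, sigma = 0.5, 0.8

# If Z ~ Normal(mu, sigma^2), then X = exp(Z) is lognormal(mu, sigma^2),
# and log(X) recovers the normal distribution.
z = rng.normal(mu, sigma, size=n)
x = np.exp(z)
back = np.log(x)

back_mean = back.mean()  # expect approximately mu
back_std = back.std()    # expect approximately sigma
```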

Functions of several variables

Sum of variables

The distribution of the sum of independent random variables is the convolution of their distributions. Suppose Z = X_1 + X_2 + … + X_n is the sum of n independent random variables, each with probability mass function f_{X_i}. Then the probability mass function of Z is the n-fold convolution

f_Z = f_{X_1} ∗ f_{X_2} ∗ … ∗ f_{X_n}.
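The convolution of probability mass functions can be verified directly. In this sketch (NumPy and the standard library; the parameters are arbitrary), convolving the pmf of Binomial(3, p) with the pmf of Binomial(4, p) reproduces the pmf of Binomial(7, p) exactly, since the binomial family with common success probability is closed under convolution:

```python
from math import comb

import numpy as np

def binom_pmf(n, p):
    """Probability mass function of Binomial(n, p) as an array of length n + 1."""
    return np.array([comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)])

p = 0.3
# Convolving the pmfs of Binomial(3, p) and Binomial(4, p) ...
conv = np.convolve(binom_pmf(3, p), binom_pmf(4, p))
# ... gives exactly the pmf of Binomial(7, p).
match = np.allclose(conv, binom_pmf(7, p))
```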

If the sum has a distribution from the same family of distributions as the original variables, that family is said to be closed under convolution. Many such families are also stable distributions (see also Discrete-stable distribution), although closure under convolution does not by itself imply stability; the Poisson and binomial families, for example, are closed under convolution but are not stable.

Examples of such univariate distributions are: normal distributions, Poisson distributions, binomial distributions (with common success probability), negative binomial distributions (with common success probability), gamma distributions (with common rate parameter), chi-squared distributions, Cauchy distributions, hyperexponential distributions.

Examples: the sum of independent normal (μ₁, σ₁²) and normal (μ₂, σ₂²) random variables is normal (μ₁ + μ₂, σ₁² + σ₂²); the sum of independent Poisson(λ₁) and Poisson(λ₂) random variables is Poisson(λ₁ + λ₂). [3] [4]
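The Poisson case can be checked empirically (NumPy sketch; rates, seed, and sample size are arbitrary): the sum of independent Poisson(2) and Poisson(3) samples should behave like Poisson(5), whose mean and variance both equal 5.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Sum of independent Poisson(2) and Poisson(3) should be Poisson(5).
s = rng.poisson(2.0, size=n) + rng.poisson(3.0, size=n)

s_mean = s.mean()  # expect approximately 5
s_var = s.var()    # expect approximately 5 (Poisson mean equals variance)
```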

Other distributions are not closed under convolution, but their sum has a known distribution: for example, the sum of n independent exponential(λ) random variables has an Erlang(n, λ) distribution.
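As a numerical sketch of the exponential-to-Erlang case (NumPy; the rate, seed, and sample size are arbitrary): summing three independent exponential(λ = 2) samples should give an Erlang(3, 2) sample with mean 3/λ = 1.5 and variance 3/λ² = 0.75.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
lam = 2.0

# Sum of 3 iid Exponential(rate=lam) variables is Erlang(3, lam):
# mean = 3/lam, variance = 3/lam**2.
s = sum(rng.exponential(scale=1.0 / lam, size=n) for _ in range(3))

s_mean = s.mean()  # expect approximately 1.5
s_var = s.var()    # expect approximately 0.75
```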

Product of variables

The product of independent random variables X and Y may belong to the same family of distributions as X and Y. This holds for the Bernoulli distribution and the log-normal distribution.

Example: the product of independent Bernoulli(p) and Bernoulli(q) random variables is Bernoulli(pq), and the product of independent log-normal random variables is log-normal, with the parameters added.

(See also Product distribution.)
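The log-normal case can be checked numerically (NumPy sketch; the parameters, seed, and sample size are arbitrary): since log(XY) = log X + log Y, the log of the product should be normal with the means added and the variances added.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# X ~ lognormal(0.2, 0.5^2), Y ~ lognormal(0.4, 0.3^2), independent.
x = rng.lognormal(0.2, 0.5, size=n)
y = rng.lognormal(0.4, 0.3, size=n)

# log(XY) = log X + log Y should be Normal(0.6, 0.5^2 + 0.3^2),
# so the product XY is again lognormal.
log_prod = np.log(x * y)
lp_mean = log_prod.mean()  # expect approximately 0.6
lp_var = log_prod.var()    # expect approximately 0.34
```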

Minimum and maximum of independent random variables

For some distributions, the minimum value of several independent random variables is a member of the same family, with different parameters: Bernoulli distribution, Geometric distribution, Exponential distribution, Extreme value distribution, Pareto distribution, Rayleigh distribution, Weibull distribution.

Examples: the minimum of independent exponential random variables with rates λ₁, …, λₙ is exponential with rate λ₁ + ⋯ + λₙ; the minimum of independent geometric random variables with success probabilities p₁ and p₂ is geometric with success probability 1 − (1 − p₁)(1 − p₂).
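The exponential case can be verified empirically (NumPy sketch; rates, seed, and sample size are arbitrary): the minimum of independent exponential variables with rates 1.5 and 2.5 should be exponential with rate 4, i.e. mean 0.25.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

# X ~ Exponential(rate=1.5), Y ~ Exponential(rate=2.5), independent.
x = rng.exponential(scale=1.0 / 1.5, size=n)
y = rng.exponential(scale=1.0 / 2.5, size=n)

# min(X, Y) should be Exponential(rate = 1.5 + 2.5 = 4), so mean = 0.25.
m = np.minimum(x, y)
m_mean = m.mean()
```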

Similarly, distributions for which the maximum value of several independent random variables is a member of the same family include the Bernoulli distribution and the power-law distribution.

Other

(See also ratio distribution.)

Approximate (limit) relationships

An approximate or limit relationship means either that one distribution converges to another as some parameter tends to a limiting value, or that one distribution is a good approximation of another for certain parameter values.

Combination of iid random variables: for example, suitably normalized sums, minima, or maxima of many iid random variables converge to a limiting distribution.

Special case of distribution parametrization: for example, a Student's t distribution converges to a normal distribution as its degrees of freedom tend to infinity.

Consequences of the CLT: for example, a binomial (n, p) distribution is approximately normal (np, np(1 − p)) for large n, and a Poisson(λ) distribution is approximately normal (λ, λ) for large λ.
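The CLT itself can be illustrated numerically (NumPy sketch; the number of summands, seed, and sample size are arbitrary): standardized sums of 30 iid Uniform(0, 1) variables should be approximately standard normal, so about 68.27% of them should fall within one standard deviation of zero.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Sum of 30 iid Uniform(0, 1) variables: mean 15, variance 30/12.
s = rng.random((n, 30)).sum(axis=1)
z = (s - 15.0) / np.sqrt(30.0 / 12.0)

# For a standard normal variable, P(|Z| < 1) is about 0.6827.
frac_within_1sd = np.mean(np.abs(z) < 1.0)
```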

Compound (or Bayesian) relationships

When one or more parameters of a distribution are themselves random variables, the compound distribution is the marginal distribution of the variable.

Examples: a Poisson distribution whose rate parameter is gamma-distributed has a negative binomial marginal distribution; a normal distribution whose mean is itself normally distributed has a normal marginal distribution, with the variances added.

Some compound distributions have been given names of their own: the beta-binomial distribution (a binomial distribution whose success probability is beta-distributed), the beta negative binomial distribution, and the gamma-normal distribution.
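The gamma–Poisson compound can be checked empirically (NumPy sketch; the shape, scale, seed, and sample size are arbitrary): if the rate λ is Gamma(r, θ)-distributed and K | λ is Poisson(λ), the marginal of K is negative binomial with mean rθ and variance rθ(1 + θ).

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
r, theta = 3.0, 2.0  # gamma shape and scale for the random Poisson rate

# lambda ~ Gamma(r, theta); K | lambda ~ Poisson(lambda).
# The marginal of K is negative binomial, with
# mean r*theta = 6 and variance r*theta*(1 + theta) = 18.
lam = rng.gamma(r, theta, size=n)
k = rng.poisson(lam)

k_mean = k.mean()  # expect approximately 6
k_var = k.var()    # expect approximately 18
```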



References

  1. Leemis, Lawrence M.; McQueston, Jacquelyn T. (February 2008). "Univariate Distribution Relationships" (PDF). The American Statistician. 62 (1): 45–53. doi:10.1198/000313008x270448. S2CID 9367367.
  2. Swat, MJ; Grenon, P; Wimalaratne, S (2016). "ProbOnto: ontology and knowledge base of probability distributions". Bioinformatics. 32 (17): 2719–21. doi:10.1093/bioinformatics/btw170. PMC 5013898. PMID 27153608.
  3. Cook, John D. "Diagram of distribution relationships".
  4. Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis; Kalinin, Alex; Christou, Nicolas (2015). "Probability Distributome: a web computational infrastructure for exploring the properties, interrelations, and applications of probability distributions". Computational Statistics. 594 (2): 249–271. doi:10.1007/s00180-015-0594-6. PMC 4856044. PMID 27158191.