In probability theory and statistics, an inverse distribution is the distribution of the reciprocal of a random variable. Inverse distributions arise in particular in the Bayesian context of prior distributions and posterior distributions for scale parameters. In the algebra of random variables, inverse distributions are special cases of the class of ratio distributions, in which the numerator random variable has a degenerate distribution.
In general, given the probability distribution of a random variable X with strictly positive support, it is possible to find the distribution of the reciprocal, Y = 1/X. If the distribution of X is continuous with density function f(x) and cumulative distribution function F(x), then the cumulative distribution function, G(y), of the reciprocal is found by noting that

$$G(y) = \Pr(Y \le y) = \Pr\!\left(X \ge \frac{1}{y}\right) = 1 - F\!\left(\frac{1}{y}\right).$$
Then the density function of Y is found as the derivative of the cumulative distribution function:

$$g(y) = \frac{1}{y^{2}}\, f\!\left(\frac{1}{y}\right).$$
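As an illustrative sketch (not part of the original article), the change-of-variables result can be checked by simulation; here X is taken to be Gamma-distributed with shape 3, purely as an example of a strictly positive random variable:

```python
# Sketch: check the reciprocal's CDF, G(y) = 1 - F(1/y), by simulation, with
# X ~ Gamma(shape=3) chosen only as an illustrative positive random variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.gamma.rvs(a=3.0, size=100_000, random_state=rng)
y = 1.0 / x

# CDF of Y = 1/X predicted by G(y) = 1 - F(1/y).
G = lambda t: 1.0 - stats.gamma.cdf(1.0 / t, a=3.0)

# A Kolmogorov-Smirnov test should not reject: a large p-value is expected.
print(stats.kstest(y, G))
```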
The reciprocal distribution has a density function of the form[1]

$$f(x) \propto x^{-1} \quad \text{for } 0 < a \le x \le b,$$

where $\propto$ means "is proportional to". It follows that the inverse distribution in this case is of the form

$$g(y) \propto y^{-1} \quad \text{for } 0 < b^{-1} \le y \le a^{-1},$$
which is again a reciprocal distribution.
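A short simulation makes the closure property concrete (a sketch, with arbitrary example endpoints a = 2, b = 8): sampling the reciprocal distribution by inverse transform and inverting the draws yields a variable whose CDF is again of reciprocal form:

```python
# Sketch: the reciprocal (log-uniform) distribution is closed under inversion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b = 2.0, 8.0                      # arbitrary example endpoints, 0 < a < b
u = rng.uniform(size=100_000)
x = a * (b / a) ** u                 # inverse transform for density ∝ 1/x on [a, b]
y = 1.0 / x                          # should have density ∝ 1/y on [1/b, 1/a]

# CDF of a reciprocal distribution on [1/b, 1/a] is G(t) = ln(t b) / ln(b / a).
print(stats.kstest(y, lambda t: np.log(t * b) / np.log(b / a)))
```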
| Inverse uniform | |
|---|---|
| Parameters | $0 < a < b$ |
| Support | $b^{-1} \le y \le a^{-1}$ |
| PDF | $\dfrac{1}{y^{2}(b-a)}$ |
| CDF | $\dfrac{b - y^{-1}}{b-a}$ |
| Mean | $\dfrac{\ln(b/a)}{b-a}$ |
| Median | $\dfrac{2}{a+b}$ |
| Variance | $\dfrac{1}{ab} - \left(\dfrac{\ln(b/a)}{b-a}\right)^{2}$ |
If the original random variable X is uniformly distributed on the interval (a,b), where a > 0, then the reciprocal variable Y = 1/X has the reciprocal distribution which takes values in the range $(b^{-1}, a^{-1})$, and the probability density function in this range is

$$g(y) = \frac{1}{y^{2}(b-a)},$$
and is zero elsewhere.
The cumulative distribution function of the reciprocal, within the same range, is

$$G(y) = \frac{b - y^{-1}}{b-a}.$$
For example, if X is uniformly distributed on the interval (0,1), then Y = 1/X has density $g(y) = y^{-2}$ and cumulative distribution function $G(y) = 1 - y^{-1}$ when $y > 1$.
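As a minimal check (not in the source), simulated reciprocals of uniform draws can be tested against this CDF:

```python
# Sketch: X ~ Uniform(0, 1) gives Y = 1/X with CDF G(y) = 1 - 1/y for y > 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = 1.0 / rng.uniform(size=100_000)
print(stats.kstest(y, lambda t: 1.0 - 1.0 / t))  # large p-value expected
```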
Let X be a t distributed random variate with k degrees of freedom. Then its density function is

$$f(x) = \frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\frac{k}{2}\right)}\left(1 + \frac{x^{2}}{k}\right)^{-\frac{k+1}{2}}.$$

The density of Y = 1/X is

$$g(y) = \frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\frac{k}{2}\right)}\,\frac{1}{y^{2}}\left(1 + \frac{1}{k y^{2}}\right)^{-\frac{k+1}{2}}.$$
With k = 1, the distributions of X and 1/X are identical (X is then Cauchy distributed with location 0 and scale 1). If k > 1, the distribution of 1/X is bimodal.
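The bimodality can be probed numerically (a sketch, with k = 3 chosen as an arbitrary example): evaluate $g(y) = f(1/y)/y^{2}$ on a grid and count local maxima:

```python
# Sketch: density of Y = 1/X for X ~ t_k via g(y) = f(1/y) / y^2; for k > 1
# the grid search should find two symmetric local maxima (bimodality).
import numpy as np
from scipy import stats

k = 3
ys = np.linspace(-4.0, 4.0, 4001)
ys = ys[np.abs(ys) > 1e-9]                  # drop the grid point at y = 0
g = stats.t.pdf(1.0 / ys, df=k) / ys**2

# Indices of strict interior local maxima of g.
peaks = np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])) + 1
print(len(peaks), ys[peaks])                # expect 2 modes, symmetric about 0
```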
If variable X follows a normal distribution $N(\mu, \sigma^{2})$, then the inverse or reciprocal Y = 1/X follows a reciprocal normal distribution:[2]

$$f(y) = \frac{1}{\sqrt{2\pi}\,\sigma\, y^{2}}\exp\!\left(-\frac{\left(\frac{1}{y}-\mu\right)^{2}}{2\sigma^{2}}\right).$$
If variable X follows a standard normal distribution $N(0,1)$, then Y = 1/X follows a reciprocal standard normal distribution, heavy-tailed and bimodal,[2] with modes at $\pm\tfrac{1}{\sqrt{2}}$ and density

$$f(y) = \frac{1}{\sqrt{2\pi}\, y^{2}}\exp\!\left(-\frac{1}{2y^{2}}\right),$$
and the first and higher-order moments do not exist.[2] For such inverse distributions and for ratio distributions, probabilities for intervals can still be defined; these can be computed either by Monte Carlo simulation or, in some cases, by using the Geary–Hinkley transformation.[3]
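Even without moments, interval probabilities are straightforward; a minimal Monte Carlo sketch (the interval [1, 2] is chosen arbitrarily) compares the estimate with the exact value obtained from $P(1 \le Y \le 2) = P(\tfrac{1}{2} \le X \le 1)$:

```python
# Sketch: interval probabilities for Y = 1/X, X ~ N(0,1), by Monte Carlo.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = 1.0 / rng.standard_normal(1_000_000)

# P(1 <= Y <= 2) = P(1/2 <= X <= 1), which is available in closed form.
print(np.mean((y >= 1.0) & (y <= 2.0)))   # Monte Carlo estimate
print(norm.cdf(1.0) - norm.cdf(0.5))      # exact value, about 0.1499
```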
However, in the more general case of a shifted reciprocal function $1/(p-B)$, for $B$ following a general normal distribution, mean and variance statistics do exist in a principal value sense if the difference between the pole $p$ and the mean $\mu_B$ is real-valued. The mean of this transformed random variable (reciprocal shifted normal distribution) is then indeed the scaled Dawson's function:[4]

$$E\!\left[\frac{1}{p-B}\right] = \frac{\sqrt{2}}{\sigma_B}\, D\!\left(\frac{p-\mu_B}{\sqrt{2}\,\sigma_B}\right).$$
In contrast, if the shift $p-\mu_B$ is purely complex, the mean exists and is a scaled Faddeeva function, whose exact expression depends on the sign of the imaginary part, $\operatorname{Im}(p-\mu_B)$. In both cases, the variance is a simple function of the mean.[5] Therefore, the variance has to be considered in a principal value sense if $p-\mu_B$ is real, while it exists if the imaginary part of $p-\mu_B$ is non-zero. Note that these means and variances are exact, as they do not rely on a linearisation of the ratio. The exact covariance of two ratios with a pair of different poles $p_1$ and $p_2$ is similarly available.[6] The case of the inverse of a complex normal variable $B$, shifted or not, exhibits different characteristics.[4]
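A numerical sketch (with arbitrary example values $\mu_B = 1$, $\sigma_B = 0.5$, $p = 2$, all real) compares the principal-value integral with the scaled Dawson's function; `scipy.integrate.quad` with `weight='cauchy'` computes Cauchy principal values directly:

```python
# Sketch: principal-value mean of 1/(p - B) for B ~ N(mu, sigma^2) versus the
# scaled Dawson's function (sqrt(2)/sigma) * D((p - mu) / (sqrt(2) * sigma)).
import numpy as np
from scipy import integrate, special
from scipy.stats import norm

mu, sigma, p = 1.0, 0.5, 2.0   # arbitrary example values with p - mu real

# weight='cauchy' gives the PV of the integral of f(x)/(x - p);
# E[1/(p - B)] is the negative of that integral.
pv, _ = integrate.quad(lambda x: norm.pdf(x, mu, sigma),
                       mu - 12 * sigma, mu + 12 * sigma,
                       weight='cauchy', wvar=p)
print(-pv)
print(np.sqrt(2.0) / sigma * special.dawsn((p - mu) / (np.sqrt(2.0) * sigma)))
```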
If X is an exponentially distributed random variable with rate parameter $\lambda$, then Y = 1/X has the following cumulative distribution function: $F_Y(y) = e^{-\lambda/y}$ for $y > 0$. Note that the expected value of this random variable does not exist. The reciprocal exponential distribution finds use in the analysis of fading wireless communication systems.
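A quick simulation (a sketch, with $\lambda = 2$ as an arbitrary rate) confirms this CDF:

```python
# Sketch: X ~ Exponential(rate lam), so Y = 1/X has CDF exp(-lam / y), y > 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
lam = 2.0
y = 1.0 / rng.exponential(scale=1.0 / lam, size=100_000)
print(stats.kstest(y, lambda t: np.exp(-lam / t)))  # large p-value expected
```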
If X is a Cauchy distributed $(\mu, \sigma)$ random variable, then 1/X is a Cauchy $(\mu/C, \sigma/C)$ random variable, where $C = \mu^{2} + \sigma^{2}$.
If X is an $F(\nu_1, \nu_2)$ distributed random variable, then 1/X is an $F(\nu_2, \nu_1)$ random variable.
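This closure property is easy to spot-check; the sketch below (degrees of freedom 5 and 12 chosen arbitrarily) tests 1/X against the swapped-parameter F distribution:

```python
# Sketch: if X ~ F(nu1, nu2), then 1/X should be distributed as F(nu2, nu1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
nu1, nu2 = 5, 12
x = stats.f.rvs(nu1, nu2, size=100_000, random_state=rng)
print(stats.kstest(1.0 / x, stats.f(nu2, nu1).cdf))  # large p-value expected
```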
If X is distributed according to a binomial distribution with $n$ trials and a probability of success $p$, then no closed form for the reciprocal distribution is known. Since X takes the value 0 with positive probability, the mean of 1/X does not exist; however, the mean of the shifted reciprocal $1/(1+X)$ can be calculated in closed form:

$$E\!\left[\frac{1}{1+X}\right] = \frac{1-(1-p)^{n+1}}{(n+1)\,p}.$$
An asymptotic approximation for the non-central moments of the reciprocal distribution is known:[7]

$$E\!\left[(1+X)^{-a}\right] = O\!\left((np)^{-a}\right) + o\!\left(n^{-a}\right),$$

where O() and o() are the big and little o order functions and $a$ is a real number.
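The closed form for $E[1/(1+X)]$ can be verified exactly by summing over the binomial pmf (a sketch with arbitrary example values n = 25, p = 0.3):

```python
# Sketch: closed form for E[1/(1+X)], X ~ Binomial(n, p), versus the exact sum.
import numpy as np
from scipy import stats

n, p = 25, 0.3
k = np.arange(n + 1)
exact = np.sum(stats.binom.pmf(k, n, p) / (1 + k))
closed = (1 - (1 - p) ** (n + 1)) / ((n + 1) * p)
print(exact, closed)  # agree to machine precision
```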
For a triangular distribution with lower limit a, upper limit b and mode c, where a < b and a ≤ c ≤ b, the mean of the reciprocal is given by

$$\mu = \frac{2}{b-a}\left(\frac{b\ln(b/c)}{b-c} - \frac{a\ln(c/a)}{c-a}\right)$$

and the variance by

$$\sigma^{2} = \frac{2}{b-a}\left(\frac{\ln(c/a)}{c-a} - \frac{\ln(b/c)}{b-c}\right) - \mu^{2}.$$
Both moments of the reciprocal are only defined when the triangle does not cross zero, i.e. when a, b, and c are either all positive or all negative.
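Both expressions are easy to check numerically; the sketch below (with arbitrary parameters a = 1, c = 2, b = 5) compares the closed-form mean with scipy's numerical expectation:

```python
# Sketch: closed-form mean of 1/X for X ~ Triangular(a, c, b) versus a
# numerical expectation; scipy's triang uses shape (c-a)/(b-a), loc=a, scale=b-a.
import numpy as np
from scipy import stats

a, c, b = 1.0, 2.0, 5.0
mean_closed = (2.0 / (b - a)) * (b * np.log(b / c) / (b - c)
                                 - a * np.log(c / a) / (c - a))

dist = stats.triang((c - a) / (b - a), loc=a, scale=b - a)
mean_numeric = dist.expect(lambda x: 1.0 / x)
print(mean_closed, mean_numeric)  # agree closely (~0.417)
```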
Other inverse distributions include the inverse-chi-squared distribution, the inverse-gamma distribution, the inverse-Wishart distribution, and the inverse matrix gamma distribution.
Inverse distributions are widely used as prior distributions in Bayesian inference for scale parameters.