(Figure: probability density function of the Tukey lambda distribution)

| | |
|---|---|
| Notation | Tukey(λ) |
| Parameters | λ ∈ ℝ — shape parameter |
| Support | x ∈ [−1/λ, 1/λ] if λ > 0; x ∈ ℝ if λ ≤ 0 |
| CDF | no closed form in general; exact solutions exist for special values of λ (see text) |
| Mean | 0 (for λ > −1) |
| Median | 0 |
| Mode | 0 |
| Variance | $\frac{2}{\lambda^2}\left(\frac{1}{1+2\lambda} - \frac{\Gamma(\lambda+1)^2}{\Gamma(2\lambda+2)}\right)$ for λ > −1/2, λ ≠ 0; $\pi^2/3$ for λ = 0 |
| Skewness | 0 (for λ > −1/3) |
| Excess kurtosis | $\mu_4/\mu_2^2 - 3$, with the raw moments $\mu_n$ given in the text |
| Entropy | see [1] |
| CF | see [2] |
Formalized by John Tukey, the Tukey lambda distribution is a continuous, symmetric probability distribution defined in terms of its quantile function. It is typically used to identify an appropriate distribution (see the comments below) rather than in statistical models directly.
The Tukey lambda distribution has a single shape parameter, λ, and as with other probability distributions, it can be transformed with a location parameter, μ, and a scale parameter, σ. Since the general form of probability distribution can be expressed in terms of the standard distribution, the subsequent formulas are given for the standard form of the function.
For the standard form of the Tukey lambda distribution, the quantile function $Q(p)$ (i.e., the inverse of the cumulative distribution function) and the quantile density function $q(p) = dQ/dp$ are

$$Q(p;\lambda) = \begin{cases} \dfrac{p^{\lambda} - (1 - p)^{\lambda}}{\lambda}, & \lambda \neq 0, \\[6pt] \ln\dfrac{p}{1 - p}, & \lambda = 0, \end{cases}$$

$$q(p;\lambda) = \frac{dQ}{dp} = p^{\lambda - 1} + (1 - p)^{\lambda - 1}.$$
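As a sketch of how these definitions translate into code, the helpers below evaluate Q and q with numpy; the function names are illustrative, not from any particular library. Note that the single expression for q(p) covers the λ = 0 case as well, since the derivative of ln(p/(1 − p)) is 1/p + 1/(1 − p).

```python
import numpy as np

def tukey_lambda_quantile(p, lam):
    """Quantile function Q(p; lambda) of the standard Tukey lambda distribution."""
    p = np.asarray(p, dtype=float)
    if lam == 0.0:
        # Limiting case: the standard logistic quantile function.
        return np.log(p / (1.0 - p))
    return (p**lam - (1.0 - p)**lam) / lam

def tukey_lambda_quantile_density(p, lam):
    """Quantile density q(p; lambda) = dQ/dp; valid for every lambda, including 0."""
    p = np.asarray(p, dtype=float)
    return p**(lam - 1.0) + (1.0 - p)**(lam - 1.0)
```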
For most values of the shape parameter λ, the probability density function (PDF) and cumulative distribution function (CDF) must be computed numerically. The Tukey lambda distribution has a simple, closed form for the CDF and/or PDF only for a few exceptional values of the shape parameter, for example λ ∈ {2, 1, 1/2, 0} (see the uniform distribution [cases λ = 1 and λ = 2] and the logistic distribution [case λ = 0]).
However, for any value of λ, both the CDF and PDF can be tabulated for any number of cumulative probabilities, p, using the quantile function Q to calculate the value x for each cumulative probability p, with the probability density given by 1/q(p), the reciprocal of the quantile density function. As is usual with statistical distributions, the Tukey lambda distribution can then readily be used by looking up values in the prepared table.
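Continuing the sketch above (reusing the illustrative tukey_lambda_quantile helpers), such a tabulation might look like this:

```python
import numpy as np

lam = 0.14                           # example shape parameter
p = np.linspace(0.001, 0.999, 999)   # grid of cumulative probabilities

x = tukey_lambda_quantile(p, lam)                  # abscissas: x = Q(p)
pdf = 1.0 / tukey_lambda_quantile_density(p, lam)  # density: f(x) = 1/q(p)
# The CDF needs no extra work: by construction, F(x) = p at each tabulated x.
```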
The Tukey lambda distribution is symmetric around zero, therefore the expected value of this distribution, if it exists, is equal to zero. The variance exists for λ > −1/2 and, except when λ = 0, is given by the formula

$$\operatorname{Var}[X] = \frac{2}{\lambda^2} \left( \frac{1}{1 + 2\lambda} - \frac{\Gamma(\lambda + 1)^2}{\Gamma(2\lambda + 2)} \right).$$

(For λ = 0 the distribution is the standard logistic, whose variance is π²/3.)
More generally, the n-th order moment is finite when λ > −1/n and is expressed (except when λ = 0) in terms of the beta function B(x, y):

$$\mu_n = \operatorname{E}[X^n] = \frac{1}{\lambda^n} \sum_{k=0}^{n} (-1)^k \binom{n}{k} \,\mathrm{B}\bigl(\lambda(n - k) + 1,\ \lambda k + 1\bigr).$$
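As a numerical sanity check, the beta-function expansion can be compared against direct integration of Q(p)^n over (0, 1). This is a sketch reusing the quantile helper above; tukey_lambda_moment is an illustrative name:

```python
from scipy.special import beta, comb
from scipy.integrate import quad

def tukey_lambda_moment(n, lam):
    """n-th raw moment, valid for lam > -1/n and lam != 0."""
    terms = [(-1)**k * comb(n, k) * beta(lam*(n - k) + 1, lam*k + 1)
             for k in range(n + 1)]
    return sum(terms) / lam**n

lam, n = 0.25, 2
direct, _ = quad(lambda p: tukey_lambda_quantile(p, lam)**n, 0, 1)
print(tukey_lambda_moment(n, lam), direct)   # the two values should agree
```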
Due to the symmetry of the density function, all moments of odd order, if they exist, are equal to zero.
Unlike the central moments, the L-moments can be expressed in closed form. For λ > −1, the r-th L-moment, ℓ_r, is given by [3]

$$\ell_r = \frac{1 + (-1)^r}{\lambda}\,\frac{\Gamma(\lambda + 1)^2}{\Gamma(\lambda - r + 2)\,\Gamma(\lambda + r + 1)},$$

so all L-moments of odd order vanish by symmetry. The first six L-moments can be presented as follows: [3]

$$\ell_1 = \ell_3 = \ell_5 = 0,$$

$$\ell_2 = \frac{2}{(\lambda + 1)(\lambda + 2)}, \qquad \ell_4 = \frac{2\,(\lambda - 1)(\lambda - 2)}{(\lambda + 1)(\lambda + 2)(\lambda + 3)(\lambda + 4)},$$

$$\ell_6 = \frac{2\,(\lambda - 1)(\lambda - 2)(\lambda - 3)(\lambda - 4)}{(\lambda + 1)(\lambda + 2)(\lambda + 3)(\lambda + 4)(\lambda + 5)(\lambda + 6)}.$$
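These closed forms are easy to check numerically: the L-scale ℓ₂, for instance, equals the integral of Q(u)(2u − 1) over (0, 1), where 2u − 1 is the first shifted Legendre polynomial. A sketch, again reusing the quantile helper above:

```python
from scipy.integrate import quad

lam = 0.25
l2_closed = 2.0 / ((lam + 1.0) * (lam + 2.0))   # closed-form L-scale

# L-scale from its integral definition with the weight 2u - 1.
l2_integral, _ = quad(lambda u: tukey_lambda_quantile(u, lam) * (2*u - 1), 0, 1)
print(l2_closed, l2_integral)   # should agree to quadrature precision
```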
The Tukey lambda distribution is actually a family of distributions that can approximate a number of common distributions. For example,
| λ | Distribution |
|---|---|
| λ ≈ −1 | approx. Cauchy C(0, π) |
| λ = 0 | exactly logistic |
| λ ≈ 0.14 | approx. normal N(0, 2.142) |
| λ = 1/2 | strictly concave (∩-shaped) |
| λ = 1 | exactly uniform U(−1, +1) |
| λ = 2 | exactly uniform U(−1/2, +1/2) |
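The two uniform rows can be verified directly from the quantile function:

$$Q(p; 1) = p - (1 - p) = 2p - 1, \qquad Q(p; 2) = \frac{p^2 - (1 - p)^2}{2} = \frac{2p - 1}{2},$$

which are exactly the quantile functions of U(−1, +1) and U(−1/2, +1/2).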
The most common use of this distribution is to generate a Tukey lambda PPCC plot of a data set. Based on the value of λ with the best correlation, as shown on the PPCC plot, an appropriate model for the data is suggested. For example, if the best fit of the curve to the data occurs for a value of λ at or near 0.14, then empirically the data could be well modeled with a normal distribution. Values of λ less than 0.14 suggest a heavier-tailed distribution.
A milepost at λ = 0 (logistic) would indicate quite fat tails, with the extreme limit at λ = −1 approximating the Cauchy distribution and small-sample versions of Student's t. That is, as the best-fit value of λ varies from thin tails at 0.14 towards fat tails at −1, a bell-shaped PDF with increasingly heavy tails is suggested. Similarly, an optimal curve-fit value of λ greater than 0.14 suggests a distribution with exceptionally thin tails (based on the point of view that the normal distribution itself is thin-tailed to begin with; the exponential distribution is often chosen as the exemplar of tails intermediate between fat and thin).
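A PPCC plot can be sketched directly from the quantile function: for each candidate λ, correlate the sorted data with the theoretical quantiles at plotting positions, and report the λ that maximizes the correlation. The sketch below uses simple midpoint plotting positions and illustrative names, and reuses the quantile helper above; scipy.stats also ships ppcc_plot and ppcc_max, which default to the Tukey lambda family.

```python
import numpy as np

def tukey_ppcc(data, lambdas):
    """Correlation of sorted data with Tukey lambda quantiles, for each lambda."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    p = (np.arange(1, n + 1) - 0.5) / n          # midpoint plotting positions
    return np.array([np.corrcoef(tukey_lambda_quantile(p, lam), x)[0, 1]
                     for lam in lambdas])

# Example: normally distributed data should peak near lambda ≈ 0.14.
rng = np.random.default_rng(0)
data = rng.normal(size=500)
lambdas = np.linspace(-1.0, 1.0, 81)
best = lambdas[np.argmax(tukey_ppcc(data, lambdas))]
print(best)
```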
For λ > 0, all the PDFs discussed here have finite support, between −1/λ and +1/λ; for λ ≤ 0, the support is the entire real line.
Since the Tukey lambda distribution is symmetric, using the Tukey lambda PPCC plot to determine a reasonable model is appropriate only for data that can plausibly be described by a symmetric distribution. A histogram of the data should provide evidence as to whether the data can be reasonably modeled with a symmetric distribution. [4]
This article incorporates public domain material from the National Institute of Standards and Technology.