A quantile-parameterized distribution (QPD) is a probability distribution that is directly parameterized by data. QPDs were created to meet the need for easy-to-use continuous probability distributions flexible enough to represent a wide range of uncertainties, such as those commonly encountered in business, economics, engineering, and science. Because QPDs are directly parameterized by data, they have the practical advantage of avoiding the intermediate step of parameter estimation, a time-consuming process that typically requires non-linear iterative methods to estimate probability-distribution parameters from data. Some QPDs have virtually unlimited shape flexibility and closed-form moments as well.
The development of quantile-parameterized distributions was inspired by the practical need for flexible continuous probability distributions that are easy to fit to data. Historically, the Pearson [1] and Johnson [2] [3] families of distributions have been used when shape flexibility is needed. That is because both families can match the first four moments (mean, variance, skewness, and kurtosis) of any data set. In many cases, however, these distributions are either difficult to fit to data or not flexible enough to fit the data appropriately.
For example, the beta distribution is a flexible Pearson distribution that is frequently used to model percentages of a population. However, if the characteristics of this population are such that the desired cumulative distribution function (CDF) should run through certain specific CDF points, there may be no beta distribution that meets this need. Because the beta distribution has only two shape parameters, it cannot, in general, match even three specified CDF points. Moreover, the beta parameters that best fit such data can be found only by nonlinear iterative methods.
Practitioners of decision analysis, needing distributions easily parameterized by three or more CDF points (e.g., because such points were specified as the result of an expert-elicitation process), originally invented quantile-parameterized distributions for this purpose. Keelin and Powley (2011) [4] provided the original definition. Subsequently, Keelin (2016) [5] developed the metalog distributions, a family of quantile-parameterized distributions that has virtually unlimited shape flexibility, simple equations, and closed-form moments.
Keelin and Powley [4] define a quantile-parameterized distribution as one whose quantile function (inverse CDF) can be written in the form

$$F^{-1}(y) = \begin{cases} x_{lb} & \text{for } y = 0 \\ \sum_{i=1}^{n} a_i g_i(y) & \text{for } 0 < y < 1 \\ x_{ub} & \text{for } y = 1 \end{cases}$$

where

$$x_{lb} = \lim_{y \to 0^{+}} \sum_{i=1}^{n} a_i g_i(y) \quad \text{and} \quad x_{ub} = \lim_{y \to 1^{-}} \sum_{i=1}^{n} a_i g_i(y)$$

and the functions $g_i(y)$, $i = 1, \dots, n$, are continuously differentiable and linearly independent basis functions. Here, essentially, $x_{lb}$ and $x_{ub}$ are the lower and upper bounds (if they exist) of a random variable with quantile function $F^{-1}(y)$. These distributions are called quantile-parameterized because, for a given set of quantile pairs $\{(x_j, y_j),\ j = 1, \dots, n\}$, where $x_j = F^{-1}(y_j)$, and a given set of basis functions $g_i(y)$, the coefficients $a_i$ can be determined by solving a set of $n$ linear equations, one per quantile pair. [4] If one desires to use more quantile pairs than basis functions, then the coefficients $a_i$ can be chosen to minimize the sum of squared errors between the stated quantiles $x_j$ and $F^{-1}(y_j)$. Keelin and Powley [4] illustrate this concept with a specific choice of basis functions that generalizes the quantile function of the normal distribution, $x = \mu + \sigma \Phi^{-1}(y)$, in which the mean $\mu$ and standard deviation $\sigma$ are linear functions of cumulative probability $y$:

$$\mu(y) = a_1 + a_4 y, \qquad \sigma(y) = a_2 + a_3 y,$$

which yields the quantile function

$$F^{-1}(y) = a_1 + a_2\, \Phi^{-1}(y) + a_3\, y\, \Phi^{-1}(y) + a_4\, y.$$
The result is a four-parameter distribution that can be fit to a set of four quantile/probability pairs exactly, or to any number of such pairs by linear least squares. Keelin and Powley [4] call this the Simple Q-Normal distribution. Some skewed and symmetric Simple Q-Normal PDFs are shown in the figures below.
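Fitting the Simple Q-Normal to four quantile/probability pairs reduces to solving a 4×4 linear system. The sketch below, using only the Python standard library, illustrates this with hypothetical quantile pairs taken from a standard normal distribution, so the exact fit should recover that distribution; the basis functions follow the generalized-normal form described above.

```python
# Sketch: exact fit of a Simple Q-Normal distribution to four quantile pairs.
# The quantile pairs here are hypothetical (standard-normal quantiles).
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf  # standard normal quantile function


def basis(y):
    """Simple Q-Normal basis functions g_i(y) = (1, z, y*z, y) with z = Phi^-1(y)."""
    z = PHI_INV(y)
    return [1.0, z, y * z, y]


def solve(A, b):
    """Solve the square system A a = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (M[r][n] - sum(M[r][c] * a[c] for c in range(r + 1, n))) / M[r][r]
    return a


# Four hypothetical quantile pairs (y_j, x_j) lying on a standard normal CDF.
ys = [0.1, 0.25, 0.75, 0.9]
xs = [PHI_INV(y) for y in ys]

a = solve([basis(y) for y in ys], xs)  # coefficients a_1..a_4


def Q(y):
    """Fitted Simple Q-Normal quantile function."""
    return sum(ai * gi for ai, gi in zip(a, basis(y)))


# The fitted quantile function runs through all four points exactly.
assert all(abs(Q(y) - x) < 1e-9 for y, x in zip(ys, xs))
```

Because the four data points lie exactly on a standard normal CDF, the solved coefficients come out near $(0, 1, 0, 0)$, i.e., the fit recovers the source distribution.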
QPDs that meet Keelin and Powley’s definition have the following properties.
Differentiating $F^{-1}(y)$ with respect to $y$ yields $dx/dy = \sum_{i=1}^{n} a_i g_i'(y)$. The reciprocal of this quantity, $dy/dx$, is the probability density function (PDF)

$$f(y) = \left( \sum_{i=1}^{n} a_i g_i'(y) \right)^{-1},$$

where $g_i'(y) = dg_i(y)/dy$. Note that this PDF is expressed as a function of cumulative probability $y$ rather than of $x$. To plot it, as shown in the figures, vary $y \in (0, 1)$ parametrically: plot $x = F^{-1}(y)$ on the horizontal axis and $f(y)$ on the vertical axis.
A function of the form of $F^{-1}(y)$ is a feasible probability distribution if and only if $f(y) > 0$ for all $y \in (0, 1)$. [4] This implies a feasibility constraint on the set of coefficients $a = (a_1, \dots, a_n)$:

$$\sum_{i=1}^{n} a_i g_i'(y) > 0 \quad \text{for all } y \in (0, 1).$$
In practical applications, feasibility must generally be checked rather than assumed.
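One simple way to check feasibility in practice is to verify numerically that $\sum_i a_i g_i'(y)$ stays positive on a dense grid of $y$ values. The sketch below does this for the Simple Q-Normal basis; note that a grid check of this kind is a practical screen under the stated grid resolution, not a formal proof of feasibility.

```python
# Sketch: numerical feasibility check for a QPD, i.e. verify that
# dQ/dy = sum_i a_i g_i'(y) > 0 on a dense grid of y in (0, 1).
# Illustrated with the Simple Q-Normal basis (1, z, y*z, y), z = Phi^-1(y).
from statistics import NormalDist

ND = NormalDist()


def dbasis(y):
    """Derivatives g_i'(y) of the Simple Q-Normal basis functions."""
    z = ND.inv_cdf(y)
    dz = 1.0 / ND.pdf(z)  # derivative of the normal quantile function
    return [0.0, dz, z + y * dz, 1.0]


def is_feasible(a, grid_size=9999):
    """Heuristic check: dQ/dy > 0 at grid_size interior points of (0, 1)."""
    for k in range(1, grid_size + 1):
        y = k / (grid_size + 1)
        if sum(ai * gi for ai, gi in zip(a, dbasis(y))) <= 0.0:
            return False
    return True


feasible_normal = is_feasible([0.0, 1.0, 0.0, 0.0])    # standard normal
infeasible_flip = is_feasible([0.0, -1.0, 0.0, 0.0])   # negated scale term
```

For the standard normal coefficients $(0, 1, 0, 0)$, $dQ/dy = 1/\varphi(\Phi^{-1}(y)) > 0$ everywhere, so the check passes; negating the scale coefficient makes $dQ/dy$ negative everywhere, so the check fails.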
A QPD’s set of feasible coefficients, $S = \{a \in \mathbb{R}^n : \sum_{i=1}^{n} a_i g_i'(y) > 0 \text{ for all } y \in (0, 1)\}$, is convex. Because convex optimization requires convex feasible sets, this property simplifies optimization applications involving QPDs.
The coefficients $a$ can be determined from data by linear least squares. Given $m$ data points $(x_j, y_j)$ that are intended to characterize the CDF of a QPD, with $m \ge n$, and an $m \times n$ matrix $Y$ whose elements consist of $g_i(y_j)$, then, so long as $Y^T Y$ is invertible, the column vector $a$ of coefficients can be determined as $a = (Y^T Y)^{-1} Y^T x$, where column vector $x = (x_1, \dots, x_m)^T$. If $m = n$, this equation reduces to $a = Y^{-1} x$, and the resulting CDF runs through all $m$ data points exactly. An alternate method, implemented as a linear program, determines the coefficients by minimizing the sum of absolute distances between the CDF and the data, subject to feasibility constraints. [6]
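The overdetermined case ($m > n$) can be sketched with the normal equations $(Y^T Y)\,a = Y^T x$, again using only the Python standard library and hypothetical quantile pairs drawn from a standard normal CDF, so the least-squares fit should again recover coefficients near $(0, 1, 0, 0)$.

```python
# Sketch: linear least-squares fit a = (Y^T Y)^(-1) Y^T x for m = 9 data
# points and the n = 4 Simple Q-Normal basis functions (hypothetical data).
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf


def basis(y):
    """Simple Q-Normal basis functions g_i(y)."""
    z = PHI_INV(y)
    return [1.0, z, y * z, y]


def solve(A, b):
    """Solve the square system A a = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (M[r][n] - sum(M[r][c] * a[c] for c in range(r + 1, n))) / M[r][r]
    return a


# Nine hypothetical quantile pairs lying on a standard normal CDF (m > n).
ys = [0.05, 0.1, 0.25, 0.4, 0.5, 0.6, 0.75, 0.9, 0.95]
xs = [PHI_INV(y) for y in ys]
Y = [basis(y) for y in ys]  # 9 x 4 basis matrix

# Normal equations: (Y^T Y) a = Y^T x, a 4 x 4 square system.
YtY = [[sum(row[i] * row[j] for row in Y) for j in range(4)] for i in range(4)]
Ytx = [sum(row[i] * x for row, x in zip(Y, xs)) for i in range(4)]
a = solve(YtY, Ytx)
```

Since the nine points lie exactly on the model, the least-squares residual is zero and the fitted CDF runs through every point; with noisy data the same code would produce the best-fitting compromise.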
A QPD with $n$ terms, where $n \ge 2$, has $n - 2$ shape parameters. Thus, QPDs can be far more flexible than the Pearson distributions, which have at most two shape parameters. For example, ten-term metalog distributions parameterized by 105 CDF points from 30 traditional source distributions (including normal, Student's t, lognormal, gamma, beta, and extreme value) have been shown to approximate each such source distribution within a Kolmogorov–Smirnov distance of 0.001 or less. [7]
QPD transformations are governed by a general property of quantile functions: for any quantile function $x = Q(y)$ and increasing function $t(x)$, $x = t^{-1}(Q(y))$ is a quantile function. [8] For example, the quantile function of the normal distribution, $x = \mu + \sigma \Phi^{-1}(y)$, is a QPD by the Keelin and Powley definition. The natural logarithm, $t(x) = \ln(x - b_l)$, is an increasing function, so $x = b_l + e^{\mu + \sigma \Phi^{-1}(y)}$ is the quantile function of the lognormal distribution with lower bound $b_l$. Importantly, this transformation converts an unbounded QPD into a semi-bounded QPD. Similarly, applying this log transformation to the unbounded metalog distribution [9] yields the semi-bounded (log) metalog distribution; [10] likewise, applying the logit transformation, $t(x) = \ln\big((x - b_l)/(b_u - x)\big)$, yields the bounded (logit) metalog distribution [10] with lower and upper bounds $b_l$ and $b_u$, respectively. Moreover, by considering $t(x)$ to be $Q(y)$-distributed, where $Q(y)$ is any QPD that meets Keelin and Powley’s definition, the transformed variable maintains the above properties of feasibility, convexity, and fitting to data. Such transformed QPDs have greater shape flexibility than the underlying $Q(y)$, which has $n - 2$ shape parameters: the log transformation has $n - 1$ shape parameters, and the logit transformation has $n$ shape parameters. Moreover, such transformed QPDs share the same set of feasible coefficients as the underlying untransformed QPD. [11]
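The log and logit transformations can be sketched concretely for the normal quantile function $Q(y) = \mu + \sigma \Phi^{-1}(y)$. The bounds and parameter values below are hypothetical, chosen only to make the transformed medians easy to verify by hand.

```python
# Sketch: log and logit transformations of the normal quantile function.
# The bounds b_l, b_u and parameters mu, sigma are hypothetical examples.
from math import exp
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf


def Q(y, mu=0.0, sigma=1.0):
    """Unbounded (normal) quantile function."""
    return mu + sigma * PHI_INV(y)


def Q_log(y, b_l=2.0):
    """Semi-bounded quantile function: inverse of t(x) = ln(x - b_l)."""
    return b_l + exp(Q(y))


def Q_logit(y, b_l=0.0, b_u=10.0):
    """Bounded quantile function: inverse of t(x) = ln((x - b_l)/(b_u - x))."""
    e = exp(Q(y))
    return (b_l + b_u * e) / (1.0 + e)


# At the median, Phi^-1(0.5) = 0, so the transformed medians are exact:
assert abs(Q_log(0.5) - 3.0) < 1e-12    # b_l + e^0 = 2 + 1
assert abs(Q_logit(0.5) - 5.0) < 1e-12  # midpoint of (0, 10)
```

Because $\exp$ and the logistic map are increasing, both transformed functions remain valid (monotone) quantile functions, with ranges $(b_l, \infty)$ and $(b_l, b_u)$ respectively.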
The $k^{\text{th}}$ moment of a QPD is [4]

$$E[x^k] = \int_0^1 \left( \sum_{i=1}^{n} a_i g_i(y) \right)^{k} dy.$$

Whether such moments exist in closed form depends on the choice of basis functions $g_i(y)$. The unbounded metalog distribution and polynomial QPDs are examples of QPDs whose moments exist in closed form as functions of the coefficients $a_i$.
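Even when closed-form moments are unavailable, the moment integral above is a one-dimensional integral over $(0, 1)$ and is easy to evaluate numerically. A minimal sketch, checked against the standard normal quantile function (coefficients $(0, 1, 0, 0)$ in the Simple Q-Normal basis), using the midpoint rule to sidestep the endpoint singularities of the quantile function:

```python
# Sketch: k-th moment E[x^k] = integral_0^1 Q(y)^k dy, evaluated numerically
# by the midpoint rule for the standard normal quantile function.
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf


def moment(k, n=20000):
    """Approximate the k-th moment via midpoint quadrature on (0, 1)."""
    return sum(PHI_INV((i + 0.5) / n) ** k for i in range(n)) / n
```

For the standard normal this reproduces a mean near 0 and a second moment near 1, to within the quadrature error of the grid.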
Since the quantile function $x = F^{-1}(y)$ is expressed in closed form, Keelin and Powley QPDs facilitate Monte Carlo simulation. Substituting uniformly distributed random samples of $y$ into $F^{-1}(y)$ produces random samples of $x$ in closed form, thereby eliminating the need to numerically invert a CDF expressed as $y = F(x)$.
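This inverse-transform sampling scheme can be sketched in a few lines; the standard normal quantile function stands in here for any closed-form QPD quantile function.

```python
# Sketch: inverse-transform Monte Carlo sampling using a closed-form
# quantile function (here the standard normal, as a stand-in for any QPD).
import random
from statistics import NormalDist, fmean, pstdev

PHI_INV = NormalDist().inv_cdf


def sample(quantile_fn, n, seed=42):
    """Draw n samples by pushing uniform draws through the quantile function."""
    rng = random.Random(seed)
    return [quantile_fn(rng.random()) for _ in range(n)]


xs = sample(PHI_INV, 20000)
# The sample mean and standard deviation should be close to 0 and 1.
m, s = fmean(xs), pstdev(xs)
```

The same `sample` function works unchanged for any QPD, including transformed ones, because only the quantile function changes.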
The following probability distributions are QPDs according to Keelin and Powley’s definition:

- the Simple Q-Normal distribution
- the metalog distributions, including the unbounded, semi-bounded (log), and bounded (logit) metalog distributions
- the SPT (symmetric-percentile triplet) metalog distributions
- polynomial QPDs, whose quantile functions are polynomial functions of cumulative probability $y$
Like the SPT metalog distributions, the Johnson Quantile-Parameterized Distributions [14] [15] (JQPDs) are parameterized by three quantiles. JQPDs do not meet Keelin and Powley’s QPD definition, but rather have their own properties. JQPDs are feasible for all SPT parameter sets that are consistent with the rules of probability.
The original applications of QPDs were by decision analysts wishing to conveniently convert expert-assessed quantiles (e.g., 10th, 50th, and 90th quantiles) into smooth continuous probability distributions. QPDs have also been used to fit output data from simulations in order to represent those outputs (both CDFs and PDFs) as closed-form continuous distributions. [16] Used in this way, they are typically more stable and smoother than histograms. Similarly, since QPDs can impose fewer shape constraints than traditional distributions, they have been used to fit a wide range of empirical data in order to represent those data sets as continuous distributions (e.g., reflecting bimodality that may exist in the data in a straightforward manner [17] ). Quantile parameterization enables a closed-form QPD representation of known distributions whose CDFs otherwise have no closed-form expression. Keelin et al. (2019) [18] apply this to the sum of independent, identically distributed lognormal random variables, where quantiles of the sum can be determined by a large number of simulations. Nine such quantiles are used to parameterize a semi-bounded metalog distribution that runs through each of these nine quantiles exactly. QPDs have also been applied to assess the risks of asteroid impact, [19] cybersecurity, [6] [20] biases in projections of oil-field production when compared to observed production after the fact, [21] and future Canadian population projections based on combining the probabilistic views of multiple experts. [22] See metalog distributions and Keelin (2016) [5] for additional applications of the metalog distribution.