# Order statistic

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. [1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

## Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 8,

the order statistics would be denoted

${\displaystyle x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\,}$

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.
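Computationally, the order statistics of a sample are simply its sorted values. A minimal Python sketch reproducing the example above:

```python
# Order statistics are the sorted sample values, conventionally 1-indexed.
def order_statistics(sample):
    """Return x_(1) <= x_(2) <= ... <= x_(n)."""
    return sorted(sample)

x = [6, 9, 3, 8]
print(order_statistics(x))   # [3, 6, 8, 9], i.e. x_(1)=3, ..., x_(4)=9
```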

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

${\displaystyle X_{(1)}=\min\{\,X_{1},\ldots ,X_{n}\,\}}$

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

${\displaystyle X_{(n)}=\max\{\,X_{1},\ldots ,X_{n}\,\}.}$

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

${\displaystyle {\rm {Range}}\{\,X_{1},\ldots ,X_{n}\,\}=X_{(n)}-X_{(1)}.}$

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m+1 for some integer m, then the sample median is ${\displaystyle X_{(m+1)}}$ and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, ${\displaystyle X_{(m)}}$ and ${\displaystyle X_{(m+1)}}$, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.

## Probabilistic analysis

Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.

When the random variables X1, X2, ..., Xn form a sample, they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

### Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution ${\displaystyle F_{X}(x)}$, the order statistics for that sample have cumulative distributions as follows [2] (where r specifies which order statistic):

${\displaystyle F_{X_{(r)}}(x)=\sum _{j=r}^{n}{\binom {n}{j}}[F_{X}(x)]^{j}[1-F_{X}(x)]^{n-j}}$

The corresponding probability density function may be derived from this result, and is found to be

${\displaystyle f_{X_{(r)}}(x)={\frac {n!}{(r-1)!(n-r)!}}f_{X}(x)[F_{X}(x)]^{r-1}[1-F_{X}(x)]^{n-r}.}$

Moreover, there are two special cases, which have CDFs that are easy to compute.

${\displaystyle F_{X_{(n)}}(x)=\operatorname {Prob} (\max\{\,X_{1},\ldots ,X_{n}\,\}\leq x)=[F_{X}(x)]^{n}}$
${\displaystyle F_{X_{(1)}}(x)=\operatorname {Prob} (\min\{\,X_{1},\ldots ,X_{n}\,\}\leq x)=1-[1-F_{X}(x)]^{n}}$

Both can be derived directly: the maximum is at most x exactly when every observation is, which by independence has probability [F_X(x)]^n; likewise the minimum exceeds x exactly when every observation does, which has probability [1 − F_X(x)]^n.
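As a quick numerical sanity check (an illustrative sketch, not part of the source), the general CDF formula does reduce to these two special cases:

```python
from math import comb

def order_stat_cdf(r, n, F):
    """CDF of the r-th order statistic, as a function of F = F_X(x)."""
    return sum(comb(n, j) * F**j * (1 - F)**(n - j) for j in range(r, n + 1))

n = 5
for F in [0.1, 0.5, 0.9]:
    # r = n recovers the CDF of the maximum, F^n
    assert abs(order_stat_cdf(n, n, F) - F**n) < 1e-12
    # r = 1 recovers the CDF of the minimum, 1 - (1 - F)^n
    assert abs(order_stat_cdf(1, n, F) - (1 - (1 - F)**n)) < 1e-12
```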

### Probability distributions of order statistics

#### Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ is a random sample drawn from a continuous distribution with cdf ${\displaystyle F_{X}}$. Denoting ${\displaystyle U_{i}=F_{X}(X_{i})}$ we obtain the corresponding random sample ${\displaystyle U_{1},\ldots ,U_{n}}$ from the standard uniform distribution. Note that the order statistics also satisfy ${\displaystyle U_{(i)}=F_{X}(X_{(i)})}$.

The probability density function of the order statistic ${\displaystyle U_{(k)}}$ is equal to [3]

${\displaystyle f_{U_{(k)}}(u)={n! \over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k}}$

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable. [3] [4]

${\displaystyle U_{(k)}\sim \operatorname {Beta} (k,n+1-k).}$

The proof of these statements is as follows. For ${\displaystyle U_{(k)}}$ to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already ${\displaystyle O(du^{2})}$, so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals ${\displaystyle (0,u)}$, ${\displaystyle (u,u+du)}$ and ${\displaystyle (u+du,1)}$ respectively. This equals (refer to multinomial distribution for details)

${\displaystyle {n! \over (k-1)!(n-k)!}u^{k-1}\cdot du\cdot (1-u-du)^{n-k}}$

and the result follows.

The mean of this distribution is k / (n + 1).
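A small Monte Carlo sketch (seed, sample sizes, and tolerance are illustrative assumptions) checks this mean for the 3rd order statistic of 10 uniforms:

```python
import random

random.seed(0)
n, k, trials = 10, 3, 200_000
total = 0.0
for _ in range(trials):
    u = sorted(random.random() for _ in range(n))
    total += u[k - 1]                 # the k-th order statistic U_(k)
mean_mc = total / trials
# Theoretical mean of Beta(k, n+1-k) is k/(n+1) = 3/11 ≈ 0.2727
assert abs(mean_mc - k / (n + 1)) < 0.005
```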

#### The joint distribution of the order statistics of the uniform distribution

Similarly, for i < j, the joint probability density function of the two order statistics U(i) < U(j) can be shown to be

${\displaystyle f_{U_{(i)},U_{(j)}}(u,v)=n!{u^{i-1} \over (i-1)!}{(v-u)^{j-i-1} \over (j-i-1)!}{(1-v)^{n-j} \over (n-j)!}}$

which is (up to terms of higher order than ${\displaystyle O(du\,dv)}$) the probability that i − 1, 1, j − i − 1, 1 and n − j sample elements fall in the intervals ${\displaystyle (0,u)}$, ${\displaystyle (u,u+du)}$, ${\displaystyle (u+du,v)}$, ${\displaystyle (v,v+dv)}$, ${\displaystyle (v+dv,1)}$ respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

${\displaystyle f_{U_{(1)},U_{(2)},\ldots ,U_{(n)}}(u_{1},u_{2},\ldots ,u_{n})=n!.}$

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region ${\displaystyle 0<u_{1}<u_{2}<\cdots <u_{n}<1}$. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n with a sum not exceeding ${\displaystyle s>0}$ is bounded above by ${\displaystyle {\sqrt {2sn}}}$, which is thus invariant on the set of all ${\displaystyle s,n}$ with constant product ${\displaystyle sn}$.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is, the distribution of ${\displaystyle U_{(n)}-U_{(1)}}$, the maximum minus the minimum. More generally, for ${\displaystyle n\geq k>j\geq 1}$, ${\displaystyle U_{(k)}-U_{(j)}}$ also has a beta distribution:

${\displaystyle U_{(k)}-U_{(j)}\sim \operatorname {Beta} (k-j,n-(k-j)+1)}$

From these formulas we can derive the covariance between two order statistics:

${\displaystyle \operatorname {Cov} (U_{(k)},U_{(j)})={\frac {j(n-k+1)}{(n+1)^{2}(n+2)}}}$

The formula follows from noting that

${\displaystyle \operatorname {Var} (U_{(k)}-U_{(j)})=\operatorname {Var} (U_{(k)})+\operatorname {Var} (U_{(j)})-2\cdot \operatorname {Cov} (U_{(k)},U_{(j)})={\frac {k(n-k+1)}{(n+1)^{2}(n+2)}}+{\frac {j(n-j+1)}{(n+1)^{2}(n+2)}}-2\cdot \operatorname {Cov} (U_{(k)},U_{(j)})}$

and comparing that with

${\displaystyle \operatorname {Var} (U)={\frac {(k-j)(n-(k-j)+1)}{(n+1)^{2}(n+2)}}}$

where ${\displaystyle U\sim \operatorname {Beta} (k-j,n-(k-j)+1)}$, which is the actual distribution of the difference.
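The covariance formula can be verified exactly from the variance identity above using rational arithmetic (an illustrative sketch, not part of the source):

```python
from fractions import Fraction as Fr

def var_beta(a, b):
    """Variance of Beta(a, b): ab / ((a+b)^2 (a+b+1))."""
    return Fr(a * b, (a + b)**2 * (a + b + 1))

def cov_formula(j, k, n):
    """Claimed covariance of U_(j) and U_(k), for j <= k."""
    return Fr(j * (n - k + 1), (n + 1)**2 * (n + 2))

n = 12
for j in range(1, n):
    for k in range(j + 1, n + 1):
        # Var(U_(k) - U_(j)) is the variance of Beta(k-j, n-(k-j)+1)
        var_diff = var_beta(k - j, n - (k - j) + 1)
        # Solve the identity Var(U_(k)-U_(j)) = Var(U_(k)) + Var(U_(j)) - 2 Cov
        cov = (var_beta(k, n + 1 - k) + var_beta(j, n + 1 - j) - var_diff) / 2
        assert cov == cov_formula(j, k, n)
```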

#### Order statistics sampled from an exponential distribution

For ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ a random sample from an exponential distribution with parameter λ, the order statistics X(i) for i = 1, 2, ..., n each have distribution

${\displaystyle X_{(i)}{\stackrel {d}{=}}{\frac {1}{\lambda }}\left(\sum _{j=1}^{i}{\frac {Z_{j}}{n-j+1}}\right)}$

where the Zj are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi. [5] [6]
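A Monte Carlo sketch (seed and tolerance are illustrative assumptions) comparing the mean implied by this representation with the empirical mean of sorted exponential samples:

```python
import random

random.seed(1)
n, i, lam, trials = 8, 3, 2.0, 100_000

# Mean implied by the representation: E[X_(i)] = (1/λ) Σ_{j=1}^{i} 1/(n-j+1)
mean_exact = sum(1 / (n - j + 1) for j in range(1, i + 1)) / lam

total = 0.0
for _ in range(trials):
    sample = sorted(random.expovariate(lam) for _ in range(n))
    total += sample[i - 1]            # empirical X_(i)
assert abs(total / trials - mean_exact) < 0.01
```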

#### Order statistics sampled from an Erlang distribution

The Laplace transform of order statistics sampled from an Erlang distribution may be obtained via a path-counting method. [7]

#### The joint distribution of the order statistics of an absolutely continuous distribution

If FX is absolutely continuous, it has a density such that ${\displaystyle dF_{X}(x)=f_{X}(x)\,dx}$, and we can use the substitutions

${\displaystyle u=F_{X}(x)}$

and

${\displaystyle du=f_{X}(x)\,dx}$

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

${\displaystyle f_{X_{(k)}}(x)={\frac {n!}{(k-1)!(n-k)!}}[F_{X}(x)]^{k-1}[1-F_{X}(x)]^{n-k}f_{X}(x)}$
${\displaystyle f_{X_{(j)},X_{(k)}}(x,y)={\frac {n!}{(j-1)!(k-j-1)!(n-k)!}}[F_{X}(x)]^{j-1}[F_{X}(y)-F_{X}(x)]^{k-1-j}[1-F_{X}(y)]^{n-k}f_{X}(x)f_{X}(y)}$ where ${\displaystyle x\leq y}$
${\displaystyle f_{X_{(1)},\ldots ,X_{(n)}}(x_{1},\ldots ,x_{n})=n!f_{X}(x_{1})\cdots f_{X}(x_{n})}$ where ${\displaystyle x_{1}\leq x_{2}\leq \dots \leq x_{n}.}$

## Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

### A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median is

${\displaystyle {6 \choose 3}(1/2)^{6}={5 \over 16}\approx 31\%.}$

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

${\displaystyle \left[{6 \choose 2}+{6 \choose 3}+{6 \choose 4}\right](1/2)^{6}={25 \over 32}\approx 78\%.}$

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
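These coverage probabilities follow from counting how many of the n observations fall below the median, a Binomial(n, 1/2) count for a continuous distribution. A short sketch reproducing the numbers above:

```python
from math import comb

def coverage(n, lo, hi):
    """P(X_(lo) <= population median <= X_(hi)) for a continuous distribution.
    The count of observations below the median is Binomial(n, 1/2), and the
    interval covers the median exactly when that count is in [lo, hi-1]."""
    return sum(comb(n, j) for j in range(lo, hi)) / 2**n

assert coverage(6, 3, 4) == 5 / 16    # (X_(3), X_(4)): about 31%
assert coverage(6, 2, 5) == 25 / 32   # (X_(2), X_(5)): about 78%
assert coverage(6, 1, 6) == 31 / 32   # (X_(1), X_(6)): about 97%
```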

### Large sample sizes

For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

${\displaystyle U_{(\lceil np\rceil )}\sim AN\left(p,{\frac {p(1-p)}{n}}\right).}$

For a general distribution F with a continuous non-zero density at F −1(p), a similar asymptotic normality applies:

${\displaystyle X_{(\lceil np\rceil )}\sim AN\left(F^{-1}(p),{\frac {p(1-p)}{n[f(F^{-1}(p))]^{2}}}\right)}$

where f is the density function, and F −1 is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal 1946 paper. [8] Further research led in the 1960s to the Bahadur representation, which provides information about the error bounds.
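For the uniform case, the exact finite-sample variance of U_(⌈np⌉) is available from its Beta(k, n+1−k) distribution, so the asymptotic variance p(1−p)/n can be checked directly (a sketch with assumed sample sizes and tolerances):

```python
import math

def exact_var(n, p):
    """Exact variance of U_(k) ~ Beta(k, n+1-k) with k = ceil(n p)."""
    k = math.ceil(n * p)
    return k * (n - k + 1) / ((n + 1)**2 * (n + 2))

p = 0.5
for n in [11, 101, 1001]:
    asymptotic = p * (1 - p) / n
    # the relative error shrinks as n grows
    assert abs(exact_var(n, p) / asymptotic - 1) < 5 / n
```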

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.

#### Proof

It can be shown that

${\displaystyle B(k,n+1-k)\ {\stackrel {\mathrm {d} }{=}}\ {\frac {X}{X+Y}},}$

where

${\displaystyle X=\sum _{i=1}^{k}Z_{i},\quad Y=\sum _{i=k+1}^{n+1}Z_{i},}$

with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.

## Application: Non-parametric density estimation

Moments of the distribution of the first order statistic can be used to develop a non-parametric density estimator. [9] Suppose we want to estimate the density ${\displaystyle f_{X}}$ at the point ${\displaystyle x^{*}}$. Consider the random variables ${\displaystyle Y_{i}=|X_{i}-x^{*}|}$, which are i.i.d. with density function ${\displaystyle g_{Y}(y)=f_{X}(y+x^{*})+f_{X}(x^{*}-y)}$. In particular, ${\displaystyle f_{X}(x^{*})={\frac {g_{Y}(0)}{2}}}$.

The expected value of the first order statistic ${\displaystyle Y_{(1)}}$ given ${\displaystyle N}$ total samples is

${\displaystyle E(Y_{(1)})={\frac {1}{(N+1)g(0)}}+{\frac {1}{(N+1)(N+2)}}\int _{0}^{1}Q''(z)\delta _{N+1}(z)\,dz}$

where ${\displaystyle Q}$ is the quantile function associated with the distribution ${\displaystyle g_{Y}}$, and ${\displaystyle \delta _{N}(z)=(N+1)(1-z)^{N}}$. This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm:

  Input: ${\displaystyle N}$ samples. ${\displaystyle \{x_{\ell }\}_{\ell =1}^{M}}$ points of density evaluation. Tuning parameter ${\displaystyle a\in (0,1)}$ (usually 1/3).
  Output: ${\displaystyle \{{\hat {f}}_{\ell }\}_{\ell =1}^{M}}$ estimated density at the points of evaluation.

   1: Set ${\displaystyle m_{N}=\operatorname {round} (N^{1-a})}$
   2: Set ${\displaystyle s_{N}={\frac {N}{m_{N}}}}$
   3: Create an ${\displaystyle s_{N}\times m_{N}}$ matrix ${\displaystyle M_{ij}}$ which holds ${\displaystyle m_{N}}$ subsets with ${\displaystyle s_{N}}$ samples each.
   4: Create a vector ${\displaystyle {\hat {f}}}$ to hold the density evaluations.
   5: for ${\displaystyle \ell =1\to M}$ do
   6:     for ${\displaystyle k=1\to m_{N}}$ do
   7:         Find the nearest distance ${\displaystyle d_{\ell k}}$ to the current point ${\displaystyle x_{\ell }}$ within the ${\displaystyle k}$th subset
   8:     end for
   9:     Compute the subset average of distances to ${\displaystyle x_{\ell }}$: ${\displaystyle d_{\ell }=\sum _{k=1}^{m_{N}}{\frac {d_{\ell k}}{m_{N}}}}$
  10:     Compute the density estimate at ${\displaystyle x_{\ell }}$: ${\displaystyle {\hat {f}}_{\ell }={\frac {1}{2(1+s_{N})d_{\ell }}}}$
  11: end for
  12: return ${\displaystyle {\hat {f}}}$

In contrast to the bandwidth/length-based tuning parameters for histogram and kernel-based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel-based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR-based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true. [10]
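The algorithm above can be sketched in Python. This is a simplified illustration under assumed parameters: it omits the jackknife refinement and discards any samples left over after forming the m_N subsets.

```python
import random

def mld_density(samples, points, a=1/3):
    """Subset-based density estimate following the numbered steps above (sketch)."""
    N = len(samples)
    m = round(N ** (1 - a))                       # number of subsets, m_N
    s = N // m                                    # samples per subset, s_N
    subsets = [samples[k * s:(k + 1) * s] for k in range(m)]
    estimates = []
    for x in points:
        # subset-averaged nearest distance to the evaluation point x
        d = sum(min(abs(xi - x) for xi in sub) for sub in subsets) / m
        estimates.append(1.0 / (2 * (1 + s) * d))
    return estimates

random.seed(2)
data = [random.random() for _ in range(10_000)]   # Uniform(0,1): true density is 1
est = mld_density(data, [0.3, 0.5, 0.7])
assert all(abs(fhat - 1.0) < 0.25 for fhat in est)
```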

## Dealing with discrete variables

Suppose ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ are i.i.d. random variables from a discrete distribution with cumulative distribution function ${\displaystyle F(x)}$ and probability mass function ${\displaystyle f(x)}$. To find the probabilities of the ${\displaystyle k^{\text{th}}}$ order statistic, three values are first needed, namely

${\displaystyle p_{1}=P(X<x)=F(x)-f(x),\qquad p_{2}=P(X=x)=f(x),\qquad p_{3}=P(X>x)=1-F(x).}$

The cumulative distribution function of the ${\displaystyle k^{\text{th}}}$ order statistic can be computed by noting that

{\displaystyle {\begin{aligned}P(X_{(k)}\leq x)&=P({\text{there are at least }}k{\text{ observations less than or equal to }}x),\\&=P({\text{there are at most }}n-k{\text{ observations greater than }}x),\\&=\sum _{j=0}^{n-k}{n \choose j}p_{3}^{j}(p_{1}+p_{2})^{n-j}.\end{aligned}}}

Similarly, ${\displaystyle P(X_{(k)}<x)}$ is given by

{\displaystyle {\begin{aligned}P(X_{(k)}<x)&=P({\text{there are at least }}k{\text{ observations less than }}x),\\&=P({\text{there are at most }}n-k{\text{ observations greater than or equal to }}x),\\&=\sum _{j=0}^{n-k}{n \choose j}(p_{2}+p_{3})^{j}p_{1}^{n-j}.\end{aligned}}}

Note that the probability mass function of ${\displaystyle X_{(k)}}$ is just the difference of these values, that is to say

{\displaystyle {\begin{aligned}P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}<x)\\&=\sum _{j=0}^{n-k}{n \choose j}\left[p_{3}^{j}(p_{1}+p_{2})^{n-j}-(p_{2}+p_{3})^{j}p_{1}^{n-j}\right].\end{aligned}}}

## Computing order statistics

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
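As an illustration of the selection problem, here is a minimal quickselect sketch (randomized pivot, expected linear time; an illustrative implementation rather than a production algorithm):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-indexed) of a in expected O(n) time."""
    pivot = random.choice(a)
    lt = [x for x in a if x < pivot]    # strictly smaller than the pivot
    eq = [x for x in a if x == pivot]   # equal to the pivot
    gt = [x for x in a if x > pivot]    # strictly larger than the pivot
    if k <= len(lt):
        return quickselect(lt, k)
    if k <= len(lt) + len(eq):
        return pivot
    return quickselect(gt, k - len(lt) - len(eq))

sample = [6, 9, 3, 8]
assert quickselect(sample, 1) == 3   # x_(1), the minimum
assert quickselect(sample, 4) == 9   # x_(4), the maximum
```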


## References

1. David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN   9780471722168.
2. Casella, George; Berger, Roger. Statistical Inference (2nd ed.). Cengage Learning. p. 229. ISBN   9788131503942.
3. Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN   9780387981444 .
4. Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m’th order statistic from a random sample of size n from the uniform distribution (on (0,1)).
5. David, H. A.; Nagaraja, H. N. (2003), "Chapter 2. Basic Distribution Theory", Order Statistics, Wiley Series in Probability and Statistics, p. 9, doi:10.1002/0471722162.ch2, ISBN   9780471722168
6. Rényi, Alfréd (1953). "On the theory of order statistics". Acta Mathematica Hungarica . 4 (3): 191–231. doi:.
7. Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
8. Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics . 17 (4): 377–408. doi:. Retrieved February 26, 2015.
9. Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics - Theory and Methods. 46 (1): 148–164. arXiv:. doi:10.1080/03610926.2014.988260.
10. David, H. A.; Nagaraja, H. N. (2003), "Chapter 3. Expected Values and Moments", Order Statistics, Wiley Series in Probability and Statistics, p. 34, doi:10.1002/0471722162.ch3, ISBN   9780471722168