Order statistic

[Figure: Probability density functions of the order statistics for a sample of size n = 5 from an exponential distribution with unit scale parameter.]

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. [1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.


Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 7,

the order statistics would be denoted

$x_{(1)} = 3, \quad x_{(2)} = 6, \quad x_{(3)} = 7, \quad x_{(4)} = 9,$

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

$X_{(1)} = \min\{X_1, \ldots, X_n\},$

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

$X_{(n)} = \max\{X_1, \ldots, X_n\}.$

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

$\operatorname{Range}\{X_1, \ldots, X_n\} = X_{(n)} - X_{(1)}.$

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is $X_{(m+1)}$ and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, $X_{(m)}$ and $X_{(m+1)}$, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
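
For illustration, the following short Python sketch (with illustrative variable names) computes these quantities for the sample above:

    # Order statistics of the sample 6, 9, 3, 7, obtained by sorting.
    sample = [6, 9, 3, 7]
    order_stats = sorted(sample)          # [3, 6, 7, 9]  ->  x_(1), ..., x_(4)

    x_min = order_stats[0]                # first order statistic: 3
    x_max = order_stats[-1]               # largest order statistic: 9
    sample_range = x_max - x_min          # 9 - 3 = 6

    # n = 4 is even, so the sample median is the average of the two middle
    # order statistics x_(2) and x_(3), and is therefore not itself an order statistic.
    median = (order_stats[1] + order_stats[2]) / 2   # (6 + 7) / 2 = 6.5

    print(order_stats, x_min, x_max, sample_range, median)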

Probabilistic analysis

Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.

When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution $F_X(x)$, the order statistics for that sample have cumulative distributions as follows [2] (where r specifies which order statistic):

$F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [F_X(x)]^{j} [1 - F_X(x)]^{n-j};$

the corresponding probability density function may be derived from this result, and is found to be

$f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\, f_X(x)\, [F_X(x)]^{r-1} [1 - F_X(x)]^{n-r}.$

Moreover, there are two special cases, which have CDFs that are easy to compute:

$F_{X_{(n)}}(x) = P\big(\max\{X_1, \ldots, X_n\} \le x\big) = [F_X(x)]^{n},$

$F_{X_{(1)}}(x) = P\big(\min\{X_1, \ldots, X_n\} \le x\big) = 1 - [1 - F_X(x)]^{n},$

which can be derived by careful consideration of probabilities.
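
These formulas are easy to check numerically. The following illustrative Python sketch (with arbitrary parameter choices) compares the binomial-sum CDF of the rth order statistic against a Monte Carlo estimate for an exponential sample:

    import math
    import numpy as np

    n, r, x = 5, 3, 1.0                     # sample size, which order statistic, evaluation point
    lam = 1.0                               # exponential rate parameter

    F = 1.0 - math.exp(-lam * x)            # CDF of the underlying exponential at x

    # Closed form: P(X_(r) <= x) = sum_{j=r}^{n} C(n, j) F^j (1 - F)^(n - j)
    cdf_formula = sum(math.comb(n, j) * F**j * (1 - F)**(n - j) for j in range(r, n + 1))

    # Monte Carlo estimate of the same probability.
    rng = np.random.default_rng(0)
    samples = rng.exponential(scale=1.0 / lam, size=(200_000, n))
    rth_order_stat = np.sort(samples, axis=1)[:, r - 1]
    cdf_simulated = np.mean(rth_order_stat <= x)

    print(cdf_formula, cdf_simulated)       # the two numbers should agree closely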

Probability distributions of order statistics

Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that $X_1, X_2, \ldots, X_n$ is a random sample drawn from a continuous distribution with cdf $F_X$. Denoting $U_i = F_X(X_i)$ we obtain the corresponding random sample $U_1, \ldots, U_n$ from the standard uniform distribution. Note that the order statistics also satisfy $U_{(i)} = F_X(X_{(i)})$.

The probability density function of the order statistic $U_{(k)}$ is equal to [3]

$f_{U_{(k)}}(u) = \frac{n!}{(k-1)!\,(n-k)!}\, u^{k-1} (1-u)^{n-k},$

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable, [3] [4]

$U_{(k)} \sim \operatorname{Beta}(k,\; n+1-k).$

The proof of these statements is as follows. For $U_{(k)}$ to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already $O(du^2)$, so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals $(0, u)$, $(u, u + du)$ and $(u + du, 1)$ respectively. This equals (refer to multinomial distribution for details)

$\frac{n!}{(k-1)!\,1!\,(n-k)!}\, u^{k-1} \cdot du \cdot (1 - u - du)^{n-k},$

and the result follows.

The mean of this distribution is k / (n + 1).
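
As an illustrative check (using scipy's beta distribution for the reference moments, with arbitrary n and k), the following sketch compares simulated moments of $U_{(k)}$ with those of $\operatorname{Beta}(k, n+1-k)$:

    import numpy as np
    from scipy.stats import beta

    n, k = 5, 2
    rng = np.random.default_rng(1)

    # Simulate many uniform samples of size n and extract the kth order statistic.
    u = rng.uniform(size=(200_000, n))
    u_k = np.sort(u, axis=1)[:, k - 1]

    # Compare empirical moments with those of Beta(k, n + 1 - k).
    print("empirical mean/var:", u_k.mean(), u_k.var())
    print("Beta(k, n+1-k) mean/var:", beta.mean(k, n + 1 - k), beta.var(k, n + 1 - k))
    print("k / (n + 1) =", k / (n + 1))     # mean stated in the text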

The joint distribution of the order statistics of the uniform distribution

Similarly, for i < j, the joint probability density function of the two order statistics U(i) < U(j) can be shown to be

$f_{U_{(i)},\,U_{(j)}}(u, v) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\, u^{i-1}\,(v-u)^{j-i-1}\,(1-v)^{n-j}, \qquad 0 \le u < v \le 1,$

which is (up to terms of higher order than $du\,dv$) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals $(0, u)$, $(u, u + du)$, $(u + du, v)$, $(v, v + dv)$, $(v + dv, 1)$, respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

$f_{U_{(1)},\,U_{(2)},\,\ldots,\,U_{(n)}}(u_1, u_2, \ldots, u_n) = n!, \qquad 0 \le u_1 \le u_2 \le \cdots \le u_n \le 1.$

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region $0 \le u_1 \le \cdots \le u_n \le 1$. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n with a sum not exceeding $0 < s < n/2$ is bounded above by $\sqrt{2sn}$, which is thus invariant on the set of all $(s, n)$ with constant product $sn$.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of $U_{(n)} - U_{(1)}$, i.e. maximum minus the minimum. More generally, for $n \ge j > i \ge 1$, $U_{(j)} - U_{(i)}$ also has a beta distribution:

$U_{(j)} - U_{(i)} \sim \operatorname{Beta}(j - i,\; n - (j - i) + 1).$

From these formulas we can derive the covariance between two order statistics:

$\operatorname{Cov}\big(U_{(i)}, U_{(j)}\big) = \frac{i\,(n - j + 1)}{(n+1)^2\,(n+2)}, \qquad i \le j.$

The formula follows from noting that

$\operatorname{Var}\big(U_{(j)} - U_{(i)}\big) = \operatorname{Var}\big(U_{(i)}\big) + \operatorname{Var}\big(U_{(j)}\big) - 2\operatorname{Cov}\big(U_{(i)}, U_{(j)}\big)$

and comparing that with $\operatorname{Var}(U) = \frac{(j-i)\,(n - (j - i) + 1)}{(n+1)^2 (n+2)}$, where $U \sim \operatorname{Beta}(j - i,\; n - (j - i) + 1)$, which is the actual distribution of the difference.
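
Both facts can be checked by simulation; the sketch below (with arbitrary choices of n, i, j) compares the covariance formula and the moments of the difference against Monte Carlo estimates:

    import numpy as np
    from scipy.stats import beta

    n, i, j = 6, 2, 5
    rng = np.random.default_rng(2)

    u = np.sort(rng.uniform(size=(300_000, n)), axis=1)
    u_i, u_j = u[:, i - 1], u[:, j - 1]

    # Covariance: i (n - j + 1) / ((n + 1)^2 (n + 2))
    cov_formula = i * (n - j + 1) / ((n + 1) ** 2 * (n + 2))
    print("cov formula vs simulation:", cov_formula, np.cov(u_i, u_j)[0, 1])

    # Difference: U_(j) - U_(i) ~ Beta(j - i, n - (j - i) + 1)
    diff = u_j - u_i
    a, b = j - i, n - (j - i) + 1
    print("difference mean:", diff.mean(), "Beta mean:", beta.mean(a, b))
    print("difference var: ", diff.var(),  "Beta var: ", beta.var(a, b))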

Order statistics sampled from an exponential distribution

For a random sample of size n from an exponential distribution with parameter λ, the order statistics X(i) for i = 1, 2, 3, ..., n each have distribution

$X_{(i)} \;\stackrel{d}{=}\; \frac{1}{\lambda}\left( \frac{Z_1}{n} + \frac{Z_2}{n-1} + \cdots + \frac{Z_i}{n-i+1} \right),$

where the Zj are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi. [5] [6]
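
The representation can be illustrated with a short simulation sketch (arbitrary parameter choices) that compares the directly simulated ith order statistic with the weighted partial sum of standard exponentials:

    import numpy as np

    n, i, lam = 5, 3, 2.0
    rng = np.random.default_rng(3)
    reps = 200_000

    # Direct simulation of the ith order statistic of an exponential(lambda) sample.
    x = rng.exponential(scale=1.0 / lam, size=(reps, n))
    x_i = np.sort(x, axis=1)[:, i - 1]

    # Renyi representation: (1/lambda) * sum_{j=1}^{i} Z_j / (n - j + 1), with Z_j ~ Exp(1).
    z = rng.exponential(scale=1.0, size=(reps, i))
    weights = 1.0 / (n - np.arange(1, i + 1) + 1)
    renyi = (z * weights).sum(axis=1) / lam

    # Means and variances of the two constructions should agree.
    print(x_i.mean(), renyi.mean())
    print(x_i.var(), renyi.var())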

Order statistics sampled from an Erlang distribution

The Laplace transforms of order statistics of samples from an Erlang distribution may be obtained via a path counting method. [7]

The joint distribution of the order statistics of an absolutely continuous distribution

If FX is absolutely continuous, it has a density such that $dF_X(x) = f_X(x)\,dx$, and we can use the substitutions

$u = F_X(x)$

and

$du = f_X(x)\,dx$

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

$f_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, [F_X(x)]^{k-1}\, [1 - F_X(x)]^{n-k}\, f_X(x);$

$f_{X_{(j)},\,X_{(k)}}(x, y) = \frac{n!}{(j-1)!\,(k-j-1)!\,(n-k)!}\, [F_X(x)]^{j-1}\, [F_X(y) - F_X(x)]^{k-j-1}\, [1 - F_X(y)]^{n-k}\, f_X(x)\, f_X(y),$

where $x < y$ (and $j < k$); and

$f_{X_{(1)},\,\ldots,\,X_{(n)}}(x_1, \ldots, x_n) = n!\, f_X(x_1) \cdots f_X(x_n),$

where $x_1 < x_2 < \cdots < x_n$.
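
For illustration, the single-order-statistic density can be evaluated for a standard normal sample and checked against simulation; the following sketch (arbitrary choices of n and k) uses scipy's normal CDF and PDF:

    import math
    import numpy as np
    from scipy.stats import norm

    n, k = 5, 4
    coef = math.factorial(n) / (math.factorial(k - 1) * math.factorial(n - k))

    def order_stat_pdf(x):
        """Density of the kth order statistic of n iid standard normal variables."""
        F, f = norm.cdf(x), norm.pdf(x)
        return coef * F ** (k - 1) * (1 - F) ** (n - k) * f

    # Check that the formula integrates (numerically) to 1 and matches a simulated mean.
    grid = np.linspace(-6, 6, 4001)
    dx = grid[1] - grid[0]
    pdf_vals = order_stat_pdf(grid)
    print("integral of density:", pdf_vals.sum() * dx)               # ~ 1.0
    print("mean from formula:  ", (grid * pdf_vals).sum() * dx)

    rng = np.random.default_rng(4)
    sims = np.sort(rng.standard_normal((200_000, n)), axis=1)[:, k - 1]
    print("mean from simulation:", sims.mean())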

Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median m is

$P\big(X_{(3)} < m < X_{(4)}\big) = \binom{6}{3} \left(\tfrac{1}{2}\right)^{6} = \tfrac{5}{16} \approx 31\%.$

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

$P\big(X_{(2)} < m < X_{(5)}\big) = \left[\binom{6}{2} + \binom{6}{3} + \binom{6}{4}\right] \left(\tfrac{1}{2}\right)^{6} = \tfrac{25}{32} \approx 78\%.$

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
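
These coverage probabilities reduce to binomial sums, since for a continuous distribution each observation falls below the population median independently with probability 1/2. A small illustrative helper (the function name is ours) computes them:

    from math import comb

    def median_coverage(n: int, j: int, k: int) -> float:
        """P(X_(j) < m < X_(k)) for the population median m of a continuous
        distribution, given an iid sample of size n: exactly i observations fall
        below m for some j <= i <= k - 1, each case with probability C(n, i) / 2^n."""
        return sum(comb(n, i) for i in range(j, k)) / 2 ** n

    print(median_coverage(6, 3, 4))   # 0.3125  (3rd and 4th order statistics)
    print(median_coverage(6, 2, 5))   # 0.78125 (2nd and 5th)
    print(median_coverage(6, 1, 6))   # 0.96875 = 31/32 (minimum and maximum)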

Large sample sizes

For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

$U_{(\lceil np \rceil)} \sim AN\!\left(p,\; \frac{p(1-p)}{n}\right).$

For a general distribution F with a continuous non-zero density at F −1(p), a similar asymptotic normality applies:

$X_{(\lceil np \rceil)} \sim AN\!\left(F^{-1}(p),\; \frac{p(1-p)}{n\,[f(F^{-1}(p))]^{2}}\right),$

where f is the density function, and F −1 is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946. [8] Further research led in the 1960s to the Bahadur representation which provides information about the error bounds. The convergence to normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence. [9]

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.
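
The comparison can be made concrete with a short simulation sketch (arbitrary sample sizes), consistent with the asymptotic variances $\sigma^2/n$ for the mean and $\frac{p(1-p)}{n\,[f(F^{-1}(p))]^2}$ at $p = 1/2$ for the median:

    import numpy as np

    rng = np.random.default_rng(5)
    n, reps = 200, 20_000

    for name, sampler in [("normal",  lambda size: rng.standard_normal(size)),
                          ("laplace", lambda size: rng.laplace(size=size))]:
        data = sampler((reps, n))
        var_mean = data.mean(axis=1).var()
        var_median = np.median(data, axis=1).var()
        print(f"{name}: var(mean) = {var_mean:.5f}, var(median) = {var_median:.5f}")

    # Asymptotic variances at p = 1/2 for comparison:
    #   normal (sigma = 1):        mean 1/n;   median pi/(2n) ~ 1.571/n
    #   laplace (scale b = 1):     mean 2/n;   median 1/n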

Proof

It can be shown that

$U_{(\lceil np \rceil)} \;\stackrel{d}{=}\; \frac{X}{X + Y},$

where

$X = \sum_{i=1}^{\lceil np \rceil} Z_i, \qquad Y = \sum_{i=\lceil np \rceil + 1}^{n+1} Z_i,$

with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.

Application: Non-parametric density estimation

Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator. [10] Suppose we want to estimate the density $f_X$ at the point $x^*$. Consider the random variables $Y_i = |X_i - x^*|$, which are i.i.d. with distribution function $G_Y$ and density $g_Y(y) = f_X(x^* + y) + f_X(x^* - y)$ for $y \ge 0$. In particular, $g_Y(0) = 2 f_X(x^*)$.

The expected value of the first order statistic $Y_{(1)}$ based on a sample of $N$ total observations yields

$E\big(Y_{(1)}\big) \approx \frac{Q'(0)}{N+1} = \frac{1}{(N+1)\, g_Y(0)} = \frac{1}{2(N+1)\, f_X(x^*)},$

where $Q = G_Y^{-1}$ is the quantile function associated with the distribution of Y, and $Q'(z) = 1/g_Y(Q(z))$. This equation in combination with a jackknifing technique becomes the basis for the following density estimation algorithm:

  Input: A sample of N observations; L points of density evaluation $x_1, \ldots, x_L$; tuning parameter $a$ (usually 1/3).
  Output: $\hat f_1, \ldots, \hat f_L$, the estimated density at the points of evaluation.
   1: Set $m = \operatorname{round}(N^{1-a})$ (the number of subsets)
   2: Set $s = N/m$ (the number of observations per subset)
   3: Create an $s \times m$ matrix $M$ which holds $m$ subsets with $s$ observations each
   4: Create a vector $\hat f$ to hold the density evaluations
   5: for $\ell = 1 \to L$ do
   6:     for $k = 1 \to m$ do
   7:         Find the nearest distance $d_{\ell k}$ to the current point $x_\ell$ within the $k$th subset
   8:     end for
   9:     Compute the subset average of distances to $x_\ell$: $\bar d_\ell = \sum_{k=1}^{m} d_{\ell k} / m$
  10:     Compute the density estimate at $x_\ell$: $\hat f_\ell = \frac{1}{2(s+1)\,\bar d_\ell}$
  11: end for
  12: return $\hat f$

In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true. [11]
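
A minimal Python sketch of this subset-averaging estimator, based on the algorithm as reconstructed above (the function name and subset-splitting details are illustrative, not the authors' reference implementation):

    import numpy as np

    def order_stat_density(sample, eval_points, a=1/3, rng=None):
        """Estimate a density at eval_points from the average nearest distance
        within m ~ N^(1-a) random subsets of roughly s = N/m observations each."""
        rng = np.random.default_rng() if rng is None else rng
        x = rng.permutation(np.asarray(sample, dtype=float))
        N = len(x)
        m = max(1, round(N ** (1 - a)))          # number of subsets
        subsets = np.array_split(x, m)           # subsets of roughly s = N/m points
        s = N / m

        estimates = np.empty(len(eval_points))
        for ell, pt in enumerate(eval_points):
            # Average, over subsets, of the distance from pt to its nearest observation.
            d = np.mean([np.min(np.abs(sub - pt)) for sub in subsets])
            estimates[ell] = 1.0 / (2.0 * (s + 1.0) * d)
        return estimates

    # Example: heavy-tailed Cauchy data, where moment-based bandwidth rules break down.
    rng = np.random.default_rng(6)
    data = rng.standard_cauchy(100_000)
    pts = np.array([-2.0, 0.0, 2.0])
    print(order_stat_density(data, pts, rng=rng))
    print(1.0 / (np.pi * (1.0 + pts ** 2)))      # true Cauchy density for comparison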

Dealing with discrete variables

Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. random variables from a discrete distribution with cumulative distribution function $F(x)$ and probability mass function $f(x)$. To find the probabilities of the order statistics, three values are first needed, namely

$p_1 = P(X < x) = F(x) - f(x), \qquad p_2 = P(X = x) = f(x), \qquad p_3 = P(X > x) = 1 - F(x).$

The cumulative distribution function of the order statistic $X_{(k)}$ can be computed by noting that

$P\big(X_{(k)} \le x\big) = P\big(\text{at least } k \text{ of the } X_i \text{ are} \le x\big) = \sum_{j=k}^{n} \binom{n}{j} (p_1 + p_2)^{j}\, p_3^{\,n-j}.$

Similarly, $P\big(X_{(k)} < x\big)$ is given by

$P\big(X_{(k)} < x\big) = \sum_{j=k}^{n} \binom{n}{j}\, p_1^{\,j}\, (p_2 + p_3)^{n-j}.$

Note that the probability mass function of $X_{(k)}$ is just the difference of these values, that is to say

$P\big(X_{(k)} = x\big) = \sum_{j=k}^{n} \binom{n}{j}\left[ (p_1 + p_2)^{j}\, p_3^{\,n-j} - p_1^{\,j}\, (p_2 + p_3)^{n-j} \right].$
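
For illustration, the following sketch applies these formulas to fair die rolls and, for k = n, checks the result against the well-known pmf of the maximum of n rolls:

    from math import comb

    def order_stat_pmf(k, n, x, cdf, pmf):
        """P(X_(k) = x) for iid discrete samples, using p1 = P(X < x), p2 = P(X = x), p3 = P(X > x)."""
        p1, p2 = cdf(x) - pmf(x), pmf(x)
        p3 = 1.0 - cdf(x)
        return sum(comb(n, j) * ((p1 + p2) ** j * p3 ** (n - j) - p1 ** j * (p2 + p3) ** (n - j))
                   for j in range(k, n + 1))

    # Fair six-sided die: F(x) = x/6, f(x) = 1/6 for x = 1, ..., 6.
    cdf = lambda x: x / 6
    pmf = lambda x: 1 / 6
    n = 3

    # For k = n the order statistic is the maximum, whose pmf is (x/6)^n - ((x-1)/6)^n.
    for x in range(1, 7):
        print(x, order_stat_pmf(n, n, x, cdf, pmf), (x / 6) ** n - ((x - 1) / 6) ** n)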

Computing order statistics

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
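
For instance, in Python the kth smallest element can be obtained without fully sorting the list (an illustrative sketch):

    import heapq
    import random

    data = [random.random() for _ in range(10_000)]
    k = 42

    # kth smallest via a partial selection (heapq keeps only the k smallest elements).
    kth_heap = heapq.nsmallest(k, data)[-1]

    # Equivalent result from a full O(n log n) sort, as used when all order statistics are needed.
    kth_sorted = sorted(data)[k - 1]

    print(kth_heap == kth_sorted)   # True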

Applications

Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, and actuarial risk. There is an extensive literature devoted to studies on applications of order statistics in these fields.

For example, a recent application in actuarial risk can be found in [12], where some weighted premium principles in terms of record claims and kth record claims are provided.


References

  1. David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN   9780471722168.
  2. Casella, George; Berger, Roger (2002). Statistical Inference (2nd ed.). Cengage Learning. p. 229. ISBN   9788131503942.
  3. Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444.
  4. Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m 'th order statistic from a random sample of size n from the uniform distribution (on (0,1)).
  5. David, H. A.; Nagaraja, H. N. (2003), "Chapter 2. Basic Distribution Theory", Order Statistics, Wiley Series in Probability and Statistics, p. 9, doi:10.1002/0471722162.ch2, ISBN   9780471722168
  6. Rényi, Alfréd (1953). "On the theory of order statistics". Acta Mathematica Hungarica . 4 (3): 191–231. doi: 10.1007/BF02127580 .
  7. Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
  8. Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics . 17 (4): 377–408. doi: 10.1214/aoms/1177730881 . Retrieved February 26, 2015.
  9. M. Cardone, A. Dytso and C. Rush, "Entropic Central Limit Theorem for Order Statistics," in IEEE Transactions on Information Theory, vol. 69, no. 4, pp. 2193-2205, April 2023, doi: 10.1109/TIT.2022.3219344.
  10. Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics - Theory and Methods. 46 (1): 148–164. arXiv: 1412.2851 . doi:10.1080/03610926.2014.988260. S2CID   14334678.
  11. David, H. A.; Nagaraja, H. N. (2003), "Chapter 3. Expected Values and Moments", Order Statistics, Wiley Series in Probability and Statistics, p. 34, doi:10.1002/0471722162.ch3, ISBN   9780471722168
  12. Castaño-Martínez, A.; López-Blázquez, F.; Pigueiras, G.; Sordo, M.A. (2020). "A method for constructing and interpreting some weighted premium principles". ASTIN Bulletin. 50 (3): 1037–1064. doi:10.1017/asb.2020.15.