In statistics and probability, **quantiles** are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. There is one fewer quantile than the number of groups created. Common quantiles have special names, such as quartiles (four groups), deciles (ten groups), and percentiles (100 groups). The groups created are termed halves, thirds, quarters, etc., though sometimes the terms for the quantile are used for the groups created, rather than for the cut points.


*q*-**quantiles** are values that partition a finite set of values into *q* subsets of (nearly) equal sizes. There are *q* − 1 of the *q*-quantiles, one for each integer *k* satisfying 0 < *k* < *q*. In some cases the value of a quantile may not be uniquely determined, as can be the case for the median (2-quantile) of a uniform probability distribution on a set of even size. Quantiles can also be applied to continuous distributions, providing a way to generalize rank statistics to continuous variables (see percentile rank). When the cumulative distribution function of a random variable is known, the *q*-quantiles are the application of the quantile function (the inverse function of the cumulative distribution function) to the values {1/*q*, 2/*q*, …, (*q* − 1)/*q*}.

Some *q*-quantiles have special names:

- The only 2-quantile is called the median
- The 3-quantiles are called tertiles or terciles → T
- The 4-quantiles are called quartiles → Q; the difference between the upper and lower quartiles is also called the interquartile range, **midspread**, or **middle fifty** → IQR = *Q*_{3} − *Q*_{1}
- The 5-quantiles are called quintiles → QU
- The 6-quantiles are called sextiles → S
- The 7-quantiles are called septiles
- The 8-quantiles are called octiles
- The 10-quantiles are called deciles → D
- The 12-quantiles are called duo-deciles or dodeciles
- The 16-quantiles are called hexadeciles → H
- The 20-quantiles are called ventiles, vigintiles, or demi-deciles → V
- The 100-quantiles are called percentiles → P
- The 1000-quantiles have been called permilles or milliles, but these are rare and largely obsolete^{ [1] }

As in the computation of, for example, the standard deviation, the estimation of a quantile depends upon whether one is operating with a statistical population or with a sample drawn from it. For a population of discrete values, or for a continuous population density, the *k*-th *q*-quantile is the data value where the cumulative distribution function crosses *k*/*q*. That is, *x* is a *k*-th *q*-quantile for a variable *X* if

- Pr[*X* < *x*] ≤ *k*/*q* or, equivalently, Pr[*X* ≥ *x*] ≥ 1 − *k*/*q*

and

- Pr[*X* ≤ *x*] ≥ *k*/*q*.

For a finite population of *N* equally probable values indexed 1, …, *N* from lowest to highest, the *k*-th *q*-quantile of this population can equivalently be computed via the value of *I*_{p} = *Nk*/*q*. If *I*_{p} is not an integer, then round up to the next integer to get the appropriate index; the corresponding data value is the *k*-th *q*-quantile. On the other hand, if *I*_{p} is an integer then any number from the data value at that index to the data value of the next index can be taken as the quantile, and it is conventional (though arbitrary) to take the average of those two values (see Estimating quantiles from a sample).

If, instead of using integers *k* and *q*, the “*p*-quantile” is based on a real number *p* with 0 < *p* < 1 then *p* replaces *k*/*q* in the above formulas. Some software programs (including Microsoft Excel) regard the minimum and maximum as the 0th and 100th percentile, respectively; however, such terminology is an extension beyond traditional statistics definitions.
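A minimal sketch of this rule (`nearest_rank_quantile` is a hypothetical helper written for illustration; the float comparison glosses over exact integer arithmetic):

```python
import math

def nearest_rank_quantile(population, k, q):
    """k-th q-quantile of an ordered finite population via I_p = N*k/q.

    If I_p is not an integer, round up and take the value at that rank;
    if I_p is an integer, average the values at ranks I_p and I_p + 1
    (the conventional, though arbitrary, choice).
    """
    n = len(population)
    i_p = n * k / q
    if i_p != int(i_p):
        return population[math.ceil(i_p) - 1]  # ranks are 1-based
    i = int(i_p)
    return (population[i - 1] + population[i]) / 2

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print([nearest_rank_quantile(data, k, 4) for k in (1, 2, 3)])  # [7, 9.0, 15]
```

Applied to the ten-value population used in the worked examples below, this reproduces the quartiles {7, 9, 15}.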

The following two examples use the Nearest Rank definition of quantile with rounding. For an explanation of this definition, see percentiles.

Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 4-quantiles (the "quartiles") of this dataset?

| Quartile | Calculation | Result |
|---|---|---|
| Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3 |
| First quartile | The rank of the first quartile is 10×(1/4) = 2.5, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7 |
| Second quartile | The rank of the second quartile (same as the median) is 10×(2/4) = 5, which is an integer, while the number of values (10) is an even number, so the average of both the fifth and sixth values is taken, that is (8+10)/2 = 9, though any value from 8 through to 10 could be taken to be the median. | 9 |
| Third quartile | The rank of the third quartile is 10×(3/4) = 7.5, which rounds up to 8. The eighth value in the population is 15. | 15 |
| Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 10. | 20 |

So the first, second and third 4-quantiles (the "quartiles") of the dataset {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20.

Consider an ordered population of 11 data values {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20}. What are the 4-quantiles (the "quartiles") of this dataset?

| Quartile | Calculation | Result |
|---|---|---|
| Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3 |
| First quartile | The first quartile is determined by 11×(1/4) = 2.75, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7 |
| Second quartile | The second quartile value (same as the median) is determined by 11×(2/4) = 5.5, which rounds up to 6. Therefore, 6 is the rank in the population (from least to greatest values) at which approximately 2/4 of the values are less than the value of the second quartile (or median). The sixth value in the population is 9. | 9 |
| Third quartile | The third quartile value is determined by 11×(3/4) = 8.25, which rounds up to 9. The ninth value in the population is 15. | 15 |
| Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 11. | 20 |

So the first, second and third 4-quantiles (the "quartiles") of the dataset {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20.

The asymptotic distribution of the *p*-th sample quantile is well known: it is asymptotically normal around the *p*-th quantile *x*_{p} with variance equal to

- *p*(1 − *p*) / (*N* *f*(*x*_{p})^{2})

where *f*(*x*_{p}) is the value of the distribution density at the *p*-th quantile.^{ [2] } However, this distribution relies on knowledge of the population distribution, which is equivalent to knowledge of the population quantiles, which we are trying to estimate! Modern statistical packages thus rely on a different technique, or a selection of techniques, to estimate the quantiles.
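This asymptotic result can be checked by simulation. The following rough Monte Carlo sketch estimates the variance of the sample median of standard normal draws, for which the density at the median is 1/√(2π):

```python
import math
import random
import statistics

random.seed(0)
p, n, reps = 0.5, 1000, 2000

# Sampling distribution of the sample median of n standard normal draws
medians = [statistics.median(random.gauss(0, 1) for _ in range(n))
           for _ in range(reps)]
empirical_var = statistics.pvariance(medians)

# Asymptotic variance p(1 - p) / (n * f(x_p)^2); for the standard normal
# the density at the median x_p = 0 is 1/sqrt(2*pi), so this is pi/2000.
f_at_median = 1 / math.sqrt(2 * math.pi)
asymptotic_var = p * (1 - p) / (n * f_at_median ** 2)
print(empirical_var, asymptotic_var)  # asymptotic value ≈ 0.00157
```

The two printed values should agree to within a few percent for these simulation sizes.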

Hyndman and Fan compiled a taxonomy of nine algorithms^{ [3] } used by various software packages. All methods compute *Q*_{p}, the estimate for the *p*-quantile (the *k*-th *q*-quantile, where *p* = *k*/*q*) from a sample of size *N*, by computing a real-valued index *h*. When *h* is an integer, the *h*-th smallest of the *N* values, *x*_{h}, is the quantile estimate. Otherwise a rounding or interpolation scheme is used to compute the quantile estimate from *h*, *x*_{⌊h⌋}, and *x*_{⌈h⌉}. (For notation, see floor and ceiling functions.)

The first three are piecewise constant, changing abruptly at each data point, while the last six use linear interpolation between data points and differ only in how the index *h*, which selects the point along the piecewise linear interpolation curve, is computed.

Mathematica,^{ [4] } Matlab,^{ [5] } R ^{ [6] } and GNU Octave ^{ [7] } programming languages support all nine sample quantile methods. SAS includes five sample quantile methods, SciPy ^{ [8] } and Maple ^{ [9] } both include eight, EViews ^{ [10] } includes the six piecewise linear functions, Stata ^{ [11] } includes two, Python ^{ [12] } includes two, and Microsoft Excel includes two. Mathematica and SciPy support arbitrary parameters for methods which allow for other, non-standard, methods.

The estimate types and interpolation schemes used include:

| Type | h | Q_{p} | Notes |
|---|---|---|---|
| R‑1, SAS‑3, Maple‑1 | Np + 1/2 | x_{⌈h − 1/2⌉} | Inverse of empirical distribution function. |
| R‑2, SAS‑5, Maple‑2, Stata | Np + 1/2 | (x_{⌈h − 1/2⌉} + x_{⌊h + 1/2⌋}) / 2 | The same as R‑1, but with averaging at discontinuities. |
| R‑3, SAS‑2 | Np | x_{⌊h⌉} | The observation numbered closest to Np. Here, ⌊h⌉ indicates rounding to the nearest integer, choosing the even integer in the case of a tie. |
| R‑4, SAS‑1, SciPy‑(0,1), Maple‑3 | Np | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | Linear interpolation of the empirical distribution function. |
| R‑5, SciPy‑(1/2,1/2), Maple‑4 | Np + 1/2 | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | Piecewise linear function where the knots are the values midway through the steps of the empirical distribution function. |
| R‑6, Excel, Python, SAS‑4, SciPy‑(0,0), Maple‑5, Stata‑altdef | (N + 1)p | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | Linear interpolation of the expectations for the order statistics for the uniform distribution on [0,1]. That is, it is the linear interpolation between points (p_{h}, x_{h}), where p_{h} = h/(N+1) is the probability that the last of (N+1) randomly drawn values will not exceed the h-th smallest of the first N randomly drawn values. |
| R‑7, Excel, Python, SciPy‑(1,1), Maple‑6, NumPy, Julia | (N − 1)p + 1 | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | Linear interpolation of the modes for the order statistics for the uniform distribution on [0,1]. |
| R‑8, SciPy‑(1/3,1/3), Maple‑7 | (N + 1/3)p + 1/3 | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | Linear interpolation of the approximate medians for order statistics. |
| R‑9, SciPy‑(3/8,3/8), Maple‑8 | (N + 1/4)p + 3/8 | x_{⌊h⌋} + (h − ⌊h⌋) (x_{⌊h⌋ + 1} − x_{⌊h⌋}) | The resulting quantile estimates are approximately unbiased for the expected order statistics if x is normally distributed. |

Notes:

- R‑1 through R‑3 are piecewise constant, with discontinuities.
- R‑4 and following are piecewise linear, without discontinuities, but differ in how h is computed.
- R‑3 and R‑4 are not symmetric in that they do not give *h* = (*N* + 1)/2 when *p* = 1/2.
- Excel's PERCENTILE.EXC and Python's default "exclusive" method are equivalent to R‑6.
- Excel's PERCENTILE and PERCENTILE.INC and Python's optional "inclusive" method are equivalent to R‑7. This is R's default method.
- Packages differ in how they estimate quantiles beyond the lowest and highest values in the sample, i.e. *p* < 1/*N* and *p* > (*N* − 1)/*N*. Choices include returning an error value, computing linear extrapolation, or assuming a constant value.

Of the techniques, Hyndman and Fan recommend R-8, but R-7 has become the standard default technique in most statistical software packages.^{ [13] }
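As an illustration, the R‑7 index formula can be checked against NumPy, whose default `np.quantile` method corresponds to R‑7:

```python
import numpy as np

data = np.array([3, 6, 7, 8, 8, 10, 13, 15, 16, 20], dtype=float)
p, n = 0.25, 10

# R-7: h = (N - 1)p + 1 (a 1-based rank), then interpolate linearly
# between the two order statistics surrounding h. (At p = 1, h = N
# exactly, so no interpolation beyond the last value is needed.)
h = (n - 1) * p + 1
lo = int(np.floor(h))
manual = data[lo - 1] + (h - lo) * (data[lo] - data[lo - 1])

print(manual, np.quantile(data, p))  # 7.25 7.25
```

Here h = 9 × 0.25 + 1 = 3.25, so the estimate interpolates one quarter of the way between the third and fourth order statistics, 7 and 8.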

The standard error of a quantile estimate can in general be estimated via the bootstrap. The Maritz–Jarrett method can also be used.^{ [14] }
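A minimal bootstrap sketch (`bootstrap_se` is a hypothetical helper, resampling with replacement):

```python
import random
import statistics

random.seed(42)
sample = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]

def bootstrap_se(data, stat, b=5000):
    """Standard error of `stat` estimated by the nonparametric bootstrap:
    resample the data with replacement b times and take the standard
    deviation of the statistic across the resamples."""
    n = len(data)
    replicates = [stat([random.choice(data) for _ in range(n)])
                  for _ in range(b)]
    return statistics.stdev(replicates)

se = bootstrap_se(sample, statistics.median)
print(round(se, 2))
```

The same helper works for any sample quantile estimator, not just the median.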

Computing approximate quantiles from data arriving from a stream can be done efficiently using compressed data structures. The most popular methods are t-digest^{ [15] } and KLL.^{ [16] } These methods read a stream of values in a continuous fashion and can, at any time, be queried about the approximate value of a specified quantile.

Both algorithms are based on a similar idea: compressing the stream of values by summarizing identical or similar values with a weight. If the stream consists of 100 repetitions of v1 and 100 repetitions of v2, there is no reason to keep a sorted list of 200 elements; it is enough to keep two elements and two counts to be able to recover the quantiles. With more values, these algorithms maintain a trade-off between the number of unique values stored and the precision of the resulting quantiles. Some values may be discarded from the stream and contribute to the weight of a nearby value without changing the quantile results too much. t-digest uses an approach based on k-means clustering to group similar values, whereas KLL uses a more sophisticated "compactor" method that leads to better control of the error bounds.
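The two-element example above can be sketched directly. `weighted_quantile` here is a hypothetical toy helper illustrating the weighted-summary idea only; it is not the actual t-digest or KLL algorithm:

```python
def weighted_quantile(pairs, p):
    """Quantile from (value, weight) summaries: walk the sorted pairs
    until the cumulative weight reaches fraction p of the total."""
    pairs = sorted(pairs)
    total = sum(w for _, w in pairs)
    cum = 0
    for value, w in pairs:
        cum += w
        if cum >= p * total:
            return value
    return pairs[-1][0]

# 100 copies of v1 = 5 and 100 copies of v2 = 9, stored as just two pairs
summary = [(5, 100), (9, 100)]
print(weighted_quantile(summary, 0.25), weighted_quantile(summary, 0.75))  # 5 9
```

Real sketches additionally decide which nearby values to merge so that the summary stays small while keeping the quantile error bounded.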

Both methods belong to the family of *data sketches*, a subset of streaming algorithms with useful properties: t-digest and KLL sketches can be combined. Computing the sketch for a very large vector of values can therefore be split into trivially parallel processes where sketches are computed for partitions of the vector in parallel and merged later.

Standardized test results are commonly reported as a student scoring "in the 80th percentile", for example. This uses an alternative meaning of the word percentile as the *interval* between (in this case) the 80th and the 81st scalar percentile.^{ [17] } This separate meaning of percentile is also used in peer-reviewed scientific research articles.^{ [18] } The meaning used can be derived from its context.

If a distribution is symmetric, then the median equals the mean (so long as the latter exists). But, in general, the median and the mean can differ. For instance, with a random variable that has an exponential distribution, any particular sample of this random variable will have roughly a 63% chance of being less than the mean. This is because the exponential distribution has a long tail for positive values but is zero for negative numbers.
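The roughly 63% figure is 1 − e^{−1}, which a quick simulation confirms:

```python
import math
import random

# For an Exponential(rate lam) variable the mean is 1/lam, and the CDF at
# the mean is 1 - exp(-lam * (1/lam)) = 1 - e**-1, independent of lam.
exact = 1 - math.exp(-1)

random.seed(1)
lam = 2.0
draws = [random.expovariate(lam) for _ in range(100_000)]
below_mean = sum(x < 1 / lam for x in draws) / len(draws)
print(round(exact, 3), round(below_mean, 3))  # 0.632 and a value close to it
```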

Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers. Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are other potential sources for outliers that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics.

Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available.

The quantiles of a random variable are preserved under increasing transformations, in the sense that, for example, if m is the median of a random variable X, then 2^{m} is the median of 2^{X}, unless an arbitrary choice has been made from a range of values to specify a particular quantile. (See quantile estimation, above, for examples of such interpolation.) Quantiles can also be used in cases where only ordinal data are available.
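A quick check of this invariance under the increasing map x ↦ 2^{x}:

```python
import statistics

xs = [0.5, 1.0, 2.0, 3.5, 4.0]      # odd length, so the median is unique
m = statistics.median(xs)
transformed = [2 ** x for x in xs]  # 2**x is strictly increasing
print(statistics.median(transformed), 2 ** m)  # 4.0 4.0
```

An odd-sized sample is used deliberately: with an even size the median involves an arbitrary averaging choice, and averaging does not commute with the transformation.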

- Flashsort – sort by first bucketing by quantile
- Interquartile range
- Descriptive statistics
- Quartile
- Q–Q plot
- Quantile function
- Quantile normalization
- Quantile regression
- Quantization
- Summary statistics
- Tolerance interval ("confidence intervals for the *p*th quantile"^{ [19] })

In descriptive statistics, the **interquartile range** (**IQR**), also called the **midspread**, **middle 50%**, or **H‑spread**, is a measure of statistical dispersion, being equal to the difference between 75th and 25th percentiles, or between upper and lower quartiles, IQR = *Q*_{3} − *Q*_{1}. In other words, the IQR is the first quartile subtracted from the third quartile; these quartiles can be clearly seen on a box plot on the data. It is a trimmed estimator, defined as the 25% trimmed range, and is a commonly used robust measure of scale.

In statistics and probability theory, a **median** is a value separating the higher half from the lower half of a data sample, a population or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic advantage of the median in describing data compared to the mean is that it is not skewed so much by a small proportion of extremely large or small values, and so it may give a better idea of a "typical" value. For example, in understanding statistics like household income or assets, which vary greatly, the mean may be skewed by a small number of extremely high or low values. Median income, for example, may be a better way to suggest what a "typical" income is. Because of this, the median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median will not give an arbitrarily large or small result.

In statistics, a **quartile** is a type of quantile which divides the number of data points into four parts, or *quarters*, of more-or-less equal size. The data must be ordered from smallest to largest to compute quartiles; as such, quartiles are a form of order statistic. The three main quartiles are the first quartile (the 25th percentile), the second quartile (the median, or 50th percentile), and the third quartile (the 75th percentile).

A **statistic** (singular) or **sample statistic** is any quantity computed from values in a sample that is used for a statistical purpose. Statistical purposes include estimating a population parameter, describing a sample, or evaluating a hypothesis. The average of sample values is a statistic. The term statistic is used both for the function and for the value of the function on a given sample. When a statistic is being used for a specific purpose, it may be referred to by a name indicating its purpose.

In descriptive statistics, a **box plot** or **boxplot** is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending from the boxes (*whiskers*) indicating variability outside the upper and lower quartiles, hence the terms **box-and-whisker plot** and **box-and-whisker diagram**. Outliers may be plotted as individual points. Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions of the underlying statistical distribution. The spacings between the different parts of the box indicate the degree of dispersion (spread) and skewness in the data, and show outliers. In addition to the points themselves, they allow one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically. Box plots received their name from the box in the middle.

The **five-number summary** is a set of descriptive statistics that provides information about a dataset. It consists of the five most important sample percentiles:

- the sample minimum
*(smallest observation)* - the lower quartile or
*first quartile* - the median
- the upper quartile or
*third quartile* - the sample maximum

In statistics, the *k*th **order statistic** of a statistical sample is equal to its *k*th-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

In statistics, a **percentile** is a score *below which* a given percentage of scores in its frequency distribution fall or a score *at or below which* a given percentage fall. For example, the 50th percentile is the score below which 50% (exclusive) or at or below which (inclusive) 50% of the scores in the distribution may be found.

In mathematics and statistics, a **piecewise linear**, **PL** or **segmented** function is a real-valued function of a real variable, whose graph is composed of straight-line segments.

In statistical inference, specifically predictive inference, a **prediction interval** is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.

The **normal probability plot** is a graphical technique to identify substantive departures from normality. This includes identifying outliers, skewness, kurtosis, a need for transformations, and mixtures. Normal probability plots are made of raw data, residuals from model fits, and estimated parameters.

In descriptive statistics, a **decile** is any of the nine values that divide the sorted data into ten equal parts, so that each part represents 1/10 of the sample or population. A decile is one possible form of a quantile; others include the quartile and percentile. A decile rank arranges the data in order from lowest to highest and is done on a scale of one to ten where each successive number corresponds to an increase of 10 percentage points.

The following is a glossary of terms used in the mathematical sciences of statistics and probability.

In statistics, an **empirical distribution function** is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/*n* at each of the *n* data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

In statistics, a **Q–Q (quantile–quantile) plot** is a probability plot, a graphical method for comparing two probability distributions by plotting their quantiles against each other. First, the set of intervals for the quantiles is chosen. A point (*x*, *y*) on the plot corresponds to one of the quantiles of the second distribution plotted against the same quantile of the first distribution. The plotted points thus form a parametric curve, parameterized by the index of the quantile interval.

**Bootstrapping** is any test or metric that uses random sampling with replacement, and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.

In probability and statistics, the **quantile function**, associated with a probability distribution of a random variable, specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. It is also called the **percent-point function** or **inverse cumulative distribution function**.

**Quantile regression** is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares estimates the conditional *mean* of the response variable across values of the predictor variables, quantile regression estimates the conditional median of the response variable. Quantile regression is an extension of linear regression used when the conditions of linear regression are not met.

In descriptive statistics, the **seven-number summary** is a collection of seven summary statistics, and is an extension of the five-number summary. There are two similar, common forms.

In statistics, a **robust measure of scale** is a robust statistic that quantifies the statistical dispersion in a set of numerical data. The most common such statistics are the interquartile range (IQR) and the median absolute deviation (MAD). These are contrasted with conventional measures of scale, such as sample variance or sample standard deviation, which are non-robust, meaning greatly influenced by outliers.

1. Helen Mary Walker, Joseph Lev, *Elementary Statistical Methods*, 1969, [p. 60](https://books.google.com/books?id=ogYnAQAAIAAJ&dq=permille).
2. Stuart, Alan; Ord, Keith (1994). *Kendall's Advanced Theory of Statistics*. London: Arnold. ISBN 0340614307.
3. Hyndman, Rob J.; Fan, Yanan (November 1996). "Sample Quantiles in Statistical Packages" (PDF). *American Statistician*. American Statistical Association. **50** (4): 361–365. doi:10.2307/2684934. JSTOR 2684934.
4. Mathematica Documentation. See 'Details' section.
5. "Quantile calculation". *uk.mathworks.com*.
6. Frohne, Ivan; Hyndman, Rob J. (2009). *Sample Quantiles*. R Project. ISBN 3-900051-07-0.
7. "Function Reference: quantile - Octave-Forge - SourceForge". Retrieved 6 September 2013.
8. "scipy.stats.mstats.mquantiles — SciPy v1.4.1 Reference Guide". *docs.scipy.org*.
9. "Statistics - Maple Programming Help". *www.maplesoft.com*.
10. "Archived copy". Archived from the original on April 16, 2016. Retrieved April 4, 2016.
11. Stata documentation for the pctile and xtile commands. See 'Methods and formulas' section.
12. "statistics — Mathematical statistics functions — Python 3.8.3rc1 documentation". *docs.python.org*.
13. Hyndman, Rob J. (28 March 2016). "Sample quantiles 20 years later". *Hyndsight blog*. Retrieved 2020-11-30.
14. Wilcox, Rand R. (2010). *Introduction to Robust Estimation and Hypothesis Testing*. ISBN 0-12-751542-9.
15. Dunning, Ted; Ertl, Otmar (February 2019). "Computing Extremely Accurate Quantiles Using t-Digests". arXiv:1902.04023 [stat.CO].
16. Karnin, Zohar; Lang, Kevin; Liberty, Edo (2016). "Optimal Quantile Approximation in Streams". arXiv:1603.05346 [cs.DS].
17. "percentile". *Oxford Reference*. doi:10.1093/oi/authority.20110803100316401. Retrieved 2020-08-17.
18. Kruger, J.; Dunning, D. (December 1999). "Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments". *Journal of Personality and Social Psychology*. **77** (6): 1121–1134. doi:10.1037//0022-3514.77.6.1121. ISSN 0022-3514. PMID 10626367.
19. Vardeman, Stephen B. (1992). "What about the Other Intervals?". *The American Statistician*. **46** (3): 193–197. doi:10.2307/2685212. JSTOR 2685212.

- Serfling, R. J. (1980). *Approximation Theorems of Mathematical Statistics*. John Wiley & Sons. ISBN 0-471-02403-1.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
