Percentile

A percentile (or a centile) is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations falls. For example, the 20th percentile is the value (or score) below which 20% of the observations may be found. Equivalently, 80% of the observations are found above the 20th percentile.

The term percentile and the related term percentile rank are often used in the reporting of scores from norm-referenced tests. For example, if a score is at the 86th percentile, where 86 is the percentile rank, it is equal to the value below which 86% of the observations may be found (carefully contrast this with being in the 86th percentile, which means the score is at or below the value below which 86% of the observations may be found; every score is in the 100th percentile). The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3). In general, percentiles and quartiles are specific types of quantiles.

Applications

When ISPs bill "burstable" internet bandwidth, the 95th or 98th percentile usually cuts off the top 5% or 2% of bandwidth peaks in each month, and then bills at the nearest rate. In this way, infrequent peaks are ignored, and the customer is charged more fairly. The reason this statistic is so useful in measuring data throughput is that it gives a very accurate picture of the cost of the bandwidth. The 95th percentile says that 95% of the time the usage is below this amount; the remaining 5% of the time, the usage is above that amount.

Physicians will often use infants' and children's weight and height to assess their growth in comparison to national averages and percentiles, which are found in growth charts.
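The ISP billing scheme described above can be sketched in Python. This is an illustrative helper (the name `billable_rate` and the five-minute Mbit/s samples are assumptions, not a real billing API), using the nearest-rank percentile definition given later in this article:

```python
def billable_rate(samples, percentile=95):
    """Bill at the given percentile of the sampled usage, so the top
    (100 - percentile)% of peaks in the month are ignored."""
    ordered = sorted(samples)
    # Nearest-rank ordinal rank: ceil(percentile/100 * N), via ceiling division.
    n = -(-percentile * len(ordered) // 100)
    return ordered[n - 1]

# 20 samples; the single brief 95 Mbit/s burst is discarded by 95th-percentile billing.
usage = [11] * 10 + [12] * 8 + [13, 95]
print(billable_rate(usage))  # -> 13
```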

The 85th percentile speed of traffic on a road is often used as a guideline in setting speed limits and assessing whether such a limit is too high or low. [1] [2]

In finance, value at risk is a standard measure to assess (in a model-dependent way) the quantity below which the value of the portfolio is not expected to sink within a given period of time, given a confidence value.

The normal distribution and percentiles

Representation of the three-sigma rule. The dark blue zone represents observations within one standard deviation (σ) to either side of the mean (μ), which accounts for about 68.3% of the population. Two standard deviations from the mean (dark and medium blue) account for about 95.4%, and three standard deviations (dark, medium, and light blue) for about 99.7%.

The methods given in the definitions section (below) are approximations for use in small-sample statistics. In general terms, for very large populations following a normal distribution, percentiles may often be represented by reference to a normal curve plot. The normal distribution is plotted along an axis scaled to standard deviations, or sigma (σ), units. Mathematically, the normal distribution extends to negative infinity on the left and positive infinity on the right. Note, however, that only a very small proportion of individuals in a population will fall outside the −3σ to +3σ range. For example, with human heights, very few people are above the +3σ height level.

Percentiles represent the area under the normal curve, increasing from left to right. Each standard deviation represents a fixed percentile. Thus, rounding to two decimal places, −3σ is the 0.13th percentile, −2σ the 2.28th percentile, −1σ the 15.87th percentile, 0σ the 50th percentile (both the mean and median of the distribution), +1σ the 84.13th percentile, +2σ the 97.72nd percentile, and +3σ the 99.87th percentile. This is related to the 68–95–99.7 rule or the three-sigma rule. Note that in theory the 0th percentile falls at negative infinity and the 100th percentile at positive infinity, although in many practical applications, such as test results, natural lower and/or upper limits are enforced.
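These values can be reproduced from the standard normal CDF, Φ(z) = (1 + erf(z/√2))/2; a quick check in Python (the helper name `normal_percentile` is illustrative):

```python
from math import erf, sqrt

def normal_percentile(z):
    """Percentile corresponding to a z-score (standard deviations from
    the mean) under the standard normal distribution."""
    return 100 * (1 + erf(z / sqrt(2))) / 2

for z in (-3, -2, -1, 0, 1, 2, 3):
    print(f"{z:+d} sigma -> {normal_percentile(z):6.2f}th percentile")
```

Rounded to two decimal places, the loop prints 0.13, 2.28, 15.87, 50.00, 84.13, 97.72 and 99.87, matching the figures above.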

Definitions

There is no standard definition of percentile; [3] [4] [5] however, all definitions yield similar results when the number of observations is very large and the probability distribution is continuous. [6] In the limit, as the sample size approaches infinity, the 100p-th percentile (0 < p < 1) approximates the inverse of the cumulative distribution function (CDF) evaluated at p, because the empirical CDF approximates the true CDF. This can be seen as a consequence of the Glivenko–Cantelli theorem. Some methods for calculating the percentiles are given below.

The nearest-rank method

The percentile values for the ordered list {15, 20, 35, 40, 50}

One definition of percentile, often given in texts, is that the P-th percentile (0 < P ≤ 100) of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula:

n = ⌈(P/100) × N⌉

Note the following:

- Using the nearest-rank method on lists with fewer than 100 distinct values can result in the same value being used for more than one percentile.
- A percentile calculated using the nearest-rank method will always be a member of the original ordered list.
- The 100th percentile is defined to be the largest value in the ordered list.

Worked examples of the nearest-rank method

Example 1

Consider the ordered list {15, 20, 35, 40, 50}, which contains 5 data values. What are the 5th, 30th, 40th, 50th and 100th percentiles of this list using the nearest-rank method?

Percentile P | Number in list N | Ordinal rank n = ⌈(P/100) × N⌉ | Number from the ordered list that has that rank | Percentile value | Notes
5th | 5 | ⌈0.05 × 5⌉ = 1 | the first number in the ordered list, which is 15 | 15 | 15 is the smallest element of the list; 0% of the data is strictly less than 15, and 20% of the data is less than or equal to 15.
30th | 5 | ⌈0.30 × 5⌉ = 2 | the 2nd number in the ordered list, which is 20 | 20 | 20 is an element of the ordered list.
40th | 5 | ⌈0.40 × 5⌉ = 2 | the 2nd number in the ordered list, which is 20 | 20 | In this example, it is the same as the 30th percentile.
50th | 5 | ⌈0.50 × 5⌉ = 3 | the 3rd number in the ordered list, which is 35 | 35 | 35 is an element of the ordered list.
100th | 5 | ⌈1.00 × 5⌉ = 5 | the last number in the ordered list, which is 50 | 50 | The 100th percentile is defined to be the largest value in the list, which is 50.

So the 5th, 30th, 40th, 50th and 100th percentiles of the ordered list {15, 20, 35, 40, 50} using the nearest-rank method are {15, 20, 20, 35, 50}.
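A minimal Python sketch of the nearest-rank method, reproducing this example (the function name is illustrative):

```python
from math import ceil

def nearest_rank_percentile(sorted_values, p):
    """P-th percentile (0 < p <= 100) of an ordered list by the
    nearest-rank method: the value at ordinal rank n = ceil(p/100 * N)."""
    n = ceil(p / 100 * len(sorted_values))
    return sorted_values[n - 1]

data = [15, 20, 35, 40, 50]
print([nearest_rank_percentile(data, p) for p in (5, 30, 40, 50, 100)])
# -> [15, 20, 20, 35, 50]
```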

Example 2

Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 25th, 50th, 75th and 100th percentiles of this list using the nearest-rank method?

Percentile P | Number in list N | Ordinal rank n = ⌈(P/100) × N⌉ | Number from the ordered list that has that rank | Percentile value | Notes
25th | 10 | ⌈0.25 × 10⌉ = 3 | the 3rd number in the ordered list, which is 7 | 7 | 7 is an element of the list.
50th | 10 | ⌈0.50 × 10⌉ = 5 | the 5th number in the ordered list, which is 8 | 8 | 8 is an element of the list.
75th | 10 | ⌈0.75 × 10⌉ = 8 | the 8th number in the ordered list, which is 15 | 15 | 15 is an element of the list.
100th | 10 | ⌈1.00 × 10⌉ = 10 | the last number in the ordered list, which is 20 | 20 | The 100th percentile is defined to be the largest value in the list, which is 20.

So the 25th, 50th, 75th and 100th percentiles of the ordered list {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} using the nearest-rank method are {7, 8, 15, 20}.

Example 3

Consider an ordered population of 11 data values {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20}. What are the 25th, 50th, 75th and 100th percentiles of this list using the nearest-rank method?

Percentile P | Number in list N | Ordinal rank n = ⌈(P/100) × N⌉ | Number from the ordered list that has that rank | Percentile value | Notes
25th | 11 | ⌈0.25 × 11⌉ = 3 | the 3rd number in the ordered list, which is 7 | 7 | 7 is an element of the list.
50th | 11 | ⌈0.50 × 11⌉ = 6 | the 6th number in the ordered list, which is 9 | 9 | 9 is an element of the list.
75th | 11 | ⌈0.75 × 11⌉ = 9 | the 9th number in the ordered list, which is 15 | 15 | 15 is an element of the list.
100th | 11 | ⌈1.00 × 11⌉ = 11 | the last number in the ordered list, which is 20 | 20 | The 100th percentile is defined to be the largest value in the list, which is 20.

So the 25th, 50th, 75th and 100th percentiles of the ordered list {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20} using the nearest-rank method are {7, 9, 15, 20}.

The linear interpolation between closest ranks method

An alternative to rounding used in many applications is to use linear interpolation between adjacent ranks.

Commonalities between the variants of this method

All of the following variants have the following in common. Given the order statistics

v_1 ≤ v_2 ≤ … ≤ v_N,

we seek a linear interpolation function v(x) that passes through the points (i, v_i). This is simply accomplished by

v(x) = v_⌊x⌋ + (x mod 1) × (v_(⌊x⌋+1) − v_⌊x⌋),   for 1 ≤ x ≤ N,

where ⌊x⌋ uses the floor function to represent the integral part of positive x, whereas (x mod 1) uses the mod function to represent its fractional part (the remainder after division by 1). (Note that, though at the endpoint x = N the term v_(⌊x⌋+1) is undefined, it does not need to be, because it is multiplied by (x mod 1) = 0.) As we can see, x is the continuous version of the subscript i, linearly interpolating between adjacent nodes v_i.

There are two ways in which the variant approaches differ. The first is in the linear relationship between the rank x, the percent rank P = 100p, and a constant that is a function of the sample size N:

x = f(p, N) = (N + c₁) p + c₂.

There is the additional requirement that the midpoint of the range (1, N), corresponding to the median, occur at p = 1/2:

f(1/2, N) = (N + c₁)/2 + c₂ = (N + 1)/2, which gives c₁ = 1 − 2c₂,

and our revised function now has just one degree of freedom, looking like this:

x = f(p, N) = (N + 1 − 2C) p + C.

The second way in which the variants differ is in the definition of the function near the margins of the range of p: f(p, N) should produce, or be forced to produce, a result x in the range [1, N], which may mean the absence of a one-to-one correspondence in the wider region. One author has suggested choosing C as a function of the shape parameter ξ of the generalized extreme value distribution, which is the extreme value limit of the sampled distribution. [7]

First variant, C = 1/2

The result of using each of the three variants on the ordered list {15, 20, 35, 40, 50}

(Sources: Matlab "prctile" function [8] [9] )

x = f(p) =
    N p + 1/2,   if 1/(2N) ≤ p ≤ (2N − 1)/(2N),
    1,           if 0 ≤ p < 1/(2N),
    N,           if (2N − 1)/(2N) < p ≤ 1,

where p = P/100 is the percent rank expressed as a fraction.

Furthermore, let the percent rank attached to the i-th ordered value be

p_i = (100/N) (i − 1/2),   i = 1, …, N.

The inverse relationship is restricted to a narrower region:

p = (1/N) (x − 1/2),   for x ∈ [1, N], i.e. p ∈ [1/(2N), 1 − 1/(2N)].

Worked example of the first variant

Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What are the 5th, 30th, 40th and 95th percentiles of this list using the Linear Interpolation Between Closest Ranks method? First, we calculate the percent rank for each list value.

List value v_i | Position i of that value in the ordered list | Number of values N | Calculation of percent rank | Percent rank p_i
15 | 1 | 5 | (100/5) × (1 − 0.5) | 10
20 | 2 | 5 | (100/5) × (2 − 0.5) | 30
35 | 3 | 5 | (100/5) × (3 − 0.5) | 50
40 | 4 | 5 | (100/5) × (4 − 0.5) | 70
50 | 5 | 5 | (100/5) × (5 − 0.5) | 90

Then we take those percent ranks and calculate the percentile values as follows:

Percent rank P | Number of values N | Is P < p_1? | Is P > p_N? | Is there a percent rank equal to P? | What do we use for the percentile value? | Percentile value v | Notes
5 | 5 | Yes | No | No | P = 5 is less than the first percent rank p_1 = 10, so use the first list value v_1, which is 15 | 15 | 15 is a member of the ordered list
30 | 5 | No | No | Yes | P = 30 is the same as the second percent rank p_2 = 30, so use the second list value v_2, which is 20 | 20 | 20 is a member of the ordered list
40 | 5 | No | No | No | P = 40 is between percent ranks p_2 = 30 and p_3 = 50, so we take k = 2, k + 1 = 3, P = 40, p_k = 30, v_k = 20, v_{k+1} = 35, N = 5. Given those values we can then calculate v as follows: v = v_k + N × (P − p_k)/100 × (v_{k+1} − v_k) = 20 + 5 × (40 − 30)/100 × (35 − 20) = 27.5 | 27.5 | 27.5 is not a member of the ordered list
95 | 5 | No | Yes | No | P = 95 is greater than the last percent rank p_N = 90, so use the last list value v_N, which is 50 | 50 | 50 is a member of the ordered list

So the 5th, 30th, 40th and 95th percentiles of the ordered list {15, 20, 35, 40, 50} using the Linear Interpolation Between Closest Ranks method are {15, 20, 27.5, 50}
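The first variant can be sketched in Python: a direct transcription of the clamped rank x = NP/100 + 1/2 followed by linear interpolation (an illustrative helper, not MATLAB's actual code):

```python
def percentile_v1(sorted_values, P):
    """First variant (C = 1/2): rank x = N*P/100 + 1/2, clamped to
    [1, N], then linear interpolation between adjacent values."""
    N = len(sorted_values)
    x = max(1.0, min(float(N), N * P / 100 + 0.5))
    k = int(x)                 # floor(x)
    frac = x - k               # x mod 1
    if k == N:                 # at the right endpoint, nothing to interpolate
        return float(sorted_values[-1])
    return sorted_values[k - 1] + frac * (sorted_values[k] - sorted_values[k - 1])

data = [15, 20, 35, 40, 50]
print([percentile_v1(data, P) for P in (5, 30, 40, 95)])
# -> [15.0, 20.0, 27.5, 50.0]
```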

Second variant, C = 1

(Sources: some software packages, including NumPy [10] and Microsoft Excel [5] (up to and including version 2013, by means of the PERCENTILE.INC function). Noted as an alternative by NIST. [11] )

x = f(p) = p (N − 1) + 1,   p ∈ [0, 1].

Note that the relationship is one-to-one for p ∈ [0, 1], the only one of the three variants with this property; hence the "INC" suffix, for inclusive, on the Excel function.

Worked examples of the second variant

Example 1:

Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using this variant method?

First we calculate the rank of the 40th percentile:

x = (40/100) × (5 − 1) + 1 = 2.6.

So x = 2.6, which gives us an integral part ⌊x⌋ = 2 and a fractional part (x mod 1) = 0.6. So the value of the 40th percentile is

v = v_2 + 0.6 × (v_3 − v_2) = 20 + 0.6 × (35 − 20) = 29.

Example 2:

Consider the ordered list {1, 2, 3, 4}, which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?

First we calculate the rank of the 75th percentile as follows:

x = (75/100) × (4 − 1) + 1 = 3.25.

So x = 3.25, which gives us an integral part of 3 and a fractional part of 0.25. So the value of the 75th percentile is

v = v_3 + 0.25 × (v_4 − v_3) = 3 + 0.25 × (4 − 3) = 3.25.
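Both worked examples follow from a few lines of Python; with NumPy installed, numpy.percentile gives the same answers by default, but the dependency-free sketch below makes the rank arithmetic explicit:

```python
def percentile_v2(sorted_values, P):
    """Second variant (C = 1): rank x = (P/100)*(N - 1) + 1, then
    linear interpolation (the PERCENTILE.INC / NumPy default rule)."""
    N = len(sorted_values)
    x = P / 100 * (N - 1) + 1
    k = int(x)                 # integral part
    frac = x - k               # fractional part
    if k == N:
        return float(sorted_values[-1])
    return sorted_values[k - 1] + frac * (sorted_values[k] - sorted_values[k - 1])

print(round(percentile_v2([15, 20, 35, 40, 50], 40), 6))  # -> 29.0
print(percentile_v2([1, 2, 3, 4], 75))                    # -> 3.25
```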

Third variant, C = 0

(The primary variant recommended by NIST. [11] Adopted by Microsoft Excel since 2010 by means of the PERCENTILE.EXC function. However, as the "EXC" suffix indicates, the Excel version excludes both endpoints of the range of p, i.e., p ∈ (0, 1), whereas the "INC" version, the second variant, does not; in fact, any number smaller than 1/(N + 1) is also excluded and would cause an error.)

x = f(p) =
    p (N + 1),   if 1/(N + 1) ≤ p ≤ N/(N + 1),
    1,           if 0 ≤ p < 1/(N + 1),
    N,           if N/(N + 1) < p ≤ 1.

The inverse is restricted to a narrower region:

p = x/(N + 1),   for x ∈ [1, N], i.e. p ∈ [1/(N + 1), N/(N + 1)].

Worked example of the third variant

Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?

First we calculate the rank of the 40th percentile as follows:

x = (40/100) × (5 + 1) = 2.4.

So x = 2.4, which gives us an integral part ⌊x⌋ = 2 and a fractional part (x mod 1) = 0.4. So the value of the 40th percentile is calculated as:

v = v_2 + 0.4 × (v_3 − v_2) = 20 + 0.4 × (35 − 20) = 26.

So the value of the 40th percentile of the ordered list {15, 20, 35, 40, 50} using this variant method is 26.
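The same calculation as a Python sketch, with the rank x = (P/100)(N + 1) clamped to [1, N] as the definition requires (an illustrative helper, not NIST's or Excel's code):

```python
def percentile_v3(sorted_values, P):
    """Third variant (C = 0), the NIST / PERCENTILE.EXC rule:
    rank x = (P/100)*(N + 1), clamped to [1, N], then interpolation."""
    N = len(sorted_values)
    x = max(1.0, min(float(N), P / 100 * (N + 1)))
    k = int(x)                 # integral part
    frac = x - k               # fractional part
    if k == N:
        return float(sorted_values[-1])
    return sorted_values[k - 1] + frac * (sorted_values[k] - sorted_values[k - 1])

print(round(percentile_v3([15, 20, 35, 40, 50], 40), 6))  # -> 26.0
```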

The weighted percentile method

In addition to the percentile function, there is also a weighted percentile, where the percentage in the total weight is counted instead of the total number. There is no standard function for a weighted percentile. One method extends the above approach in a natural way.

Suppose we have positive weights w_1, w_2, …, w_N associated, respectively, with our N sorted sample values. Let

S_N = w_1 + w_2 + … + w_N,

the sum of the weights. Then the formulas above are generalized by taking the percent rank of the n-th value to be

p_n = (100/S_N) (s_n − w_n/2)

when C = 1/2, or

p_n = 100 (s_n − C w_n) / (S_N + (1 − 2C) w_n)

for general C, where

s_n = w_1 + w_2 + … + w_n

is the partial sum of the weights up to and including the n-th value, and

v = v_k + (P − p_k)/(p_{k+1} − p_k) × (v_{k+1} − v_k)   for p_k ≤ P ≤ p_{k+1}.

The 50% weighted percentile is known as the weighted median.
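One possible implementation of the C = 1/2 weighted scheme in Python (the function name and looping style are illustrative, not a standard API; with all weights equal it reduces to the first variant above):

```python
def weighted_percentile(values, weights, P):
    """Weighted percentile with percent ranks p_n = (100/S_N)(s_n - w_n/2),
    where s_n is the running sum of the (positive) weights."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    vals, ranks, running = [], [], 0.0
    for v, w in pairs:
        running += w
        vals.append(v)
        ranks.append(100.0 * (running - w / 2) / total)
    if P <= ranks[0]:          # below the first percent rank
        return float(vals[0])
    if P >= ranks[-1]:         # above the last percent rank
        return float(vals[-1])
    for k in range(len(vals) - 1):
        if ranks[k] <= P <= ranks[k + 1]:
            t = (P - ranks[k]) / (ranks[k + 1] - ranks[k])
            return vals[k] + t * (vals[k + 1] - vals[k])

# Equal weights reproduce the first-variant worked example (40th percentile = 27.5).
print(weighted_percentile([15, 20, 35, 40, 50], [1, 1, 1, 1, 1], 40))  # -> 27.5
```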


References

  1. Johnson, Robert; Kuby, Patricia (2007), "Applied Example 2.15, The 85th Percentile Speed Limit: Going With 85% of the Flow", Elementary Statistics (10th ed.), Cengage Learning, p. 102, ISBN   9781111802493 .
  2. "Rational Speed Limits and the 85th Percentile Speed" (PDF). lsp.org. Louisiana State Police. Archived from the original (PDF) on 23 September 2018. Retrieved 28 October 2018.
  3. Hyndman RH, Fan Y (1996). "Sample quantiles in statistical packages". The American Statistician. 50 (4): 361–365. doi:10.2307/2684934. JSTOR   2684934.
  4. Lane, David. "Percentiles" . Retrieved 2007-09-15.
  5. Pottel, Hans. "Statistical flaws in Excel" (PDF). Archived from the original (PDF) on 2013-06-04. Retrieved 2013-03-25.
  6. Schoonjans F, De Bacquer D, Schmid P (2011). "Estimation of population percentiles". Epidemiology. 22 (5): 750–751. doi:10.1097/EDE.0b013e318225c1de. PMC   3171208 . PMID   21811118.
  7. Baxter, Martin (2020), Quantile Estimation (PDF), Electoral Calculus.
  8. "Matlab Statistics Toolbox – Percentiles" . Retrieved 2006-09-15., This is equivalent to Method 5 discussed here
  9. Langford, E. (2006). "Quartiles in Elementary Statistics". Journal of Statistics Education. 14 (3). doi: 10.1080/10691898.2006.11910589 .
  10. "NumPy 1.12 documentation". SciPy . Retrieved 2017-03-19.
  11. 1 2 "Engineering Statistics Handbook: Percentile". NIST . Retrieved 2009-02-18.