In statistics, a k-th percentile, also known as percentile score or centile, is a score below which a given percentage k of scores in its frequency distribution falls ("exclusive" definition) or a score at or below which a given percentage falls ("inclusive" definition). Percentiles are expressed in the same unit of measurement as the input scores, not in percent; for example, if the scores refer to human weight, the corresponding percentiles will be expressed in kilograms or pounds. In the limit of an infinite sample size, the percentile approximates the percentile function, the inverse of the cumulative distribution function.
Percentiles are a type of quantile, obtained by subdividing the distribution into 100 groups. The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3). For example, the 50th percentile (median) is the score below (or at or below, depending on the definition) which 50% of the scores in the distribution are found.
A related quantity is the percentile rank of a score, expressed in percent, which represents the fraction of scores in its distribution that are less than it, an exclusive definition. Percentile scores and percentile ranks are often used in the reporting of test scores from norm-referenced tests, but, as just noted, they are not the same. For percentile ranks, a score is given and a percentage is computed. Percentile ranks are exclusive: if the percentile rank for a specified score is 90%, then 90% of the scores were lower. In contrast, for percentiles a percentage is given and a corresponding score is determined, which can be either exclusive or inclusive. The score for a specified percentage (e.g., 90th) indicates a score below which (exclusive definition) or at or below which (inclusive definition) other scores in the distribution fall.
There is no standard definition of percentile; [1] [2] [3] however, all definitions yield similar results when the number of observations is very large and the probability distribution is continuous. [4] In the limit, as the sample size approaches infinity, the 100p-th percentile (0 < p < 1) approximates the inverse of the cumulative distribution function (CDF) formed from the sample, evaluated at p. This can be seen as a consequence of the Glivenko–Cantelli theorem. Some methods for calculating the percentiles are given below.
The methods given in the calculation methods section (below) are approximations for use in small-sample statistics. In general terms, for very large populations following a normal distribution, percentiles may often be represented by reference to a normal curve plot. The normal distribution is plotted along an axis scaled to standard deviations, or sigma (σ) units. Mathematically, the normal distribution extends to negative infinity on the left and positive infinity on the right. Note, however, that only a very small proportion of individuals in a population will fall outside the −3σ to +3σ range. For example, with human heights very few people are above the +3σ height level.
Percentiles represent the area under the normal curve, increasing from left to right. Each standard deviation represents a fixed percentile. Thus, rounding to two decimal places, −3σ is the 0.13th percentile, −2σ the 2.28th percentile, −1σ the 15.87th percentile, 0σ the 50th percentile (both the mean and median of the distribution), +1σ the 84.13th percentile, +2σ the 97.72nd percentile, and +3σ the 99.87th percentile. This is related to the 68–95–99.7 rule or the three-sigma rule. Note that in theory the 0th percentile falls at negative infinity and the 100th percentile at positive infinity, although in many practical applications, such as test results, natural lower and/or upper limits are enforced.
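The sigma-to-percentile correspondence above follows directly from the standard normal CDF and can be checked with the standard library (the helper name is ours, not from any package):

```python
# Check of the sigma-to-percentile correspondence described above, using the
# standard normal CDF Phi(z) = (1 + erf(z / sqrt(2))) / 2. Standard library only.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF, i.e. the percentile (as a fraction) at z sigma."""
    return (1.0 + erf(z / sqrt(2.0))) / 2.0

for z in (-3, -2, -1, 0, 1, 2, 3):
    print(f"{z:+d} sigma -> {100 * normal_cdf(z):.2f}th percentile")
# reproduces 0.13, 2.28, 15.87, 50.00, 84.13, 97.72, 99.87
```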
When ISPs bill "burstable" internet bandwidth, the 95th or 98th percentile usually cuts off the top 5% or 2% of bandwidth peaks in each month, and then bills at the nearest rate. In this way, infrequent peaks are ignored, and the customer is charged in a fairer way. The reason this statistic is so useful in measuring data throughput is that it gives a very accurate picture of the cost of the bandwidth. The 95th percentile says that 95% of the time, the usage is below this amount: so, the remaining 5% of the time, the usage is above that amount.
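The billing rule described above can be sketched with a short helper; the function name and the traffic figures are illustrative, not drawn from any real ISP:

```python
# A minimal sketch of 95th-percentile ("burstable") billing: sort the month's
# usage samples, discard the top 5%, and bill at the highest remaining sample.
def billing_level(samples: list[float], percentile: float = 95.0) -> float:
    ordered = sorted(samples)
    n_discard = int(len(ordered) * (100 - percentile) / 100)  # top samples to ignore
    kept = ordered[: len(ordered) - n_discard] if n_discard else ordered
    return kept[-1]

# 20 five-minute samples (Mbps, illustrative): one short burst to 300 is ignored.
usage = [10, 12, 11, 9, 14, 13, 12, 11, 10, 12,
         13, 11, 10, 12, 40, 11, 10, 13, 12, 300]
print(billing_level(usage))  # bills at 40, not at the 300 Mbps burst
```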
Physicians will often use infants' and children's weight and height to assess their growth in comparison to national averages and percentiles, which are found in growth charts.
The 85th percentile speed of traffic on a road is often used as a guideline in setting speed limits and assessing whether such a limit is too high or low. [5] [6]
In finance, value at risk is a standard measure to assess (in a model-dependent way) the value below which the portfolio is not expected to fall within a given period of time, at a given confidence level.
There are many formulas or algorithms [7] for a percentile score. Hyndman and Fan [1] identified nine, and most statistical and spreadsheet software use one of the methods they describe. [8] Algorithms either return the value of a score that exists in the set of scores (nearest-rank methods) or interpolate between existing scores; either type can be exclusive or inclusive.
| PC: percentile specified | 0.10 | 0.25 | 0.50 | 0.75 | 0.90 |
|---|---|---|---|---|---|
| N: number of scores | 10 | 10 | 10 | 10 | 10 |
| OR: ordinal rank = PC × N | 1 | 2.5 | 5 | 7.5 | 9 |
| Rank: first > OR / first ≥ OR | 2/1 | 3/3 | 6/5 | 8/8 | 10/9 |
| Score at rank (exc/inc) | 2/1 | 3/3 | 4/3 | 5/5 | 7/5 |
The figure shows a 10-score distribution, illustrates the percentile scores that result from these different algorithms, and serves as an introduction to the examples given subsequently. The simplest are nearest-rank methods that return a score from the distribution, although compared to interpolation methods, results can be a bit crude. The Nearest-Rank Methods table shows the computational steps for exclusive and inclusive methods.
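The steps in the Nearest-Rank Methods table can be sketched as follows; the 10-score distribution is reconstructed from the tables as [1, 2, 3, 3, 3, 4, 4, 5, 5, 7], and the function name is ours:

```python
# Nearest-rank percentile, exclusive (first rank > OR) or inclusive
# (first rank >= OR), following the steps in the table above.
import math

def nearest_rank(scores: list[float], pc: float, inclusive: bool) -> float:
    ordered = sorted(scores)
    n = len(ordered)
    rank = pc * n  # ordinal rank OR
    if inclusive:
        idx = math.ceil(rank)       # smallest rank >= OR
    else:
        idx = math.floor(rank) + 1  # smallest rank > OR
    idx = min(max(idx, 1), n)       # clamp to valid ranks 1..N
    return ordered[idx - 1]

scores = [1, 2, 3, 3, 3, 4, 4, 5, 5, 7]  # reconstructed from the table
pcs = (0.10, 0.25, 0.50, 0.75, 0.90)
print([nearest_rank(scores, pc, False) for pc in pcs])  # exclusive: [2, 3, 4, 5, 7]
print([nearest_rank(scores, pc, True) for pc in pcs])   # inclusive: [1, 3, 3, 5, 5]
```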
| PC: percentile specified | 0.10 | 0.25 | 0.50 | 0.75 | 0.90 |
|---|---|---|---|---|---|
| N: number of scores | 10 | 10 | 10 | 10 | 10 |
| OR: PC×(N+1) / PC×(N−1)+1 | 1.1/1.9 | 2.75/3.25 | 5.5/5.5 | 8.25/7.75 | 9.9/9.1 |
| LoRank: OR truncated | 1/1 | 2/3 | 5/5 | 8/7 | 9/9 |
| HiRank: OR rounded up | 2/2 | 3/4 | 6/6 | 9/8 | 10/10 |
| LoScore: score at LoRank | 1/1 | 2/3 | 3/3 | 5/4 | 5/5 |
| HiScore: score at HiRank | 2/2 | 3/3 | 4/4 | 5/5 | 7/7 |
| Difference: HiScore − LoScore | 1/1 | 1/0 | 1/1 | 0/1 | 2/2 |
| Mod: fractional part of OR | 0.1/0.9 | 0.75/0.25 | 0.5/0.5 | 0.25/0.75 | 0.9/0.1 |
| Interpolated score (exc/inc) = LoScore + Mod × Difference | 1.1/1.9 | 2.75/3 | 3.5/3.5 | 5/4.75 | 6.8/5.2 |
Interpolation methods, as the name implies, can return a score that is between scores in the distribution. Algorithms used by statistical programs typically use interpolation methods, for example, the PERCENTILE.EXC and PERCENTILE.INC functions in Microsoft Excel. The Interpolated Methods table shows the computational steps.
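The steps in the Interpolated Methods table can be sketched as follows, again using the 10-score distribution [1, 2, 3, 3, 3, 4, 4, 5, 5, 7] reconstructed from the tables; the function name is ours:

```python
# Linear-interpolation percentile, exclusive (rank = PC*(N+1)) or inclusive
# (rank = PC*(N-1)+1), following the steps in the Interpolated Methods table.
def interpolated_percentile(scores: list[float], pc: float, inclusive: bool) -> float:
    ordered = sorted(scores)
    n = len(ordered)
    rank = pc * (n - 1) + 1 if inclusive else pc * (n + 1)
    rank = min(max(rank, 1), n)  # clamp the ordinal rank into [1, N]
    lo = int(rank)               # LoRank: rank truncated
    hi = min(lo + 1, n)          # HiRank
    frac = rank - lo             # Mod: fractional part of the rank
    return ordered[lo - 1] + frac * (ordered[hi - 1] - ordered[lo - 1])

scores = [1, 2, 3, 3, 3, 4, 4, 5, 5, 7]  # reconstructed from the tables
pcs = (0.10, 0.25, 0.50, 0.75, 0.90)
print([round(interpolated_percentile(scores, pc, False), 2) for pc in pcs])
# exclusive row: [1.1, 2.75, 3.5, 5.0, 6.8]
print([round(interpolated_percentile(scores, pc, True), 2) for pc in pcs])
# inclusive row: [1.9, 3.0, 3.5, 4.75, 5.2]
```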
One definition of percentile, often given in texts, is that the P-th percentile of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula:

n = ⌈(P/100) × N⌉.
An alternative to rounding used in many applications is to use linear interpolation between adjacent ranks.
All of the following variants have the following in common. Given the order statistics

v₁ ≤ v₂ ≤ ⋯ ≤ v_N,

we seek a linear interpolation function that passes through the points (i, vᵢ). This is simply accomplished by

v(x) = v_⌊x⌋ + (x mod 1) · (v_{⌊x⌋+1} − v_⌊x⌋),  x ∈ [1, N],

where ⌊x⌋ uses the floor function to represent the integral part of positive x, whereas x mod 1 uses the mod function to represent its fractional part (the remainder after division by 1). (Note that, though at the endpoint x = N the value v_{N+1} is undefined, it does not need to be, because it is multiplied by x mod 1 = 0.) As we can see, x is the continuous version of the subscript i, linearly interpolating v between adjacent nodes.
There are two ways in which the variant approaches differ. The first is in the linear relationship between the rank x, the percent rank p (so P = 100p), and a constant that is a function of the sample size N:

x = f(p, N) = (N + c₁) p + c₂.

There is the additional requirement that the midpoint of the range [1, N], corresponding to the median, occur at p = 1/2:

f(1/2, N) = (N + c₁)/2 + c₂ = (N + 1)/2,  hence c₁ = 1 − 2c₂,

and our revised function now has just one degree of freedom, looking like this:

x = f(p, N) = p(N + 1 − 2C) + C,  with the single constant C = c₂ ∈ [0, 1].
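Assuming the revised rank function is written as x = f(p, N) = p(N + 1 − 2C) + C with one free constant C ∈ [0, 1] (the choices C = 1/2, 1 and 0 correspond to the variants discussed below), a short sketch checks that every choice of C sends the median p = 1/2 to the midpoint rank (N + 1)/2:

```python
# The one-parameter family of rank functions x = f(p, N) = p*(N + 1 - 2C) + C.
# C = 1/2, 1 and 0 give the three variants discussed in the text.
def rank(p: float, n: int, c: float) -> float:
    return p * (n + 1 - 2 * c) + c

for c in (0.5, 1.0, 0.0):
    print(c, [rank(p, 10, c) for p in (0.25, 0.5, 0.75)])
# every C maps the median p = 0.5 to the midpoint rank (N + 1)/2 = 5.5
```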
The second way in which the variants differ is in the definition of the function near the margins of the range of p: f(p, N) should produce, or be forced to produce, a result in the range [1, N], which may mean the absence of a one-to-one correspondence in the wider region. One author has suggested choosing C based on ξ, the shape parameter of the generalized extreme value distribution, which is the extreme-value limit of the sampled distribution.
The first variant, C = 1/2 (sources: Matlab "prctile" function [9] [10]), uses

x = f(p) = Np + 1/2  for p ∈ [p₁, p_N],

where

p_k = (1/N)(k − 1/2),  k = 1, …, N.

Furthermore, let v(p) = v₁ for p ∈ [0, p₁] and v(p) = v_N for p ∈ [p_N, 1], so that a score is defined for every p in [0, 1]. The inverse relationship is restricted to a narrower region:

p = (1/N)(x − 1/2),  x ∈ [1, N].
The second variant, C = 1, uses

x = f(p) = p(N − 1) + 1,  p ∈ [0, 1].

(Source: some software packages, including NumPy [11] and Microsoft Excel [3] (up to and including version 2013, by means of the PERCENTILE.INC function). Noted as an alternative by NIST. [8]) Note that the relationship is one-to-one for p ∈ [0, 1], the only one of the three variants with this property; hence the "INC" suffix, for inclusive, on the Excel function. Its inverse is

p = (x − 1)/(N − 1),  x ∈ [1, N].
The third variant, C = 0, uses

x = f(p) = p(N + 1).

(This is the primary variant recommended by NIST. [8] Adopted by Microsoft Excel since 2010 by means of the PERCENTILE.EXC function. However, as the "EXC" suffix indicates, the Excel version excludes both endpoints of the range of p, i.e., p ∈ (0, 1), whereas the "INC" version, the second variant, does not; in fact, any number smaller than 1/(N + 1) is also excluded and would cause an error.) The inverse is restricted to a narrower region:

p = x/(N + 1),  x ∈ [1, N].
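As a cross-check, Python's standard-library statistics.quantiles implements exactly the inclusive (C = 1) and exclusive (C = 0) variants; applied to the 10-score distribution reconstructed from the tables above as [1, 2, 3, 3, 3, 4, 4, 5, 5, 7], it reproduces the table's quartile columns:

```python
# Quartiles of the example distribution under the exclusive (C = 0) and
# inclusive (C = 1) variants, via the standard library.
from statistics import quantiles

scores = [1, 2, 3, 3, 3, 4, 4, 5, 5, 7]  # reconstructed from the tables
print(quantiles(scores, n=4, method="exclusive"))  # [2.75, 3.5, 5.0]
print(quantiles(scores, n=4, method="inclusive"))  # [3.0, 3.5, 4.75]
```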
In addition to the percentile function, there is also a weighted percentile, where each score's contribution is counted by its share of the total weight rather than as one of N equal counts. There is no standard function for a weighted percentile. One method extends the above approach in a natural way.
Suppose we have positive weights w₁, w₂, …, w_N associated, respectively, with our N sorted sample values. Let

S_n = w₁ + w₂ + ⋯ + w_n,  n = 1, …, N,

the partial sum of the weights (so S_N is the sum of all the weights). Then the formulas above are generalized by taking

p_n = (1/S_N)(S_n − w_n/2)  when C = 1/2,

or

p_n = (S_n − C w_n) / (S_N + (1 − 2C) w_n)  for general C,

and

v(p) = v_k + ((p − p_k)/(p_{k+1} − p_k)) (v_{k+1} − v_k)  for p_k ≤ p ≤ p_{k+1}.
The 50% weighted percentile is known as the weighted median.
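A weighted percentile under the C = 1/2 convention can be sketched as follows; since there is no standard definition, the function name and the clamping behaviour at the ends of the range are choices of this sketch:

```python
# A sketch of a weighted percentile with C = 1/2: each sorted value v_k gets
# the percent rank p_k = (S_k - w_k/2) / S_N, and scores between adjacent
# p_k are linearly interpolated.
def weighted_percentile(values: list[float], weights: list[float], p: float) -> float:
    pairs = sorted(zip(values, weights))
    v = [value for value, _ in pairs]
    w = [weight for _, weight in pairs]
    total = sum(w)
    pk, running = [], 0.0
    for wk in w:  # percent rank p_k of each sample point (C = 1/2)
        running += wk
        pk.append((running - wk / 2) / total)
    if p <= pk[0]:   # below the first percent rank: clamp to the lowest value
        return v[0]
    if p >= pk[-1]:  # above the last percent rank: clamp to the highest value
        return v[-1]
    for k in range(len(v) - 1):
        if pk[k] <= p <= pk[k + 1]:
            frac = (p - pk[k]) / (pk[k + 1] - pk[k])
            return v[k] + frac * (v[k + 1] - v[k])

# With unit weights this reduces to the ordinary C = 1/2 percentile:
print(weighted_percentile([1, 2, 3, 4], [1, 1, 1, 1], 0.5))  # 2.5
```

The 50% weighted percentile computed this way is one way to realize the weighted median mentioned above.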