Coefficient of variation

In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is often expressed as a percentage, and is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$ (or its absolute value, $|\mu|$). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R. In addition, CV is utilized by economists and investors in economic models.

Definition

The coefficient of variation (CV) is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$:[1]

$$c_{\rm v} = \frac{\sigma}{\mu}$$

It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on a ratio scale, that is, a scale that has a meaningful zero and hence allows relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit) are interval scales with arbitrary zeros, so the computed coefficient of variation would differ depending on which scale is used. The Kelvin scale, on the other hand, has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 kelvins is twice as hot as 10 kelvins, but only on this scale with a true absolute zero. While a standard deviation (SD) can be measured in kelvins, degrees Celsius, or degrees Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.

Measurements that are log-normally distributed exhibit a stationary CV; in contrast, the SD varies with the expected value of the measurements.

A more robust possibility is the quartile coefficient of dispersion, half the interquartile range divided by the average of the quartiles (the midhinge):

$$\frac{(Q_3 - Q_1)/2}{(Q_1 + Q_3)/2}$$
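
As a quick illustration, the following minimal Python sketch (using NumPy; the sample data are hypothetical) computes the quartile coefficient of dispersion alongside the ordinary CV for a small data set containing an outlier:

```python
import numpy as np

data = np.array([2.0, 4, 6, 8, 10, 100])  # hypothetical sample with one outlier

# Quartile coefficient of dispersion: half the IQR over the midhinge
q1, q3 = np.percentile(data, [25, 75])
qcd = ((q3 - q1) / 2) / ((q1 + q3) / 2)

# Ordinary coefficient of variation for comparison (sample SD over mean)
cv = np.std(data, ddof=1) / np.mean(data)

print(f"QCD = {qcd:.3f}, CV = {cv:.3f}")  # the outlier inflates the CV far more
```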

In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., a scatter-plot) may be amenable to a single CV calculation using a maximum-likelihood estimation approach.[3]

Examples

A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and average is 100, giving the coefficient of variation as

0 / 100 = 0

A data set of [90, 100, 110] has more variability. Its population standard deviation is 8.165 and its average is 100, giving the coefficient of variation as

8.165 / 100 = 0.08165

A data set of [1, 5, 6, 8, 10, 40, 65, 88] has still more variability. Its sample standard deviation is 32.9 and its average is 27.9, giving a coefficient of variation of

32.9 / 27.9 = 1.18
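
These calculations are easy to reproduce. Below is a minimal Python sketch using NumPy (the helper name cv is ours); note that, matching the figures above, the second data set uses the population standard deviation (ddof=0) and the third the sample standard deviation (ddof=1):

```python
import numpy as np

def cv(data, ddof=0):
    # Coefficient of variation: standard deviation divided by the mean.
    # ddof=0 -> population SD, ddof=1 -> sample SD.
    data = np.asarray(data, dtype=float)
    return np.std(data, ddof=ddof) / np.mean(data)

print(cv([100, 100, 100]))                        # 0.0
print(cv([90, 100, 110]))                         # 0.08165 (population SD)
print(cv([1, 5, 6, 8, 10, 40, 65, 88], ddof=1))   # 1.18 (sample SD)
```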

Examples of misuse

Comparing coefficients of variation between parameters expressed in relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative, interval scales, whose associated absolute scales are Kelvin and Rankine, respectively):

Celsius: [0, 10, 20, 30, 40]

Fahrenheit: [32, 50, 68, 86, 104]

The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%.

If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that the mean you divide by comes from a relative (interval) scale rather than an absolute (ratio) scale.

Comparing the same data set, now in absolute units:

Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]

Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]

The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%.

Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable $X$, the coefficient of variation of $aX + b$ is equal to the coefficient of variation of $X$ only when $b = 0$. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form $ax + b$ with $b \neq 0$, whereas kelvins can be converted to degrees Rankine through a transformation of the form $ax$.
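
The effect is easy to verify numerically. A short Python sketch of the temperature example above (NumPy only):

```python
import numpy as np

celsius = np.array([0.0, 10, 20, 30, 40])
fahrenheit = celsius * 9 / 5 + 32    # a*x + b with b != 0: CV not preserved
kelvin = celsius + 273.15            # absolute (ratio) scale
rankine = kelvin * 9 / 5             # Rankine = a * Kelvin with b = 0

for name, t in [("Celsius", celsius), ("Fahrenheit", fahrenheit),
                ("Kelvin", kelvin), ("Rankine", rankine)]:
    s = np.std(t, ddof=1)            # sample standard deviation
    print(f"{name:10s} SD = {s:5.2f}  CV = {s / np.mean(t):.4f}")

# Celsius and Fahrenheit give different CVs (0.7906 vs. 0.4185), while
# Kelvin and Rankine agree (0.0539), since they differ only by a scale factor.
```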

Estimation

When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation $s$ to the sample mean $\bar{x}$:

$$\widehat{c_{\rm v}} = \frac{s}{\bar{x}}$$

But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator[4] for a sample of size $n$ is:

$$\widehat{c_{\rm v}}^{\,*} = \left(1 + \frac{1}{4n}\right)\widehat{c_{\rm v}}$$
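
A small simulation illustrates the bias and its correction. This is a sketch under assumed parameters (normal data, true CV = 0.10, samples of size 10):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 10, 100.0, 10.0               # true CV = sigma/mu = 0.10

samples = rng.normal(mu, sigma, size=(100_000, n))
naive = samples.std(axis=1, ddof=1) / samples.mean(axis=1)
corrected = (1 + 1 / (4 * n)) * naive        # unbiased estimator for normal data

print(naive.mean())      # noticeably below 0.10 on average
print(corrected.mean())  # much closer to the true value 0.10
```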

Log-normal data

In many applications, it can be assumed that data are log-normally distributed (evidenced by the presence of skewness in the sampled data).[5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution,[6][7][8] is defined as:

$$\widehat{cv}_{\rm raw} = \sqrt{\mathrm{e}^{s_{\ln}^{2}} - 1}$$

where $s_{\ln}$ is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, $b$, their standard deviation $s_b$ is converted to base e using $s_{\ln} = s_b \ln b$, and the formula for $\widehat{cv}_{\rm raw}$ remains the same.[9]) This estimate is sometimes referred to as the "geometric CV" (GCV)[10][11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood[12] as:

$$\mathrm{GCV_K} = \mathrm{e}^{s_{\ln}} - 1$$

This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of $c_{\rm v}$ itself.

For many practical purposes (such as sample size determination and calculation of confidence intervals) it is $s_{\ln}$ which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of $\widehat{cv}_{\rm raw}$ or GCV by inverting the corresponding formula.
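
The following sketch contrasts the two estimators on simulated log-normal data (the parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
s_ln_true = 0.5                             # SD of ln(X); true CV = sqrt(e^0.25 - 1)
x = rng.lognormal(mean=0.0, sigma=s_ln_true, size=100_000)

s_ln = np.std(np.log(x), ddof=1)            # sample SD after natural-log transform
cv_raw = np.sqrt(np.exp(s_ln**2) - 1)       # log-normal estimate of the CV
gcv_kirkwood = np.exp(s_ln) - 1             # Kirkwood's "geometric CV"

print(np.sqrt(np.exp(s_ln_true**2) - 1))    # theoretical CV, about 0.533
print(cv_raw)                               # close to the theoretical CV
print(gcv_kirkwood)                         # about 0.649: not an estimate of CV
```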

Comparison to standard deviation

Advantages

The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.

Disadvantages

When the mean value is close to zero, the coefficient of variation approaches infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale. Unlike the standard deviation, the coefficient of variation cannot be used directly to construct confidence intervals for the mean.

Applications

The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV.

In modeling, a variation of the CV is the CV(RMSD), which replaces the standard deviation term with the root-mean-square deviation (RMSD).

While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.
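
A quick simulation makes the CV = 1 benchmark concrete. This sketch draws samples from an exponential, an Erlang, and a two-phase hyperexponential distribution (all parameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

exponential = rng.exponential(scale=3.0, size=N)     # CV = 1 in theory
erlang = rng.gamma(shape=4, scale=3.0, size=N)       # Erlang(k=4): CV = 1/2
hyperexp = np.where(rng.random(N) < 0.5,             # 50/50 mixture of two
                    rng.exponential(1.0, N),         # exponentials: CV > 1
                    rng.exponential(9.0, N))

for name, x in [("exponential", exponential), ("Erlang", erlang),
                ("hyperexponential", hyperexp)]:
    c = x.std(ddof=1) / x.mean()
    print(f"{name:16s} CV = {c:.2f}  SCV = {c*c:.2f}")
```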

In actuarial science, the CV is known as unitized risk. [14]

In industrial solids processing, CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached.[15]

Laboratory measures of intra-assay and inter-assay CVs

CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across multiple samples within one assay, or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.[16] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples; in this case, standard error in percent is suggested to be superior.[13] If measurements do not have a natural zero point, then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended.[17]

As a measure of economic inequality

The coefficient of variation fulfills the requirements for a measure of economic inequality.[18][19][20] Let $x$ (with entries $x_i$) be a list of the values of an economic indicator (e.g. wealth), with $x_i$ being the wealth of agent $i$. Then:

$c_{\rm v}$ assumes its minimum value of zero for complete equality (all $x_i$ are equal).[20] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the Gini coefficient, which is constrained to be between 0 and 1).[20] It is, however, more mathematically tractable than the Gini coefficient.
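
These properties are easy to check numerically; a minimal sketch (the wealth vectors are hypothetical):

```python
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()          # population SD over mean

print(cv([50, 50, 50, 50]))            # 0.0: complete equality
print(cv([10, 20, 30, 40]))            # 0.447
print(cv([1000, 2000, 3000, 4000]))    # 0.447: scale invariant
print(cv([0, 0, 0, 1000]))             # 1.73
print(cv([0] * 99 + [1000]))           # 9.95: unbounded as wealth concentrates
```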

As a measure of standardisation of archaeological artefacts

Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts. [21] [22] Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies. [23] Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation. [24] Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs. [25] [26]

Distribution

Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size $n$ of i.i.d. normal random variables has been shown by Hendricks and Robey to be[27]

$$\mathrm{d}F_{c_{\rm v}} = \frac{2}{\pi^{1/2}\,\Gamma\!\left(\frac{n-1}{2}\right)}\exp\!\left(-\frac{n}{2\left(\frac{\sigma}{\mu}\right)^{2}}\cdot\frac{c_{\rm v}^{2}}{1+c_{\rm v}^{2}}\right)\frac{c_{\rm v}^{\,n-2}}{\left(1+c_{\rm v}^{2}\right)^{n/2}}\;{\sum_{i=0}^{n-1}}'\,\frac{(n-1)!\;\Gamma\!\left(\frac{n-i}{2}\right)}{(n-1-i)!\;i!}\,\frac{n^{i/2}}{2^{i/2}\left(\frac{\sigma}{\mu}\right)^{i}\left(1+c_{\rm v}^{2}\right)^{i/2}}\,\mathrm{d}c_{\rm v}$$

where the prime on the summation symbol, $\sum'$, indicates that the summation is over only even values of $n - 1 - i$, i.e., if $n$ is odd, sum over even values of $i$, and if $n$ is even, sum only over odd values of $i$.

This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation.[28][29][30][31][32][33]
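
As an illustration, the following sketch inverts McKay's approximation (in the form given by Forkman, reference [32]) to obtain an approximate confidence interval for a normal CV; the helper name mckay_ci is ours, and the approximation is generally considered reasonable only for fairly small CVs:

```python
import numpy as np
from scipy.stats import chi2

def mckay_ci(x, alpha=0.05):
    # McKay: (1 + 1/cv^2) * (n-1) * c^2 / (1 + (n-1)/n * c^2) ~ chi2(n-1),
    # where c is the sample CV. Solve for the population CV at both quantiles.
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.std(x, ddof=1) / np.mean(x)

    def bound(u):
        t = u * (1 + (n - 1) / n * c**2) / ((n - 1) * c**2) - 1
        return 1 / np.sqrt(t)

    # A larger chi-square quantile corresponds to a smaller CV bound.
    return bound(chi2.ppf(1 - alpha / 2, n - 1)), bound(chi2.ppf(alpha / 2, n - 1))

rng = np.random.default_rng(3)
sample = rng.normal(100.0, 10.0, size=25)   # true CV = 0.10
print(mckay_ci(sample))                     # e.g. roughly (0.08, 0.14)
```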

Alternative

According to Liu (2012),[34] Lehmann (1986)[35] "also derived the sample distribution of CV in order to give an exact method for the construction of a confidence interval for CV"; it is based on a non-central t-distribution.

Similar ratios

Standardized moments are similar ratios, $\mu_k / \sigma^k$, where $\mu_k$ is the $k$th moment about the mean, which are also dimensionless and scale invariant. The variance-to-mean ratio, $\sigma^2 / \mu$, is another similar ratio, but it is not dimensionless and hence not scale invariant. See Normalization (statistics) for further ratios.

In signal processing, particularly image processing, the reciprocal ratio $\mu / \sigma$ (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular.

Other related ratios include:

Efficiency, $\sigma^2 / \mu^2$

Standardized moment, $\mu_k / \sigma^k$

Variance-to-mean ratio (or relative variance), $\sigma^2 / \mu$

Fano factor (windowed VMR), $\sigma^2_W / \mu_W$

References

  1. Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK; New York: Cambridge University Press. ISBN 978-0521593465.
  2. "What is the difference between ordinal, interval and ratio variables? Why should I care?". GraphPad Software Inc. Archived from the original on 15 December 2008. Retrieved 22 February 2008.
  3. Odic, Darko; Im, Hee Yeon; Eisinger, Robert; Ly, Ryan; Halberda, Justin (June 2016). "PsiMLE: A maximum-likelihood estimation approach to estimating psychophysical scaling and variability more reliably, efficiently, and flexibly". Behavior Research Methods. 48 (2): 445–462. doi: 10.3758/s13428-015-0600-5 . ISSN   1554-3528. PMID   25987306.
  4. Sokal RR & Rohlf FJ. Biometry (3rd Ed). New York: Freeman, 1995. p. 58. ISBN   0-7167-2411-1
  5. Limpert, Eckhard; Stahel, Werner A.; Abbt, Markus (2001). "Log-normal Distributions across the Sciences: Keys and Clues". BioScience. 51 (5): 341–352. doi: 10.1641/0006-3568(2001)051[0341:LNDATS]2.0.CO;2 .
  6. Koopmans, L. H.; Owen, D. B.; Rosenblatt, J. I. (1964). "Confidence intervals for the coefficient of variation for the normal and log normal distributions". Biometrika. 51 (1–2): 25–32. doi:10.1093/biomet/51.1-2.25.
  7. Diletti, E; Hauschke, D; Steinijans, VW (1992). "Sample size determination for bioequivalence assessment by means of confidence intervals". International Journal of Clinical Pharmacology, Therapy, and Toxicology. 30 Suppl 1: S51–8. PMID   1601532.
  8. Julious, Steven A.; Debarnot, Camille A. M. (2000). "Why Are Pharmacokinetic Data Summarized by Arithmetic Means?". Journal of Biopharmaceutical Statistics. 10 (1): 55–71. doi:10.1081/BIP-100101013. PMID   10709801.
  9. Reed, JF; Lynn, F; Meade, BD (2002). "Use of Coefficient of Variation in Assessing Variability of Quantitative Assays". Clin Diagn Lab Immunol. 9 (6): 1235–1239. doi:10.1128/CDLI.9.6.1235-1239.2002. PMC   130103 . PMID   12414755.
  10. Sawant, S.; Mohan, N. (2011) "FAQ: Issues with Efficacy Analysis of Clinical Trial Data Using SAS" Archived 24 August 2011 at the Wayback Machine, PharmaSUG2011, Paper PO08
  11. Schiff, MH; et al. (2014). "Head-to-head, randomised, crossover study of oral versus subcutaneous methotrexate in patients with rheumatoid arthritis: drug-exposure limitations of oral methotrexate at doses >=15 mg may be overcome with subcutaneous administration". Ann Rheum Dis. 73 (8): 1–3. doi:10.1136/annrheumdis-2014-205228. PMC   4112421 . PMID   24728329.
  12. Kirkwood, TBL (1979). "Geometric means and measures of dispersion". Biometrics. 35 (4): 908–9. JSTOR   2530139.
  13. Eisenberg, Dan (2015). "Improving qPCR telomere length assays: Controlling for well position effects increases statistical power". American Journal of Human Biology. 27 (4): 570–5. doi:10.1002/ajhb.22690. PMC 4478151. PMID 25757675.
  14. Broverman, Samuel A. (2001). Actex study manual, Course 1, Examination of the Society of Actuaries, Exam 1 of the Casualty Actuarial Society (2001 ed.). Winsted, CT: Actex Publications. p. 104. ISBN   9781566983969 . Retrieved 7 June 2014.
  15. "Measuring Degree of Mixing - Homogeneity of powder mix - Mixture quality - PowderProcess.net". www.powderprocess.net. Archived from the original on 14 November 2017. Retrieved 2 May 2018.
  16. Rodbard, D (October 1974). "Statistical quality control and routine data processing for radioimmunoassays and immunoradiometric assays". Clinical Chemistry. 20 (10): 1255–70. doi:10.1093/clinchem/20.10.1255. PMID   4370388.
  17. Eisenberg, Dan T. A. (30 August 2016). "Telomere length measurement validity: the coefficient of variation is invalid and cannot be used to compare quantitative polymerase chain reaction and Southern blot telomere length measurement technique". International Journal of Epidemiology. 45 (4): 1295–1298. doi: 10.1093/ije/dyw191 . ISSN   0300-5771. PMID   27581804.
  18. Champernowne, D. G.; Cowell, F. A. (1999). Economic Inequality and Income Distribution. Cambridge University Press.
  19. Campano, F.; Salvatore, D. (2006). Income distribution. Oxford University Press.
  20. Bellu, Lorenzo Giovanni; Liberati, Paolo (2006). "Policy Impacts on Inequality – Simple Inequality Measures" (PDF). EASYPol, Analytical tools. Policy Support Service, Policy Assistance Division, FAO. Archived (PDF) from the original on 5 August 2016. Retrieved 13 June 2016.
  21. Eerkens, Jelmer W.; Bettinger, Robert L. (July 2001). "Techniques for Assessing Standardization in Artifact Assemblages: Can We Scale Material Variability?". American Antiquity. 66 (3): 493–504. doi:10.2307/2694247. JSTOR   2694247.
  22. Roux, Valentine (2003). "Ceramic Standardization and Intensity of Production: Quantifying Degrees of Specialization". American Antiquity. 68 (4): 768–782. doi:10.2307/3557072. ISSN   0002-7316. JSTOR   3557072.
  23. Bettinger, Robert L.; Eerkens, Jelmer (April 1999). "Point Typologies, Cultural Transmission, and the Spread of Bow-and-Arrow Technology in the Prehistoric Great Basin". American Antiquity. 64 (2): 231–242. doi:10.2307/2694276. JSTOR   2694276.
  24. Wang, Li-Ying; Marwick, Ben (October 2020). "Standardization of ceramic shape: A case study of Iron Age pottery from northeastern Taiwan". Journal of Archaeological Science: Reports. 33: 102554. doi:10.1016/j.jasrep.2020.102554.
  25. Krishnamoorthy, K.; Lee, Meesook (February 2014). "Improved tests for the equality of normal coefficients of variation". Computational Statistics. 29 (1–2): 215–232. doi:10.1007/s00180-013-0445-2.
  26. Marwick, Ben; Krishnamoorthy, K (2019). cvequality: Tests for the equality of coefficients of variation from multiple groups. R package version 0.2.0.
  27. Hendricks, Walter A.; Robey, Kate W. (1936). "The Sampling Distribution of the Coefficient of Variation". The Annals of Mathematical Statistics. 7 (3): 129–32. doi: 10.1214/aoms/1177732503 . JSTOR   2957564.
  28. Iglewicz, Boris; Myers, Raymond (1970). "Comparisons of approximations to the percentage points of the sample coefficient of variation". Technometrics. 12 (1): 166–169. doi:10.2307/1267363. JSTOR 1267363.
  29. Bennett, B. M. (1976). "On an approximate test for homogeneity of coefficients of variation". Contributions to Applied Statistics Dedicated to A. Linder. Experientia Supplementum. 22: 169–171. doi:10.1007/978-3-0348-5513-6_16. ISBN   978-3-0348-5515-0.
  30. Vangel, Mark G. (1996). "Confidence intervals for a normal coefficient of variation". The American Statistician. 50 (1): 21–26. doi:10.1080/00031305.1996.10473537. JSTOR 2685039.
  31. Feltz, Carol J; Miller, G. Edward (1996). "An asymptotic test for the equality of coefficients of variation from k populations". Statistics in Medicine. 15 (6): 647. doi:10.1002/(SICI)1097-0258(19960330)15:6<647::AID-SIM184>3.0.CO;2-P.
  32. Forkman, Johannes (2009). "Estimator and tests for common coefficients of variation in normal distributions" (PDF). Communications in Statistics – Theory and Methods. 38 (2): 21–26. doi:10.1080/03610920802187448. Archived (PDF) from the original on 6 December 2013. Retrieved 23 September 2013.
  33. Krishnamoorthy, K; Lee, Meesook (2013). "Improved tests for the equality of normal coefficients of variation". Computational Statistics. 29 (1–2): 215–232. doi:10.1007/s00180-013-0445-2.
  34. Liu, Shuang (2012). Confidence Interval Estimation for Coefficient of Variation (Thesis). Georgia State University. p. 3. Archived from the original on 1 March 2014. Retrieved 25 February 2014.
  35. Lehmann, E. L. (1986). Testing Statistical Hypothesis. 2nd ed. New York: Wiley.