In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$ (or its absolute value, $|\mu|$), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R, by economists and investors in economic models, and in psychology/neuroscience.
The coefficient of variation (CV) is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$: [1]
$$c_{\rm v} = \frac{\sigma}{\mu}$$
It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale. [2] For example, most temperature scales (e.g., Celsius, Fahrenheit, etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on the scale used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero. While a standard deviation (SD) can be measured in Kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.
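As a concrete illustration, here is a minimal Python/NumPy sketch of the basic definition, the ratio of the standard deviation to the mean; the data values and variable names are made up purely for illustration:

```python
# Minimal sketch: CV = standard deviation / mean, for ratio-scale data.
# The measurements below are hypothetical.
import numpy as np

x = np.array([2.1, 2.4, 1.9, 2.2, 2.0])   # illustrative ratio-scale measurements

cv = np.std(x, ddof=0) / np.mean(x)        # population form: sigma / mu
print(f"CV   = {cv:.3f}")
print(f"%RSD = {100 * cv:.1f}%")
```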
Measurements that are log-normally distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements.
A more robust possibility is the quartile coefficient of dispersion, half the interquartile range divided by the average of the quartiles (the midhinge): $\frac{(Q_3 - Q_1)/2}{(Q_1 + Q_3)/2}$.
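A short sketch of the quartile coefficient of dispersion just defined, compared against the ordinary CV on illustrative data containing one outlying value (all numbers are made up):

```python
# Quartile coefficient of dispersion: half the IQR divided by the midhinge.
import numpy as np

x = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 8.5])     # note one outlying value

q1, q3 = np.percentile(x, [25, 75])
qcd = ((q3 - q1) / 2) / ((q1 + q3) / 2)           # equivalently (q3 - q1) / (q3 + q1)
cv = np.std(x, ddof=0) / np.mean(x)

print(f"quartile coefficient of dispersion = {qcd:.3f}")
print(f"coefficient of variation           = {cv:.3f}")
```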
In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a maximum-likelihood estimation approach. [3]
In worked examples, the values given may be treated either as randomly chosen from a larger population of values or as the entire population of values; the appropriate estimator differs between the two cases.
When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation $s$ to the sample mean $\bar{x}$:
$$\widehat{c_{\rm v}} = \frac{s}{\bar{x}}$$
But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator [4] for a sample of size $n$ is:
$$\widehat{c_{\rm v}}^{\,*} = \left(1 + \frac{1}{4n}\right)\widehat{c_{\rm v}}$$
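The following Python/NumPy sketch computes the plain sample estimate and the bias-corrected version described above; the sample values are invented for illustration:

```python
# Sample CV and the small-sample bias correction (1 + 1/(4n)) for normal data.
import numpy as np

x = np.array([9.8, 10.1, 10.4, 9.7, 10.0, 10.2])  # hypothetical sample
n = len(x)

cv_hat = np.std(x, ddof=1) / np.mean(x)       # sample SD (ddof=1) over sample mean
cv_unbiased = (1 + 1 / (4 * n)) * cv_hat      # bias-corrected estimate

print(f"cv_hat = {cv_hat:.4f}, bias-corrected = {cv_unbiased:.4f}")
```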
Many datasets follow an approximately log-normal distribution. [5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution, [6] [7] [8] is defined as:
$$\widehat{c_{\rm v,raw}} = \sqrt{e^{s_{\ln}^2} - 1}$$
where $s_{\ln}$ is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, $b$, their standard deviation $s_b$ is converted to base $e$ using $s_{\ln} = s_b \ln(b)$, and the formula for $\widehat{c_{\rm v,raw}}$ remains the same. [9] ) This estimate is sometimes referred to as the "geometric CV" (GCV) [10] [11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood [12] as:
$$\mathrm{GCV_K} = e^{s_{\ln}} - 1$$
This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of $c_{\rm v}$ itself.
For many practical purposes (such as sample size determination and calculation of confidence intervals) it is $s_{\ln}$ which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of $c_{\rm v}$ or GCV by inverting the corresponding formula.
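A brief sketch of the two log-normal quantities discussed above, using NumPy on invented, strictly positive measurements:

```python
# Log-normal-based CV estimate sqrt(exp(s_ln^2) - 1) vs. Kirkwood's GCV exp(s_ln) - 1.
import numpy as np

x = np.array([1.2, 0.8, 2.5, 1.7, 3.1, 0.9])   # strictly positive, illustrative data
s_ln = np.std(np.log(x), ddof=1)               # sample SD of the natural logs

cv_lognormal = np.sqrt(np.exp(s_ln**2) - 1)    # estimate of the CV itself
gcv_kirkwood = np.exp(s_ln) - 1                # Kirkwood's GCV (not an estimate of the CV)

print(f"log-normal CV estimate = {cv_lognormal:.3f}")
print(f"Kirkwood GCV           = {gcv_kirkwood:.3f}")
```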
The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.
The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD): essentially, the CV(RMSD) replaces the standard deviation term with the root-mean-square deviation (RMSD). While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.
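A minimal sketch of the CV(RMSD) idea mentioned above, assuming it is computed as the RMSD between modelled and observed values divided by the mean of the observations; both series are made up for illustration:

```python
# CV(RMSD): RMSD between predictions and observations, normalized by the observed mean.
import numpy as np

observed  = np.array([10.0, 12.5, 11.0, 13.2, 12.1])   # illustrative observations
predicted = np.array([ 9.6, 12.9, 11.4, 12.8, 12.5])   # illustrative model output

rmsd = np.sqrt(np.mean((predicted - observed) ** 2))
cv_rmsd = rmsd / np.mean(observed)

print(f"CV(RMSD) = {100 * cv_rmsd:.1f}%")
```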
In actuarial science, the CV is known as unitized risk. [13]
In industrial solids processing, the CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification allows one to determine whether a sufficient degree of mixing has been reached. [14]
In fluid dynamics, the CV, also referred to as Percent RMS, %RMS, %RMS Uniformity, or Velocity RMS, is a useful determination of flow uniformity for industrial processes. The term is used widely in the design of pollution control equipment, such as electrostatic precipitators (ESPs), [15] selective catalytic reduction (SCR), scrubbers, and similar devices. The Institute of Clean Air Companies (ICAC) references RMS deviation of velocity in the design of fabric filters (ICAC document F-7). [16] The guiding principle is that many of these pollution control devices require "uniform flow" entering and through the control zone. This can be related to uniformity of velocity profile, temperature distribution, gas species (such as ammonia for an SCR, or activated carbon injection for mercury absorption), and other flow-related parameters. The Percent RMS is also used to assess flow uniformity in combustion systems, HVAC systems, ductwork, inlets to fans and filters, air handling units, etc., where performance of the equipment is influenced by the incoming flow distribution.
CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required. [17] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples; in this case the standard error in percent is suggested to be superior. [18] If measurements do not have a natural zero point, then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended. [19]
The coefficient of variation fulfills the requirements for a measure of economic inequality. [20] [21] [22] If $x$ (with entries $x_i$) is a list of the values of an economic indicator (e.g. wealth), with $x_i$ being the wealth of agent $i$, then the following requirements are met:
$c_{\rm v}$ assumes its minimum value of zero for complete equality (all $x_i$ are equal). [22] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the Gini coefficient, which is constrained to be between 0 and 1). [22] It is, however, more mathematically tractable than the Gini coefficient.
Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts. [23] [24] Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies. [25] Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation. [26] Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs. [27] [28]
Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative, interval-scale units, whose associated absolute scales are the kelvin and Rankine scales):
Celsius: [0, 10, 20, 30, 40]
Fahrenheit: [32, 50, 68, 86, 104]
The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%.
If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use the CV. The problem here is that you have divided by a relative value rather than an absolute one.
Comparing the same data set, now in absolute units:
Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]
Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]
The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%.
Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable $X$, the coefficient of variation of $aX + b$ is equal to the coefficient of variation of $X$ only when $b = 0$. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form $ax + b$ with $b \neq 0$, whereas Kelvins can be converted to Rankines through a transformation of the form $ax$.
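The temperature example above can be reproduced with a few lines of Python/NumPy (the conversion formulas are standard; the printout shows the CVs disagreeing for the interval scales and agreeing for the absolute scales):

```python
# CV disagrees between Celsius and Fahrenheit (b != 0) but agrees between
# Kelvin and Rankine (b = 0), matching the worked example above.
import numpy as np

celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = celsius * 9 / 5 + 32          # offset b = 32, CV not preserved
kelvin = celsius + 273.15
rankine = kelvin * 9 / 5                   # pure rescaling, CV preserved

for name, t in [("Celsius", celsius), ("Fahrenheit", fahrenheit),
                ("Kelvin", kelvin), ("Rankine", rankine)]:
    cv = np.std(t, ddof=1) / np.mean(t)    # sample SD, as in the text
    print(f"{name:10s} CV = {100 * cv:.2f}%")
```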
Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size $n$ of i.i.d. normal random variables has been shown by Hendricks and Robey, [29]
where the symbol $\sum'$ indicates that the summation is over only even values of $n - 1 - i$, i.e., if $n$ is odd, sum over even values of $i$, and if $n$ is even, sum only over odd values of $i$.
This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation. [30] [31] [32] [33] [34] [35]
Liu (2012) reviews methods for the construction of a confidence interval for the coefficient of variation. [36] Notably, Lehmann (1986) derived the sampling distribution for the coefficient of variation using a non-central t-distribution to give an exact method for the construction of the CI. [37]
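For illustration only, here is a generic percentile-bootstrap confidence interval for the CV in Python/NumPy; this is a simple resampling sketch, not McKay's approximation or Lehmann's exact method, and the simulated data are arbitrary:

```python
# Percentile-bootstrap 95% CI for the CV of a simulated normal sample.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=5.0, size=40)          # simulated sample

def cv(sample):
    return np.std(sample, ddof=1) / np.mean(sample)

boot = np.array([cv(rng.choice(x, size=len(x), replace=True))
                 for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"CV estimate = {cv(x):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```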
Standardized moments are similar ratios, $\frac{\mu_k}{\sigma^k}$, where $\mu_k$ is the $k$th moment about the mean, which are also dimensionless and scale invariant. The variance-to-mean ratio, $\sigma^2/\mu$, is another similar ratio, but is not dimensionless, and hence not scale invariant. See Normalization (statistics) for further ratios.
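A small sketch of the scale behaviour noted above: on made-up data, the CV and a standardized moment (skewness, computed directly as the third moment about the mean over $\sigma^3$) are unchanged under rescaling, while the variance-to-mean ratio is not:

```python
# Scale invariance check: CV and skewness are unchanged when data are rescaled;
# the variance-to-mean ratio (VMR) is not.
import numpy as np

x = np.array([1.0, 2.0, 2.5, 4.0, 8.0])   # illustrative data
y = 10 * x                                 # pure rescaling

for data in (x, y):
    m, s = np.mean(data), np.std(data, ddof=0)
    cv = s / m
    skewness = np.mean((data - m) ** 3) / s**3   # third standardized moment
    vmr = np.var(data, ddof=0) / m
    print(f"CV={cv:.3f}  skew={skewness:.3f}  VMR={vmr:.3f}")
```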
In signal processing, particularly image processing, the reciprocal ratio $\mu/\sigma$ (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular.
Other related ratios include:
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
The parameter $\mu$ is the mean or expectation of the distribution, while the parameter $\sigma^2$ is the variance. The standard deviation of the distribution is $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in the determination of what constitutes an outlier and what does not.
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, each of which clusters around a mean value.
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the so-called demand curve.
In probability theory and statistics, a standardized moment of a probability distribution is a moment that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant. The shape of different probability distributions can be compared using standardized moments.
In statistics, the Pearson correlation coefficient (PCC) is a correlation coefficient that measures linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations. As a simple example, one would expect the age and height of a sample of children from a primary school to have a Pearson correlation coefficient significantly greater than 0, but less than 1.
In statistics, the standard score is the number of standard deviations by which the value of a raw score is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores.
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event happening. Effect sizes are a complementary tool for statistical hypothesis testing, and play an important role in power analyses to assess the sample size required for new experiments. Effect sizes are fundamental in meta-analyses, which aim to provide the combined effect size based on data from multiple studies. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
In statistics, a multimodal distribution is a probability distribution with more than one mode. These appear as distinct peaks in the probability density function. Categorical, continuous, and discrete data can all form multimodal distributions. Among univariate analyses, multimodal distributions are commonly bimodal.
The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected.
In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters such that the function's probability distribution does not depend on the unknown parameters. A pivot need not be a statistic — the function and its value can depend on the parameters of the model, but its distribution must not. If it is a statistic, then it is known as an ancillary statistic.
In statistics, the bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased.
In statistics and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use of significance tests and confidence intervals, or by using Bayesian analysis.
In probability theory and statistics, the index of dispersion, dispersion index, coefficient of dispersion, relative variance, or variance-to-mean ratio (VMR), like the coefficient of variation, is a normalized measure of the dispersion of a probability distribution: it is a measure used to quantify whether a set of observed occurrences are clustered or dispersed compared to a standard statistical model.
Experimental uncertainty analysis is a technique that analyses a derived quantity, based on the uncertainties in the experimentally measured quantities that are used in some form of mathematical relationship ("model") to calculate that derived quantity. The model used to convert the measurements into the derived quantity is usually based on fundamental principles of a science or engineering discipline.
In applied statistics, a variance-stabilizing transformation is a data transformation that is specifically chosen either to simplify considerations in graphical exploratory data analysis or to allow the application of simple regression-based or analysis of variance techniques.
In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values. It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by a scale shift; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful than the usual measures of skewness in detecting departures of the population from normality.