In statistics and applications of statistics, normalization can have a range of meanings.[1] In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment.
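For instance, quantile normalization can be sketched as follows; this is a minimal NumPy illustration, assuming equal-length measurement sets arranged as the columns of a matrix, where each value is replaced by the mean, across sets, of the values sharing its rank:

```python
import numpy as np

def quantile_normalize(data):
    # data: 2D array with one measurement set per column (equal lengths assumed).
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)  # rank of each entry within its column
    reference = np.sort(data, axis=0).mean(axis=1)        # mean value at each rank across columns
    return reference[ranks]                               # map each entry to its rank's reference value
```

After this transformation, every column has exactly the same empirical distribution.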
In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios).
In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters.
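A standard example: for an i.i.d. normal sample $X_1, \dots, X_n$ with unknown mean $\mu$ and variance $\sigma^2$, the quantity

$$t = \frac{\bar{X} - \mu}{s/\sqrt{n}}$$

is pivotal, since it follows a Student's t-distribution with $n - 1$ degrees of freedom whatever the values of $\mu$ and $\sigma$; it is not ancillary, however, because computing it requires knowing $\mu$.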
The concept of normalization emerged alongside the study of the normal distribution by Abraham de Moivre, Pierre-Simon Laplace, and Carl Friedrich Gauss in the 18th and 19th centuries. Because “standard” refers to the particular normal distribution with expectation zero and standard deviation one, that is, the standard normal distribution, normalization in this sense, called “standardization”, came to refer to the rescaling of any distribution or data set to have mean zero and standard deviation one.[2]
While the study of the normal distribution structured the process of standardization, the result of that process, the Z-score, was not formalized and popularized until Ronald Fisher and Karl Pearson developed it as part of the broader framework of statistical inference and hypothesis testing in the early 20th century.[4][5] The Z-score of a value is the difference between the value and the population mean, divided by the population standard deviation; it measures the number of standard deviations by which a value departs from its population mean.[3]
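In symbols, the Z-score of a value $x$ from a population with mean $\mu$ and standard deviation $\sigma$ is

$$z = \frac{x - \mu}{\sigma}.$$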
William Sealy Gosset initiated the adjustment of the normal distribution and the standard score for small sample sizes. Educated in chemistry and mathematics at Winchester and Oxford, Gosset was employed by Guinness Brewery, then the largest brewer in Ireland, and was tasked with precise quality control. Through small-sample experiments, Gosset discovered that the distribution of means of small samples deviated slightly from the distribution of means of large samples – the normal distribution – and appeared “taller and narrower” in comparison.[6] This finding was first written up in a Guinness internal report titled The application of the “Law of Error” to the work of the brewery and was sent to Karl Pearson for further discussion, which later yielded a formal publication, The probable error of a mean, in 1908.[7] Under Guinness Brewery’s confidentiality restrictions, Gosset published the paper under the pseudonym “Student”. Gosset’s work was later refined by Ronald Fisher into the form used today,[8] and was popularized, together with the names “Student’s t-distribution” for the adjusted distribution Gosset proposed and “Student’s t-statistic” for the test statistic measuring the departure of the estimated value of a parameter from its hypothesized value divided by its standard error, through Fisher’s publication Applications of “Student’s” distribution.[6]
The rise of computers and multivariate statistics in the mid-20th century necessitated normalization to process data with different units, giving rise to feature scaling, a method used to rescale data to a fixed range, with variants such as min-max scaling and robust scaling. This modern normalization process, especially as applied to large-scale data, became more formalized in fields including machine learning, pattern recognition, and neural networks in the late 20th century.[9][10]
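As a minimal sketch of the two feature-scaling methods just mentioned, assuming their conventional definitions (min-max scaling maps values onto [0, 1]; robust scaling centers on the median and divides by the interquartile range):

```python
import numpy as np

def min_max_scale(x):
    # Map values linearly onto [0, 1]; sensitive to extreme values,
    # since the minimum and maximum define the scale.
    return (x - x.min()) / (x.max() - x.min())

def robust_scale(x):
    # Center on the median and divide by the interquartile range,
    # so that outliers have limited influence on the rescaled values.
    q1, median, q3 = np.percentile(x, [25, 50, 75])
    return (x - median) / (q3 - q1)
```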
Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training in neural networks.[11]
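A minimal sketch of the batch normalization transform, assuming the conventional formulation: each feature is standardized over the mini-batch and then scaled and shifted by learned parameters, usually denoted γ and β:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: mini-batch of activations, shape (batch_size, num_features).
    # Standardize each feature over the batch, then apply the learned
    # scale (gamma) and shift (beta); eps guards against division by zero.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```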
There are different types of normalization in statistics – nondimensional ratios of errors, residuals, means, and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). See also Category:Statistical ratios.
Name | Formula | Use |
---|---|---|
Standard score | $\frac{X - \mu}{\sigma}$ | Normalizing errors when population parameters are known. Works well for populations that are normally distributed [12] |
Student's t-statistic | $t = \frac{\hat\theta - \theta_0}{s.e.(\hat\theta)}$ | The departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error. |
Studentized residual | $\frac{\hat\varepsilon_i}{\hat\sigma_i} = \frac{X_i - \hat\mu_i}{\hat\sigma_i}$ | Normalizing residuals when parameters are estimated, particularly across different data points in regression analysis. |
Standardized moment | $\frac{\mu_k}{\sigma^k}$ | Normalizing moments, using the standard deviation $\sigma$ as a measure of scale. |
Coefficient of variation | $\frac{\sigma}{\mu}$ | Normalizing dispersion, using the mean $\mu$ as a measure of scale, particularly for positive distributions such as the exponential distribution and Poisson distribution. |
Min-max feature scaling | $X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$ | Feature scaling used to bring all values into the range $[0, 1]$. This is also called unity-based normalization. It can be generalized to restrict the range of values to arbitrary points $a$ and $b$ using $X' = a + \frac{(X - X_{\min})(b - a)}{X_{\max} - X_{\min}}$. |
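As a concrete illustration of three of the ratios above, consider a toy data set chosen so that the population standard deviation comes out to a round number:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # mean 5, population std 2

# Standard score: number of standard deviations each value lies from the mean.
z = (x - x.mean()) / x.std()          # first entry: (2 - 5) / 2 = -1.5

# Coefficient of variation: dispersion relative to the mean.
cv = x.std() / x.mean()               # 2 / 5 = 0.4

# Min-max feature scaling generalized to an arbitrary range [a, b].
a, b = -1.0, 1.0
x_scaled = a + (x - x.min()) * (b - a) / (x.max() - x.min())
```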
Note that some other ratios, such as the variance-to-mean ratio $\frac{\sigma^2}{\mu}$, are also used for normalization but are not nondimensional: the units do not cancel, so the ratio has units and is not scale-invariant.
Other non-dimensional normalizations that can be used with no assumptions on the distribution include: