Normalization (statistics)


In statistics and applications of statistics, normalization can have a range of meanings.[1] In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment.
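As a brief sketch of the last of these ideas, quantile normalization can be carried out by replacing each value with the rank-wise mean of the sorted columns. The helper below is illustrative and assumes NumPy; it is not the API of any particular library, and ties are handled naively.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns of a 2-D array so that every
    column ends up with the same empirical distribution."""
    X = np.asarray(X, dtype=float)
    sorted_cols = np.sort(X, axis=0)           # sort each column independently
    reference = sorted_cols.mean(axis=1)       # rank-wise mean = common reference distribution
    ranks = X.argsort(axis=0).argsort(axis=0)  # rank of each value within its column
    return reference[ranks]                    # assign the reference value for each rank

measurements = np.array([[5.0, 4.0, 3.0],
                         [2.0, 1.0, 4.0],
                         [3.0, 4.0, 6.0],
                         [4.0, 2.0, 8.0]])
print(quantile_normalize(measurements))  # every column now shares the same quantiles
```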


In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios).

In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters.
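For example, for an independent and identically distributed sample $X_1, \ldots, X_n$ from a normal distribution with unknown mean $\mu$ and variance $\sigma^2$, with sample mean $\bar{X}$ and sample standard deviation $S$, the normalized quantity

```latex
T = \frac{\bar{X} - \mu}{S / \sqrt{n}}
```

has a Student's t-distribution with $n - 1$ degrees of freedom whatever the values of $\mu$ and $\sigma$, which is what makes it a pivotal quantity.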

History

Standard score (Z-score)

The concept of normalization emerged alongside the study of the normal distribution by Abraham de Moivre, Pierre-Simon Laplace, and Carl Friedrich Gauss in the 18th and 19th centuries. Because “standard” refers to the particular normal distribution with expectation zero and standard deviation one – the standard normal distribution – normalization, in this case called “standardization”, came to refer to the rescaling of any distribution or data set to have mean zero and standard deviation one.[2]

While the study of the normal distribution structured the process of standardization, the result of that process, also known as the Z-score – the difference between a sample value and the population mean divided by the population standard deviation, which measures how many standard deviations a value lies from its population mean [3] – was not formalized and popularized until Ronald Fisher and Karl Pearson elaborated the concept as part of the broader framework of statistical inference and hypothesis testing in the early 20th century.[4][5]
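In symbols, the Z-score of a value $x$ from a population with mean $\mu$ and standard deviation $\sigma$ is

```latex
z = \frac{x - \mu}{\sigma}
```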

Student’s t-statistic

William Sealy Gosset initiated the adjustment of the normal distribution and the standard score for small sample sizes. Educated in chemistry and mathematics at Winchester and Oxford, Gosset was employed by Guinness Brewery, then the largest brewer in Ireland, and was tasked with precise quality control. Through small-sample experiments, Gosset discovered that the distribution of means computed from small samples deviated slightly from the distribution of means computed from large samples – the normal distribution – and appeared “taller and narrower” in comparison.[6] This finding was first described in a Guinness internal report titled The application of the “Law of Error” to the work of the brewery and was sent to Karl Pearson for further discussion, which later yielded a formal publication, The probable error of a mean, in 1908.[7] Under Guinness Brewery’s confidentiality restrictions, Gosset published the paper under the pseudonym “Student”. Gosset’s work was later refined by Ronald Fisher into the form used today [8], and the names “Student’s t-distribution” – referring to the adjusted distribution Gosset proposed – and “Student’s t-statistic” – referring to the test statistic that measures the departure of the estimated value of a parameter from its hypothesized value divided by its standard error – were popularized through Fisher’s paper Applications of “Student’s” distribution.[6]
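In its common one-sample form, with sample mean $\bar{x}$, sample standard deviation $s$, sample size $n$, and hypothesized population mean $\mu_0$ (so that $s/\sqrt{n}$ is the standard error of the mean), the statistic reads

```latex
t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}
```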

Feature Scaling

The rise of computers and multivariate statistics in the mid-20th century necessitated normalization to process data measured in different units, giving rise to feature scaling – a method used to rescale data to a fixed range – such as min-max scaling and robust scaling. This modern normalization process, especially targeting large-scale data, became more formalized in fields including machine learning, pattern recognition, and neural networks in the late 20th century.[9][10]
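A minimal sketch of these two feature-scaling methods, assuming plain NumPy; the function names are illustrative rather than the API of any particular library:

```python
import numpy as np

def min_max_scale(x, new_min=0.0, new_max=1.0):
    """Rescale the values in x linearly to the range [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return new_min + (x - x_min) * (new_max - new_min) / (x_max - x_min)

def robust_scale(x):
    """Centre on the median and scale by the interquartile range,
    which is less sensitive to outliers than the mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)

data = np.array([1.0, 5.0, 10.0, 100.0])
print(min_max_scale(data))  # values now lie in [0, 1]
print(robust_scale(data))   # outlier at 100 has limited influence on the scale
```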

Batch Normalization

Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training in neural networks.[11]
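A simplified sketch of the batch-normalization transform at training time, omitting the running statistics kept for inference and any framework-specific details; the function and variable names are illustrative:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Simplified batch-normalization forward pass at training time.

    x     : mini-batch of activations, shape (batch_size, num_features)
    gamma : learnable per-feature scale, shape (num_features,)
    beta  : learnable per-feature shift, shape (num_features,)
    """
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize each feature over the batch
    return gamma * x_hat + beta              # learned scale and shift restore expressiveness

batch = np.random.randn(32, 4) * 10 + 3      # activations with arbitrary scale and offset
out = batch_norm_forward(batch, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))     # approximately 0 and 1 per feature
```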

Examples

There are different types of normalizations in statistics – nondimensional ratios of errors, residuals, means and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). See also Category:Statistical ratios.

Name | Formula | Use
Standard score | $\frac{X - \mu}{\sigma}$ | Normalizing errors when population parameters are known. Works well for populations that are normally distributed.[12]
Student's t-statistic | $\frac{\hat{\theta} - \theta_0}{\operatorname{s.e.}(\hat{\theta})}$ | The departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error.
Studentized residual | $\frac{\hat{\varepsilon}_i}{\hat{\sigma}_i} = \frac{X_i - \hat{X}_i}{\hat{\sigma}_i}$ | Normalizing residuals when parameters are estimated, particularly across different data points in regression analysis.
Standardized moment | $\frac{\mu_k}{\sigma^k}$ | Normalizing moments, using the standard deviation $\sigma$ as a measure of scale.
Coefficient of variation | $\frac{\sigma}{\mu}$ | Normalizing dispersion, using the mean $\mu$ as a measure of scale, particularly for positive distributions such as the exponential distribution and Poisson distribution.
Min-max feature scaling | $X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$ | Feature scaling used to bring all values into the range [0, 1]; also called unity-based normalization. This can be generalized to restrict the range of values to arbitrary points $a$ and $b$, using for example $X' = a + \frac{(X - X_{\min})(b - a)}{X_{\max} - X_{\min}}$.
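As a small numerical illustration of a few of the ratios above, a sketch on a toy dataset, not tied to any particular library:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mu = data.mean()    # population mean (5.0)
sigma = data.std()  # population standard deviation (2.0)

z_scores = (data - mu) / sigma                              # standard scores
coeff_of_variation = sigma / mu                             # dispersion relative to the mean
std_third_moment = ((data - mu) ** 3).mean() / sigma ** 3   # standardized moment of order 3

print(z_scores)            # [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
print(coeff_of_variation)  # 0.4
print(std_third_moment)    # ~0.656
```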

Note that some other ratios, such as the variance-to-mean ratio $\frac{\sigma^2}{\mu}$, are also used for normalization, but are not nondimensional: the units do not cancel, so the ratio has units and is not scale-invariant.

Other types

Other non-dimensional normalizations that can be used with no assumptions on the distribution include:


References

  1. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9 (entry for normalization of scores).
  2. Stigler, Stephen M. (2002). Statistics on the table: the history of statistical concepts and methods (3rd printing ed.). Cambridge, Mass.: Harvard Univ. Press. ISBN 978-0-674-00979-0.
  3. Lang, Niklas (August 23, 2023). "What is the Z-score? | Data Basecamp". Retrieved March 13, 2025.
  4. Fisher, R. A. (January 1, 2017). Statistical Methods For Research Workers. Gyan Books. ISBN 978-9351286585.
  5. Pearson, Karl (November 1, 1901). "LIII. On lines and planes of closest fit to systems of points in space". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 2 (11): 559–572. doi:10.1080/14786440109462720. ISSN 1941-5982.
  6. Brown, Angus (2008). "The strange origins of the Student's t-test". Physiology News: 13–16. doi:10.36866/pn.71.13. Retrieved March 13, 2025.
  7. Student (1908). "The Probable Error of a Mean". Biometrika. 6 (1): 1–25. doi:10.2307/2331554. ISSN 0006-3444. JSTOR 2331554.
  8. Rohlf, F. James; Sokal, Robert R. (2012). Statistical tables (4th ed.). New York, N.Y.: Freeman. ISBN 978-1-4292-4031-4.
  9. Duda, Richard O.; Hart, Peter E.; Stork, David G. (2001). Pattern classification (2nd ed.). New York: Wiley. ISBN 978-0-471-05669-0.
  10. Bishop, Christopher M. (2006). Pattern recognition and machine learning. Information science and statistics. New York: Springer. ISBN 978-0-387-31073-2.
  11. Ioffe, Sergey; Szegedy, Christian (March 2, 2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167. Retrieved March 13, 2025.
  12. Freedman, David; Pisani, Robert; Purves, Roger (February 20, 2007). Statistics: Fourth International Student Edition. W.W. Norton & Company. ISBN 9780393930436.