Histogram

[Figure: Histogram of arrivals per minute.]

One of the seven basic tools of quality, first described by Karl Pearson, the histogram's purpose is to roughly assess the probability distribution of a given variable by depicting the frequencies of observations occurring in certain ranges of values.

A histogram is an accurate representation of the distribution of numerical data. It is an estimate of the probability distribution of a continuous variable (quantitative variable) and was first introduced by Karl Pearson.[1] It differs from a bar graph in the sense that a bar graph relates two variables, whereas a histogram relates only one. To construct a histogram, the first step is to "bin" (or "bucket") the range of values—that is, divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent, and are often (but are not required to be) of equal size.[2]
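
For illustration, this construction can be carried out directly. The following is a minimal Python sketch; the data values and bin edges are arbitrary examples, not taken from this article:

    # Construct a histogram by hand: "bin" the range of values, then
    # count how many observations fall into each bin.
    data = [23, 45, 12, 67, 31, 54, 7, 89, 41, 33]
    edges = [0, 25, 50, 75, 100]   # consecutive, non-overlapping intervals

    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            # Each bin is half-open [edges[i], edges[i+1]), except the
            # last bin, which also includes the right endpoint.
            last = i == len(edges) - 2
            if edges[i] <= x < edges[i + 1] or (last and x == edges[-1]):
                counts[i] += 1
                break

    print(counts)  # [3, 4, 2, 1]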

If the bins are of equal size, a rectangle is erected over the bin with height proportional to the frequency—the number of cases in each bin. A histogram may also be normalized to display "relative" frequencies. It then shows the proportion of cases that fall into each of several categories, with the sum of the heights equaling 1.

However, bins need not be of equal width; in that case, the erected rectangle is defined to have its area proportional to the frequency of cases in the bin.[3] The vertical axis is then not the frequency but frequency density—the number of cases per unit of the variable on the horizontal axis. Examples of variable bin width are displayed on the Census Bureau data below.
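
As a sketch of this area-proportional convention, the following Python snippet recomputes the frequency densities for the last three rows of the census table shown later in this article (widths of 15, 30 and 60 minutes):

    # Frequency density for unequal bin widths: the height of each
    # rectangle is count / width, so its area is proportional to the count.
    counts = [9200, 6461, 3435]   # quantities (in thousands) from the table
    edges = [45, 60, 90, 150]     # bin boundaries in minutes (widths 15, 30, 60)

    for i, q in enumerate(counts):
        width = edges[i + 1] - edges[i]
        print(f"[{edges[i]}, {edges[i + 1]}): Q/width = {q / width:.0f}")
    # Prints 613, 215 and 57, matching the table's Quantity/width column.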

As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.[4]

Histograms give a rough sense of the density of the underlying distribution of the data, and are often used for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.

A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications when their statistical properties need to be modeled: the correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently.
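
This relationship can be sketched in Python (assuming NumPy and SciPy are available); both estimates are computed from the same sample:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    sample = rng.normal(size=200)

    # Histogram estimate: a piecewise-constant density over the bins.
    hist, edges = np.histogram(sample, bins=15, density=True)

    # Kernel density estimate: a smooth curve evaluated on a grid,
    # which could be plotted in place of the histogram's boxes.
    kde = gaussian_kde(sample)
    grid = np.linspace(sample.min(), sample.max(), 100)
    smooth = kde(grid)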

An alternative to kernel density estimation is the average shifted histogram,[5] which is fast to compute and gives a smooth curve estimate of the density without using kernels.

The histogram is one of the seven basic tools of quality control.[6]

Histograms are sometimes confused with bar charts. A histogram is used for continuous data, where the bins represent ranges of data, while a bar chart is a plot of categorical variables. Some authors recommend that bar charts have gaps between the rectangles to clarify the distinction.[7]

Etymology

[Figure: An example histogram of the heights of 31 Black Cherry trees.]

The etymology of the word histogram is uncertain. Sometimes it is said to be derived from the Ancient Greek ἱστός (histos) – "anything set upright" (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram); and γράμμα (gramma) – "drawing, record, writing". It is also said that Karl Pearson, who introduced the term in 1891, derived the name from "historical diagram".[8]

Examples

This is the data for the example histogram shown below:

[Figure: Example histogram.]

Bin          Count
0 to 100     10
100 to 200   15
200 to 300   21
300 to 400   45
400 to 500   35
500 to 600   14

The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "right", "unimodal", "bimodal" or "multimodal".

It is a good idea to plot the data using several different bin widths to learn more about it; a common illustration uses data on tips given in a restaurant.

The U.S. Census Bureau found that there were 124 million people who work outside of their homes.[9] Using their data on the time occupied by travel to work, the table below shows that the absolute number of people who responded with travel times "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time.[citation needed] The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.[citation needed]

[Figure: Histogram of travel time (to work), US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.]
Data by absolute numbers

Interval (min)   Width   Quantity (thousands)   Quantity/width
0                5       4180                   836
5                5       13687                  2737
10               5       18618                  3723
15               5       19634                  3926
20               5       17981                  3596
25               5       7190                   1438
30               5       16369                  3273
35               5       3212                   642
40               5       4122                   824
45               15      9200                   613
60               30      6461                   215
90               60      3435                   57

This histogram shows the number of cases per unit interval as the height of each block, so that the area of each block is equal to the number of people in the survey who fall into its category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands.

Data by proportion

Interval (min)   Width   Quantity (Q, thousands)   Q/total/width
0                5       4180                      0.0067
5                5       13687                     0.0221
10               5       18618                     0.0300
15               5       19634                     0.0316
20               5       17981                     0.0290
25               5       7190                      0.0116
30               5       16369                     0.0264
35               5       3212                      0.0052
40               5       4122                      0.0066
45               15      9200                      0.0049
60               30      6461                      0.0017
90               60      3435                      0.0005

This histogram differs from the first only in the vertical scale. The area of each block is the fraction of the total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all"). The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.

In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.)[10]

Mathematical definition

[Figure: An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.]

In a more general mathematical sense, a histogram is a function $m_i$ that counts the number of observations that fall into each of the disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram $m_i$ meets the following condition:

$n = \sum_{i=1}^{k} m_i$
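
In practice the counting is done by a library routine; here is a short sketch using NumPy's histogram function, checking the condition above:

    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.normal(size=10_000)          # n = 10,000 observations

    m, edges = np.histogram(sample, bins=20)  # k = 20 disjoint bins
    assert m.sum() == sample.size             # n equals the sum of the m_i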

Cumulative histogram

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram $M_i$ of a histogram $m_j$ is defined as:

$M_i = \sum_{j=1}^{i} m_j$
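
A sketch of the same definition, again assuming NumPy: the cumulative histogram is a running sum of the ordinary bin counts:

    import numpy as np

    rng = np.random.default_rng(2)
    sample = rng.normal(size=10_000)

    m, edges = np.histogram(sample, bins=20)
    M = np.cumsum(m)               # M_i = m_1 + m_2 + ... + m_i
    assert M[-1] == sample.size    # the last cumulative count equals n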

Number of bins and width

There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old as Graunt's work in the 17th century, but no systematic guidelines were given[11] until Sturges's work in 1926.[12]

Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used.

Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb.[13]

The number of bins k can be assigned directly or can be calculated from a suggested bin width h as:

$k = \left\lceil \frac{\max x - \min x}{h} \right\rceil$

The braces indicate the ceiling function.

Square-root choice

$k = \lceil \sqrt{n} \rceil$

which takes the square root of the number of data points in the sample (used by Excel histograms and many others) and rounds to the next integer.[14]

Sturges' formula

Sturges' formula[12] is derived from a binomial distribution and implicitly assumes an approximately normal distribution:

$k = \lceil \log_2 n \rceil + 1$

It implicitly bases the bin sizes on the range of the data and can perform poorly if n < 30, because the number of bins will be small—less than seven—and unlikely to show trends in the data well. It may also perform poorly if the data are not normally distributed.

Rice Rule

The Rice Rule[15] is presented as a simple alternative to Sturges's rule:

$k = \lceil 2 \sqrt[3]{n} \rceil$

Doane's formula

Doane's formula[16] is a modification of Sturges' formula which attempts to improve its performance with non-normal data:

$k = 1 + \log_2(n) + \log_2\!\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)$

where $g_1$ is the estimated 3rd-moment-skewness of the distribution and

$\sigma_{g_1} = \sqrt{\frac{6(n-2)}{(n+1)(n+3)}}$

Scott's normal reference rule

$h = \frac{3.49\,\hat{\sigma}}{\sqrt[3]{n}}$

where $\hat{\sigma}$ is the sample standard deviation. Scott's normal reference rule[17] is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate.[11]

Freedman–Diaconis' choice

The Freedman–Diaconis rule is:[18][11]

$h = 2\,\frac{\operatorname{IQR}(x)}{\sqrt[3]{n}}$

which is based on the interquartile range, denoted by IQR. It replaces 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data.
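
The rules above can be compared on the same sample. The following Python sketch (assuming NumPy; the function name bin_rules is ours) implements each rule, converting the width-based rules to a bin count via the ceiling formula given earlier:

    import numpy as np

    def bin_rules(x):
        """Suggested numbers of bins for a 1-D sample x."""
        x = np.asarray(x, dtype=float)
        n = x.size
        span = x.max() - x.min()

        k_sqrt = int(np.ceil(np.sqrt(n)))          # square-root choice
        k_sturges = int(np.ceil(np.log2(n))) + 1   # Sturges' formula
        k_rice = int(np.ceil(2 * n ** (1 / 3)))    # Rice Rule

        # Doane's formula: corrects Sturges for sample skewness g1.
        g1 = np.mean((x - x.mean()) ** 3) / x.std() ** 3
        sigma_g1 = np.sqrt(6 * (n - 2) / ((n + 1) * (n + 3)))
        k_doane = int(round(1 + np.log2(n) + np.log2(1 + abs(g1) / sigma_g1)))

        # Scott and Freedman-Diaconis suggest a bin width h;
        # convert with k = ceil((max - min) / h).
        h_scott = 3.49 * x.std(ddof=1) / n ** (1 / 3)
        q75, q25 = np.percentile(x, [75, 25])
        h_fd = 2 * (q75 - q25) / n ** (1 / 3)

        return {
            "sqrt": k_sqrt,
            "sturges": k_sturges,
            "rice": k_rice,
            "doane": k_doane,
            "scott": int(np.ceil(span / h_scott)),
            "freedman_diaconis": int(np.ceil(span / h_fd)),
        }

    print(bin_rules(np.random.default_rng(3).normal(size=1000)))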

Minimizing cross-validation estimated squared error

This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions, by using leave-one-out cross-validation:[19][20]

$\underset{h}{\operatorname{arg\,min}} \hat{J}(h) = \underset{h}{\operatorname{arg\,min}} \left( \frac{2}{(n-1)h} - \frac{n+1}{n^2(n-1)h} \sum_k N_k^2 \right)$

Here, $N_k$ is the number of datapoints in the kth bin, and choosing the value of h that minimizes J will minimize integrated mean squared error.
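
A sketch of this selection in Python (assuming NumPy; the function name cv_risk and the grid of candidate widths are illustrative choices):

    import numpy as np

    def cv_risk(x, h):
        """Leave-one-out cross-validation estimate J(h) for bin width h."""
        x = np.asarray(x, dtype=float)
        n = x.size
        edges = np.arange(x.min(), x.max() + h, h)  # bins of width h
        N, _ = np.histogram(x, bins=edges)          # N_k: count in the kth bin
        return (2 / ((n - 1) * h)
                - (n + 1) / (n ** 2 * (n - 1) * h) * np.sum(N ** 2))

    # Choose h by scanning a grid and taking the minimizer of J.
    x = np.random.default_rng(4).normal(size=1000)
    widths = np.linspace(0.05, 1.0, 40)
    best_h = min(widths, key=lambda h: cv_risk(x, h))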

Shimazaki and Shinomoto's choice

The choice is based on minimization of an estimated L2 risk function[21]

$\underset{h}{\operatorname{arg\,min}} \frac{2\bar{m} - v}{h^2}$

where $\bar{m}$ and $v$ are the mean and biased variance of a histogram with bin-width $h$: $\bar{m} = \frac{1}{k}\sum_{i=1}^{k} m_i$ and $v = \frac{1}{k}\sum_{i=1}^{k}(m_i - \bar{m})^2$.
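
A sketch of this rule under the same conventions (again assuming NumPy, with an illustrative grid of candidate widths):

    import numpy as np

    def ss_cost(x, h):
        """Shimazaki-Shinomoto cost (2*mean - variance) / h^2 of the counts."""
        x = np.asarray(x, dtype=float)
        edges = np.arange(x.min(), x.max() + h, h)
        m, _ = np.histogram(x, bins=edges)
        return (2 * m.mean() - m.var()) / h ** 2  # m.var() is the biased variance

    x = np.random.default_rng(5).normal(size=1000)
    widths = np.linspace(0.05, 1.0, 40)
    best_h = min(widths, key=lambda h: ss_cost(x, h))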

Variable bin widths

Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to choose equiprobable bins, where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has approximately $n/k$ samples. When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution.

For equiprobable bins, the following rule for the number of bins is suggested:[22]

$k = 2 n^{2/5}$

This choice of bins is motivated by maximizing the power of a Pearson chi-squared test of whether the bins do contain equal numbers of samples. More specifically, for a given confidence interval $\alpha$ it is recommended to choose between 1/2 and 1 times the following equation:[23]

$k = 4 \left( \frac{2 n^2}{\Phi^{-1}(\alpha)} \right)^{1/5}$

where $\Phi^{-1}$ is the probit function. Following this rule for $\alpha = 0.05$ would give between $1.88 n^{2/5}$ and $3.77 n^{2/5}$; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum.
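
A sketch of the equiprobable-bin construction, using sample quantiles as bin edges so that each bin receives roughly n/k observations (assuming NumPy and continuous data without heavy ties):

    import numpy as np

    x = np.random.default_rng(6).normal(size=1000)
    n = x.size
    k = int(2 * n ** (2 / 5))      # suggested rule k = 2 n^(2/5)

    # Place the bin edges at evenly spaced sample quantiles.
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    counts, _ = np.histogram(x, bins=edges)

    # Plot frequency density, not raw counts, since the widths differ.
    density = counts / (n * np.diff(edges))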

Remark

A good reason why the number of bins should be proportional to $\sqrt[3]{n}$ is the following: suppose that the data are obtained as $n$ independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as $n$ tends to infinity. If $s$ is the "width" of the distribution (e.g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order $nh/s$ and the relative standard error is of order $\sqrt{s/(nh)}$. Comparing to the next bin, the relative change of the frequency is of order $h/s$ provided that the derivative of the density is non-zero. These two are of the same order if $h$ is of order $s/\sqrt[3]{n}$, so that $k$ is of order $\sqrt[3]{n}$. This simple cubic root choice can also be applied to bins with non-constant width.

[Figure: Histogram and density function for a Gumbel distribution.]

Applications

See also

References

  1. Pearson, K. (1895). "Contributions to the Mathematical Theory of Evolution. II. Skew Variation in Homogeneous Material". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 186: 343–414. Bibcode:1895RSPTA.186..343P. doi:10.1098/rsta.1895.0010.
  2. Howitt, D.; Cramer, D. (2008). Introduction to Statistics in Psychology (Fourth ed.). Prentice Hall. ISBN 978-0-13-205161-3.
  3. Freedman, D.; Pisani, R.; Purves, R. (1998). Statistics (Third ed.). W. W. Norton. ISBN 978-0-393-97083-8.
  4. Stangor, Charles (2011). Research Methods for the Behavioral Sciences. Wadsworth, Cengage Learning. ISBN 9780840031976.
  5. Scott, David W. (December 2009). "Averaged shifted histogram". Wiley Interdisciplinary Reviews: Computational Statistics. 2 (2): 160–164. doi:10.1002/wics.54.
  6. Tague, Nancy R. (2004). "Seven Basic Quality Tools". The Quality Toolbox. Milwaukee, Wisconsin: American Society for Quality. p. 15. Retrieved 2010-02-05.
  7. Robbins, Naomi. "A Histogram is NOT a Bar Chart". Forbes.com. Forbes. Retrieved 31 July 2018.
  8. Magnello, M. Eileen (December 2006). "Karl Pearson and the Origins of Modern Statistics: An Elastician becomes a Statistician". The New Zealand Journal for the History and Philosophy of Science and Technology. Vol. 1. OCLC 682200824.
  9. US 2000 census.
  10. Dean, S.; Illowsky, B. (2009, February 19). Descriptive Statistics: Histogram. Retrieved from the Connexions Web site: http://cnx.org/content/m16298/1.11/
  11. Scott, David W. (1992). Multivariate Density Estimation: Theory, Practice, and Visualization. New York: John Wiley.
  12. Sturges, H. A. (1926). "The choice of a class interval". Journal of the American Statistical Association. 21 (153): 65–66. doi:10.1080/01621459.1926.10502161. JSTOR 2965501.
  13. e.g. § 5.6 "Density Estimation", W. N. Venables and B. D. Ripley, Modern Applied Statistics with S (2002), Springer, 4th edition. ISBN 0-387-95457-0.
  14. "EXCEL Univariate: Histogram".
  15. Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University (chapter 2 "Graphing Distributions", section "Histograms").
  16. Doane, D. P. (1976). "Aesthetic frequency classification". American Statistician. 30: 181–183.
  17. Scott, David W. (1979). "On optimal and data-based histograms". Biometrika. 66 (3): 605–610. doi:10.1093/biomet/66.3.605.
  18. Freedman, David; Diaconis, P. (1981). "On the histogram as a density estimator: L2 theory" (PDF). Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete. 57 (4): 453–476. CiteSeerX 10.1.1.650.2473. doi:10.1007/BF01025868.
  19. Wasserman, Larry (2004). All of Statistics. New York: Springer. p. 310. ISBN 978-1-4419-2322-6.
  20. Stone, Charles J. (1984). "An asymptotically optimal histogram selection rule" (PDF). Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer.
  21. Shimazaki, H.; Shinomoto, S. (2007). "A method for selecting the bin size of a time histogram". Neural Computation. 19 (6): 1503–1527. CiteSeerX 10.1.1.304.6404. doi:10.1162/neco.2007.19.6.1503. PMID 17444758.
  22. Prins, Jack; McCormack, Don; Michelson, Di; Horrell, Karen. "Chi-square goodness-of-fit test". NIST/SEMATECH e-Handbook of Statistical Methods. NIST/SEMATECH. § 7.2.1.1. Retrieved 29 March 2019.
  23. Moore, David (1986). "3". In D'Agostino, Ralph; Stephens, Michael (eds.). Goodness-of-Fit Techniques. New York, NY, USA: Marcel Dekker Inc. p. 70. ISBN 0-8247-7487-6.
  24. A calculator for probability distributions and density functions.
  25. An illustration of histograms and probability density functions.

Further reading