# Mean

There are several kinds of mean in mathematics, especially in statistics.

For a data set, the arithmetic mean, also known as the average or arithmetic average, is a central value of a finite set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$ is typically denoted by ${\displaystyle {\bar {x}}}$. [note 1] If the data set is based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is called the sample mean (denoted ${\displaystyle {\bar {x}}}$) to distinguish it from the mean, or expected value, of the underlying distribution, the population mean (denoted ${\displaystyle \mu }$ or ${\displaystyle \mu _{x}}$ [note 2] ). [1] [2]

In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of a random variable characterized by that distribution. [3] In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving ${\displaystyle \mu =\sum xp(x)}$. [4] [5] An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions.
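As a minimal sketch of the discrete formula (the fair die below is a hypothetical example, not drawn from the text), the sum ${\displaystyle \sum xp(x)}$ can be evaluated directly:

```python
# Mean of a discrete distribution: mu = sum of x * p(x) over all x.
# Hypothetical example: a fair six-sided die, each face with probability 1/6.
pmf = {x: 1 / 6 for x in range(1, 7)}

mu = sum(x * p for x, p in pmf.items())
print(mu)  # 3.5
```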

For a finite population, the population mean of a property is equal to the arithmetic mean of the given property over every member of the population. For example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean. [6]

Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below.

## Types of means

### Pythagorean means

#### Arithmetic mean (AM)

The arithmetic mean (or simply mean) of a list of numbers is the sum of all of the numbers divided by the number of numbers. Similarly, the mean of a sample ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$, usually denoted by ${\displaystyle {\bar {x}}}$, [1] is the sum of the sampled values divided by the number of items in the sample:

${\displaystyle {\bar {x}}={\frac {1}{n}}\left(\sum _{i=1}^{n}{x_{i}}\right)={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}}$

For example, the arithmetic mean of five values: 4, 36, 45, 50, 75 is:

${\displaystyle {\frac {4+36+45+50+75}{5}}={\frac {210}{5}}=42.}$

#### Geometric mean (GM)

The geometric mean is an average that is useful for sets of positive numbers that are interpreted according to their product (as is the case with rates of growth) and not their sum (as is the case with the arithmetic mean):

${\displaystyle {\bar {x}}=\left(\prod _{i=1}^{n}{x_{i}}\right)^{\frac {1}{n}}=\left(x_{1}x_{2}\cdots x_{n}\right)^{\frac {1}{n}}}$ [7]

For example, the geometric mean of five values: 4, 36, 45, 50, 75 is:

${\displaystyle (4\times 36\times 45\times 50\times 75)^{\frac {1}{5}}={\sqrt[{5}]{24\;300\;000}}=30.}$

#### Harmonic mean (HM)

The harmonic mean is an average which is useful for sets of numbers which are defined in relation to some unit, as in the case of speed (i.e., distance per unit of time):

${\displaystyle {\bar {x}}=n\left(\sum _{i=1}^{n}{\frac {1}{x_{i}}}\right)^{-1}}$

For example, the harmonic mean of the five values: 4, 36, 45, 50, 75 is:

${\displaystyle {\frac {5}{{\tfrac {1}{4}}+{\tfrac {1}{36}}+{\tfrac {1}{45}}+{\tfrac {1}{50}}+{\tfrac {1}{75}}}}={\frac {5}{\;{\tfrac {1}{3}}\;}}=15.}$
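All three Pythagorean means are available in Python's standard library (`statistics`; `geometric_mean` requires Python 3.8+). A minimal sketch checking the three worked examples above:

```python
from statistics import mean, geometric_mean, harmonic_mean

data = [4, 36, 45, 50, 75]  # the five values used in the examples above

print(mean(data))            # 42
print(geometric_mean(data))  # ≈ 30 (up to floating-point rounding)
print(harmonic_mean(data))   # ≈ 15
```

The outputs, 42 ≥ 30 ≥ 15, also illustrate the inequality discussed next.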

#### Relationship between AM, GM, and HM

AM, GM, and HM satisfy these inequalities:

${\displaystyle \mathrm {AM} \geq \mathrm {GM} \geq \mathrm {HM} \,}$

Equality holds if and only if all the elements of the given sample are equal.

### Statistical location

In descriptive statistics, the mean may be confused with the median, mode or mid-range, as any of these may be called an "average" (more formally, a measure of central tendency). The mean of a set of observations is the arithmetic average of the values; however, for skewed distributions, the mean is not necessarily the same as the middle value (median), or the most likely value (mode). For example, mean income is typically skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions.

### Mean of a probability distribution

The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution. If the random variable is denoted by ${\displaystyle X}$, then it is also known as the expected value of ${\displaystyle X}$ (denoted ${\displaystyle E(X)}$). [1] For a discrete probability distribution, the mean is given by ${\displaystyle \textstyle \sum xP(x)}$, where the sum is taken over all possible values of the random variable and ${\displaystyle P(x)}$ is the probability mass function. For a continuous distribution, the mean is ${\displaystyle \textstyle \int _{-\infty }^{\infty }xf(x)\,dx}$, where ${\displaystyle f(x)}$ is the probability density function. [5] In all cases, including those in which the distribution is neither discrete nor continuous, the mean is the Lebesgue integral of the random variable with respect to its probability measure. The mean need not exist or be finite; for some probability distributions the mean is infinite (+∞ or −∞), while for others the mean is undefined.
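As a standard worked example of the continuous case (a well-known fact, not tied to a particular source above), the exponential distribution with density ${\displaystyle f(x)=\lambda e^{-\lambda x}}$ for ${\displaystyle x\geq 0}$ has mean

${\displaystyle \mu =\int _{0}^{\infty }x\,\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }},}$

which follows from integration by parts.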

### Generalized means

#### Power mean

The generalized mean, also known as the power mean or Hölder mean, is an abstraction of the quadratic, arithmetic, geometric and harmonic means. It is defined for a set of n positive numbers ${\displaystyle x_{i}}$ by

${\displaystyle {\bar {x}}(m)=\left({\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{m}\right)^{\frac {1}{m}}}$ [7]

By choosing different values for the parameter m, the following types of means are obtained:

| Parameter | Resulting mean |
| --- | --- |
| ${\displaystyle m\rightarrow \infty }$ | maximum of ${\displaystyle x_{i}}$ |
| ${\displaystyle m=2}$ | quadratic mean |
| ${\displaystyle m=1}$ | arithmetic mean |
| ${\displaystyle m\rightarrow 0}$ | geometric mean |
| ${\displaystyle m=-1}$ | harmonic mean |
| ${\displaystyle m\rightarrow -\infty }$ | minimum of ${\displaystyle x_{i}}$ |
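A minimal sketch of the power mean (the helper name `power_mean` is ours, not a library API), with the ${\displaystyle m\rightarrow 0}$ limit handled as the geometric mean:

```python
import math

def power_mean(values, m):
    """Power (Hölder) mean of positive numbers.

    m = 2 gives the quadratic mean, m = 1 the arithmetic mean,
    m = -1 the harmonic mean; m = 0 is treated as the limiting
    case, the geometric mean.
    """
    n = len(values)
    if m == 0:  # limit m -> 0 yields the geometric mean
        return math.exp(sum(math.log(x) for x in values) / n)
    return (sum(x ** m for x in values) / n) ** (1 / m)

data = [4, 36, 45, 50, 75]
print(power_mean(data, 1))   # 42.0 (arithmetic mean)
print(power_mean(data, 0))   # ≈ 30 (geometric mean)
print(power_mean(data, -1))  # ≈ 15 (harmonic mean)
```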

#### f-mean

This can be generalized further as the generalized f-mean

${\displaystyle {\bar {x}}=f^{-1}\left({{\frac {1}{n}}\sum _{i=1}^{n}{f\left(x_{i}\right)}}\right)}$

and again a suitable choice of an invertible f will give

| Function | Resulting mean |
| --- | --- |
| ${\displaystyle f(x)=x}$ | arithmetic mean |
| ${\displaystyle f(x)={\frac {1}{x}}}$ | harmonic mean |
| ${\displaystyle f(x)=x^{m}}$ | power mean |
| ${\displaystyle f(x)=\ln(x)}$ | geometric mean |
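A minimal sketch of the f-mean (the helper name `f_mean` is ours; any invertible `f` with inverse `f_inv` can be plugged in):

```python
import math

def f_mean(values, f, f_inv):
    """Generalized f-mean: apply f, take the arithmetic mean,
    then map back with the inverse function f_inv."""
    return f_inv(sum(f(x) for x in values) / len(values))

data = [4, 36, 45, 50, 75]
print(f_mean(data, math.log, math.exp))                # ≈ 30 (geometric mean)
print(f_mean(data, lambda x: 1 / x, lambda y: 1 / y))  # ≈ 15 (harmonic mean)
```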

### Weighted arithmetic mean

The weighted arithmetic mean (or weighted average) is used if one wants to combine average values from different sized samples of the same population:

${\displaystyle {\bar {x}}={\frac {\sum _{i=1}^{n}{w_{i}{\bar {x_{i}}}}}{\sum _{i=1}^{n}w_{i}}}.}$ [7]

Here ${\displaystyle {\bar {x_{i}}}}$ and ${\displaystyle w_{i}}$ are the mean and size of sample ${\displaystyle i}$, respectively. In other applications, the weights represent measures of the reliability of the influence of the respective values upon the mean.
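A minimal sketch of the formula (the helper name `weighted_mean` and the two samples below are illustrative assumptions):

```python
def weighted_mean(sample_means, weights):
    """Weighted arithmetic mean: sum of w_i * x̄_i divided by sum of w_i."""
    return sum(w * m for w, m in zip(weights, sample_means)) / sum(weights)

# Hypothetical example: samples of sizes 10 and 30 with means 4.0 and 8.0.
print(weighted_mean([4.0, 8.0], [10, 30]))  # 7.0
```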

### Truncated mean

Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean: a given proportion of the data is discarded at the top and the bottom end, typically an equal amount at each end, and the arithmetic mean of the remaining data is taken. The number of values removed is indicated as a percentage of the total number of values.
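A minimal sketch (the helper name `truncated_mean` and the sample data are illustrative, not from the text):

```python
def truncated_mean(values, proportion):
    """Arithmetic mean after dropping the given proportion of the
    sorted data at each end (proportion=0.1 trims 10% per side)."""
    data = sorted(values)
    k = int(len(data) * proportion)  # values dropped at each end
    trimmed = data[k:len(data) - k]
    return sum(trimmed) / len(trimmed)

# Hypothetical sample where one outlier (99) inflates the plain mean:
sample = [2, 3, 3, 4, 4, 5, 5, 6, 6, 99]
print(sum(sample) / len(sample))    # 13.7
print(truncated_mean(sample, 0.1))  # 4.5 (drops the 2 and the 99)
```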

### Interquartile mean

The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values.

${\displaystyle {\bar {x}}={\frac {2}{n}}\;\sum _{i={\frac {n}{4}}+1}^{{\frac {3}{4}}n}\!\!x_{i}}$

assuming the values have been ordered; it is thus a specific example of a weighted mean with a particular set of weights.
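Using the hypothetical `truncated_mean` sketch from the previous section, the interquartile mean is simply the 25% truncated mean (here with n divisible by 4 so the quarters are whole):

```python
# Interquartile mean = 25% truncated mean of the ordered data.
sample = [1, 3, 4, 5, 6, 6, 7, 7, 8, 8, 9, 38]
print(truncated_mean(sample, 0.25))  # 6.5: mean of [5, 6, 6, 7, 7, 8]
```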

### Mean of a function

In some circumstances, mathematicians may calculate a mean of an infinite (or even an uncountable) set of values. This can happen when calculating the mean value ${\displaystyle y_{\text{avg}}}$ of a function ${\displaystyle f(x)}$. Intuitively, a mean of a function can be thought of as calculating the area under a section of a curve, and then dividing by the length of that section. This can be done crudely by counting squares on graph paper, or more precisely by integration. The integration formula is written as:

${\displaystyle y_{\text{avg}}(a,b)={\frac {1}{b-a}}\int \limits _{a}^{b}\!f(x)\,dx}$

In this case, care must be taken to make sure that the integral converges. But the mean may be finite even if the function itself tends to infinity at some points.
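A minimal numerical sketch of the integration formula (the helper name `function_mean` and the midpoint-rule approximation are our choices):

```python
import math

def function_mean(f, a, b, n=100_000):
    """Approximate the mean value of f on [a, b]: the integral of f
    (via a midpoint Riemann sum) divided by the interval length."""
    h = (b - a) / n
    integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    return integral / (b - a)

# Mean value of sin on [0, pi] is 2/pi ≈ 0.6366.
print(function_mean(math.sin, 0.0, math.pi))  # ≈ 0.63662
```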

### Mean of angles and cyclical quantities

Angles, times of day, and other cyclical quantities require modular arithmetic to add and otherwise combine numbers. In these situations, there is not always a unique mean. For example, the times an hour before and an hour after midnight are equidistant from both midnight and noon. It is also possible that no mean exists: consider a color wheel, where there is no mean of the set of all colors. In such cases, one must decide which mean is most useful, either by adjusting the values before averaging or by using a specialized approach for the mean of circular quantities.
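A minimal sketch of one common specialized approach, the circular mean (the helper name `circular_mean_deg` is ours): each angle is mapped to a unit vector, the vectors are averaged, and the direction of the resultant is taken.

```python
import math

def circular_mean_deg(angles):
    """Mean of angles in degrees: average the unit vectors
    (cos a, sin a) and take the direction of the resultant."""
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c)) % 360

# 350° and 10° straddle 0°: the naive average is 180°, which is wrong;
# the circular mean is ≈ 0° (mod 360, up to floating-point rounding).
print(circular_mean_deg([350, 10]))
```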

### Fréchet mean

The Fréchet mean gives a manner for determining the "center" of a mass distribution on a surface or, more generally, Riemannian manifold. Unlike many other means, the Fréchet mean is defined on a space whose elements cannot necessarily be added together or multiplied by scalars. It is sometimes also known as the Karcher mean (named after Hermann Karcher).

### Swanson's rule

This is an approximation to the mean for a moderately skewed distribution. [9] It is used in hydrocarbon exploration and is defined as

${\displaystyle m=0.3P_{10}+0.4P_{50}+0.3P_{90}}$

where P10, P50 and P90 are the 10th, 50th and 90th percentiles of the distribution.
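A minimal sketch of the rule applied to sample data (the percentile estimator is an assumption; `statistics.quantiles` uses one of several common conventions, so results vary slightly by method):

```python
from statistics import quantiles

def swanson_mean(data):
    """Swanson's rule: 0.3*P10 + 0.4*P50 + 0.3*P90, with percentiles
    estimated from the sample via statistics.quantiles (deciles)."""
    deciles = quantiles(data, n=10)  # cut points P10, P20, ..., P90
    p10, p50, p90 = deciles[0], deciles[4], deciles[8]
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

# Hypothetical right-skewed sample:
print(swanson_mean([1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 20]))
```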

## Distribution of the sample mean

The arithmetic mean of a population, or population mean, is often denoted μ. [1] The sample mean ${\displaystyle {\bar {x}}}$ (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of n independent observations, the expected value of the sample mean is

${\displaystyle \operatorname {E} ({\bar {x}})=\mu }$

and the variance of the sample mean is

${\displaystyle \operatorname {var} ({\bar {x}})={\frac {\sigma ^{2}}{n}}.}$

If the population is normally distributed, then the sample mean is normally distributed as follows:

${\displaystyle {\bar {x}}\thicksim N\left(\mu ,{\frac {\sigma ^{2}}{n}}\right).}$

If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and ${\displaystyle \sigma ^{2}/n}$ is finite. This is a consequence of the central limit theorem.
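These properties can be checked by simulation; a minimal sketch (the population parameters below are arbitrary choices for illustration):

```python
import random
import statistics

# Simulate the sampling distribution of the mean: draw many samples of
# size n from a N(mu, sigma^2) population and compare the mean and
# variance of the sample means with mu and sigma^2 / n.
random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 25, 10_000

sample_means = [
    statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

print(statistics.mean(sample_means))      # ≈ mu = 10
print(statistics.variance(sample_means))  # ≈ sigma**2 / n = 0.16
```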

## Notes

1. Pronounced "x bar".
2. Greek letter μ, for "mean", pronounced /'mjuː/.

## References

1. "List of Probability and Statistics Symbols". Math Vault. 2020-04-26. Retrieved 2020-08-21.
2. Underhill, L.G.; Bradfield d. (1998) Introstat, Juta and Company Ltd. ISBN   0-7021-3838-X p. 181
3. Feller, William (1950). Introduction to Probability Theory and its Applications, Vol I. Wiley. p. 221. ISBN   0471257087.
4. Elementary Statistics by Robert R. Johnson and Patricia J. Kuby, p. 279
5. Weisstein, Eric W. "Population Mean". mathworld.wolfram.com. Retrieved 2020-08-21.
6. Schaum's Outline of Theory and Problems of Probability by Seymour Lipschutz and Marc Lipson, p. 141
7. "Mean | mathematics". Encyclopedia Britannica. Retrieved 2020-08-21.
8. "AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 2 April 2015. Retrieved 16 March 2015.
9. Hurst A, Brown GC, Swanson RI (2000) Swanson's 30-40-30 Rule. American Association of Petroleum Geologists Bulletin 84(12) 1883-1891