Arithmetic mean

In mathematics and statistics, the arithmetic mean ( /ˌærɪθˈmɛtɪk ˈmiːn/ arr-ith-MET-ik), arithmetic average, or just the mean or average (when the context is clear), is the sum of a collection of numbers divided by the count of numbers in the collection. [1] The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as geometric and harmonic.

In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.

While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency.

Definition

Given a data set $X = \{x_1, \ldots, x_n\}$, the arithmetic mean (also mean or average), denoted $\bar{x}$ (read "$x$ bar"), is the mean of the $n$ values $x_1, \ldots, x_n$. [2]

The arithmetic mean is a data set's most commonly used and readily understood measure of central tendency. In statistics, the term average refers to any measurement of central tendency. The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values $x_1, \ldots, x_n$, the arithmetic mean is defined by the formula:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{x_1 + x_2 + \cdots + x_n}{n}$$ [3]

(For an explanation of the summation operator, see summation.)

For example, if the monthly salaries of $n$ employees are $x_1, x_2, \ldots, x_n$, then the arithmetic mean of those salaries is $(x_1 + x_2 + \cdots + x_n)/n$; a numerical sketch of this calculation is given below.
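
A minimal Python sketch of the formula, using salary figures invented purely for illustration:

```python
import statistics

# Hypothetical monthly salaries, used only to illustrate the formula.
salaries = [2500, 2700, 2400, 2300, 2550]

# Arithmetic mean: the sum of the values divided by their count.
mean = sum(salaries) / len(salaries)
print(mean)                       # 2490.0

# The standard-library helper gives the same value.
print(statistics.mean(salaries))  # 2490
```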

If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and denoted by the Greek letter $\mu$. If the data set is a statistical sample (a subset of the population), then the mean calculated from it is called the sample mean (which for a data set $X$ is denoted as $\bar{X}$).

The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to $1$), it can be defined on a convex space, not only a vector space.
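
As an illustration of the vector case, the sketch below averages a few two-dimensional points component-wise to obtain their centroid; the points are arbitrary illustrative values:

```python
# Centroid of 2-D points: the arithmetic mean taken component by component.
points = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

n = len(points)
centroid = (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)
print(centroid)  # (2.0, 1.5) -- the center of the rectangle
```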

Motivating properties

The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include:

- If the numbers $x_1, \ldots, x_n$ have mean $\bar{x}$, then $(x_1 - \bar{x}) + \cdots + (x_n - \bar{x}) = 0$. Since $x_i - \bar{x}$ is the distance from a given number to the mean, the deviations from the mean sum to zero: the numbers to the left of the mean are balanced by the numbers to its right.
- The mean is the single value $\bar{x}$ that minimizes the sum of squared deviations $(x_1 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2$; in this sense it is the best single "typical" value for the data in the least-squares sense.
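
A quick numerical check of these two properties, using arbitrary illustrative values:

```python
# Arbitrary illustrative data.
data = [2.0, 3.0, 5.0, 10.0]
mean = sum(data) / len(data)            # 5.0

# 1. The deviations from the mean sum to zero (up to floating-point rounding).
print(sum(x - mean for x in data))      # 0.0

# 2. The mean minimizes the sum of squared deviations: a coarse grid search
#    over candidate values finds nothing better.
def sum_sq_dev(c):
    return sum((x - c) ** 2 for x in data)

best = min(range(0, 151), key=lambda k: sum_sq_dev(k / 10)) / 10
print(best, sum_sq_dev(best) >= sum_sq_dev(mean))   # 5.0 True
```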

Additional properties

Contrast with median

The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger, and no more than half are smaller than it. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample $\{1, 2, 3, 4\}$. The mean is $2.5$, as is the median. However, when we consider a sample that cannot be arranged to increase arithmetically, such as $\{1, 2, 4, 8, 16\}$, the median and arithmetic average can differ significantly. In this case, the arithmetic average is $6.2$, while the median is $4$. The average value can vary considerably from most values in the sample and can be larger or smaller than most.
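
The contrast is easy to reproduce; a minimal sketch using the two samples above:

```python
import statistics

# Evenly spaced data: mean and median coincide.
print(statistics.mean([1, 2, 3, 4]), statistics.median([1, 2, 3, 4]))          # 2.5 2.5

# Skewed data: the mean is pulled toward the largest value, the median is not.
print(statistics.mean([1, 2, 4, 8, 16]), statistics.median([1, 2, 4, 8, 16]))  # 6.2 4
```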

There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income. [4]

Generalizations

Weighted average

A weighted average, or weighted mean, is an average in which some data points count more heavily than others in that they are given more weight in the calculation. [5] For example, the arithmetic mean of $3$ and $5$ is $\frac{3+5}{2} = 4$, or equivalently $3 \cdot \frac{1}{2} + 5 \cdot \frac{1}{2} = 4$. In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as $3 \cdot \frac{2}{3} + 5 \cdot \frac{1}{3} = \frac{11}{3}$. Here the weights, which necessarily sum to one, are $\frac{2}{3}$ and $\frac{1}{3}$, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all weights are equal to the same number ($\frac{1}{2}$ in the above example and $\frac{1}{n}$ in a situation with $n$ numbers being averaged).
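
A minimal sketch of the calculation, reusing the numbers and weights from the example above (the helper weighted_mean is ours, not part of any library):

```python
# Weighted mean: each value is multiplied by its weight; the weights sum to one.
def weighted_mean(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    return sum(v * w for v, w in zip(values, weights))

values = [3, 5]
print(weighted_mean(values, [1 / 2, 1 / 2]))  # 4.0       (the ordinary arithmetic mean)
print(weighted_mean(values, [2 / 3, 1 / 3]))  # 3.666...  (= 11/3)
```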

Continuous probability distributions

Comparison of two log-normal distributions with equal median, but different skewness, resulting in various means and modes

If a numerical property, and any sample of data from it, can take on any value from a continuous range instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero. In this context, the analog of a weighted average, in which there are infinitely many possibilities for the precise value of the variable in each range, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the median mentioned above and the mode (the three Ms [6] ), are equal. This equality does not hold for other probability distributions, as illustrated for the log-normal distribution here.
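
In symbols, for a distribution with probability density function $f$ (and provided the integral converges), this mean is the continuous analogue of the weighted sum, with $f(x)\,dx$ playing the role of the weights:

$$\mu = \int_{-\infty}^{\infty} x \, f(x) \, dx$$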

Angles

Particular care is needed when using cyclic data, such as phases or angles. Taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons:

- Firstly, angle measurements are only defined up to an additive constant of 360° (a full turn), so 1° could just as well be written as 361°, and the naive average then changes with this arbitrary choice of representation.
- Secondly, 0° (equivalently, 360°) is geometrically the better average value: there is lower dispersion about it, since both observations are 1° away from it but 179° away from 180°, the putative average.

In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (that is, define the mean as the central point: the point about which one has the lowest dispersion) and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°).
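
A minimal sketch of this approach (the helper circular_mean is ours; averaging the unit vectors of the angles is one standard way to realise the modular-distance idea):

```python
import math

def circular_mean(angles_deg):
    """Mean direction of angles given in degrees, via the mean of their unit vectors."""
    sin_sum = sum(math.sin(math.radians(a)) for a in angles_deg)
    cos_sum = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360

print((1 + 359) / 2)            # 180.0 -- the naive arithmetic mean
print(circular_mean([1, 359]))  # ~0 (equivalently ~360), between the two readings, not 180
```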

Proof without words of the inequality of arithmetic and geometric means: PR is the diameter of a circle centered on O; its radius AO is the arithmetic mean of a and b. Using the geometric mean theorem, triangle PGR's altitude GQ is the geometric mean. For any ratio a : b, AO ≥ GQ.

Symbols and encoding

The arithmetic mean is often denoted by a bar (vinculum or macron), as in $\bar{x}$. [2]

Some software (text processors, web browsers) may not display the "x̄" symbol correctly. For example, the HTML symbol "x̄" combines two codes — the base letter "x" plus a code for the line above (̄ or ¯). [7]
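
A small illustration of that combination (shown here in Python purely as an example):

```python
# "x̄" built from the base letter "x" plus U+0304 COMBINING MACRON.
x_bar = "x" + "\u0304"
print(x_bar)       # x̄
print(len(x_bar))  # 2 -- two code points render as a single symbol
```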

In some document formats (such as PDF), the symbol may be replaced by a "¢" (cent) symbol when copied to a text processor such as Microsoft Word.

See also

Geometric proof without words that max(a, b) > root mean square (RMS) or quadratic mean (QM) > arithmetic mean (AM) > geometric mean (GM) > harmonic mean (HM) > min(a, b) of two distinct positive numbers a and b [8]

Related Research Articles

In statistics, a central tendency is a central or typical value for a probability distribution.

<span class="mw-page-title-main">Geometric mean</span> N-th root of the product of n numbers

In mathematics, the geometric mean is a mean or average which indicates a central tendency of a set of numbers by using the product of their values. The geometric mean is defined as the nth root of the product of n numbers, i.e., for a set of numbers $a_1, a_2, \ldots, a_n$, the geometric mean is defined as

$$\sqrt[n]{a_1 a_2 \cdots a_n} = \left(a_1 a_2 \cdots a_n\right)^{\frac{1}{n}}$$

In mathematics, the harmonic mean is one of several kinds of average, and in particular, one of the Pythagorean means. It is sometimes appropriate for situations when the average rate is desired.

In probability theory and statistics, kurtosis is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes a particular aspect of a probability distribution. There are different ways to quantify kurtosis for a theoretical distribution, and there are corresponding ways of estimating it using a sample from a population. Different measures of kurtosis may have different interpretations.

<span class="mw-page-title-main">Median</span> Middle quantile of a data set or probability distribution

In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data compared to the mean is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe center of the income distribution because increases in the largest incomes alone have no effect on median. For this reason, the median is of central importance in robust statistics.

There are several kinds of mean in mathematics, especially in statistics. Each mean serves to summarize a given group of data, often to better understand the overall value of a given data set.

<span class="mw-page-title-main">Normal distribution</span> Probability distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^{2}}$$

<span class="mw-page-title-main">Standard deviation</span> In statistics, a measure of variation

In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.

<span class="mw-page-title-main">Variance</span> Statistical measure of how far values spread from their average

In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma^2$, $s^2$, $\operatorname{Var}(X)$, $V(X)$, or $\mathbb{V}(X)$.

The weighted arithmetic mean is similar to an ordinary arithmetic mean, except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics.

<span class="mw-page-title-main">Pearson correlation coefficient</span> Measure of linear correlation

In statistics, the Pearson correlation coefficient ― also known as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), the bivariate correlation, or colloquially simply as the correlation coefficient ― is a measure of linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations. As a simple example, one would expect the age and height of a sample of teenagers from a high school to have a Pearson correlation coefficient significantly greater than 0, but less than 1.

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations, were separately used in order to compute one value of a statistic for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically.

<span class="mw-page-title-main">Moving average</span> Type of statistical measure over subsets of a dataset

In statistics, a moving average is a calculation to analyze data points by creating a series of averages of different subsets of the full data set. It is also called a moving mean (MM) or rolling mean and is a type of finite impulse response filter. Variations include: simple, cumulative, or weighted forms.

<span class="mw-page-title-main">Continuous uniform distribution</span> Uniform distribution on an interval

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions. The distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters, a and b, which are the minimum and maximum values. The interval can either be closed (e.g. [a, b]) or open (e.g. (a, b)). Therefore, the distribution is often abbreviated U (a, b), where U stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.

This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.

In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement. MAE is calculated as the sum of absolute errors divided by the sample size:

$$\mathrm{MAE} = \frac{\sum_{i=1}^{n} \left| y_i - x_i \right|}{n}$$

In statistics, the bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased; see bias versus consistency for more.

In mathematics and statistics, a circular mean or angular mean is a mean designed for angles and similar cyclic quantities, such as daytimes, and fractional parts of real numbers. This is necessary since most of the usual means may not be appropriate on angle-like quantities. For example, the arithmetic mean of 0° and 360° is 180°, which is misleading because 360° equals 0° modulo a full cycle. As another example, the "average time" between 11 PM and 1 AM is either midnight or noon, depending on whether the two times are part of a single night or part of a single calendar day. The circular mean is one of the simplest examples of circular statistics and of statistics of non-Euclidean spaces. This computation produces a different result than the arithmetic mean, with the difference being greater when the angles are widely distributed. For example, the arithmetic mean of the three angles 0°, 0° and 90° is (0+0+90)/3 = 30°, but the vector mean is 26.565°. Moreover, with the arithmetic mean the circular variance is only defined ±180°.

The sample mean and the sample covariance are statistics computed from a sample of data on one or more random variables.

In statistical theory, a U-statistic is a class of statistics that is especially important in estimation theory; the letter "U" stands for unbiased. In elementary statistics, U-statistics arise naturally in producing minimum-variance unbiased estimators.

References

  1. Jacobs, Harold R. (1994). Mathematics: A Human Endeavor (Third ed.). W. H. Freeman. p. 547. ISBN   0-7167-2426-X.
  2. Medhi, Jyotiprasad (1992). Statistical Methods: An Introductory Text. New Age International. pp. 53–58. ISBN 9788122404197.
  3. Weisstein, Eric W. "Arithmetic Mean". mathworld.wolfram.com. Retrieved 21 August 2020.
  4. Krugman, Paul (4 June 2014) [Fall 1992]. "The Rich, the Right, and the Facts: Deconstructing the Income Distribution Debate". The American Prospect.
  5. "Mean | mathematics". Encyclopedia Britannica. Retrieved 21 August 2020.
  6. Thinkmap Visual Thesaurus (30 June 2010). "The Three M's of Statistics: Mode, Median, Mean June 30, 2010". www.visualthesaurus.com. Retrieved 3 December 2018.
  7. "Notes on Unicode for Stat Symbols". www.personal.psu.edu. Retrieved 14 October 2018.
  8. If AC = a and BC = b, then OC = AM of a and b, and radius r = QO = OG.
     Using Pythagoras' theorem, QC² = QO² + OC², so QC = √(QO² + OC²) = QM.
     Using Pythagoras' theorem, OC² = OG² + GC², so GC = √(OC² − OG²) = GM.
     Using similar triangles, HC/GC = GC/OC, so HC = GC²/OC = HM.

Further reading