# Weighted arithmetic mean

The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics.

If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox.

## Examples

### Basic example

Given two school classes, one with 20 students and one with 30 students, the grades in each class on a test were:

Morning class = 62, 67, 71, 74, 76, 77, 78, 79, 79, 80, 80, 81, 81, 82, 83, 84, 86, 89, 93, 98
Afternoon class = 81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90, 90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99

The mean for the morning class is 80 and the mean of the afternoon class is 90. The unweighted mean of the two means is 85. However, this does not account for the difference in number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students):

${\displaystyle {\bar {x}}={\frac {4300}{50}}=86.}$

Or, this can be accomplished by weighting the class means by the number of students in each class. The larger class is given more "weight":

${\displaystyle {\bar {x}}={\frac {(20\times 80)+(30\times 90)}{20+30}}=86.}$

Thus, the weighted mean makes it possible to find the average student grade without knowing each student's score: only the class means and the number of students in each class are needed.
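
As a check, here is a short Python sketch (using the grade lists from the example above) that computes the grand mean and the size-weighted mean of the class means:

```python
# Grades from the example above.
morning = [62, 67, 71, 74, 76, 77, 78, 79, 79, 80,
           80, 81, 81, 82, 83, 84, 86, 89, 93, 98]
afternoon = [81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90,
             90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99]

# Grand mean over all 50 students.
grand_mean = (sum(morning) + sum(afternoon)) / (len(morning) + len(afternoon))

# Weighted mean of the two class means, weighted by class size.
means = [sum(morning) / len(morning), sum(afternoon) / len(afternoon)]  # [80.0, 90.0]
sizes = [len(morning), len(afternoon)]                                  # [20, 30]
weighted = sum(n * m for n, m in zip(sizes, means)) / sum(sizes)

print(grand_mean, weighted)  # 86.0 86.0
```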

### Convex combination example

Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called a convex combination.

Using the previous example, we would get the following weights:

${\displaystyle {\frac {20}{20+30}}=0.4}$
${\displaystyle {\frac {30}{20+30}}=0.6}$

Then, apply the weights like this:

${\displaystyle {\bar {x}}=(0.4\times 80)+(0.6\times 90)=86.}$

## Mathematical definition

Formally, the weighted mean of a non-empty finite multiset of data ${\displaystyle \{x_{1},x_{2},\dots ,x_{n}\},}$ with corresponding non-negative weights ${\displaystyle \{w_{1},w_{2},\dots ,w_{n}\}}$ is

${\displaystyle {\bar {x}}={\frac {\sum \limits _{i=1}^{n}w_{i}x_{i}}{\sum \limits _{i=1}^{n}w_{i}}},}$

which expands to:

${\displaystyle {\bar {x}}={\frac {w_{1}x_{1}+w_{2}x_{2}+\cdots +w_{n}x_{n}}{w_{1}+w_{2}+\cdots +w_{n}}}.}$

Therefore, data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights cannot be negative. Some may be zero, but not all of them (since division by zero is not allowed).

The formulas are simplified when the weights are normalized such that they sum up to ${\displaystyle 1}$, i.e.:

${\displaystyle \sum _{i=1}^{n}{w_{i}'}=1}$.

For such normalized weights the weighted mean is then:

${\displaystyle {\bar {x}}=\sum _{i=1}^{n}{w_{i}'x_{i}}}$.

Note that one can always normalize the weights by making the following transformation on the original weights:

${\displaystyle w_{i}'={\frac {w_{i}}{\sum _{j=1}^{n}{w_{j}}}}}$.

Using the normalized weights yields the same results as when using the original weights:

{\displaystyle {\begin{aligned}{\bar {x}}&=\sum _{i=1}^{n}w'_{i}x_{i}=\sum _{i=1}^{n}{\frac {w_{i}}{\sum _{j=1}^{n}w_{j}}}x_{i}={\frac {\sum _{i=1}^{n}w_{i}x_{i}}{\sum _{j=1}^{n}w_{j}}}\\&={\frac {\sum _{i=1}^{n}w_{i}x_{i}}{\sum _{i=1}^{n}w_{i}}}.\end{aligned}}}

The ordinary mean ${\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}{x_{i}}}$ is a special case of the weighted mean where all data have equal weights.
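
A minimal Python sketch of this definition (the function `weighted_mean` is illustrative, not a standard library routine), showing that raw and normalized weights give the same result:

```python
def weighted_mean(x, w):
    """Weighted arithmetic mean; weights must be non-negative and not all zero."""
    total = sum(w)
    if any(wi < 0 for wi in w) or total == 0:
        raise ValueError("weights must be non-negative and not all zero")
    return sum(wi * xi for wi, xi in zip(w, x)) / total

x = [80, 90]
w = [20, 30]
w_normalized = [wi / sum(w) for wi in w]       # convex-combination form
assert abs(weighted_mean(x, w) - weighted_mean(x, w_normalized)) < 1e-12
print(weighted_mean(x, w))                     # 86.0
```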

The standard error of the weighted mean (unit input variances), ${\displaystyle \sigma _{\bar {x}}}$, can be shown via uncertainty propagation to be:

${\textstyle \sigma _{\bar {x}}=\left({\sqrt {\sum _{i=1}^{n}{w_{i}}}}\right)^{-1}}$

## Statistical properties

The weighted sample mean, ${\displaystyle {\bar {x}}}$, is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations, as follows. For simplicity, we assume normalized weights (weights summing to one).

If the observations have expected values

${\displaystyle E(x_{i})={\mu _{i}},}$

then the weighted sample mean has expectation

${\displaystyle E({\bar {x}})=\sum _{i=1}^{n}{w_{i}'\mu _{i}}.}$

In particular, if the means are equal, ${\displaystyle \mu _{i}=\mu }$, then the expectation of the weighted sample mean will be that value,

${\displaystyle E({\bar {x}})=\mu .}$

For uncorrelated observations with variances ${\displaystyle \sigma _{i}^{2}}$, the variance of the weighted sample mean is

${\displaystyle \sigma _{\bar {x}}^{2}=\sum _{i=1}^{n}{w_{i}'^{2}\sigma _{i}^{2}}}$

whose square root ${\displaystyle \sigma _{\bar {x}}}$ can be called the standard error of the weighted mean (general case).

Consequently, if all the observations have equal variance, ${\displaystyle \sigma _{i}^{2}=\sigma _{0}^{2}}$, the weighted sample mean will have variance

${\displaystyle \sigma _{\bar {x}}^{2}=\sigma _{0}^{2}\sum _{i=1}^{n}{w_{i}'^{2}},}$

where ${\textstyle 1/n\leq \sum _{i=1}^{n}{w_{i}'^{2}}\leq 1}$. The variance attains its maximum value, ${\displaystyle \sigma _{0}^{2}}$, when all weights except one are zero. Its minimum value is found when all weights are equal (i.e., unweighted mean), in which case we have ${\textstyle \sigma _{\bar {x}}=\sigma _{0}/{\sqrt {n}}}$, i.e., it degenerates into the standard error of the mean, squared.
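
A small Monte Carlo sketch, assuming NumPy is available, illustrating this variance formula for uncorrelated observations of equal variance:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.2])               # normalized weights
sigma0 = 2.0                                # common standard deviation

# 100,000 independent draws of three uncorrelated observations.
samples = rng.normal(0.0, sigma0, size=(100_000, 3))
xbar = samples @ w                          # weighted mean of each draw

print(xbar.var())                           # empirical variance, ~1.52
print(sigma0**2 * np.sum(w**2))             # theory: sigma0^2 * sum(w'^2) = 1.52
```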

Note that because one can always transform non-normalized weights to normalized weights, all formulas in this section can be adapted to non-normalized weights by replacing each ${\displaystyle w_{i}'}$ with ${\displaystyle {\frac {w_{i}}{\sum _{j=1}^{n}{w_{j}}}}}$.

## Variance weights

For the weighted mean of a list of data for which each element ${\displaystyle x_{i}}$ potentially comes from a different probability distribution with known variance ${\displaystyle \sigma _{i}^{2}}$, one possible choice for the weights is given by the reciprocal of variance:

${\displaystyle w_{i}={\frac {1}{\sigma _{i}^{2}}}.}$

The weighted mean in this case is:

${\displaystyle {\bar {x}}={\frac {\sum _{i=1}^{n}\left({\dfrac {x_{i}}{\sigma _{i}^{2}}}\right)}{\sum _{i=1}^{n}{\dfrac {1}{\sigma _{i}^{2}}}}},}$

and the standard error of the weighted mean (with variance weights) is:

${\displaystyle \sigma _{\bar {x}}={\sqrt {\frac {1}{\sum _{i=1}^{n}\sigma _{i}^{-2}}}}.}$

Note that this reduces to ${\displaystyle \sigma _{\bar {x}}^{2}=\sigma _{0}^{2}/n}$ when all ${\displaystyle \sigma _{i}=\sigma _{0}}$. It is a special case of the general formula of the previous section,

${\displaystyle \sigma _{\bar {x}}^{2}=\sum _{i=1}^{n}{w_{i}'^{2}\sigma _{i}^{2}}={\frac {\sum _{i=1}^{n}{\sigma _{i}^{-4}\sigma _{i}^{2}}}{\left(\sum _{i=1}^{n}\sigma _{i}^{-2}\right)^{2}}}.}$

The equations above can be combined to obtain:

${\displaystyle {\bar {x}}=\sigma _{\bar {x}}^{2}\sum _{i=1}^{n}{\frac {x_{i}}{\sigma _{i}^{2}}}.}$

The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally distributed with the same mean.
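
A short sketch of inverse-variance weighting under these assumptions (the helper `inverse_variance_mean` and the sample values are illustrative):

```python
import numpy as np

def inverse_variance_mean(x, sigma):
    """Weighted mean with w_i = 1/sigma_i^2 and its standard error."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    xbar = np.sum(w * x) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))           # standard error (variance weights)
    return xbar, se

x = [10.2, 9.8, 10.5]                       # hypothetical measurements
sigma = [0.5, 0.2, 1.0]                     # their known standard deviations
print(inverse_variance_mean(x, sigma))
```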

### Correcting for over- or under-dispersion

Weighted means are typically used to average historical data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that ${\displaystyle \chi ^{2}}$ is too large. The correction that must be made is

${\displaystyle {\hat {\sigma }}_{\bar {x}}^{2}=\sigma _{\bar {x}}^{2}\chi _{\nu }^{2}}$

where ${\displaystyle \chi _{\nu }^{2}}$ is the reduced chi-squared:

${\displaystyle \chi _{\nu }^{2}={\frac {1}{(n-1)}}\sum _{i=1}^{n}{\frac {(x_{i}-{\bar {x}})^{2}}{\sigma _{i}^{2}}}.}$

The square root ${\displaystyle {\hat {\sigma }}_{\bar {x}}}$ can be called the standard error of the weighted mean (variance weights, scale corrected).

When all data variances are equal, ${\displaystyle \sigma _{i}=\sigma _{0}}$, they cancel out in the weighted mean variance, ${\displaystyle \sigma _{\bar {x}}^{2}}$, which again reduces to the standard error of the mean (squared), ${\displaystyle \sigma _{\bar {x}}^{2}=\sigma ^{2}/n}$, formulated in terms of the sample standard deviation (squared),

${\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}{n-1}}.}$
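
A sketch combining the variance-weighted mean with this scale correction (the function name and data are illustrative):

```python
import numpy as np

def scale_corrected_se(x, sigma):
    """SE of the variance-weighted mean, rescaled by the reduced chi-squared."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    xbar = np.sum(w * x) / np.sum(w)
    var_xbar = 1.0 / np.sum(w)                            # uncorrected variance
    chi2_nu = np.sum((x - xbar)**2 / sigma**2) / (len(x) - 1)
    return xbar, np.sqrt(var_xbar * chi2_nu)

print(scale_corrected_se([10.2, 9.8, 10.5], [0.5, 0.2, 1.0]))
```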

## Bootstrapping validation

It has been shown by bootstrapping methods that the following is an accurate estimate of the square of the standard error of the mean (general case): [1]

${\displaystyle \sigma _{\bar {x}}^{2}={\frac {n}{(n-1)w_{s}^{2}}}\left[\sum (w_{i}x_{i}-w_{s}{\bar {x}})^{2}-2{\bar {x}}\sum (w_{i}-w_{s})(w_{i}x_{i}-w_{s}{\bar {x}})+{\bar {x}}^{2}\sum (w_{i}-w_{s})^{2}\right]}$

where ${\displaystyle w_{s}=\sum w_{i}}$. Further simplification leads to

${\displaystyle \sigma _{\bar {x}}^{2}={\frac {n}{(n-1)w_{s}^{2}}}\sum w_{i}^{2}(x_{i}-{\bar {x}})^{2}}$
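
A sketch of the simplified estimate above (the function name is ours; the data are illustrative), following Gatz and Smith [1]:

```python
import numpy as np

def bootstrap_validated_se(x, w):
    """Simplified estimate of the SE of the weighted mean (general case) [1]."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    n, ws = len(x), np.sum(w)
    xbar = np.sum(w * x) / ws
    var = n / ((n - 1) * ws**2) * np.sum(w**2 * (x - xbar)**2)
    return np.sqrt(var)

print(bootstrap_validated_se([4.0, 5.0, 7.0, 10.0], [1.0, 2.0, 1.0, 0.5]))
```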

## Weighted sample variance

Typically when a mean is calculated it is important to know the variance and standard deviation about that mean. When a weighted mean ${\displaystyle \mu ^{*}}$ is used, the variance of the weighted sample is different from the variance of the unweighted sample.

The biased weighted sample variance ${\displaystyle {\hat {\sigma }}_{\mathrm {w} }^{2}}$ is defined similarly to the normal biased sample variance ${\displaystyle {\hat {\sigma }}^{2}}$:

{\displaystyle {\begin{aligned}{\hat {\sigma }}^{2}\ &={\frac {\sum \limits _{i=1}^{N}\left(x_{i}-\mu \right)^{2}}{N}}\\{\hat {\sigma }}_{\mathrm {w} }^{2}&={\frac {\sum \limits _{i=1}^{N}w_{i}\left(x_{i}-\mu ^{*}\right)^{2}}{\sum _{i=1}^{N}w_{i}}}\end{aligned}}}

where ${\displaystyle \sum _{i=1}^{N}w_{i}=1}$ for normalized weights. If the weights are frequency weights (and thus are random variables), it can be shown that ${\displaystyle {\hat {\sigma }}_{\mathrm {w} }^{2}}$ is the maximum likelihood estimator of ${\displaystyle \sigma ^{2}}$ for iid Gaussian observations.

For small samples, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the N in the denominator (corresponding to the sample size) is changed to N − 1 (see Bessel's correction). In the weighted setting, there are actually two different unbiased estimators, one for the case of frequency weights and another for the case of reliability weights.

### Frequency weights

If the weights are frequency weights (where a weight equals the number of occurrences), then the unbiased estimator is:

{\displaystyle {\begin{aligned}s^{2}\ &={\frac {\sum \limits _{i=1}^{N}w_{i}\left(x_{i}-\mu ^{*}\right)^{2}}{\sum _{i=1}^{N}w_{i}-1}}\end{aligned}}}

This effectively applies Bessel's correction for frequency weights.

For example, if values ${\displaystyle \{2,2,4,5,5,5\}}$ are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample ${\displaystyle \{2,4,5\}}$ with corresponding weights ${\displaystyle \{2,1,3\}}$, and we get the same result either way.
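
A sketch verifying this equivalence numerically (the helper `freq_weighted_var` is illustrative):

```python
import numpy as np

def freq_weighted_var(x, w):
    """Unbiased sample variance with frequency weights (w_i = occurrence count)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mu = np.sum(w * x) / np.sum(w)
    return np.sum(w * (x - mu)**2) / (np.sum(w) - 1)

print(np.var([2, 2, 4, 5, 5, 5], ddof=1))       # unweighted, Bessel-corrected
print(freq_weighted_var([2, 4, 5], [2, 1, 3]))  # same value: 2.1666...
```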

If the frequency weights ${\displaystyle \{w_{i}\}}$ are normalized to 1, then the correct expression after Bessel's correction becomes

{\displaystyle {\begin{aligned}s^{2}\ &={\frac {\sum _{i=1}^{N}w_{i}}{\sum _{i=1}^{N}w_{i}-1}}\sum _{i=1}^{N}w_{i}\left(x_{i}-\mu ^{*}\right)^{2}\end{aligned}}}

where the total number of samples is ${\displaystyle \sum _{i=1}^{N}w_{i}}$ (not ${\displaystyle N}$). In any case, the information on the total number of samples is necessary in order to obtain an unbiased correction, even if ${\displaystyle w_{i}}$ has a meaning other than a frequency weight.

Note that the estimator can be unbiased only if the weights are neither standardized nor normalized; these processes change the data's mean and variance and thus lead to a loss of the base rate (the population count, which is required for Bessel's correction).

### Reliability weights

If the weights are instead non-random (reliability weights), we can determine a correction factor to yield an unbiased estimator. Assuming each random variable is sampled from the same distribution with mean ${\displaystyle \mu }$ and actual variance ${\displaystyle \sigma _{\text{actual}}^{2}}$, taking expectations we have,

{\displaystyle {\begin{aligned}\operatorname {E} [{\hat {\sigma }}^{2}]&={\frac {\sum \limits _{i=1}^{N}\operatorname {E} [(x_{i}-\mu )^{2}]}{N}}\\&=\operatorname {E} [(X-\operatorname {E} [X])^{2}]-{\frac {1}{N}}\operatorname {E} [(X-\operatorname {E} [X])^{2}]\\&=\left({\frac {N-1}{N}}\right)\sigma _{\text{actual}}^{2}\\\operatorname {E} [{\hat {\sigma }}_{\mathrm {w} }^{2}]&={\frac {\sum \limits _{i=1}^{N}w_{i}\operatorname {E} [(x_{i}-\mu ^{*})^{2}]}{V_{1}}}\\&=\operatorname {E} [(X-\operatorname {E} [X])^{2}]-{\frac {V_{2}}{V_{1}^{2}}}\operatorname {E} [(X-\operatorname {E} [X])^{2}]\\&=\left(1-{\frac {V_{2}}{V_{1}^{2}}}\right)\sigma _{\text{actual}}^{2}\end{aligned}}}

where ${\displaystyle V_{1}=\sum _{i=1}^{N}w_{i}}$ and ${\displaystyle V_{2}=\sum _{i=1}^{N}w_{i}^{2}}$. Therefore, the bias in our estimator is ${\displaystyle \left(1-{\frac {V_{2}}{V_{1}^{2}}}\right)}$, analogous to the ${\displaystyle \left({\frac {N-1}{N}}\right)}$ bias in the unweighted estimator (also notice that ${\displaystyle \ V_{1}^{2}/V_{2}=N_{eff}}$ is the effective sample size). This means that to unbias our estimator we need to pre-divide by ${\displaystyle 1-\left(V_{2}/V_{1}^{2}\right)}$, ensuring that the expected value of the estimated variance equals the actual variance of the sampling distribution.

The final unbiased estimate of sample variance is:

{\displaystyle {\begin{aligned}s_{\mathrm {w} }^{2}\ &={\frac {{\hat {\sigma }}_{\mathrm {w} }^{2}}{1-(V_{2}/V_{1}^{2})}}\\&={\frac {\sum \limits _{i=1}^{N}w_{i}(x_{i}-\mu ^{*})^{2}}{V_{1}-(V_{2}/V_{1})}},\end{aligned}}} [2]

where ${\displaystyle \operatorname {E} [s_{\mathrm {w} }^{2}]=\sigma _{\text{actual}}^{2}}$.

The degrees of freedom of the weighted, unbiased sample variance vary accordingly from N − 1 down to 0.

The standard deviation is simply the square root of the variance above.
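
A sketch of this estimator (the helper name is illustrative); with equal weights it reduces to the ordinary Bessel-corrected variance:

```python
import numpy as np

def reliability_weighted_var(x, w):
    """Unbiased weighted sample variance for non-random (reliability) weights."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    V1, V2 = np.sum(w), np.sum(w**2)
    mu = np.sum(w * x) / V1
    return np.sum(w * (x - mu)**2) / (V1 - V2 / V1)

x = [2.0, 4.0, 5.0]
print(reliability_weighted_var(x, [1.0, 1.0, 1.0]))  # equals np.var(x, ddof=1)
```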

As a side note, other approaches have been described to compute the weighted sample variance. [3]

## Weighted sample covariance

In a weighted sample, each row vector ${\displaystyle \textstyle {\textbf {x}}_{i}}$ (each set of single observations on each of the K random variables) is assigned a weight ${\displaystyle \textstyle w_{i}\geq 0}$.

Then the weighted mean vector ${\displaystyle \textstyle \mathbf {\mu ^{*}} }$ is given by

${\displaystyle \mathbf {\mu ^{*}} ={\frac {\sum _{i=1}^{N}w_{i}\mathbf {x} _{i}}{\sum _{i=1}^{N}w_{i}}}.}$

And the weighted covariance matrix is given by: [4]

{\displaystyle {\begin{aligned}\mathbf {C} &={\frac {\sum _{i=1}^{N}w_{i}\left(\mathbf {x} _{i}-\mu ^{*}\right)^{T}\left(\mathbf {x} _{i}-\mu ^{*}\right)}{V_{1}}}.\end{aligned}}}

Similarly to weighted sample variance, there are two different unbiased estimators depending on the type of the weights.

### Frequency weights

If the weights are frequency weights, the unbiased weighted estimate of the covariance matrix ${\displaystyle \textstyle \mathbf {C} }$, with Bessel's correction, is given by: [4]

{\displaystyle {\begin{aligned}\mathbf {C} &={\frac {\sum _{i=1}^{N}w_{i}\left(\mathbf {x} _{i}-\mu ^{*}\right)^{T}\left(\mathbf {x} _{i}-\mu ^{*}\right)}{V_{1}-1}}.\end{aligned}}}

Note that this estimator can be unbiased only if the weights are neither standardized nor normalized; these processes change the data's mean and variance and thus lead to a loss of the base rate (the population count, which is required for Bessel's correction).

### Reliability weights

In the case of reliability weights, the weights are normalized:

${\displaystyle V_{1}=\sum _{i=1}^{N}w_{i}=1.}$

(If they are not, divide the weights by their sum to normalize them prior to calculating ${\displaystyle V_{1}}$:

${\displaystyle w_{i}'={\frac {w_{i}}{\sum _{j=1}^{N}w_{j}}}}$.)

Then the weighted mean vector ${\displaystyle \textstyle \mathbf {\mu ^{*}} }$ can be simplified to

${\displaystyle \mathbf {\mu ^{*}} =\sum _{i=1}^{N}w_{i}\mathbf {x} _{i}.}$

and the unbiased weighted estimate of the covariance matrix ${\displaystyle \textstyle \mathbf {C} }$ is: [5]

{\displaystyle {\begin{aligned}\mathbf {C} &={\frac {\sum _{i=1}^{N}w_{i}}{\left(\sum _{i=1}^{N}w_{i}\right)^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\sum _{i=1}^{N}w_{i}\left(\mathbf {x} _{i}-\mu ^{*}\right)^{T}\left(\mathbf {x} _{i}-\mu ^{*}\right)\\&={\frac {\sum _{i=1}^{N}w_{i}\left(\mathbf {x} _{i}-\mu ^{*}\right)^{T}\left(\mathbf {x} _{i}-\mu ^{*}\right)}{V_{1}-(V_{2}/V_{1})}}.\end{aligned}}}

The reasoning here is the same as in the previous section.

Since we assume the weights are normalized, ${\displaystyle V_{1}=1}$ and this reduces to:

${\displaystyle \mathbf {C} ={\frac {\sum _{i=1}^{N}w_{i}\left(\mathbf {x} _{i}-\mu ^{*}\right)^{T}\left(\mathbf {x} _{i}-\mu ^{*}\right)}{1-V_{2}}}.}$

If all weights are the same, i.e. ${\displaystyle \textstyle w_{i}/V_{1}=1/N}$, then the weighted mean and covariance reduce to the unweighted sample mean and covariance above.
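
A sketch covering both unbiased estimators (the `kind` switch and function name are illustrative):

```python
import numpy as np

def weighted_cov(X, w, kind="reliability"):
    """Unbiased weighted covariance of row observations X (shape N x K)."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    V1, V2 = np.sum(w), np.sum(w**2)
    mu = (w @ X) / V1                       # weighted mean vector
    D = X - mu                              # deviations, one row per observation
    S = (w[:, None] * D).T @ D              # sum of w_i * outer(d_i, d_i)
    denom = V1 - 1.0 if kind == "frequency" else V1 - V2 / V1
    return S / denom

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]])
print(weighted_cov(X, np.array([1.0, 1.0, 1.0])))   # reduces to np.cov(X.T)
```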

## Vector-valued estimates

The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. We simply replace the variance ${\displaystyle \sigma ^{2}}$ by the covariance matrix ${\displaystyle \mathbf {C} }$ and the arithmetic inverse by the matrix inverse (both denoted in the same way, via superscripts); the weight matrix then reads: [6]

${\displaystyle \mathbf {W} _{i}=\mathbf {C} _{i}^{-1}.}$

The weighted mean in this case is:

${\displaystyle {\bar {\mathbf {x} }}=\mathbf {C} _{\bar {\mathbf {x} }}\left(\sum _{i=1}^{n}\mathbf {W} _{i}\mathbf {x} _{i}\right),}$

(where the order of the matrix-vector product is not commutative), in terms of the covariance of the weighted mean:

${\displaystyle \mathbf {C} _{\bar {\mathbf {x} }}=\left(\sum _{i=1}^{n}\mathbf {W} _{i}\right)^{-1}.}$

For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then

${\displaystyle \mathbf {x} _{1}:={\begin{bmatrix}1&0\end{bmatrix}}^{\top },\qquad \mathbf {C} _{1}:={\begin{bmatrix}1&0\\0&100\end{bmatrix}}}$
${\displaystyle \mathbf {x} _{2}:={\begin{bmatrix}0&1\end{bmatrix}}^{\top },\qquad \mathbf {C} _{2}:={\begin{bmatrix}100&0\\0&1\end{bmatrix}}}$

then the weighted mean is:

{\displaystyle {\begin{aligned}{\bar {\mathbf {x} }}&=\left(\mathbf {C} _{1}^{-1}+\mathbf {C} _{2}^{-1}\right)^{-1}\left(\mathbf {C} _{1}^{-1}\mathbf {x} _{1}+\mathbf {C} _{2}^{-1}\mathbf {x} _{2}\right)\\[5pt]&={\begin{bmatrix}0.9901&0\\0&0.9901\end{bmatrix}}{\begin{bmatrix}1\\1\end{bmatrix}}={\begin{bmatrix}0.9901\\0.9901\end{bmatrix}}\end{aligned}}}

which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component, so the weighted mean is nearly [1 1].
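
A sketch, assuming NumPy, that reproduces this numerical example:

```python
import numpy as np

x1, C1 = np.array([1.0, 0.0]), np.diag([1.0, 100.0])
x2, C2 = np.array([0.0, 1.0]), np.diag([100.0, 1.0])

W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)   # weight matrices W_i = C_i^-1
C_xbar = np.linalg.inv(W1 + W2)                 # covariance of the weighted mean
xbar = C_xbar @ (W1 @ x1 + W2 @ x2)

print(xbar)                                     # [0.99009901 0.99009901]
```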

## Accounting for correlations

In the general case, suppose that ${\displaystyle \mathbf {X} =[x_{1},\dots ,x_{n}]^{T}}$, ${\displaystyle \mathbf {C} }$ is the covariance matrix relating the quantities ${\displaystyle x_{i}}$, ${\displaystyle {\bar {x}}}$ is the common mean to be estimated, and ${\displaystyle \mathbf {J} }$ is a design matrix equal to a vector of ones ${\displaystyle [1,...,1]^{T}}$ (of length ${\displaystyle n}$). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:

${\displaystyle \sigma _{\bar {x}}^{2}=(\mathbf {J} ^{T}\mathbf {W} \mathbf {J} )^{-1},}$

and

${\displaystyle {\bar {x}}=\sigma _{\bar {x}}^{2}(\mathbf {J} ^{T}\mathbf {W} \mathbf {X} ),}$

where:

${\displaystyle \mathbf {W} =\mathbf {C} ^{-1}.}$
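
A sketch of this minimum-variance estimate (the helper name and the covariance values are illustrative):

```python
import numpy as np

def correlated_mean(x, C):
    """Gauss-Markov minimum-variance estimate of a common mean under covariance C."""
    x = np.asarray(x, float)
    W = np.linalg.inv(C)                    # W = C^-1
    J = np.ones(len(x))                     # design matrix: a vector of ones
    var = 1.0 / (J @ W @ J)                 # sigma_xbar^2 = (J^T W J)^-1
    return var * (J @ W @ x), var

C = np.array([[1.0, 0.5], [0.5, 2.0]])      # hypothetical covariance matrix
print(correlated_mean([10.0, 11.0], C))
```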

## Decreasing strength of interactions

Consider the time series of an independent variable ${\displaystyle x}$ and a dependent variable ${\displaystyle y}$, with ${\displaystyle n}$ observations sampled at discrete times ${\displaystyle t_{i}}$. In many common situations, the value of ${\displaystyle y}$ at time ${\displaystyle t_{i}}$ depends not only on ${\displaystyle x_{i}}$ but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean ${\displaystyle z}$ for a window size ${\displaystyle m}$:

${\displaystyle z_{k}=\sum _{i=1}^{m}w_{i}x_{k+1-i}.}$
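
A sketch of the sliding mean via convolution (note that `numpy.convolve` reverses the kernel, which matches the indexing above; the weights shown are illustrative):

```python
import numpy as np

def sliding_mean(x, w):
    """z_k = sum_i w_i * x_{k+1-i}; w[0] multiplies the newest point in each window."""
    return np.convolve(x, np.asarray(w), mode="valid")

x = np.arange(10, dtype=float)
print(sliding_mean(x, [0.5, 0.3, 0.2]))     # window size m = 3
```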

## Exponentially decreasing weights

In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction ${\displaystyle 0<\Delta <1}$ at each time step. Setting ${\displaystyle w=1-\Delta }$ we can define ${\displaystyle m}$ normalized weights by

${\displaystyle w_{i}={\frac {w^{i-1}}{V_{1}}},}$

where ${\displaystyle V_{1}}$ is the sum of the unnormalized weights. In this case ${\displaystyle V_{1}}$ is simply

${\displaystyle V_{1}=\sum _{i=1}^{m}{w^{i-1}}={\frac {1-w^{m}}{1-w}},}$

approaching ${\displaystyle V_{1}=1/(1-w)}$ for large values of ${\displaystyle m}$.

The damping constant ${\displaystyle w}$ must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step ${\displaystyle (1-w)^{-1}}$, the weight approximately equals ${\displaystyle {e^{-1}}(1-w)\approx 0.37(1-w)}$, the tail area approximately equals ${\displaystyle e^{-1}\approx 0.37}$, and the head area approximately ${\displaystyle 1-e^{-1}\approx 0.63}$. The tail area at step ${\displaystyle n}$ is ${\displaystyle \leq {e^{-n(1-w)}}}$. Where primarily the closest ${\displaystyle n}$ observations matter and the effect of the remaining observations can be ignored safely, choose ${\displaystyle w}$ such that the tail area is sufficiently small.
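
A sketch constructing such weights and checking ${\displaystyle V_{1}}$ against its large-${\displaystyle m}$ limit:

```python
import numpy as np

# Exponentially decreasing weights: w_i = w^(i-1) / V1 with w = 1 - Delta.
m, delta = 50, 0.1
w = 1.0 - delta
raw = w ** np.arange(m)                     # w^0, w^1, ..., w^(m-1)
V1 = raw.sum()                              # (1 - w^m) / (1 - w)
weights = raw / V1                          # normalized: weights.sum() == 1

print(V1, 1.0 / (1.0 - w))                  # 9.9485...  vs  large-m limit 10.0
```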

## Weighted averages of functions

The concept of weighted average can be extended to functions. [7] Weighted averages of functions play an important role in the systems of weighted differential and integral calculus. [8]

## References

1. Gatz, Donald F.; Smith, Luther (June 1995). "The standard error of a weighted mean concentration—I. Bootstrapping vs other methods". Atmospheric Environment. 29 (11): 1185–1193. doi:10.1016/1352-2310(94)00210-C.
2. "GNU Scientific Library – Reference Manual: Weighted Samples". Gnu.org. Retrieved 22 December 2017.
3. Madansky, Albert. "Weighted Standard Error and its Impact on Significance Testing (WinCross vs. Quantum & SPSS)" (PDF). Analyticalgroup.com. Retrieved 22 December 2017.
4. Price, George R. (April 1972). "Extension of covariance selection mathematics" (PDF). Annals of Human Genetics. 35 (4): 485–490. doi:10.1111/j.1469-1809.1957.tb01874.x.
5. Galassi, Mark; Davies, Jim; Theiler, James; Gough, Brian; Jungman, Gerard; Booth, Michael; Rossi, Fabrice (2011). GNU Scientific Library – Reference Manual, Version 1.15. Sec. 21.7 "Weighted Samples".
6. James, Frederick (2006). Statistical Methods in Experimental Physics (2nd ed.). Singapore: World Scientific. p. 324. ISBN 981-270-527-9.
7. Hardy, G. H.; Littlewood, J. E.; Pólya, G. (1988). Inequalities (2nd ed.). Cambridge University Press. ISBN 978-0-521-35880-4.
8. Grossman, Jane; Grossman, Michael; Katz, Robert (1980). The First Systems of Weighted Differential and Integral Calculus. ISBN 0-9771170-1-4.
• Bevington, Philip R. (1969). Data Reduction and Error Analysis for the Physical Sciences. New York: McGraw-Hill. OCLC 300283069.
• Strutz, T. (2010). Data Fitting and Uncertainty (A Practical Introduction to Weighted Least Squares and Beyond). Vieweg+Teubner. ISBN 978-3-8348-1022-9.