# Standard error


The standard error (SE) [1] of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution [2] or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). [1]


The sampling distribution of a mean is generated by repeated sampling from the same population and recording the sample means obtained. This forms a distribution of different means, and this distribution has its own mean and variance. Mathematically, the variance of this sampling distribution equals the variance of the population divided by the sample size. Consequently, as the sample size increases, the sample means cluster more closely around the population mean.

Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. [1] In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean.

In regression analysis, the term "standard error" refers either to the square root of the reduced chi-squared statistic or to the standard error for a particular regression coefficient (as used in, say, confidence intervals).

## Standard error of the mean

### Exact value

If a statistically independent sample of ${\displaystyle n}$ observations ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$ is taken from a statistical population with a standard deviation of ${\displaystyle \sigma }$, then the mean value calculated from the sample, ${\displaystyle {\bar {x}}}$, will have an associated standard error of the mean, ${\displaystyle {\sigma }_{\bar {x}}}$, given by: [1]

${\displaystyle {\sigma }_{\bar {x}}\ ={\frac {\sigma }{\sqrt {n}}}}$.

Practically, this tells us that when trying to estimate the value of a population mean, due to the factor ${\displaystyle 1/{\sqrt {n}}}$, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many observations.
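As a quick numerical illustration (a minimal NumPy sketch; the value of σ and the sample sizes are arbitrary choices for this example):

```python
import numpy as np

sigma = 10.0  # assumed known population standard deviation
for n in (25, 100, 2500):
    se = sigma / np.sqrt(n)  # exact standard error of the mean
    print(f"n = {n:5d}: SE = {se:.3f}")

# n =    25: SE = 2.000
# n =   100: SE = 1.000   (4x the observations halves the SE)
# n =  2500: SE = 0.200   (100x the observations shrinks it tenfold)
```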

### Estimate

The standard deviation ${\displaystyle \sigma }$ of the population being sampled is seldom known. Therefore, the standard error of the mean is usually estimated by replacing ${\displaystyle \sigma }$ with the sample standard deviation ${\displaystyle \sigma _{x}}$:

${\displaystyle {\sigma }_{\bar {x}}\ \approx {\frac {\sigma _{x}}{\sqrt {n}}}}$.

As this is only an estimator for the true "standard error", it is common to see other notations here such as:

${\displaystyle {\widehat {\sigma }}_{\bar {x}}={\frac {\sigma _{x}}{\sqrt {n}}}}$ or alternatively ${\displaystyle {s}_{\bar {x}}\ ={\frac {s}{\sqrt {n}}}}$.

A common source of confusion occurs when failing to distinguish clearly between the standard deviation of the population (${\displaystyle \sigma }$), the standard deviation of the sample (${\displaystyle \sigma _{x}}$), the standard deviation of the mean itself (${\displaystyle \sigma _{\bar {x}}}$, which is the standard error), and the estimator of the standard deviation of the mean (${\displaystyle {\widehat {\sigma }}_{\bar {x}}}$, which is the most often calculated quantity, and is also often colloquially called the standard error).
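To make these quantities concrete, here is a minimal simulation sketch (the normal population, its parameters, and the sample size are assumptions made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 50.0, 12.0, 30       # population parameters, known only in this toy setting

x = rng.normal(mu, sigma, size=n)   # one observed sample
s = x.std(ddof=1)                   # sample standard deviation, estimates sigma
se_hat = s / np.sqrt(n)             # estimated standard error of the mean
se_true = sigma / np.sqrt(n)        # true standard error, computable because sigma is known

print(f"sigma = {sigma}, s = {s:.2f}, true SE = {se_true:.3f}, estimated SE = {se_hat:.3f}")
```

In real data only `s` and `se_hat` are available; `sigma` and `se_true` exist here only because the population was simulated.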

#### Accuracy of the estimator

When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%. Gurland and Tripathi (1971) provide a correction and equation for this effect. [3] Sokal and Rohlf (1981) give an equation of the correction factor for small samples of n < 20. [4] See unbiased estimation of standard deviation for further discussion.
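For a normal population, the expected shortfall of ${\displaystyle s}$ relative to ${\displaystyle \sigma }$ has a closed form, the ${\displaystyle c_{4}}$ constant discussed under unbiased estimation of standard deviation; a minimal sketch (assuming normality) computes it directly:

```python
from math import gamma, sqrt

def c4(n: int) -> float:
    """E[s] / sigma for a sample of size n from a normal population."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

for n in (2, 6, 20):
    print(f"n = {n:2d}: E[s]/sigma = {c4(n):.3f}")
```

Dividing ${\displaystyle s}$ by ${\displaystyle c_{4}(n)}$ removes the bias.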

### Derivation

The standard error of the mean may be derived from the variance of a sum of independent random variables, [5] given the definition of variance and some simple properties thereof. If ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$ are ${\displaystyle n}$ independent observations from a population with mean ${\displaystyle \mu }$ and standard deviation ${\displaystyle \sigma }$, then we can define the total

${\displaystyle T=(x_{1}+x_{2}+\cdots +x_{n})}$

which, by the Bienaymé formula, will have variance

${\displaystyle \operatorname {Var} (T)={\big (}\operatorname {Var} (x_{1})+\operatorname {Var} (x_{2})+\cdots +\operatorname {Var} (x_{n}){\big )}=n\sigma ^{2}.}$

The mean of these measurements ${\displaystyle {\bar {x}}}$ is simply given by

${\displaystyle {\bar {x}}=T/n}$.

The variance of the mean is then

${\displaystyle \operatorname {Var} ({\bar {x}})=\operatorname {Var} \left({\frac {T}{n}}\right)={\frac {1}{n^{2}}}\operatorname {Var} (T)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.}$

The standard error is, by definition, the standard deviation of ${\displaystyle {\bar {x}}}$ which is simply the square root of the variance:

${\displaystyle \sigma _{\bar {x}}={\sqrt {\frac {\sigma ^{2}}{n}}}={\frac {\sigma }{\sqrt {n}}}}$.
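Each step of this derivation can be checked empirically (a sketch; the population parameters and replication count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n, reps = 3.0, 16, 200_000

x = rng.normal(0.0, sigma, size=(reps, n))
T = x.sum(axis=1)                    # the total of each simulated sample
print(f"Var(T)    ~ {T.var():8.2f}   (theory: n*sigma^2   = {n * sigma**2})")
print(f"Var(mean) ~ {(T / n).var():8.4f}   (theory: sigma^2 / n = {sigma**2 / n})")
```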

For correlated random variables the sample variance needs to be computed according to the Markov chain central limit theorem.

### Independent and identically distributed random variables with random sample size

There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size ${\displaystyle N}$ is a random variable whose variation adds to the variation of ${\displaystyle X}$, such that

${\displaystyle \operatorname {Var} (T)=\operatorname {E} (N)\operatorname {Var} (X)+\operatorname {Var} (N){\big (}\operatorname {E} (X){\big )}^{2}}$ [6]

If ${\displaystyle N}$ has a Poisson distribution, then ${\displaystyle \operatorname {E} (N)=\operatorname {Var} (N)}$, and both may be estimated by the observed sample size ${\displaystyle n}$. Hence the estimator of ${\displaystyle \operatorname {Var} (T)}$ becomes ${\displaystyle nS_{X}^{2}+n{\bar {X}}^{2}}$, leading to the following formula for the standard error:

${\displaystyle \operatorname {Standard~Error} ({\bar {X}})={\sqrt {\frac {S_{X}^{2}+{\bar {X}}^{2}}{n}}}}$

(since the standard deviation is the square root of the variance)
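A simulation can confirm both the variance decomposition and the resulting estimator (a sketch; the Poisson rate and the exponential distribution chosen for ${\displaystyle X}$ are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, reps = 40.0, 50_000            # Poisson rate for N, number of replications

# T = X_1 + ... + X_N with N ~ Poisson(lam) and X ~ Exponential(mean 2),
# so E(X) = 2, Var(X) = 4, and E(N) = Var(N) = lam.
totals = np.array([rng.exponential(2.0, size=rng.poisson(lam)).sum()
                   for _ in range(reps)])

# Theory: Var(T) = E(N) Var(X) + Var(N) E(X)^2 = 4*lam + 4*lam = 8*lam
print(f"empirical Var(T) = {totals.var():.1f}, theory = {8 * lam:.1f}")
```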

## Student approximation when σ value is unknown

In many practical applications, the true value of σ is unknown. As a result, we need to use a distribution that takes into account the spread of possible σ's. When the true underlying distribution is known to be Gaussian, although with unknown σ, the resulting estimated distribution follows the Student t-distribution. The standard error is the standard deviation of the Student t-distribution. T-distributions are slightly different from the Gaussian and vary depending on the size of the sample. Small samples are somewhat more likely to underestimate the population standard deviation and to have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation "s" in place of σ, and this value can be used to calculate confidence intervals.

Note: The Student's probability distribution is approximated well by the Gaussian distribution when the sample size is over 100. For such samples one can use the latter distribution, which is much simpler.
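This convergence toward the Gaussian critical value of about 1.96 is easy to verify (a sketch using SciPy; the degrees-of-freedom values are arbitrary):

```python
from scipy import stats

for df in (5, 30, 100, 1000):
    print(f"df = {df:4d}: 97.5th percentile of t = {stats.t.ppf(0.975, df):.3f}")
print(f"normal distribution:       z = {stats.norm.ppf(0.975):.3f}")
```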

## Assumptions and usage

One common use of ${\displaystyle \operatorname {SE} }$ is to construct confidence intervals for the unknown population mean. If the sampling distribution is normally distributed, the sample mean, the standard error, and the quantiles of the normal distribution can be used to calculate confidence intervals for the true population mean. The following expressions can be used to calculate the upper and lower 95% confidence limits, where ${\displaystyle {\bar {x}}}$ is the sample mean, ${\displaystyle \operatorname {SE} }$ is the standard error of the sample mean, and 1.96 is the approximate value of the 97.5 percentile point of the normal distribution:

Upper 95% limit ${\displaystyle ={\bar {x}}+(\operatorname {SE} \times 1.96),}$ and
Lower 95% limit ${\displaystyle ={\bar {x}}-(\operatorname {SE} \times 1.96).}$
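As a concrete sketch (the data values below are invented for illustration, and the standard error is estimated from the sample itself):

```python
import numpy as np

x = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0])  # hypothetical measurements
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))    # estimated standard error of the mean

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```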

In particular, the standard error of a sample statistic (such as the sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for the standard error can be any one of SE, SEM (for standard error of measurement or mean), or ${\displaystyle S_{E}}$.

Standard errors provide simple measures of uncertainty in a value and are often used because:

- in many cases, if the standard error of several individual quantities is known, then the standard error of some function of the quantities can be easily calculated;
- where the probability distribution of the value is known, it can be used to calculate an exact confidence interval;
- where the probability distribution is unknown, Chebyshev's or the Vysochanskij–Petunin inequalities can be used to calculate a conservative confidence interval; and
- as the sample size tends to infinity, the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.

### Standard error of mean versus standard deviation

In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process. The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. [7]

Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. [8] If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases.
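This contrast is easy to see by simulation (a sketch; the standard normal population and the sample sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (10, 100, 10_000):
    x = rng.standard_normal(n)      # population standard deviation is 1 by construction
    s = x.std(ddof=1)
    print(f"n = {n:6d}: sample SD = {s:.3f} (approaches 1), "
          f"SEM = {s / np.sqrt(n):.4f} (approaches 0)")
```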

## Extensions

### Finite population correction (FPC)

The formula given above for the standard error assumes that the sample size is much smaller than the population size, so that the population can be considered to be effectively infinite in size. This is usually the case even with finite populations, because most of the time, people are primarily interested in managing the processes that created the existing finite population; this is called an analytic study, following W. Edwards Deming. If people are interested in managing an existing finite population that will not change over time, then it is necessary to adjust for the population size; this is called an enumerative study.

When the sampling fraction (often termed f) is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a *finite population correction* (FPC): [9] [10]

${\displaystyle \operatorname {FPC} ={\sqrt {\frac {N-n}{N-1}}}}$

which, for large N:

${\displaystyle \operatorname {FPC} \approx {\sqrt {1-{\frac {n}{N}}}}={\sqrt {1-f}}}$

to account for the added precision gained by sampling close to a larger percentage of the population. The effect of the FPC is that the error becomes zero when the sample size n is equal to the population size N.

This happens in survey methodology when sampling without replacement. If sampling with replacement, then FPC does not come into play.
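A short helper makes the correction concrete (a sketch; the function name and the numbers are invented for illustration):

```python
import math

def fpc(N: int, n: int) -> float:
    """Finite population correction sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

N, n, s = 1000, 200, 8.0            # population size, sample size, sample SD (assumed)
se = s / math.sqrt(n)
print(f"sampling fraction f = {n / N:.0%}")
print(f"uncorrected SE = {se:.3f}, corrected SE = {se * fpc(N, n):.3f}")
```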

### Correction for correlation in the sample

If values of the measured quantity A are not statistically independent but have been obtained from known locations in parameter space x, an unbiased estimate of the true standard error of the mean (actually a correction on the standard deviation part) may be obtained by multiplying the calculated standard error of the sample by the factor f:

${\displaystyle f={\sqrt {\frac {1+\rho }{1-\rho }}},}$

where the sample bias coefficient ρ is the widely used Prais–Winsten estimate of the autocorrelation coefficient (a quantity between −1 and +1) for all sample point pairs. This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series like Wall Street stock quotes. Moreover, this formula works for positive and negative ρ alike. [11] See also unbiased estimation of standard deviation for more discussion.
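As a sketch of how the factor might be applied (using the lag-1 sample autocorrelation as a stand-in for ρ; the AR(1) series is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Build an autocorrelated AR(1) series so the correction has something to correct.
n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

xc = x - x.mean()
rho = (xc[:-1] * xc[1:]).sum() / (xc * xc).sum()   # lag-1 sample autocorrelation

naive_se = x.std(ddof=1) / np.sqrt(n)
f = np.sqrt((1 + rho) / (1 - rho))                 # correction factor from the text
print(f"rho = {rho:.2f}, naive SE = {naive_se:.4f}, corrected SE = {f * naive_se:.4f}")
```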


## References

1. Altman, Douglas G.; Bland, J. Martin (2005-10-15). "Standard deviations and standard errors". BMJ. 331 (7521): 903. doi:10.1136/bmj.331.7521.903. ISSN 0959-8138. PMID 16223828.
2. Everitt, B. S. (2003). The Cambridge Dictionary of Statistics. CUP. ISBN 978-0-521-81099-9.
3. Gurland, J.; Tripathi, R. C. (1971). "A simple approximation for unbiased estimation of the standard deviation". American Statistician. 25 (4): 30–32. doi:10.2307/2682923. JSTOR 2682923.
4. Sokal, R. R.; Rohlf, F. J. (1981). Biometry (2nd ed.). p. 53. ISBN 978-0-7167-1254-1.
5. Hutchinson, T. P. (1993). Essentials of Statistical Methods, in 41 pages. Adelaide: Rumsby. ISBN 978-0-646-12621-0.
6. Cornell, J. R.; Benjamin, C. A. (1970). Probability, Statistics, and Decisions for Civil Engineers. New York: McGraw-Hill. pp. 178–179. ISBN 0486796094.
7. Barde, M. (2012). "What to use to express the variability of data: Standard deviation or standard error of mean?". Perspect. Clin. Res. 3 (3): 113–116. doi:10.4103/2229-3485.100662. PMID 23125963.
8. Wassertheil-Smoller, Sylvia (1995). Biostatistics and Epidemiology: A Primer for Health Professionals (2nd ed.). New York: Springer. pp. 40–43. ISBN 0-387-94388-9.
9. Isserlis, L. (1918). "On the value of a mean as calculated from a sample". Journal of the Royal Statistical Society. 81 (1): 75–81. doi:10.2307/2340569. JSTOR 2340569. (Equation 1)
10. Bondy, Warren; Zlot, William (1976). "The Standard Error of the Mean and the Difference Between Means for Finite Populations". The American Statistician. 30 (2): 96–97. doi:10.1080/00031305.1976.10479149. JSTOR 2683803. (Equation 2)
11. Bence, James R. (1995). "Analysis of Short Time Series: Correcting for Autocorrelation". Ecology. 76 (2): 628–639. doi:10.2307/1941218. JSTOR 1941218.