Assumed mean

In statistics, the assumed mean is a method for calculating the arithmetic mean and standard deviation of a data set. It simplifies accurate calculation by hand. Its interest today is chiefly historical, but it can still be used to estimate these statistics quickly. Other rapid-calculation methods, better suited to computers, also give more accurate results than the naive formulas.

Example

Suppose the mean of the following numbers is sought:

219, 223, 226, 228, 231, 234, 235, 236, 240, 241, 244, 247, 249, 255, 262

Suppose we start with a plausible initial guess that the mean is about 240. Then the deviations from this "assumed" mean are the following:

−21, −17, −14, −12, −9, −6, −5, −4, 0, 1, 4, 7, 9, 15, 22

In adding these up, one finds that:

22 and −21 almost cancel, leaving +1,
15 and −17 almost cancel, leaving −2,
9 and −9 cancel,
7 + 4 cancels −6 − 5,

and so on. We are left with a sum of −30. The average of these 15 deviations from the assumed mean is therefore −30/15 = −2. Therefore, that is what we need to add to the assumed mean to get the correct mean:

correct mean = 240 + (−2) = 238.
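The arithmetic above can be sketched in a few lines of Python (an illustrative check, not part of the original method):

```python
# The worked example above: deviations from an assumed mean of 240.
data = [219, 223, 226, 228, 231, 234, 235, 236, 240,
        241, 244, 247, 249, 255, 262]
assumed = 240

deviations = [x - assumed for x in data]        # -21, -17, ..., 15, 22
correction = sum(deviations) / len(deviations)  # -30 / 15 = -2.0

mean = assumed + correction
print(mean)  # 238.0
```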

Method

The method depends on estimating the mean and rounding it to a value that is easy to calculate with. This value is then subtracted from all the sample values. When the samples are grouped into classes of equal width, a central class is chosen and the count of classes from it is used in the calculations. For example, for people's heights a value of 1.75 m might be used as the assumed mean.

For a data set of N values with assumed mean x₀, suppose the deviation of each sample xᵢ is

dᵢ = xᵢ − x₀

and let

A = Σ dᵢ and B = Σ dᵢ²

Then

mean = x₀ + A/N

σ = √((B − A²/N) / N)

or for a sample standard deviation using Bessel's correction:
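The formulas above can be sketched directly in Python (a minimal illustration; the helper name `assumed_mean_stats` and the variable names A and B follow this article's notation, not any established library):

```python
import math

def assumed_mean_stats(data, x0):
    """Return (mean, sample standard deviation) computed via assumed mean x0."""
    n = len(data)
    devs = [x - x0 for x in data]
    A = sum(devs)                 # sum of deviations
    B = sum(d * d for d in devs)  # sum of squared deviations
    mean = x0 + A / n
    # Bessel's correction: divide by n - 1 for the sample standard deviation.
    s = math.sqrt((B - A * A / n) / (n - 1))
    return mean, s

# The 15 numbers from the first example, with assumed mean 240.
values = [219, 223, 226, 228, 231, 234, 235, 236, 240,
          241, 244, 247, 249, 255, 262]
m, s = assumed_mean_stats(values, 240)
print(m)  # 238.0
```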

Example using class ranges

Where there are a large number of samples, a quick and reasonable estimate of the mean and standard deviation can be obtained by grouping the samples into classes of equal width. This introduces a quantization error, but it is normally accurate enough for most purposes if 10 or more classes are used.

For instance, consider the following 100 samples:

167.8 175.4 176.1 166 174.7 170.2 178.9 180.4 174.6 174.5 182.4 173.4 167.4 170.7 180.6 169.6 176.2 176.3 175.1 178.7 167.2 180.2 180.3 164.7 167.9 179.6 164.9 173.2 180.3 168 175.5 172.9 182.2 166.7 172.4 181.9 175.9 176.8 179.6 166 171.5 180.6 175.5 173.2 178.8 168.3 170.3 174.2 168 172.6 163.3 172.5 163.4 165.9 178.2 174.6 174.3 170.5 169.7 176.2 175.1 177 173.5 173.6 174.3 174.4 171.1 173.3 164.6 173 177.9 166.5 159.6 170.5 174.7 182 172.7 175.9 171.5 167.1 176.9 181.7 170.7 177.5 170.9 178.1 174.3 173.3 169.2 178.2 179.4 187.6 186.4 178.1 174 177.1 163.3 178.1 179.1 175.6

The minimum and maximum are 159.6 and 187.6, so we can group the values as follows, rounding each number down. The class size (CS) is 3. The assumed mean is the centre of the range from 174 to 177, which is 175.5. The differences are counted in whole classes.

Observed numbers in ranges

Range     Frequency   Class diff   freq × diff   freq × diff²
159–161        1          −5            −5            25
162–164        6          −4           −24            96
165–167       10          −3           −30            90
168–170       13          −2           −26            52
171–173       16          −1           −16            16
174–176       25           0             0             0
177–179       16           1            16            16
180–182       11           2            22            44
183–185        0           3             0             0
186–188        2           4             8            32
Sum       N = 100                   A = −55       B = 371

The mean is then estimated to be

estimated mean = x₀ + CS × A/N = 175.5 + 3 × (−55)/100 = 173.85

which is very close to the actual mean of 173.846.

The standard deviation is estimated as

estimated s = CS × √((B − A²/N) / (N − 1)) = 3 × √((371 − (−55)²/100) / 99) ≈ 5.57

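The grouped calculation can be sketched as follows (Python; an illustrative check of the table's A and B values, working only from the class frequencies rather than the raw data):

```python
import math

# Frequencies per class, read from the table above (classes of width 3
# starting at 159; the assumed mean 175.5 is the centre of class index 5).
freqs = [1, 6, 10, 13, 16, 25, 16, 11, 0, 2]
class_size = 3
assumed_mean = 175.5
centre_index = 5  # the 174-176 class

N = sum(freqs)                                    # 100
diffs = [i - centre_index for i in range(len(freqs))]
A = sum(f * d for f, d in zip(freqs, diffs))      # -55
B = sum(f * d * d for f, d in zip(freqs, diffs))  # 371

mean = assumed_mean + class_size * A / N
s = class_size * math.sqrt((B - A * A / N) / (N - 1))
print(round(mean, 2), round(s, 2))  # 173.85 5.57
```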