Assumed mean

In statistics, the assumed mean is a method for calculating the arithmetic mean and standard deviation of a data set. It simplifies calculating accurate values by hand. Its interest today is chiefly historical, but it can be used to quickly estimate these statistics. There are other rapid-calculation methods, better suited to computers, that also ensure more accurate results than the obvious methods.

Example

The mean of the following numbers is sought:

219, 223, 226, 228, 231, 234, 235, 236, 240, 241, 244, 247, 249, 255, 262

Suppose we start with a plausible initial guess that the mean is about 240. Then the deviations from this "assumed" mean are the following:

−21, −17, −14, −12, −9, −6, −5, −4, 0, 1, 4, 7, 9, 15, 22

In adding these up, one finds that:

22 and −21 almost cancel, leaving +1,
15 and −17 almost cancel, leaving −2,
9 and −9 cancel,
7 + 4 cancels −6 − 5,

and so on. We are left with a sum of −30. The average of these 15 deviations from the assumed mean is therefore −30/15 = −2. Therefore, that is what we need to add to the assumed mean to get the correct mean:

correct mean = 240 − 2 = 238.
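
The same bookkeeping is easy to reproduce programmatically. The following minimal Python sketch (the variable names are illustrative, not from any library) repeats the calculation above:

```python
# Recover the exact mean from deviations about an assumed mean.
data = [219, 223, 226, 228, 231, 234, 235, 236, 240,
        241, 244, 247, 249, 255, 262]
assumed = 240

deviations = [x - assumed for x in data]   # -21, -17, ..., 15, 22
correction = sum(deviations) / len(data)   # -30 / 15 = -2
print(assumed + correction)                # 238.0
```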

Method

The method depends on estimating the mean and rounding it to a value that is easy to calculate with. This value is then subtracted from all the sample values. When the samples are classed into equal-size ranges, a central class is chosen and the count of ranges from it is used in the calculations. For example, for people's heights a value of 1.75 m might be used as the assumed mean.

For a data set with assumed mean $x_0$, suppose:

$d_i = x_i - x_0$

$A = \sum_{i=1}^{N} d_i$

$B = \sum_{i=1}^{N} d_i^2$

$N$ = the number of data points.

Then

$\text{mean} = x_0 + \frac{A}{N}$

$\sigma = \sqrt{\frac{B}{N} - \left(\frac{A}{N}\right)^2}$

or for a sample standard deviation using Bessel's correction:

$s = \sqrt{\frac{B - A^2/N}{N - 1}}$
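
As a sketch of how the formulas above fit together, here is a small self-contained Python function; the name `assumed_stats` and its signature are chosen here for illustration:

```python
import math

def assumed_stats(data, x0):
    """Mean and standard deviations of `data` about the assumed mean x0.

    Returns (mean, population sd, sample sd with Bessel's correction).
    """
    n = len(data)
    devs = [x - x0 for x in data]
    a = sum(devs)                    # A: sum of deviations
    b = sum(d * d for d in devs)     # B: sum of squared deviations
    mean = x0 + a / n
    sigma = math.sqrt(b / n - (a / n) ** 2)    # population sd
    s = math.sqrt((b - a * a / n) / (n - 1))   # sample sd (Bessel)
    return mean, sigma, s

# The first example: the exact mean is recovered whatever x0 was guessed.
print(assumed_stats([219, 223, 226, 228, 231, 234, 235, 236,
                     240, 241, 244, 247, 249, 255, 262], 240)[0])  # 238.0
```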

Example using class ranges

Where there are a large number of samples, a quick, reasonable estimate of the mean and standard deviation can be obtained by grouping the samples into classes using equal-size ranges. This introduces a quantization error but is normally accurate enough for most purposes if 10 or more classes are used.

For instance, consider the following 100 measurements (people's heights in centimetres):

167.8 175.4 176.1 166 174.7 170.2 178.9 180.4 174.6 174.5 182.4 173.4 167.4 170.7 180.6 169.6 176.2 176.3 175.1 178.7 167.2 180.2 180.3 164.7 167.9 179.6 164.9 173.2 180.3 168 175.5 172.9 182.2 166.7 172.4 181.9 175.9 176.8 179.6 166 171.5 180.6 175.5 173.2 178.8 168.3 170.3 174.2 168 172.6 163.3 172.5 163.4 165.9 178.2 174.6 174.3 170.5 169.7 176.2 175.1 177 173.5 173.6 174.3 174.4 171.1 173.3 164.6 173 177.9 166.5 159.6 170.5 174.7 182 172.7 175.9 171.5 167.1 176.9 181.7 170.7 177.5 170.9 178.1 174.3 173.3 169.2 178.2 179.4 187.6 186.4 178.1 174 177.1 163.3 178.1 179.1 175.6

The minimum and maximum are 159.6 and 187.6, so the values can be grouped as follows, rounding the numbers down. The class size (CS) is 3. The assumed mean is the centre of the range from 174 to 177, which is 175.5. The differences are counted in classes.

Observed numbers in ranges

Range     Tally count                 Frequency   Class diff   Freq × diff   Freq × diff²
159—161   /                           1           −5           −5            25
162—164   //// /                      6           −4           −24           96
165—167   //// ////                   10          −3           −30           90
168—170   //// //// ///               13          −2           −26           52
171—173   //// //// //// /            16          −1           −16           16
174—176   //// //// //// //// ////    25          0            0             0
177—179   //// //// //// /            16          1            16            16
180—182   //// //// /                 11          2            22            44
183—185                               0           3            0             0
186—188   //                          2           4            8             32
Sum                                   N = 100                  A = −55       B = 371

The mean is then estimated to be

$\bar{x} \approx x_0 + \mathrm{CS} \cdot \frac{A}{N} = 175.5 + 3 \times \frac{-55}{100} = 173.85$

which is very close to the actual mean of 173.846.

The standard deviation is estimated as

$\sigma \approx \mathrm{CS} \sqrt{\frac{B}{N} - \left(\frac{A}{N}\right)^2} = 3 \sqrt{\frac{371}{100} - \left(\frac{-55}{100}\right)^2} \approx 5.54$
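
For completeness, the grouped-class estimate can be sketched in Python as well, using the class size, assumed mean, and frequencies from the table above:

```python
import math

cs = 3        # class size
x0 = 175.5    # assumed mean: centre of the 174-176 class
# Class differences -5..4 paired with the observed frequencies.
freqs = {-5: 1, -4: 6, -3: 10, -2: 13, -1: 16,
          0: 25, 1: 16, 2: 11, 3: 0, 4: 2}

n = sum(freqs.values())                       # N = 100
a = sum(d * f for d, f in freqs.items())      # A = -55
b = sum(d * d * f for d, f in freqs.items())  # B = 371

mean_est = x0 + cs * a / n                    # 175.5 - 1.65 = 173.85
sd_est = cs * math.sqrt(b / n - (a / n) ** 2) # ~5.54
print(mean_est, sd_est)
```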
