The **theory of statistics** provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics.[1][2] The theory covers approaches to statistical decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for each of these approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can identify the best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.[2][3]


Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, to utility theory, and to optimization.

Statistical theory provides an underlying rationale and a consistent basis for the choice of methodology used in applied statistics.

Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds:

- Sampling from a finite population
- Measuring observational error and refining procedures
- Studying statistical relations

Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.[4]

Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error.[5][6][7] Optimization of data collection reduces the cost of data while satisfying statistical goals,[8][9] while randomization allows reliable inferences. Statistical theory provides a basis for good data collection and the structuring of investigations in the topics of:

- Design of experiments to estimate treatment effects, to test hypotheses, and to optimize responses[8][10][11]
- Survey sampling to describe populations[12][13][14]

The task of summarising statistical data in conventional forms (also known as descriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data. Thus the problems theoretical statistics considers include:

- Choosing summary statistics to describe a sample
- Summarising probability distributions of sample data while making limited assumptions about the form of distribution that may be met
- Summarising the relationships between different quantities measured on the same items within a sample
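The first of these tasks, choosing summary statistics, can be sketched with Python's standard library on a hypothetical sample:

```python
import statistics

# Hypothetical sample: conventional summary statistics describing
# its location and spread.
sample = [2.3, 1.9, 3.1, 2.8, 2.5, 2.2, 3.0, 2.7]

mean = statistics.mean(sample)            # centre of the sample
median = statistics.median(sample)        # robust centre
spread = statistics.stdev(sample)         # sample standard deviation (n - 1)
quartiles = statistics.quantiles(sample, n=4)   # cut points for quarters
```

Which of these summaries to report, and how reliable they are from a limited sample, is exactly the question theoretical statistics addresses.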

Besides the philosophy underlying statistical inference, statistical theory has the task of considering the types of questions that data analysts might want to ask about the problems they are studying and of providing data analytic techniques for answering them. Some of these tasks are:

- Summarising populations in the form of a fitted distribution or probability density function
- Summarising the relationship between variables using some type of regression analysis
- Providing ways of predicting the outcome of a random quantity given other related variables
- Examining the possibility of reducing the number of variables being considered within a problem (the task of dimension reduction)
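The first task above, summarising a population by a fitted distribution, can be sketched with hypothetical measurements and the standard library's `NormalDist`:

```python
import statistics

# Summarising a population by a fitted normal distribution
# (hypothetical measurements).
data = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]
fitted = statistics.NormalDist.from_samples(data)

# The fitted distribution can then answer probability questions,
# e.g. the chance of a value falling below 4.5.
p_below = fitted.cdf(4.5)
```

Once the population is summarised by the fitted distribution, questions about unobserved values reduce to probability calculations on that distribution.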

When a statistical procedure has been specified in the study protocol, statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, constructing confidence intervals, testing hypotheses, and selecting the best procedure. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population: it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying whether corresponding values derived for different populations are as different as they seem. However, the reliability of inferences from post-hoc observational data is often worse than for planned randomized generation of data.

Statistical theory provides the basis for a number of data-analytic approaches that are common across scientific and social research. Interpreting data is done with one of the following approaches:

- Estimating parameters
- Providing a range of values instead of a point estimate
- Testing statistical hypotheses

Many of the standard methods for those approaches rely on certain statistical assumptions (made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition, it provides a range of robust statistical techniques that are less dependent on assumptions, and it provides methods for checking whether particular assumptions are reasonable for a given data set.
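A small illustration of robustness, with hypothetical data: how a robust statistic (the median) and a non-robust one (the mean) react when the implicit "no gross errors" assumption fails.

```python
import statistics

# Five clean readings plus one gross recording error (hypothetical data).
clean = [10.1, 9.8, 10.0, 10.2, 9.9]
contaminated = clean + [99.0]   # one gross recording error

# The mean is dragged far from the truth; the median barely moves.
mean_shift = abs(statistics.mean(contaminated) - statistics.mean(clean))
median_shift = abs(statistics.median(contaminated) - statistics.median(clean))
```

The contrast is the essence of robust statistics: a procedure whose conclusions change little when an assumption is mildly violated.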

**Analysis of variance** (**ANOVA**) is a collection of statistical models and their associated estimation procedures used to analyze the differences among means in a sample. ANOVA was developed by the statistician Ronald Fisher. The ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the *t*-test beyond two means.
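The variance partition behind ANOVA can be sketched by hand with hypothetical data: the F statistic compares between-group variation with within-group variation.

```python
import statistics

# Three hypothetical groups of measurements.
groups = [
    [6.1, 5.8, 6.0, 6.3],
    [7.2, 7.0, 6.8, 7.1],
    [5.1, 5.3, 4.9, 5.0],
]

k = len(groups)                       # number of groups
n = sum(len(g) for g in groups)       # total observations
grand_mean = statistics.mean(x for g in groups for x in g)

# Between-group sum of squares (k - 1 degrees of freedom).
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares (n - k degrees of freedom).
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F statistic (here well above 1) indicates that the group means differ by more than within-group noise alone would explain; a table or distribution of F would convert it to a p-value.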

**Bayesian probability** is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

**Statistics** is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.

**Statistical inference** is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

In statistics, **survey sampling** describes the process of selecting a sample of elements from a target population to conduct a survey. The term "survey" may refer to many different types or techniques of observation; in survey sampling it most often involves a questionnaire used to measure the characteristics and/or attitudes of people. The different ways of contacting members of a sample once they have been selected are the subject of survey data collection. The purpose of sampling is to reduce the cost and/or the amount of work that it would take to survey the entire target population. A survey that measures the entire target population is called a census. A sample refers to a group or section of a population from which information is to be obtained.
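The simplest design, simple random sampling, can be sketched with hypothetical household IDs and the standard library:

```python
import random

# Hypothetical target population of 1000 households; a census would
# survey all of them, a sample surveys only a random subset.
population = [f"household_{i}" for i in range(1000)]

rng = random.Random(42)                  # seeded only for reproducibility
sample = rng.sample(population, k=50)    # selection without replacement
```

Because every household has the same chance of selection, estimates computed from the sample support probability statements about the whole population.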

Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations almost always requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions.

A **statistical hypothesis** is a hypothesis that is testable on the basis of observed data modelled as the realised values taken by a collection of random variables. A set of data is modelled as being realised values of a collection of random variables having a joint probability distribution in some set of possible joint distributions; the hypothesis being tested identifies a subset of those possible distributions. A **statistical hypothesis test** is a method of statistical inference. A null hypothesis and an alternative hypothesis are proposed for the probability distribution of the data, either explicitly or only informally. The comparison of the two models is deemed *statistically significant* if, according to a threshold probability (the significance level), the data would be unlikely to occur if the null hypothesis were true. A hypothesis test specifies which outcomes of a study may lead to a rejection of the null hypothesis at a pre-specified level of significance, while using a pre-chosen measure of deviation from that hypothesis. The pre-chosen level of significance is the maximal allowed "false positive rate". One wants to control the risk of incorrectly rejecting a true null hypothesis.
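A minimal worked test, with a hypothetical outcome: under the null hypothesis that a coin is fair, how unlikely is it to see at least 9 heads in 10 tosses?

```python
from math import comb

# Hypothetical outcome: 9 heads out of 10 tosses.
n, observed_heads = 10, 9
alpha_level = 0.05            # pre-chosen significance level

# One-sided p-value: P(X >= 9) for X ~ Binomial(10, 0.5).
p_value = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2 ** n

# Reject the null hypothesis if the outcome is this unlikely or worse.
reject_null = p_value <= alpha_level
```

Here the p-value is 11/1024, about 0.011, so at the 5% level the outcome falls in the rejection region and the fair-coin hypothesis is rejected.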

In statistics, **interval estimation** is the use of sample data to calculate an interval of possible values of an unknown population parameter; this is in contrast to point estimation, which gives a single value. Jerzy Neyman (1937) identified interval estimation as distinct from point estimation. In doing so, he recognized that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.
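A minimal sketch of an interval estimate, with hypothetical data: an approximate 95% confidence interval for a population mean using the normal approximation.

```python
import statistics

# Hypothetical measurements.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.1, 12.0]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / len(sample) ** 0.5   # standard error of mean
z = 1.96                                              # ~95% normal quantile

# A range of plausible values, rather than the single point estimate.
interval = (mean - z * sem, mean + z * sem)
```

For a small sample like this, a t quantile would be more accurate than 1.96; the normal value is used here only to keep the sketch self-contained.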

In statistics, an **outlier** is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set. An outlier can cause serious problems in statistical analyses.
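One common screen for outliers, sketched with hypothetical data: flag points more than 1.5 interquartile ranges outside the middle half of the sample.

```python
import statistics

# Hypothetical data: the value 12.0 looks suspicious.
data = [5.0, 5.2, 4.9, 5.1, 5.3, 5.0, 12.0]

q1, _, q3 = statistics.quantiles(data, n=4)   # first and third quartiles
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr    # Tukey's 1.5-IQR fences

outliers = [x for x in data if x < low or x > high]
```

Flagged points deserve investigation before any decision to exclude them: the rule identifies candidates, not confirmed errors.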

**Nonparametric statistics** is the branch of statistics that is not based solely on parametrized families of probability distributions. Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are violated.
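A classic distribution-free procedure is the sign test, sketched here with hypothetical paired readings: only the signs of the differences are used, so nothing is assumed about the shape of their distribution.

```python
from math import comb

# Hypothetical paired readings, e.g. before and after a treatment.
before = [140, 135, 150, 142, 138, 145, 151, 139]
after = [132, 130, 148, 135, 134, 140, 146, 136]

diffs = [b - a for b, a in zip(before, after)]
nonzero = [d for d in diffs if d != 0]
n_pos = sum(d > 0 for d in nonzero)
n = len(nonzero)

# Two-sided p-value under the null: signs behave like fair coin flips.
# (All differences here are positive, so the upper tail is the smaller one.)
tail = sum(comb(n, k) for k in range(n_pos, n + 1)) / 2 ** n
p_value = min(1.0, 2 * tail)
```

The price of such weak assumptions is some loss of power relative to a parametric test when the parametric assumptions do hold.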

In inferential statistics, the **null hypothesis** is a default hypothesis that a quantity to be measured is zero (null). Typically, the quantity to be measured is the difference between two situations, for instance when trying to determine whether there is evidence that an effect has occurred or that samples derive from different batches.

**Mathematical statistics** is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

**Random assignment** or **random placement** is an experimental technique for assigning human participants or animal subjects to different groups in an experiment using randomization, such as by a chance procedure or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.
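A minimal sketch of such a chance procedure, with hypothetical participant IDs: shuffle the participants, then split them into treatment and control groups so each participant is equally likely to land in either.

```python
import random

# Hypothetical participant IDs.
participants = list(range(20))

rng = random.Random(0)          # seeded here only for reproducibility
rng.shuffle(participants)       # the chance procedure: a random permutation

treatment = participants[:10]   # first half to the treatment group
control = participants[10:]     # second half to the control group
```

In a real experiment the seed would not be fixed; it is fixed here only so the sketch gives the same split on every run.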

In statistics, **resampling** is any of a variety of methods for doing one of the following:

- Estimating the precision of sample statistics by using subsets of available data (**jackknifing**) or drawing randomly with replacement from a set of data points (**bootstrapping**)
- Exchanging labels on data points when performing significance tests
- Validating models by using random subsets
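The first of these, bootstrapping, can be sketched with hypothetical data and the standard library: estimate the sampling variability of the mean by repeatedly drawing with replacement from the observed sample.

```python
import random
import statistics

# Hypothetical observed sample.
sample = [3.2, 4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.0]
rng = random.Random(1)          # seeded only for reproducibility

# Each bootstrap replicate resamples the data with replacement
# and recomputes the statistic of interest (here, the mean).
boot_means = [
    statistics.mean(rng.choices(sample, k=len(sample)))
    for _ in range(1000)
]
boot_se = statistics.stdev(boot_means)   # bootstrap standard error
```

The spread of the replicated means approximates the sampling distribution of the mean without assuming any particular form for the population.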

Statistics, in the modern sense of the word, began evolving in the 18th century in response to the novel needs of industrializing sovereign states. The evolution of statistics was, in particular, intimately connected with the development of European states following the Peace of Westphalia (1648), and with the development of probability theory, which put statistics on a firm theoretical basis.

The **foundations of statistics** concern the epistemological debate in statistics over how one should conduct inductive inference from data. Among the issues considered in statistical inference are the question of Bayesian inference versus frequentist inference, the distinction between Fisher's "significance testing" and Neyman–Pearson "hypothesis testing", and whether the likelihood principle should be followed. Some of these issues have been debated for up to 200 years without resolution.

**Statistical proof** is the rational demonstration of degree of certainty for a proposition, hypothesis or theory that is used to convince others subsequent to a statistical test of the supporting evidence and the types of inferences that can be drawn from the test scores. Statistical methods are used to increase the understanding of the facts and the proof demonstrates the validity and logic of inference with explicit reference to a hypothesis, the experimental data, the facts, the test, and the odds. Proof has two essential aims: the first is to convince and the second is to explain the proposition through peer and public review.

**Exact statistics**, such as that described in exact test, is a branch of statistics that was developed to provide more accurate results pertaining to statistical testing and interval estimation by eliminating procedures based on asymptotic and approximate statistical methods. The main characteristic of exact methods is that statistical tests and confidence intervals are based on exact probability statements that are valid for any sample size. Exact statistical methods help avoid some of the unreasonable assumptions of traditional statistical methods, such as the assumption of equal variances in classical ANOVA. They also allow exact inference on variance components of mixed models.
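The flavour of exact inference can be sketched with the hypergeometric probability underlying Fisher's exact test for a 2x2 table, using hypothetical counts: an exact probability statement, valid at any sample size, with no large-sample approximation.

```python
from math import comb

# Hypothetical 2x2 table:
#             success  failure
# group A:       8        2
# group B:       3        7
a, b, c, d = 8, 2, 3, 7
n = a + b + c + d

# Exact probability of observing this table, given its margins.
p_table = comb(a + b, a) * comb(c + d, c) / comb(n, a + c)
```

A full Fisher test would sum such probabilities over all tables at least as extreme as the observed one; this sketch computes only the single-table term.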

**Oscar Kempthorne** was a British statistician and geneticist known for his research on randomization-analysis and the design of experiments, which had wide influence on research in agriculture, genetics, and other areas of science.

- ↑ Cox & Hinkley (1974, p. 1)
- ↑ Rao, C. R. (1981). "Foreword". In Arthanari, T. S.; Dodge, Yadolah (eds.). *Mathematical Programming in Statistics*. New York: John Wiley & Sons. pp. vii–viii. ISBN 0-471-08073-X. MR 0607328.
- ↑ Lehmann & Romano (2005)
- ↑ Freedman (2009)
- ↑ Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". *Memoirs of the National Academy of Sciences*. **3**: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm
- ↑ Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". *Isis*. **79** (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489.
- ↑ Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". *American Journal of Education*. **101** (1): 60–70. doi:10.1086/444032.
- ↑ Atkinson et al. (2007)
- ↑ Kiefer, Jack Carl (1985). Brown, Lawrence D.; Olkin, Ingram; Sacks, Jerome; et al. (eds.). *Jack Carl Kiefer: Collected Papers III—Design of Experiments*. Springer-Verlag and the Institute of Mathematical Statistics. pp. 718+xxv. ISBN 0-387-96004-X.
- ↑ Hinkelmann & Kempthorne (2008)
- ↑ Bailey (2008)
- ↑ Kish (1965)
- ↑ Cochran (1977)
- ↑ Särndal et al. (1992)

- Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). *Optimum Experimental Designs, with SAS*. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6.
- Bailey, R. A. (2008). *Design of Comparative Experiments*. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line.
- Cochran, William G. (1977). *Sampling Techniques* (Third ed.). John Wiley & Sons. ISBN 0-471-16240-X.
- Cox, D. R.; Hinkley, D. V. (1974). *Theoretical Statistics*. Chapman & Hall. ISBN 0-412-12420-3.
- Freedman, David A. (2009). *Statistical Models: Theory and Practice* (Second ed.). Cambridge University Press. ISBN 978-0-521-67105-7.
- Hinkelmann, Klaus; Kempthorne, Oscar (2008). *Design and Analysis of Experiments*. I, II (Second ed.). John Wiley & Sons. ISBN 978-0-470-38551-7.
- Kish, L. (1965). *Survey Sampling*. John Wiley & Sons. ISBN 0-471-48900-X.
- Lehmann, E. L.; Romano, J. P. (2005). *Testing Statistical Hypotheses* (Third ed.). Springer.
- Särndal, Carl-Erik; Swensson, Bengt; Wretman, Jan (1992). *Model Assisted Survey Sampling*. Springer-Verlag. ISBN 0-387-40620-4.

- Peirce, C. S.
  - (1876), "Note on the Theory of the Economy of Research" in *Coast Survey Report*, pp. 197–201 (Appendix No. 14), NOAA PDF Eprint. Reprinted 1958 in *Collected Papers of Charles Sanders Peirce* **7**, paragraphs 139–157, and in 1967 in *Operations Research* **15** (4): 643–648, doi:10.1287/opre.15.4.643.
  - (1877–1878), "Illustrations of the Logic of Science"
  - (1883), "A Theory of Probable Inference"
  - and Jastrow, Joseph (1885), "On Small Differences in Sensation" in *Memoirs of the National Academy of Sciences* **3**: 73–83. Eprint.
- Bickel, Peter J.; Doksum, Kjell A. (2001). *Mathematical Statistics: Basic and Selected Topics*. **I** (Second ed., updated printing 2007). Pearson Prentice-Hall. ISBN 0-13-850363-X.
- Davison, A. C. (2003). *Statistical Models*. Cambridge University Press. ISBN 0-521-77339-3.
- Lehmann, Erich (1983). *Theory of Point Estimation*.
- Liese, Friedrich; Miescke, Klaus-J. (2008). *Statistical Decision Theory: Estimation, Testing, and Selection*. Springer. ISBN 0-387-73193-8.

- Media related to Statistical theory at Wikimedia Commons

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
