Statistical inference

Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution. [1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.

Introduction

Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.[ citation needed ]

Konishi & Kitagawa state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". [2] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". [3]

The conclusion of a statistical inference is a statistical proposition. [4] Some common forms of statistical proposition are the following:

• a point estimate, i.e. a particular value that best approximates some parameter of interest;
• an interval estimate, e.g. a confidence interval, constructed so that, under repeated sampling of such datasets, the intervals would contain the true parameter value with the stated confidence level;
• a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
• rejection of a hypothesis;
• clustering or classification of data points into groups.

Models and assumptions

Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. [5] Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn. [6]

Degree of models/assumptions

Statisticians distinguish between three levels of modeling assumptions:

• Fully parametric : The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. [5] For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models.
• Non-parametric : The assumptions made about the process generating the data are far weaker than in parametric statistics and may be minimal. [7] For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling (both estimators are sketched in the code after this list).
• Semi-parametric : This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions.
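
The following is a minimal sketch (simulated data; the sample size and distribution are arbitrary choices) contrasting two non-parametric estimators of a population median mentioned above: the sample median and the Hodges–Lehmann estimator, i.e. the median of all pairwise Walsh averages.

```python
# Sample median versus Hodges–Lehmann estimator on simulated data.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=50)        # symmetric, heavier-tailed than normal

sample_median = np.median(x)

# Hodges–Lehmann: median of (x_i + x_j) / 2 over all pairs with i <= j
walsh_averages = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
hodges_lehmann = np.median(walsh_averages)

print(f"sample median:  {sample_median:.3f}")
print(f"Hodges-Lehmann: {hodges_lehmann:.3f}")
```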

Importance of valid models/assumptions

Whatever level of assumption is made, correctly calibrated inference in general requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified.

Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. [8] More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. [9] Incorrect assumptions of Normality in the population also invalidate some forms of regression-based inference. [10] The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." [11] In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." [11] Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.
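
The simulation sketch below (populations, sample size and replication count are arbitrary choices) illustrates this caution: at the same sample size, standardized sample means from a light-tailed population look roughly symmetric, while those from a heavy-tailed population remain strongly skewed.

```python
# Behaviour of the sample mean for light- versus heavy-tailed populations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 100, 20_000

populations = {
    "exponential":        lambda size: rng.exponential(1.0, size),
    "Pareto (alpha=1.5)": lambda size: rng.pareto(1.5, size) + 1.0,   # infinite variance
}
for name, draw in populations.items():
    means = draw((reps, n)).mean(axis=1)
    z = (means - means.mean()) / means.std()
    print(f"{name:>19}: skewness of standardized sample means = {stats.skew(z):.2f}")
```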

Approximate distributions

Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.

With finite samples, approximation results measure how closely a limiting distribution approximates the statistic's sampling distribution: for example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. [12] Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. [12] Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance. [13] [14] [15]

With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution, if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. [16] [17] [18] However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. [19] The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
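
The following is a rough sketch of assessing the approximation error by simulation, using an exponential population as an arbitrary example: the standardized sample mean is compared with its normal limit via the maximum CDF discrepancy (Kolmogorov distance) at a small and a large sample size.

```python
# Simulation-based assessment of the normal approximation to the sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps = 5_000

for n in (10, 10_000):
    means = np.array([rng.exponential(scale=1.0, size=n).mean() for _ in range(reps)])
    z = (means - 1.0) * np.sqrt(n)   # exponential(1): mean 1, sd 1, so sd of mean is 1/sqrt(n)
    ks = stats.kstest(z, "norm").statistic
    print(f"n = {n:>6}: max CDF discrepancy from the normal limit ~ {ks:.3f}")
```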

Randomization-based models

For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. [20] [21] Statistical inference from randomized studies is also more straightforward than many other situations. [22] [23] [24] In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information. [25]
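
The following toy sketch (the data are invented) makes the randomization distribution concrete for a small completely randomized two-group experiment: the difference in group means is recomputed for every treatment assignment the design could have produced, and the observed difference is located within that distribution.

```python
# Randomization (permutation) distribution of a difference in means.
import numpy as np
from itertools import combinations

treated = np.array([12.9, 11.4, 13.1, 12.2])
control = np.array([10.8, 11.0, 10.1, 11.7])
all_obs = np.concatenate([treated, control])
observed = treated.mean() - control.mean()

# Enumerate every way the design could have assigned 4 of the 8 units to treatment
n_units = len(all_obs)
diffs = []
for idx in combinations(range(n_units), len(treated)):
    mask = np.zeros(n_units, dtype=bool)
    mask[list(idx)] = True
    diffs.append(all_obs[mask].mean() - all_obs[~mask].mean())

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, randomization p-value = {p_value:.3f}")
```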

Objective randomization allows properly inductive procedures. [26] [27] [28] [29] Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. [30] (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences. [31] [32] ) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. [33] However, a good observational study may be better than a bad randomized experiment.

The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model. [34] [35]

However, at any given time some hypotheses cannot be tested using objective statistical models that accurately describe randomized experiments or random samples; in some cases, such randomized studies would be uneconomical or unethical.

Model-based analysis of randomized experiments

It is standard practice to refer to a statistical model, e.g., a linear or logistic model, when analyzing data from randomized experiments. [36] However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. [21] Seriously misleading results can be obtained by analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units. [37]
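
As a hypothetical illustration of one such mistake (simulated data; block, effect and noise sizes are invented), the sketch below analyses a blocked (paired) experiment twice: once respecting the blocking and once ignoring it. Ignoring the blocking misstates the uncertainty attached to the same estimated treatment effect.

```python
# Paired (blocked) versus unblocked analysis of the same randomized experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
blocks = rng.normal(0.0, 3.0, size=12)                   # large block-to-block variation
control = blocks + rng.normal(0.0, 1.0, size=12)
treated = blocks + 1.0 + rng.normal(0.0, 1.0, size=12)   # true treatment effect = 1

paired = stats.ttest_rel(treated, control)    # respects the blocking
pooled = stats.ttest_ind(treated, control)    # ignores the blocking
print(f"paired analysis p = {paired.pvalue:.3f}, unblocked analysis p = {pooled.pvalue:.3f}")
```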

Model-free randomization inference

Model-free techniques complement model-based methods, which employ reductionist strategies that simplify reality. Rather than positing a fixed model, model-free approaches combine, evolve, ensemble and train algorithms that adapt dynamically to the contextual features of a process and learn the intrinsic characteristics of the observations. [36] [38]

For example, model-free simple linear regression is based either on

• a random design, where the pairs of observations ${\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\cdots ,(X_{n},Y_{n})}$ are independent and identically distributed (iid), or
• a deterministic design, where the variables ${\displaystyle X_{1},X_{2},\cdots ,X_{n}}$ are deterministic, but the corresponding response variables ${\displaystyle Y_{1},Y_{2},\cdots ,Y_{n}}$ are random and independent with a common conditional distribution, i.e., ${\displaystyle P\left(Y_{j}\leq y|X_{j}=x\right)=D_{x}(y)}$, which is independent of the index ${\displaystyle j}$.

In either case, the model-free randomization inference for features of the common conditional distribution ${\displaystyle D_{x}(.)}$ relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, ${\displaystyle \mu (x)=E(Y|X=x)}$, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that ${\displaystyle \mu (x)}$ is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, ${\displaystyle \mu (x)}$. [39]
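
A minimal sketch of this approach follows, with simulated data and an arbitrary window width: the conditional mean mu(x) = E(Y | X = x) is estimated by local averaging, and a pointwise confidence interval is attached by the nonparametric bootstrap.

```python
# Model-free estimation of a conditional mean by local averaging, with a bootstrap interval.
import numpy as np

rng = np.random.default_rng(4)
n = 400
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.3, n)

def local_average(x0, xs, ys, h=0.05):
    """Average the responses whose covariate lies within a window of half-width h."""
    window = np.abs(xs - x0) < h
    return ys[window].mean()

x0 = 0.25
estimate = local_average(x0, X, Y)

# Nonparametric bootstrap: resample (X, Y) pairs and re-estimate mu(x0) each time
boot = [local_average(x0, X[idx], Y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mu({x0}) estimate = {estimate:.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```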

Different schools of statistical inference have become established. These schools, or "paradigms", are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.

Bandyopadhyay & Forster [40] describe four paradigms: "(i) classical statistics or error statistics, (ii) Bayesian statistics, (iii) likelihood-based statistics, and (iv) the Akaikean-Information Criterion-based statistics". The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the AIC-based paradigm are summarized below.

Frequentist inference

This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.

Frequentist inference, objectivity, and decision theory

One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman [41] develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.

The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions.[ citation needed ] In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. [42] However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute-value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared-error loss functions, in that they minimize expected loss.
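
The small numerical check below (an arbitrary skewed sample, for illustration only) shows the role of the two loss functions mentioned above: over a grid of candidate point estimates, the average absolute loss is minimized near the sample median and the average squared loss near the sample mean.

```python
# Absolute versus squared loss: which point estimate minimizes each?
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=1000)     # skewed, so the mean and median differ

candidates = np.linspace(x.min(), x.max(), 5000)
abs_loss = np.array([np.mean(np.abs(x - c)) for c in candidates])
sq_loss = np.array([np.mean((x - c) ** 2) for c in candidates])

print(f"minimizer of absolute loss ~ {candidates[abs_loss.argmin()]:.3f}, "
      f"sample median = {np.median(x):.3f}")
print(f"minimizer of squared loss  ~ {candidates[sq_loss.argmin()]:.3f}, "
      f"sample mean   = {x.mean():.3f}")
```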

While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'. [43]

Bayesian inference

The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach.

Bayesian inference, subjectivity and decision theory

Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.)

Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.
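
The following is a hedged sketch of these ideas with a conjugate Beta-Binomial model; the prior, data and loss are invented for illustration. Posterior summaries and a credible interval are computed, and a point decision is chosen by minimizing posterior expected loss (which, for squared-error loss, is attained at the posterior mean).

```python
# Posterior summaries and a Bayes decision for a Beta-Binomial model.
import numpy as np
from scipy import stats

successes, trials = 7, 20
prior_a, prior_b = 1.0, 1.0                   # uniform prior on the success probability
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

post_mean = posterior.mean()
post_median = posterior.median()
ci_95 = posterior.interval(0.95)              # central 95% credible interval

# Bayes decision under squared-error loss, checked numerically on a grid
theta = posterior.rvs(size=20_000, random_state=0)
grid = np.linspace(0.0, 1.0, 501)
expected_loss = [np.mean((theta - d) ** 2) for d in grid]
bayes_action = grid[int(np.argmin(expected_loss))]

print(f"posterior mean {post_mean:.3f}, median {post_median:.3f}, 95% interval {ci_95}")
print(f"Bayes action under squared-error loss ~ {bayes_action:.3f}")
```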

Likelihood-based inference

Likelihoodism approaches statistics by using the likelihood function. Some likelihoodists reject inference, considering statistics as only computing support from evidence. Others, however, propose inference based on the likelihood function, of which the best-known is maximum likelihood estimation.

AIC-based inference

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.

AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
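
A brief sketch of AIC-based comparison of two candidate models on simulated data follows (the models and the data-generating process are invented for illustration): AIC = 2k - 2*log-likelihood is computed for two polynomial regressions, and the smaller value indicates the preferred model.

```python
# AIC comparison of a linear and a fifth-degree polynomial model.
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)   # the true relation is linear

def aic_polynomial(degree):
    """AIC of a Gaussian polynomial-regression model of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)              # maximum-likelihood error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = degree + 2                            # polynomial coefficients plus the variance
    return 2 * k - 2 * loglik

for degree in (1, 5):
    print(f"degree {degree}: AIC = {aic_polynomial(degree):.1f}")
```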

Minimum description length

The minimum description length (MDL) principle has been developed from ideas in information theory [44] and the theory of Kolmogorov complexity. [45] The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.

However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. [46] In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. [46] [47]
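
The sketch below is a deliberately crude two-part-code illustration of the MDL idea, not a faithful MDL procedure (the encoding choices are simplifications): the description length of each model is approximated as a fixed cost per parameter plus the negative log-likelihood of the data in bits, and the model giving the shorter total description is preferred.

```python
# Crude two-part description length for two competing polynomial models.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

def description_length(degree, bits_per_parameter=16):
    """Crude two-part code length, in bits, for a Gaussian polynomial model."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)
    # Data cost: negative log2-likelihood of the residuals under the fitted model
    data_bits = 0.5 * n * (np.log2(2 * np.pi * sigma2) + 1.0 / np.log(2))
    # Model cost: a fixed number of bits per parameter (a strong simplification)
    model_bits = (degree + 2) * bits_per_parameter
    return data_bits + model_bits

for degree in (1, 5):
    print(f"degree {degree}: description length ~ {description_length(degree):.0f} bits")
```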

The MDL principle has been applied in communication-coding theory in information theory, in linear regression, [47] and in data mining. [45]

The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory. [48]

Fiducial inference

Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. [49] [50] However, this argument is the same as that which shows [51] that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities. [52]

Structural inference

Developing ideas of Fisher and of Pitman from 1938 to 1939, [53] George A. Barnard developed "structural inference" or "pivotal inference", [54] an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful.

Inference topics

Topics usually included in the area of statistical inference include point and interval estimation, hypothesis testing, predictive inference, and the design of experiments and surveys.

History

Al-Kindi, an Arab mathematician in the 9th century, made the earliest known use of statistical inference in his Manuscript on Deciphering Cryptographic Messages, a work on cryptanalysis and frequency analysis. [55]

Notes

1. According to Peirce, acceptance means that inquiry on this question ceases for the time being. In science, all scientific theories are revisable.

Related Research Articles

Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials. Probabilities can be found by a repeatable objective process. This interpretation supports the statistical needs of many experimental scientists and pollsters. It does not support all needs, however; gamblers typically require estimates of the odds without experiments.

Statistics is the discipline that concerns the collection, organization, display, analysis, interpretation and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. See the glossary of probability and statistics.

Statistics is a field of inquiry that studies the collection, analysis, interpretation, and presentation of data. It is applicable to a wide variety of academic disciplines, from the physical and social sciences to the humanities; it is also used and misused for making informed decisions in all areas of business and government.

In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.

In statistics, interval estimation is the use of sample data to calculate an interval of possible values of an unknown population parameter; this is in contrast to point estimation, which gives a single value. Jerzy Neyman (1937) identified interval estimation as distinct from point estimation. In doing so, he recognized that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.

In statistics, a confidence interval (CI) is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter. The interval has an associated confidence level, or coverage that, loosely speaking, quantifies the level of confidence that the deterministic parameter is captured by the interval. More strictly speaking, the confidence level represents the frequency of possible confidence intervals that contain the true value of the unknown population parameter. In other words, if confidence intervals are constructed using a given confidence level from an infinite number of independent sample statistics, the proportion of those intervals that contain the true value of the parameter will be equal to the confidence level.
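
A quick simulation sketch of this coverage interpretation (the population, sample size and confidence level are arbitrary choices): 95% t-intervals for a normal mean are built repeatedly and the fraction containing the true mean is reported.

```python
# Empirical coverage of nominal 95% confidence intervals for a mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
true_mean, sd, n, reps = 5.0, 2.0, 30, 10_000
covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sd, n)
    half_width = stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half_width <= true_mean <= sample.mean() + half_width)

print(f"empirical coverage of nominal 95% intervals: {covered / reps:.3f}")
```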

Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.

In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution. The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics, although they differ on a philosophical basis: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.

In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. Bootstrapping allows assigning measures of accuracy to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. Generally, it falls in the broader class of resampling methods.

Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.

The history of statistics in its modern sense begins with the term statistics, coined in 1749 in Germany, although the interpretation of the word has changed over time. The development of statistics is intimately connected, on the one hand, with the development of sovereign states, particularly European states following the Peace of Westphalia (1648), and, on the other hand, with the development of probability theory, which put statistics on a firm theoretical basis.

The foundations of statistics concern the epistemological debate in statistics over how one should conduct inductive inference from data. Among the issues considered in statistical inference are the question of Bayesian inference versus frequentist inference, the distinction between Fisher's "significance testing" and Neyman–Pearson "hypothesis testing", and whether the likelihood principle should be followed. Some of these issues have been debated for up to 200 years without resolution.

Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data. An alternative name is frequentist statistics. This is the inference framework on which the well-established methodologies of statistical hypothesis testing and confidence intervals are based. Apart from frequentist inference, the main alternative approach to statistical inference is Bayesian inference, while another is fiducial inference.

Exact statistics, such as that described in exact test, is a branch of statistics that was developed to provide more accurate results pertaining to statistical testing and interval estimation by eliminating procedures based on asymptotic and approximate statistical methods. The main characteristic of exact methods is that statistical tests and confidence intervals are based on exact probability statements that are valid for any sample size. Exact statistical methods help avoid some of the unreasonable assumptions of traditional statistical methods, such as the assumption of equal variances in classical ANOVA. They also allow exact inference on variance components of mixed models.

David Amiel Freedman was Professor of Statistics at the University of California, Berkeley. He was a distinguished mathematical statistician whose wide-ranging research included the analysis of martingale inequalities, Markov processes, de Finetti's theorem, consistency of Bayes estimators, sampling, the bootstrap, and procedures for testing and evaluating models. He published extensively on methods for causal inference and the behavior of standard statistical models under non-standard conditions – for example, how regression models behave when fitted to data from randomized experiments. Freedman also wrote widely on the application—and misapplication—of statistics in the social sciences, including epidemiology, public policy, and law.

In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower sided confidence intervals of all levels, and it was also commonly associated with a fiducial interpretation, although it is a purely frequentist concept. A confidence distribution is NOT a probability distribution function of the parameter of interest, but may still be a function useful for making inferences.

References

Citations

1. Upton, G., Cook, I. (2008) Oxford Dictionary of Statistics, OUP. ISBN   978-0-19-954145-4.
2. Konishi & Kitagawa (2008), p. 75.
3. Cox (2006), p. 197.
4. "Statistical inference - Encyclopedia of Mathematics". www.encyclopediaofmath.org. Retrieved 2019-01-23.
5. Cox (2006) page 2
6. Evans, Michael; et al. (2004). Probability and Statistics: The Science of Uncertainty. Freeman and Company. p. 267. ISBN   9780716747420.
7. van der Vaart, A.W. (1998) Asymptotic Statistics Cambridge University Press. ISBN   0-521-78450-6 (page 341)
8. Kruskal 1988
9. Freedman, D.A. (2008) "Survival analysis: An Epidemiological hazard?". The American Statistician (2008) 62: 110-119. (Reprinted as Chapter 11 (pages 169–192) of Freedman (2010)).
10. Berk, R. (2003) Regression Analysis: A Constructive Critique (Advanced Quantitative Techniques in the Social Sciences) (v. 11) Sage Publications. ISBN   0-7619-2904-5
11. Brewer, Ken (2002). Combined Survey Sampling Inference: Weighing of Basu's Elephants. Hodder Arnold. p. 6. ISBN   978-0340692295.
12. Jørgen Hoffmann-Jørgensen's Probability With a View Towards Statistics, Volume I. Page 399 [ full citation needed ]
13. Le Cam (1986) [ page needed ]
14. Erik Torgerson (1991) Comparison of Statistical Experiments, volume 36 of Encyclopedia of Mathematics. Cambridge University Press. [ full citation needed ]
15. Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer. ISBN   978-0-387-73193-3.
16. Kolmogorov (1963, p.369): "The frequency concept, based on the notion of limiting frequency as the number of trials increases to infinity, does not contribute anything to substantiate the applicability of the results of probability theory to real practical problems where we have always to deal with a finite number of trials".
17. "Indeed, limit theorems 'as ${\displaystyle n}$ tends to infinity' are logically devoid of content about what happens at any particular ${\displaystyle n}$. All they can do is suggest certain approaches whose performance must then be checked on the case at hand." Le Cam (1986) (page xiv)
18. Pfanzagl (1994): "The crucial drawback of asymptotic theory: What we expect from asymptotic theory are results which hold approximately . . . . What asymptotic theory has to offer are limit theorems."(page ix) "What counts for applications are approximations, not limits." (page 188)
19. Pfanzagl (1994) : "By taking a limit theorem as being approximately true for large sample sizes, we commit an error the size of which is unknown. [. . .] Realistic information about the remaining errors may be obtained by simulations." (page ix)
20. Neyman, J. (1934) "On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection", Journal of the Royal Statistical Society, 97 (4), 557–625 JSTOR   2342192
21. Hinkelmann and Kempthorne(2008) [ page needed ]
22. ASA Guidelines for a first course in statistics for non-statisticians. (available at the ASA website)
23. David A. Freedman et alia's Statistics.
24. Moore et al. (2015).
25. Gelman A. et al. (2013). Bayesian Data Analysis (Chapman & Hall).
26. Peirce (1877-1878)
27. Peirce (1883)
28. David Freedman et alia Statistics and David A. Freedman Statistical Models.
29. Rao, C.R. (1997) Statistics and Truth: Putting Chance to Work, World Scientific. ISBN   981-02-3111-3
30. Peirce; Freedman; Moore et al. (2015).[ citation needed ]
31. Box, G.E.P. and Friends (2006) Improving Almost Anything: Ideas and Essays, Revised Edition, Wiley. ISBN   978-0-471-72755-2
32. Cox (2006), p. 196.
33. ASA Guidelines for a first course in statistics for non-statisticians. (available at the ASA website)
• David A. Freedman et alia's Statistics.
• Moore et al. (2015).
34. Neyman, Jerzy. 1923 [1990]. "On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9." Statistical Science 5 (4): 465–472. Trans. Dorota M. Dabrowska and Terence P. Speed.
35. Hinkelmann & Kempthorne (2008) [ page needed ]
36. Dinov, Ivo; Palanimalai, Selvam; Khare, Ashwini; Christou, Nicolas (2018). "Randomization‐based statistical inference: A resampling and simulation infrastructure". Teaching Statistics. 40 (2): 64–73. doi:10.1111/test.12156. PMC  .
37. Hinkelmann and Kempthorne (2008) Chapter 6.
38. Tang, Ming; Gao, Chao; Goutman, Stephen; Kalinin, Alexandr; Mukherjee, Bhramar; Guan, Yuanfang; Dinov, Ivo (2019). "Model-Based and Model-Free Techniques for Amyotrophic Lateral Sclerosis Diagnostic Prediction and Patient Clustering". Neuroinformatics. 17 (3): 407–421. doi:10.1007/s12021-018-9406-9.
39. Politis, D.N. (2019). "Model-free inference in statistics: how and why". IMS Bulletin. 48.
41. Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". Philosophical Transactions of the Royal Society of London A. 236 (767): 333–380. doi:10.1098/rsta.1937.0005. JSTOR   91337.
42. Preface to Pfanzagl.
43. Little, Roderick J. (2006). "Calibrated Bayes: A Bayes/Frequentist Roadmap". The American Statistician. 60 (3): 213–223. doi:10.1198/000313006X117837. ISSN   0003-1305. JSTOR   27643780.
44. Soofi (2000)
45. Hansen & Yu (2001)
46. Hansen and Yu (2001), page 747.
47. Rissanen (1989), page 84
48. Joseph F. Traub, G. W. Wasilkowski, and H. Wozniakowski. (1988) [ page needed ]
49. Neyman (1956)
50. Zabell (1992)
51. Cox (2006) page 66
52. Davison, page 12. [ full citation needed ]
54. Barnard, G.A. (1995) "Pivotal Models and the Fiducial Argument", International Statistical Review, 63 (3), 309–323. JSTOR   1403482
55. Broemeling, Lyle D. (1 November 2011). "An Account of Early Statistical Inference in Arab Cryptology". The American Statistician. 65 (4): 255–257. doi:10.1198/tas.2011.10191.