Frequentist probability

John Venn, who provided a thorough exposition of frequentist probability in his book The Logic of Chance.[1]

Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in infinitely many trials (the long-run probability).[2] Probabilities can be found (in principle) by a repeatable objective process, and are thus ideally devoid of opinion. The continued use of frequentist methods in scientific inference, however, has been called into question.[3][4][5]

The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem; for example, the probabilities of dice games arise from the natural symmetry of the six-sided cube. The classical interpretation stumbled at any statistical problem that has no natural symmetry to reason from.

Definition

In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: It occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.

A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies.
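
In symbols, using a standard formalization rather than wording drawn from the sources cited here: if an event A occurs n_A times in the first n repetitions of the experiment, the frequentist interpretation identifies the probability of A with the limit

$$P(A) = \lim_{n \to \infty} \frac{n_A}{n}.$$

A minimal simulation sketch of this convergence for a fair coin follows; the trial counts, seed, and event are illustrative choices, not part of the article:

```python
import random

# Illustrative sketch: the relative frequency of "heads" in repeated
# flips of a simulated fair coin settles near the underlying
# probability 0.5 as the number of trials grows.
random.seed(1)  # fixed seed so the run is reproducible

heads = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000}
for n in range(1, 100_001):
    heads += random.random() < 0.5  # one simulated fair coin flip
    if n in checkpoints:
        print(f"n = {n:>6}: relative frequency = {heads / n:.4f}")
```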

Scope

The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.

As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.[citation needed]

As Feller notes:[note 1]

There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random ..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.[6]

History

The frequentist view may have been foreshadowed by Aristotle, in Rhetoric,[7] when he wrote:

the probable is that which for the most part happens — Aristotle, Rhetoric[8]

Poisson (1837) clearly distinguished between objective and subjective probabilities.[9] Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis (1843, 1854),[10][11] Cournot (1843),[12] and Fries introduced the frequentist view. Venn (1866, 1876, 1888)[1] provided a thorough exposition two decades later. These views were further supported by the publications of Boole and Bertrand. By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences.[9] The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing, and confidence intervals), all based on frequentist probability.

Alternatively,[13] Bernoulli[note 2] understood the concept of frequentist probability and published a critical proof (the weak law of large numbers) posthumously (Bernoulli, 1713).[14] He is also credited with some appreciation for subjective probability (prior to and without Bayes' theorem).[15][note 3][16] Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson.[13] Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc., which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation. Any criticism by Gauss or Laplace was muted and implicit. (However, note that their later derivations of least squares did not use inverse probability.)
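
In modern notation, and as a standard restatement rather than Bernoulli's original formulation, the weak law of large numbers asserts that the relative frequency of successes in n independent trials, each with success probability p, converges in probability to p:

$$\lim_{n \to \infty} P\!\left( \left| \frac{n_A}{n} - p \right| > \varepsilon \right) = 0 \quad \text{for every } \varepsilon > 0,$$

which is the sense in which relative frequencies can be said to stabilize around the underlying probability.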

Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman, and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population";[17] Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson collaborated in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist.

All were suspicious of "inverse probability" (the available alternative), with prior probabilities chosen by using the principle of indifference. Fisher said, "... the theory of inverse probability is founded upon an error, and must be wholly rejected" (referring to Bayes' theorem).[18] While Neyman was a pure frequentist,[19][note 4] Fisher's views of probability were unique; both had nuanced views of probability. von Mises offered a combination of mathematical and philosophical support for frequentism in the era.[20][21]

Etymology

According to the Oxford English Dictionary, the term frequentist was first used by M.G. Kendall in 1949, to contrast with Bayesians, whom he called non-frequentists.[22][23] Kendall observed:

3. ... we may broadly distinguish two main attitudes. One takes probability as 'a degree of rational belief', or some similar idea ... the second defines probability in terms of frequencies of occurrence of events, or by relative proportions in 'populations' or 'collectives';[23] (p. 101)
...
12. It might be thought that the differences between the frequentists and the non-frequentists (if I may call them such) are largely due to the differences of the domains which they purport to cover.[23] (p. 104)
...
I assert that this is not so ... The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not. [emphasis in original]

"The Frequency Theory of Probability" was used a generation earlier as a chapter title in Keynes (1921). [7]

The historical sequence:

  1. Probability concepts were introduced and much of the mathematics of probability was derived (prior to the 20th century).
  2. Classical statistical inference methods were developed.
  3. The mathematical foundations of probability were solidified and current terminology was introduced (all in the 20th century).

The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian), and frequentist probability.

Alternative views

Probability theory is a branch of mathematics. While its roots reach centuries into the past, it attained maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability.
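
For reference, and as a standard statement rather than a quotation from this article's sources, Kolmogorov's axioms require a probability measure $P$ on a sample space $\Omega$ with a collection of events $\mathcal{F}$ (a σ-algebra) to satisfy:

$$P(A) \ge 0 \ \text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$$

for every countable sequence of pairwise disjoint events $A_1, A_2, \ldots$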

Applications and interpretations of probability are considered by philosophy, the sciences, and statistics. All are interested in the extraction of knowledge from observations—inductive reasoning. There are a variety of competing interpretations,[24] all of which have problems. The frequentist interpretation resolves difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book argument.

Footnotes

  1. Feller's comment is a criticism of Pierre-Simon Laplace's solution to the "tomorrow's sunrise" problem, which used an alternative probability interpretation. Despite Laplace's explicit and immediate disclaimer in the source, based on his personal expertise in both astronomy and probability, two centuries of criticism have followed.
  2. The Swiss mathematician Jacob Bernoulli of the famous Bernoulli family lived in a multilingual country and himself had regular correspondence and contacts with speakers of German and French, and published in Latin – all of which he spoke fluently. He comfortably and frequently used the three names "Jacob", "James", and "Jacques", depending on the language in which he was speaking or writing.
  3. Bernoulli provided a classical example of drawing many black and white pebbles from an urn (with replacement). The sample ratio allowed Bernoulli to infer the ratio in the urn, with tighter bounds as the number of samples increased.
    Historians can interpret the example as classical, frequentist, or subjective probability. David writes, "James has definitely started here the controversy on inverse probability ..." Bernoulli wrote generations before Bayes, Laplace, and Gauss. The controversy continues. — David (1962), pp. 137–138 [16]
  4. Jerzy Neyman's derivation of confidence intervals embraced the measure-theoretic axioms of probability published by Andrey Kolmogorov a few years earlier, and referenced the subjective probability (Bayesian) definitions that Jeffreys had published earlier in the decade. Neyman defined frequentist probability (under the name classical) and stated the need for randomness in the repeated samples or trials. He accepted in principle the possibility of multiple competing theories of probability, while expressing several specific reservations about the existing alternative probability interpretation.[19]

Citations

  1. Venn, John (1888) [1866, 1876]. The Logic of Chance (3rd ed.). London, UK: Macmillan & Co., via Internet Archive (archive.org). An essay on the foundations and province of the theory of probability, with especial reference to its logical bearings and its application to moral and social science, and to statistics.
  2. Kaplan, D. (2014). Bayesian Statistics for the Social Sciences. Methodology in the Social Sciences. Guilford Publications. p. 4. ISBN 978-1-4625-1667-4. Retrieved 23 April 2022.
  3. Goodman, Steven N. (1999). "Toward evidence-based medical statistics. 1: The p value fallacy". Annals of Internal Medicine. 130 (12): 995–1004. doi:10.7326/0003-4819-130-12-199906150-00008. PMID 10383371. S2CID 7534212.
  4. Morey, Richard D.; Hoekstra, Rink; Rouder, Jeffrey N.; Lee, Michael D.; Wagenmakers, Eric-Jan (2016). "The fallacy of placing confidence in confidence intervals". Psychonomic Bulletin & Review. 23 (1): 103–123. doi:10.3758/s13423-015-0947-8. PMC 4742505. PMID 26450628.
  5. Matthews, Robert (2021). "The p-value statement, five years on". Significance. 18 (2): 16–19. doi:10.1111/1740-9713.01505. S2CID 233534109.
  6. Feller, W. (1957). An Introduction to Probability Theory and Its Applications. Vol. 1. p. 4.
  7. Keynes, J.M. (1921). "Chapter VIII – The frequency theory of probability". A Treatise on Probability.
  8. Aristotle. Rhetoric. Bk 1, Ch 2; discussed in Franklin, J. (2001). The Science of Conjecture: Evidence and Probability before Pascal. Baltimore, MD: The Johns Hopkins University Press. p. 110. ISBN 0801865697.
  9. Gigerenzer, Gerd; Swijtink, Zeno; Porter, Theodore; Daston, Lorraine; Beatty, John; Krüger, Lorenz (1989). The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge, UK / New York, NY: Cambridge University Press. pp. 35–36, 45. ISBN 978-0-521-39838-1.
  10. Ellis, R.L. (1843). "On the foundations of the theory of probabilities". Transactions of the Cambridge Philosophical Society. 8.
  11. Ellis, R.L. (1854). "Remarks on the fundamental principles of the theory of probabilities". Transactions of the Cambridge Philosophical Society. 9.
  12. Cournot, A.A. (1843). Exposition de la théorie des chances et des probabilités. Paris, FR: L. Hachette, via Internet Archive (archive.org).
  13. Hald, Anders (2004). A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713 to 1935. København, DK: Anders Hald, Department of Applied Mathematics and Statistics, University of Copenhagen. pp. 1–5. ISBN 978-87-7834-628-5.
  14. Bernoulli, Jakob (1713). Ars Conjectandi: Usum & applicationem praecedentis doctrinae in civilibus, moralibus, & oeconomicis [The Art of Conjecture: The use and application of previous experience in civil, moral, and economic topics] (in Latin).
  15. Fienberg, Stephen E. (1992). "A Brief History of Statistics in Three and One-half Chapters: A Review Essay". Statistical Science. 7 (2): 208–225. doi:10.1214/ss/1177011360.
  16. David, F.N. (1962). Games, Gods, & Gambling. New York, NY: Hafner. pp. 137–138.
  17. Rubin, M. (2020). ""Repeated sampling from the same population?" A critique of Neyman and Pearson's responses to Fisher". European Journal for Philosophy of Science. 10 (42): 1–15. doi:10.1007/s13194-020-00309-6. S2CID 221939887.
  18. Fisher, R.A. Statistical Methods for Research Workers.
  19. Neyman, Jerzy (30 August 1937). "Outline of a theory of statistical estimation based on the classical theory of probability". Philosophical Transactions of the Royal Society of London A. 236 (767): 333–380. Bibcode:1937RSPTA.236..333N. doi:10.1098/rsta.1937.0005.
  20. von Mises, Richard (1981) [1939]. Probability, Statistics, and Truth (2nd rev. ed.). Dover Publications. p. 14. ISBN 0486242145.
  21. Gillies, Donald (2000). "Chapter 5 – The frequency theory". Philosophical Theories of Probability. Psychology Press. p. 88. ISBN 9780415182751.
  22. "Earliest known uses of some of the words of probability & statistics". leidenuniv.nl. Leiden, NL: Leiden University.
  23. Kendall, M.G. (1949). "On the Reconciliation of Theories of Probability". Biometrika. 36 (1–2): 101–116. doi:10.1093/biomet/36.1-2.101. JSTOR 2332534. PMID 18132087.
  24. Hájek, Alan (21 October 2002). "Interpretations of probability". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy, via plato.stanford.edu.
  25. Ash, Robert B. (1970). Basic Probability Theory. New York, NY: Wiley. pp. 1–2.
  26. Fairfield, Tasha; Charman, Andrew E. (15 May 2017). "Explicit Bayesian analysis for process tracing: Guidelines, opportunities, and caveats". Political Analysis. 25 (3): 363–380. doi:10.1017/pan.2017.14. S2CID 8862619.
