History of probability

Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins. The study of the former is historically older, appearing for example in the law of evidence, while the mathematical treatment of dice began with the work of Cardano, Pascal, Fermat and Christiaan Huygens in the 16th and 17th centuries.

Probability deals with random experiments with a known distribution; statistics deals with inference from data about an unknown distribution.

Etymology

Probable and probability and their cognates in other modern languages derive from medieval learned Latin probabilis, a usage deriving from Cicero and generally applied to an opinion to mean plausible or generally approved. [1] The form probability is from Old French probabilite (14th century) and directly from Latin probabilitatem (nominative probabilitas) "credibility, probability," from probabilis (see probable). The mathematical sense of the term is from 1718. In the 18th century, the term chance was also used in the mathematical sense of "probability" (and probability theory was called the Doctrine of Chances). This word is ultimately from Latin cadentia, i.e. "a fall, case". The English adjective likely is of Germanic origin, most likely from Old Norse likligr (Old English had geliclic with the same sense), originally meaning "having the appearance of being strong or able" or "having a similar appearance or qualities", with the meaning "probably" recorded from the mid-15th century. The derived noun likelihood had a meaning of "similarity, resemblance" but took on the meaning of "probability" from the mid-15th century. The meaning "something likely to be true" is from the 1570s.

Origins

Ancient and medieval law of evidence developed a grading of degrees of proof, credibility, presumptions and half-proof to deal with the uncertainties of evidence in court. [2]

In Renaissance times, betting was discussed in terms of odds such as "ten to one" and maritime insurance premiums were estimated based on intuitive risks, but there was no theory on how to calculate such odds or premiums. [3]

The mathematical methods of probability arose first in the investigations of Gerolamo Cardano in the 1560s (not published until 100 years later), and then in the correspondence of Pierre de Fermat and Blaise Pascal (1654) on such questions as the fair division of the stake in an interrupted game of chance. Christiaan Huygens (1657) gave a comprehensive treatment of the subject. [4] [5]

In ancient times there were games played with astragali, or talus bones. [6] The pottery of ancient Greece provides evidence that astragali were tossed into a circle drawn on the floor, much as in a game of marbles. In Egypt, excavators of tombs found a game they called "Hounds and Jackals", which closely resembles the modern game of snakes and ladders. According to Pausanias, [7] Palamedes invented dice during the Trojan War, although their true origin is uncertain. The first dice game mentioned in literature of the Christian era was called hazard. Played with two or three dice, it was probably brought to Europe by knights returning from the Crusades. Dante Alighieri (1265–1321) mentions this game. A commentator on Dante gives the game further thought: with three dice, the lowest number that can be thrown is three, an ace on each die; a four can be achieved only by throwing a two on one die and aces on the other two. [8]

Cardano also considered the sum of three dice. At face value there are the same number of combinations that sum to 9 as sum to 10. For 9 there are (6,2,1), (5,3,1), (5,2,2), (4,4,1), (4,3,2) and (3,3,3); for 10, (6,3,1), (6,2,2), (5,4,1), (5,3,2), (4,4,2) and (4,3,3). However, some of these combinations can be obtained in more ways than others. For example, if we consider the order of results there are six ways to obtain (6,2,1): (1,2,6), (1,6,2), (2,1,6), (2,6,1), (6,1,2), (6,2,1), but only one way to obtain (3,3,3), where the first, second and third dice all roll 3. There are a total of 27 permutations that sum to 10 but only 25 that sum to 9. From this, Cardano found that the probability of throwing a 9 is less than that of throwing a 10. He also demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes). [9] [10]
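Cardano's count is easy to verify by brute-force enumeration. The following Python sketch (a modern illustration, not Cardano's method) lists all 6³ = 216 ordered outcomes of three dice and tallies how many sum to 9 and to 10:

```python
from itertools import product

# Enumerate all 6^3 = 216 ordered outcomes of three fair dice.
outcomes = list(product(range(1, 7), repeat=3))

nines = sum(1 for roll in outcomes if sum(roll) == 9)
tens = sum(1 for roll in outcomes if sum(roll) == 10)

print(nines, tens)              # 25 27
print(nines / 216, tens / 216)  # P(9) ≈ 0.1157 < P(10) = 0.125
```

The enumeration confirms Cardano's conclusion: a total of 10 is slightly more likely than a total of 9.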

In addition, Galileo wrote about die-throwing sometime between 1613 and 1623. Unknowingly considering what is essentially the same problem as Cardano's, Galileo observed that certain totals are thrown more often because there are more ways to produce them. [11]

Eighteenth century

Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. Bernoulli proved a version of the fundamental law of large numbers, which states that in a large number of trials, the average of the outcomes is likely to be very close to the expected value; for example, in 1000 throws of a fair coin, it is likely that there are close to 500 heads (and the larger the number of throws, the closer to half-and-half the proportion is likely to be).
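Bernoulli's theorem is easy to illustrate by simulation. The sketch below (a modern illustration, not Bernoulli's proof) tosses a simulated fair coin in increasing numbers of trials and reports the proportion of heads, which tends toward 1/2:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

for n in (10, 100, 1000, 10000, 100000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} throws: proportion of heads = {heads / n:.4f}")
```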

Nineteenth century

The power of probabilistic methods in dealing with uncertainty was shown by Gauss's determination of the orbit of Ceres from a few observations. The theory of errors used the method of least squares to correct error-prone observations, especially in astronomy, based on the assumption of a normal distribution of errors to determine the most likely true value. In 1812, Laplace issued his Théorie analytique des probabilités in which he consolidated and laid down many fundamental results in probability and statistics such as the moment-generating function, method of least squares, inductive probability, and hypothesis testing.
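A minimal sketch of the core computation in the theory of errors, with invented measurements: under the assumption of normally distributed errors, the estimate that minimizes the sum of squared residuals for repeated measurements of a single quantity is their arithmetic mean.

```python
import random

random.seed(7)
true_value = 2.77                      # the unknown quantity being measured (invented)
observations = [true_value + random.gauss(0, 0.1) for _ in range(20)]

# The least-squares estimate minimizes sum((x - obs)^2) over x; for a
# single measured quantity this minimum is attained at the arithmetic mean.
estimate = sum(observations) / len(observations)
print(f"least-squares estimate: {estimate:.4f} (true value {true_value})")
```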

Towards the end of the nineteenth century, a major success of explanation in terms of probabilities was the statistical mechanics of Ludwig Boltzmann and J. Willard Gibbs which explained properties of gases such as temperature in terms of the random motions of large numbers of particles.

The field of the history of probability itself was established by Isaac Todhunter's monumental A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace (1865).

Twentieth century

Probability and statistics became closely connected through the work on hypothesis testing of R. A. Fisher and Jerzy Neyman, which is now widely applied in biological and psychological experiments and in clinical trials of drugs, as well as in economics and elsewhere. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution for what would be observed if the hypothesis is true. If observations approximately agree with the hypothesis, it is confirmed; if not, it is rejected. [12]
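As a minimal sketch of the logic of such a test (with invented numbers, not an example from the text): suppose the null hypothesis is that a drug is no better than chance, so each patient improves with probability 1/2, and that 16 of 20 patients improve. The one-sided p-value follows directly from the binomial distribution:

```python
from math import comb

n, k = 20, 16          # hypothetical trial: 16 of 20 patients improve
p_null = 0.5           # null hypothesis: improvement is a coin flip

# One-sided p-value: probability of k or more successes under the null.
p_value = sum(comb(n, i) for i in range(k, n + 1)) * p_null**n
print(f"p-value = {p_value:.4f}")  # ≈ 0.0059, so the null is rejected at the 5% level
```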

The theory of stochastic processes broadened into such areas as Markov processes and Brownian motion, the random movement of tiny particles suspended in a fluid. That provided a model for the study of random fluctuations in stock markets, leading to the use of sophisticated probability models in mathematical finance, including such successes as the widely used Black–Scholes formula for the valuation of options. [13]
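For illustration, the Black–Scholes value of a European call option can be computed directly from the closed-form formula; the parameter values in this sketch are assumptions chosen only for the example:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black–Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative parameters: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
print(f"{black_scholes_call(100, 100, 1.0, 0.05, 0.20):.4f}")  # ≈ 10.4506
```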

The twentieth century also saw long-running disputes on the interpretations of probability. In the mid-century frequentism was dominant, holding that probability means long-run relative frequency in a large number of trials. At the end of the century there was some revival of the Bayesian view, according to which the fundamental notion of probability is how well a proposition is supported by the evidence for it.

The mathematical treatment of probabilities, especially when there are infinitely many possible outcomes, was facilitated by Kolmogorov's axioms (1933).
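In modern notation, Kolmogorov's axioms require a probability measure P on a σ-algebra 𝓕 of subsets of a sample space Ω to satisfy:

```latex
\[
  P(A) \ge 0 \ \text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1,
\]
\[
  P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}.
\]
```

Countable additivity is what lets the axioms handle infinitely many possible outcomes coherently.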


References

  1. Franklin (2001), pp. 113, 126.
  2. Franklin (2001).
  3. Franklin (2001), pp. 278–288.
  4. Hacking (2006). For Cardano, see p. 54; for Fermat and Pascal, see pp. 59–61; for Huygens, see pp. 92–94.
  5. Franklin (2001), pp. 296–316.
  6. David, F. N. (1969). Games, Gods and Gambling: The Origins and History of Probability and Statistical Ideas from the Earliest Times to the Newtonian Era. London: Griffin. ISBN 978-0-85264-171-2.
  7. Pausanias (2007). Description of Greece. 1: Books I and II. The Loeb Classical Library (repr. ed.). Cambridge, Mass.: Harvard Univ. Press. ISBN 978-0-674-99104-0.
  8. Franklin (2001), pp. 293–294.
  9. Gorroochurn, P. (2012). "Some Laws and Problems in Classical Probability and How Cardano Anticipated Them". Chance magazine.
  10. Franklin (2001), pp. 296–300.
  11. Franklin (2001), p. 302.
  12. Salsburg (2001).
  13. Bernstein (1996), Chapter 18.

Sources