Probability

The probabilities of rolling several numbers using two dice

Probability is the branch of mathematics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely the event is to occur. [note 1] [1] [2] A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).

These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. [3]

Interpretations

When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, which deals with probabilities in the context of real experiments). For example, tossing a coin twice yields the outcomes "head-head", "head-tail", "tail-head", and "tail-tail". The probability of the outcome "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability: physical (frequentist) interpretations, which treat probabilities as objective long-run frequencies or propensities of physical systems, and evidential (Bayesian) interpretations, which treat probabilities as degrees of belief or of support given by evidence.
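
As a minimal illustration of the theoretical/empirical distinction above, the following Python sketch enumerates the four equally likely outcomes of two coin tosses to obtain the theoretical probability of "head-head", and compares it with an empirical frequency from simulated tosses (the 100,000-trial count is an arbitrary illustrative choice, not from the sources cited here):

    import random
    from itertools import product

    # Theoretical probability: enumerate the equally likely outcomes of two tosses.
    outcomes = list(product("HT", repeat=2))   # ('H','H'), ('H','T'), ('T','H'), ('T','T')
    theoretical = sum(1 for o in outcomes if o == ("H", "H")) / len(outcomes)

    # Empirical probability: simulate many double tosses and count "head-head".
    trials = 100_000
    hits = sum(
        1 for _ in range(trials)
        if random.choice("HT") == "H" and random.choice("HT") == "H"
    )
    empirical = hits / trials

    print(f"theoretical P(head-head) = {theoretical}")    # 0.25
    print(f"empirical   P(head-head) ≈ {empirical:.3f}")  # close to 0.25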

Etymology

The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, often correlated with the witness's nobility. In a sense, this differs greatly from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. [10]

History

The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues [note 2] are still obscured by superstitions. [11]

According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." [12] However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. [13]

Gerolamo Cardano (16th century)
Christiaan Huygens published one of the first books on probability (17th century).

The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes [14] ). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. [15] Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. [16] See Ian Hacking's The Emergence of Probability [10] and James Franklin's The Science of Conjecture [17] for histories of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. [18] The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.

The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. [19] The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old." [19]

Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

Carl Friedrich Gauss

Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). [20] In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

φ(x) = c e^(−h²x²),

where h is a constant depending on precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850).[citation needed] Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula[clarification needed] for r, the probable error of a single observation, is well known.

In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.

In 1906, Andrey Markov introduced [21] the notion of Markov chains, which played an important role in the theory of stochastic processes and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1933. [22]

On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. [23] See integral geometry for more information.

Theory

Like other theories, the theory of probability is a representation of its concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.

Applications

Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.

An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. [24]

In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). [25] As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play. [26]
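
As one illustration of how such games are designed, the expected value of a simple wager can be computed directly. The sketch below uses the standard single-number bet on an American roulette wheel (38 pockets, 35-to-1 payout) as an assumed example; it is not taken from the cited sources, but it shows the kind of calculation involved:

    from fractions import Fraction

    # Single-number bet on an American roulette wheel: 38 equally likely pockets;
    # a winning bet pays 35 to 1, a losing bet forfeits the 1-unit stake.
    p_win = Fraction(1, 38)
    p_lose = 1 - p_win

    expected_value = p_win * 35 + p_lose * (-1)
    print(expected_value, float(expected_value))   # -1/19 ≈ -0.0526 units per unit staked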

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty. [27]

The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.

Mathematical treatment

Calculation of probability (risk) vs odds

Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as Ω. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of die rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.

A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events. [28]

The probability of an event A is written as P(A), [29] p(A), or Pr(A). [30] This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as A′ or Aᶜ; its probability is given by P(not A) = 1 − P(A). [31] As an example, the chance of not rolling a six on a six-sided die is 1 − (chance of rolling a six) = 1 − 1/6 = 5/6. For a more comprehensive treatment, see Complementary event.
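
The die example above can be made concrete in a short Python sketch (an illustrative toy model, not a library API): it represents the sample space and two events as sets, assigns each outcome probability 1/6, and checks the additivity and complement rules.

    from fractions import Fraction

    # Sample space for one roll of a fair six-sided die.
    omega = {1, 2, 3, 4, 5, 6}
    p_outcome = {x: Fraction(1, 6) for x in omega}   # each outcome equally likely

    def P(event):
        """Probability of an event = sum of the probabilities of its outcomes."""
        return sum(p_outcome[x] for x in event)

    odd = {1, 3, 5}   # the event "the die falls on an odd number"
    six = {6}         # the event "the die shows a six"

    assert P(omega) == 1                         # the certain event has probability 1
    assert P({1, 6}) + P({3}) + P({2, 4}) == P({1, 6} | {3} | {2, 4})   # additivity for disjoint events
    assert P(omega - six) == 1 - P(six)          # complement rule: P(not six) = 5/6

    print(P(odd), P(omega - six))                # 1/2 5/6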

If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A ∩ B).

Independent events

If two events, A and B, are independent, then the joint probability is [29]

P(A and B) = P(A ∩ B) = P(A) P(B).

Events A and B depicted as independent vs non-independent in space Ω.

For example, if two coins are flipped, then the chance of both being heads is 1/2 × 1/2 = 1/4. [32]
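
The same product rule can be verified by brute-force enumeration. The following sketch (an illustrative toy using two dice rather than two coins) checks that P(A ∩ B) = P(A) P(B) for the independent events "first die shows six" and "second die shows six":

    from fractions import Fraction
    from itertools import product

    # All 36 equally likely outcomes of rolling two fair dice.
    pairs = list(product(range(1, 7), repeat=2))

    def P(event):
        return Fraction(sum(1 for pair in pairs if event(pair)), len(pairs))

    first_six  = lambda pair: pair[0] == 6
    second_six = lambda pair: pair[1] == 6
    both_six   = lambda pair: pair[0] == 6 and pair[1] == 6

    # The product rule for independent events: P(A ∩ B) = P(A) P(B).
    assert P(both_six) == P(first_six) * P(second_six) == Fraction(1, 36)
    print(P(both_six))   # 1/36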

Mutually exclusive events

If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.

If two events are mutually exclusive, then the probability of both occurring is denoted as P(A ∩ B), and

P(A and B) = P(A ∩ B) = 0.

If two events are mutually exclusive, then the probability of either occurring is denoted as P(A ∪ B), and

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is

P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.

Not (necessarily) mutually exclusive events

If the events are not (necessarily) mutually exclusive then

P(A or B) = P(A) + P(B) − P(A and B).

Rewritten,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is 13/52 + 12/52 − 3/52 = 11/26, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
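
The card calculation above can be checked by enumerating the deck directly. A minimal sketch that counts the relevant sets and verifies the inclusion–exclusion identity:

    from fractions import Fraction
    from itertools import product

    ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    suits = ["hearts", "diamonds", "clubs", "spades"]
    deck = list(product(ranks, suits))          # 52 equally likely cards

    hearts = {card for card in deck if card[1] == "hearts"}
    faces  = {card for card in deck if card[0] in {"J", "Q", "K"}}

    def P(event):
        return Fraction(len(event), len(deck))

    # Inclusion–exclusion: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
    assert P(hearts | faces) == P(hearts) + P(faces) - P(hearts & faces)
    print(P(hearts | faces))   # 11/26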

This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).

It can be seen, then, that this pattern can be repeated for any number of events.

Conditional probability

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B". It is defined by [33]

P(A|B) = P(A ∩ B) / P(B).

If P(B) = 0 then P(A|B) is formally undefined by this expression. In this case A and B are independent, since P(A ∩ B) = P(A) P(B) = 0. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable). [34]

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be 2/3.
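
The ball-drawing example can be checked against the defining formula by enumerating all ordered draws. A minimal sketch, labelling the balls R1, R2, B1, B2 purely for counting purposes:

    from fractions import Fraction
    from itertools import permutations

    # Bag of 2 red (R) and 2 blue (B) balls; enumerate all equally likely ordered
    # ways to draw two balls without replacement.
    balls = ["R1", "R2", "B1", "B2"]
    draws = list(permutations(balls, 2))

    def P(event):
        return Fraction(sum(1 for d in draws if event(d)), len(draws))

    first_red  = lambda d: d[0].startswith("R")
    second_red = lambda d: d[1].startswith("R")
    both_red   = lambda d: first_red(d) and second_red(d)

    # Conditional probability: P(second red | first red) = P(both red) / P(first red).
    assert P(both_red) / P(first_red) == Fraction(1, 3)
    print(P(both_red) / P(first_red))   # 1/3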

Inverse probability

In probability theory and applications, Bayes' rule relates the odds of event A₁ to event A₂, before (prior to) and after (posterior to) conditioning on another event B. The odds on A₁ to event A₂ is simply the ratio of the probabilities of the two events. When arbitrarily many events are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, P(A|B) ∝ P(A) P(B|A), where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as A varies, for fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). See Inverse probability and Bayes' rule.
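
A minimal numerical sketch of "posterior is proportional to prior times likelihood" (the hypotheses and the 3/4 bias below are invented for illustration, not taken from the cited sources): two hypotheses about a coin, fair or biased towards heads, are updated after observing a single head.

    from fractions import Fraction

    # Two hypotheses about a coin, with prior odds 1:1.
    prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}

    # Likelihood of observing a head under each hypothesis
    # (the 3/4 bias is an arbitrary illustrative value).
    likelihood_of_head = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

    # Posterior ∝ prior × likelihood; normalising gives probabilities.
    unnormalised = {h: prior[h] * likelihood_of_head[h] for h in prior}
    total = sum(unnormalised.values())
    posterior = {h: unnormalised[h] / total for h in unnormalised}

    print(posterior)   # {'fair': Fraction(2, 5), 'biased': Fraction(3, 5)}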

Summary of probabilities

Event        Probability
A            P(A) ∈ [0, 1]
not A        P(Aᶜ) = 1 − P(A)
A or B       P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
             = P(A) + P(B) if A and B are mutually exclusive
A and B      P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A)
             = P(A) P(B) if A and B are independent
A given B    P(A|B) = P(A ∩ B) / P(B) = P(B|A) P(A) / P(B)

Relation to randomness and probability in quantum mechanics

In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), though there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. to know them. In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically on the order of magnitude of the Avogadro constant, 6.02×10²³) that only a statistical description of its properties is feasible. [35]

Probability theory is required to describe quantum phenomena. [36] A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals only with probabilities of observing the possible outcomes, the particular outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". [37] Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. [38] In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.

See also

Notes

  1. Strictly speaking, a probability of 0 indicates that an event almost never takes place, whereas a probability of 1 indicates that an event almost certainly takes place. This is an important distinction when the sample space is infinite. For example, for the continuous uniform distribution on the real interval [5, 10], there are an infinite number of possible outcomes, and the probability of any given outcome being observed — for instance, exactly 7 — is 0. This means that when we make an observation, it will almost surely not be exactly 7. However, it does not mean that exactly 7 is impossible. Ultimately some specific outcome (with probability 0) will be observed, and one possibility for that specific outcome is exactly 7.
  2. In the context of the book that this is quoted from, it is the theory of probability and the logic behind it that governs the phenomena of such things compared to rash predictions that rely on pure luck or mythological arguments such as gods of luck helping the winner of the game.

References

  1. "Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory", Alan Stuart and Keith Ord, 6th ed., (2009), ISBN   978-0-534-24312-8.
  2. William Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed., (1968), Wiley, ISBN   0-471-25708-7.
  3. Probability Theory. The Britannica website.
  4. Hacking, Ian (1965). The Logic of Statistical Inference. Cambridge University Press. ISBN   978-0-521-05165-1.[ page needed ]
  5. Finetti, Bruno de (1970). "Logical foundations and measurement of subjective probability". Acta Psychologica. 34: 129–145. doi:10.1016/0001-6918(70)90012-0.
  6. Hájek, Alan (21 October 2002). Edward N. Zalta (ed.). "Interpretations of Probability". The Stanford Encyclopedia of Philosophy (Winter 2012 ed.). Retrieved 22 April 2013.
  7. Jaynes, E.T. (2003). "Section A.2 The de Finetti system of probability". In Bretthorst, G. Larry (ed.). Probability Theory: The Logic of Science (1 ed.). Cambridge University Press. ISBN   978-0-521-59271-0.
  8. Hogg, Robert V.; Craig, Allen; McKean, Joseph W. (2004). Introduction to Mathematical Statistics (6th ed.). Upper Saddle River: Pearson. ISBN   978-0-13-008507-8.[ page needed ]
  9. Jaynes, E.T. (2003). "Section 5.3 Converging and diverging views". In Bretthorst, G. Larry (ed.). Probability Theory: The Logic of Science (1 ed.). Cambridge University Press. ISBN   978-0-521-59271-0.
  10. Hacking, I. (2006) The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference, Cambridge University Press, ISBN 978-0-521-68557-3 [page needed]
  11. Freund, John. (1973) Introduction to Probability. Dickenson ISBN   978-0-8221-0078-2 (p. 1)
  12. Jeffrey, R.C., Probability and the Art of Judgment, Cambridge University Press. (1992). pp. 54–55 . ISBN   0-521-39459-7
  13. Franklin, J. (2001) The Science of Conjecture: Evidence and Probability Before Pascal, Johns Hopkins University Press. (pp. 22, 113, 127)
  14. "Some laws and problems in classical probability and how Cardano anticipated them Gorrochum, P. Chance magazine 2012" (PDF).
  15. Abrams, William, A Brief History of Probability, Second Moment, archived from the original on 24 July 2017, retrieved 23 May 2008
  16. Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008). Quantum leap : from Dirac and Feynman, across the universe, to human body and mind. Singapore; Hackensack, NJ: World Scientific. p. 16. ISBN   978-981-281-927-7.
  17. Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal. Johns Hopkins University Press. ISBN   978-0-8018-6569-5.
  18. Shoesmith, Eddie (November 1985). "Thomas Simpson and the arithmetic mean". Historia Mathematica. 12 (4): 352–355. doi: 10.1016/0315-0860(85)90044-8 .
  19. Wilson EB (1923) "First and second laws of error". Journal of the American Statistical Association, 18, 143
  20. Seneta, Eugene William. ""Adrien-Marie Legendre" (version 9)". StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies. Archived from the original on 3 February 2016. Retrieved 27 January 2016.
  21. Weber, Richard. "Markov Chains" (PDF). Statistical Laboratory. University of Cambridge.
  22. Vitanyi, Paul M.B. (1988). "Andrei Nikolaevich Kolmogorov". CWI Quarterly (1): 3–18. Retrieved 27 January 2016.
  23. Wilcox, Rand R. (2016). Understanding and Applying Basic Statistical Methods Using R. Hoboken, New Jersey. ISBN 978-1-119-06140-3. OCLC 949759319.
  24. Singh, Laurie (2010) "Whither Efficient Markets? Efficient Market Theory and Behavioral Finance". The Finance Professionals' Post, 2010.
  25. Edwards, Anthony William Fairbank (September 2012). "Reginald Crundall Punnett: First Arthur Balfour Professor of Genetics, Cambridge, 1912". Perspectives. Genetics . Gonville and Caius College, Cambridge, UK: Genetics Society of America. 192 (1): 3–13. doi:10.1534/genetics.112.143552. PMC   3430543 . PMID   22964834. pp. 5–6: […] Punnett's square seems to have been a development of 1905, too late for the first edition of his Mendelism (May 1905) but much in evidence in Report III to the Evolution Committee of the Royal Society [(Bateson et al. 1906b) "received March 16, 1906"]. The earliest mention is contained in a letter to Bateson from Francis Galton dated October 1, 1905 (Edwards 2012). We have the testimony of Bateson (1909, p. 57) that "For the introduction of this system [the 'graphic method'], which greatly simplifies difficult cases, I am indebted to Mr. Punnett." […] The first published diagrams appeared in 1906. […] when Punnett published the second edition of his Mendelism, he used a slightly different format ([…] Punnett 1907, p. 45) […] In the third edition (Punnett 1911, p. 34) he reverted to the arrangement […] with a description of the construction of what he called the "chessboard" method (although in truth it is more like a multiplication table). […] (11 pages)
  26. Gao, J.Z.; Fong, D.; Liu, X. (April 2011). "Mathematical analyses of casino rebate systems for VIP gambling". International Gambling Studies. 11 (1): 93–106. doi:10.1080/14459795.2011.552575. S2CID   144540412.
  27. Gorman, Michael F. (2010). "Management Insights". Management Science. 56: iv–vii. doi:10.1287/mnsc.1090.1132.
  28. Ross, Sheldon M. (2010). A First course in Probability (8th ed.). Pearson Prentice Hall. pp. 26–27. ISBN   9780136033134.
  29. Weisstein, Eric W. "Probability". mathworld.wolfram.com. Retrieved 10 September 2020.
  30. Olofsson (2005) p. 8.
  31. Olofsson (2005), p. 9
  32. Olofsson (2005) p. 35.
  33. Olofsson (2005) p. 29.
  34. "Conditional probability with respect to a sigma-algebra". www.statlect.com. Retrieved 4 July 2022.
  35. Riedi, P.C. (1976). Kinetic Theory of Gases-I. In: Thermal Physics. Palgrave, London. https://doi.org/10.1007/978-1-349-15669-6_8
  36. Burgin, Mark (2010). "Interpretations of Negative Probabilities". p. 1. arXiv: 1008.1287v1 [physics.data-an].
  37. Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt. Letter to Max Born, 4 December 1926, in: Einstein/Born Briefwechsel 1916–1955.
  38. Moore, W.J. (1992). Schrödinger: Life and Thought. Cambridge University Press. p. 479. ISBN   978-0-521-43767-7.

Bibliography