Problem of points

The problem of points, also called the problem of division of the stakes, is a classical problem in probability theory. One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is known as an expected value.

The problem concerns a game of chance with two players who have equal chances of winning each round. The players contribute equally to a prize pot, and agree in advance that the first player to have won a certain number of rounds will collect the entire prize. Now suppose that the game is interrupted by external circumstances before either player has achieved victory. How does one then divide the pot fairly? It is tacitly understood that the division should depend somehow on the number of rounds won by each player, such that a player who is close to winning will get a larger part of the pot. But the problem is not merely one of calculation; it also involves deciding what a "fair" division actually is.

Early solutions

Luca Pacioli considered such a problem in his 1494 textbook Summa de arithmetica, geometrica, proportioni et proportionalità. His method was to divide the stakes in proportion to the number of rounds won by each player, and the number of rounds needed to win did not enter his calculations at all.[1]

In the mid-16th century Niccolò Tartaglia noticed that Pacioli's method leads to counterintuitive results if the game is interrupted when only one round has been played. In that case, Pacioli's rule would award the entire pot to the winner of that single round, though a one-round lead early in a long game is far from decisive. Tartaglia constructed a method that avoids that particular problem by basing the division on the ratio between the size of the lead and the length of the game.[1] This solution is still not without problems, however; in a game to 100 it divides the stakes in the same way for a 65–55 lead as for a 99–89 lead, even though the former is still a relatively open game whereas in the latter situation victory for the leading player is almost certain. Tartaglia himself was unsure whether the problem was solvable at all in a way that would convince both players of its fairness: "in whatever way the division is made there will be cause for litigation".[2]
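
To make the objection concrete, consider one common reading of Tartaglia's rule (an interpretive assumption here, not a quotation from him): the leader receives half the pot plus a fraction of the other half equal to the lead divided by the number of rounds needed to win. In a game to 100 this gives

$$\tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{65-55}{100} = 0.55 \qquad\text{and}\qquad \tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{99-89}{100} = 0.55,$$

so the 65–55 and 99–89 positions receive the identical 55:45 split, even though the player leading 99–89 is a single round away from victory.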

Pascal and Fermat

The problem arose again around 1654 when Chevalier de Méré posed it to Blaise Pascal. Pascal discussed the problem in his ongoing correspondence with Pierre de Fermat. Through this discussion, Pascal and Fermat not only provided a convincing, self-consistent solution to this problem, but also developed concepts that are still fundamental to probability theory.

The starting insight for Pascal and Fermat was that the division should not depend so much on the history of the part of the interrupted game that actually took place, as on the possible ways the game might have continued, were it not interrupted. It is intuitively clear that a player with a 7–5 lead in a game to 10 has the same chance of eventually winning as a player with a 17–15 lead in a game to 20, and Pascal and Fermat therefore thought that interruption in either of the two situations ought to lead to the same division of the stakes. In other words, what is important is not the number of rounds each player has won so far, but the number of rounds each player still needs to win in order to achieve overall victory.

Fermat now reasoned thus:[3] If one player needs r more rounds to win and the other needs s, the game will surely have been won by someone after r + s − 1 additional rounds. Therefore, imagine that the players were to play r + s − 1 more rounds; in total these rounds have 2^(r+s−1) different possible outcomes. In some of these possible futures the game will actually have been decided in fewer than r + s − 1 rounds, but it does no harm to imagine the players continuing to play with no purpose. Considering only equally long futures has the advantage that one easily convinces oneself that each of the 2^(r+s−1) possibilities is equally likely. Fermat was thus able to compute the odds for each player to win, simply by writing down a table of all 2^(r+s−1) possible continuations and counting how many of them would lead to each player winning. Fermat now considered it obviously fair to divide the stakes in proportion to those odds.
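
As a minimal sketch of Fermat's counting (the Python function name fermat_odds and the 7–5 example are illustrative choices, not Fermat's own), one can enumerate all 2^(r+s−1) equally likely continuations and tally how many each player wins:

    from itertools import product

    def fermat_odds(r, s):
        """Tally Fermat's equally likely futures when one player needs r more
        round wins and the other needs s (a sketch of the tabular method)."""
        n = r + s - 1                      # after n more rounds the game is surely decided
        wins_a = wins_b = 0
        # Enumerate every one of the 2**n possible sequences of round winners.
        for future in product("AB", repeat=n):
            if future.count("A") >= r:     # the first player reaches r round wins
                wins_a += 1
            else:                          # otherwise the second player reaches s wins
                wins_b += 1
        return wins_a, wins_b              # divide the stakes in this proportion

    # Example: a 7-5 lead in a game to 10, so the leader needs 3 more rounds, the other 5.
    print(fermat_odds(3, 5))               # (99, 29): the leader is owed 99/128 of the pot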

Fermat's solution, certainly "correct" by today's standards, was improved by Pascal in two ways. First, Pascal produced a more elaborate argument for why the resulting division should be considered fair. Second, he showed how to calculate the correct division more efficiently than Fermat's tabular method, which becomes completely impractical (without modern computers) if r + s − 1 is more than about 10.

Instead of just considering the probability of winning the entire remaining game, Pascal devised a principle of smaller steps: Suppose that the players had been able to play just one more round before being interrupted, and that we had already decided how to fairly divide the stakes after that one more round (possibly because that round lets one of the players win). The imagined extra round may lead to one of two possible futures with different fair divisions of the stakes, but since the two players have even chances of winning the next round, they should split the difference between the two future divisions evenly. In this way, knowledge of the fair solutions in games with fewer rounds remaining can be used to calculate fair solutions for games with more rounds remaining.[4]
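
Pascal's principle translates directly into a backward recursion. The sketch below (the name pascal_share is illustrative; even chances each round are assumed, as in the problem) averages the two possible futures of one more round:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def pascal_share(r, s):
        """Fair fraction of the pot owed to the player who still needs r round
        wins when the opponent still needs s (a sketch of Pascal's rule)."""
        if r == 0:                 # this player has already won: the whole pot
            return 1.0
        if s == 0:                 # the opponent has already won: nothing
            return 0.0
        # One more round is an even chance between two futures, so the fair
        # share is the average of the fair shares in those two futures.
        return (pascal_share(r - 1, s) + pascal_share(r, s - 1)) / 2

    print(pascal_share(3, 5))      # 0.7734375 = 99/128, agreeing with Fermat's count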

It is easier to convince oneself that this principle is fair than it is for Fermat's table of possible futures, which are doubly hypothetical because one must imagine that the game sometimes continues after having been won. Pascal's analysis here is one of the earliest examples of using expected values instead of odds when reasoning about probability. Shortly after, this idea would become a basis for the first systematic treatise on probability by Christiaan Huygens. Later the modern concept of probability grew out of the use of expectation values by Pascal and Huygens.

The direct application of Pascal's step-by-step rule is significantly quicker than Fermat's method when many rounds remain. However, Pascal was able to use it as a starting point for developing more advanced computational methods. Through clever manipulation of identities involving what is today known as Pascal's triangle (including several of the first explicit proofs by induction), Pascal finally showed that in a game where player a needs r points to win and player b needs s points to win, the correct division of the stakes between player a (left side) and player b (right side) is (using modern notation):

$$\sum_{k=0}^{s-1}\binom{r+s-1}{k} \;:\; \sum_{k=s}^{r+s-1}\binom{r+s-1}{k},$$

where the term $\binom{r+s-1}{k}$ represents the combination operator (the binomial coefficient).
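
As a quick check (the helper name closed_form_shares is illustrative), the two sums can be evaluated directly and compared with the counts from Fermat's table:

    from math import comb

    def closed_form_shares(r, s):
        """Evaluate Pascal's closed-form division for players who still need
        r and s round wins respectively (a sketch)."""
        n = r + s - 1
        share_a = sum(comb(n, k) for k in range(s))         # k = 0 .. s-1
        share_b = sum(comb(n, k) for k in range(s, n + 1))  # k = s .. r+s-1
        return share_a, share_b

    print(closed_form_shares(3, 5))   # (99, 29), the same 99:29 ratio as the enumeration above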

The problem of dividing the stakes became a major motivating example for Pascal in his Treatise on the arithmetic triangle.[4][5]

Though Pascal's derivation of this result was independent of Fermat's tabular method, it is clear that it also describes exactly the counting of the different outcomes of r + s − 1 additional rounds that Fermat suggested.

Notes

  1. Katz, Victor J. (1993). A History of Mathematics. HarperCollins College Publishers. Section 11.3.1.
  2. Tartaglia, quoted by Katz (op. cit.), from Oystein Ore, "Pascal and the Invention of Probability Theory", American Mathematical Monthly 67 (1960), 409–419, p. 414.
  3. Pascal, letter to Fermat, quoted in F. N. David (1962), Games, Gods, and Gambling, Griffin Press, p. 239.
  4. Katz, op. cit., Section 11.3.2.
  5. Pascal, Blaise (1665). Traité du triangle arithmétique. Digital facsimile (archived 2004-08-03 at the Wayback Machine) at the Cambridge University Library (in French), with a short English summary.
