Gambler's ruin

In statistics, gambler's ruin is the fact that a gambler playing a game with negative expected value will eventually go broke, regardless of his betting system.

The concept was initially stated as follows: a persistent gambler who raises his bet to a fixed fraction of his bankroll after a win, but does not reduce it after a loss, will eventually and inevitably go broke, even if each bet has a positive expected value. [1]

Another statement of the concept is that a persistent gambler with finite wealth, playing a fair game (that is, each bet has expected value of zero to both sides) will eventually and inevitably go broke against an opponent with infinite wealth. Such a situation can be modeled by a random walk on the real number line. In that context, the gambler will, with virtual certainty, return to his point of origin, which means going broke, and will be ruined an infinite number of times if the random walk continues forever. This is a corollary of a general theorem by Christiaan Huygens, which is also known as gambler's ruin. That theorem shows how to compute the probability of each player winning a series of bets that continues until one player's entire initial stake is lost, given the initial stakes of the two players and the constant probability of winning. This is the oldest mathematical idea that goes by the name gambler's ruin, but not the first idea to which the name was applied. The term's common usage today is another corollary to Huygens's result.

The concept has specific relevance for gamblers. However it also leads to mathematical theorems with wide application and many related results in probability and statistics. Huygens's result in particular led to important advances in the mathematical theory of probability.

History

The earliest known mention of the gambler's ruin problem is a letter from Blaise Pascal to Pierre Fermat in 1656 (two years after the more famous correspondence on the problem of points). [2] Pascal's version was summarized in a 1656 letter from Pierre de Carcavi to Huygens:

Let two men play with three dice, the first player scoring a point whenever 11 is thrown, and the second whenever 14 is thrown. But instead of the points accumulating in the ordinary way, let a point be added to a player's score only if his opponent's score is nil, but otherwise let it be subtracted from his opponent's score. It is as if opposing points form pairs, and annihilate each other, so that the trailing player always has zero points. The winner is the first to reach twelve points; what are the relative chances of each player winning? [3]

Huygens reformulated the problem and published it in De ratiociniis in ludo aleae ("On Reasoning in Games of Chance", 1657):

Problem (2-1) Each player starts with 12 points, and a successful roll of the three dice for a player (getting an 11 for the first player or a 14 for the second) adds one to that player's score and subtracts one from the other player's score; the loser of the game is the first to reach zero points. What is the probability of victory for each player? [4]

This is the classic gambler's ruin formulation: two players begin with fixed stakes, transferring points until one or the other is "ruined" by getting to zero points. However, the term "gambler's ruin" was not applied until many years later. [5]

The gambler's ruin problem is often applied to gamblers with finite capital playing against a bookie or casino assumed to have an “infinite” or much larger amount of capital available. It can then be proven that the probability of the gambler's eventual ruin tends to 1 even in the scenario where the game is fair (martingale). [6]

Reasons for the four results

Let d be the amount of money a gambler has at his disposal at any moment, and let N be any positive integer. Suppose that he raises his stake to d/N when he wins, but does not reduce his stake when he loses (this general pattern is not uncommon among real gamblers). Under this betting scheme, it will take at most N losing bets in a row to bankrupt him. If his probability of winning each bet is less than 1 (if it is 1, then he is no gambler), he is virtually certain to eventually lose N bets in a row, however big N is. It is not necessary that he follow the precise rule, just that he increase his bet fast enough as he wins. This is true even if the expected value of each bet is positive.
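This betting scheme can be sketched with a short simulation; the function name and parameter values below are illustrative choices, not taken from any source:

```python
import random

def bets_until_ruin(bankroll=100.0, n=10, p=0.55, max_bets=1_000_000, seed=1):
    """Simulate a gambler who raises his stake to bankroll/n after each
    win but never reduces it after a loss, so any run of n consecutive
    losses following a win bankrupts him.  Returns the number of bets
    played until ruin, or None if he survives max_bets bets."""
    rng = random.Random(seed)
    stake = bankroll / n
    for bet in range(1, max_bets + 1):
        if rng.random() < p:        # win: gain the stake ...
            bankroll += stake
            stake = bankroll / n    # ... and raise the stake
        else:                       # loss: the stake is NOT reduced
            bankroll -= stake
        if bankroll <= 1e-9:        # allow for floating-point dust
            return bet
    return None
```

Even though each bet here has positive expected value (p = 0.55 at even payout), a run of n consecutive losses eventually occurs and the gambler is ruined.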

The gambler playing a fair game (with probability 1/2 of winning) will eventually either go broke or double his wealth. By symmetry, he has a 1/2 chance of going broke before doubling his money. If he doubles his money, he repeats this process and again has a 1/2 chance of doubling his money before going broke. After the second process, he has a (1/2)^2 = 1/4 chance of not having gone broke yet. Continuing this way, his chance of not going broke after n processes is (1/2)^n, which approaches 0, and his chance of going broke after n successive processes is 1 − (1/2)^n, which approaches 1.
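The arithmetic of this repeated-doubling argument is simple enough to check exactly; the helper below is only illustrative:

```python
from fractions import Fraction

def survive_probability(n):
    """Chance of surviving n successive double-or-bust rounds of a fair
    game, each round survived with probability 1/2 by the symmetry
    argument above."""
    return Fraction(1, 2) ** n

# The survival probability halves with every round, so ruin is certain
# in the limit:
assert survive_probability(2) == Fraction(1, 4)
assert 1 - survive_probability(30) > Fraction(999999, 1000000)
```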

Huygens's result is illustrated in the next section.

The eventual fate of a player at a game with negative expected value cannot be better than that of a player at a fair game, so he will go broke as well.

Example of Huygens's result

Fair coin flipping

Consider a coin-flipping game with two players where each player has a 50% chance of winning with each flip of the coin. After each flip of the coin the loser transfers one penny to the winner. The game ends when one player has all the pennies.

If there are no other limitations on the number of flips, the probability that the game will eventually end this way is 1. (One way to see this is as follows. Any given finite string of heads and tails will eventually be flipped with certainty: the probability of not seeing this string, while high at first, decays exponentially. In particular, the players would eventually flip a string of heads as long as the total number of pennies in play, by which time the game must have already ended.)

If player one has n1 pennies and player two n2 pennies, the probabilities P1 and P2 that players one and two, respectively, will end penniless are:

P1 = n2 / (n1 + n2)
P2 = n1 / (n1 + n2)

Two examples of this are when one player has more pennies than the other, and when both players have the same number of pennies. In the first case, say player one (n1 = 8) has 8 pennies and player two (n2 = 5) has 5 pennies; then the probability of each losing is:

P1 = 5 / (8 + 5) = 5/13
P2 = 8 / (8 + 5) = 8/13

It follows that even with equal odds of winning the player that starts with fewer pennies is more likely to fail.

In the second case, where both players have the same number of pennies (here 6), the likelihood of each losing is:

P1 = P2 = 6 / 12 = 1/2
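Both examples follow from the fair-game formulas P1 = n2/(n1 + n2) and P2 = n1/(n1 + n2); a quick check with exact rational arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def fair_ruin(n1, n2):
    """Exact ruin probabilities for the fair penny game: each player's
    chance of ending penniless is the opponent's share of the pennies."""
    total = n1 + n2
    return Fraction(n2, total), Fraction(n1, total)

# First example: 8 pennies against 5.
assert fair_ruin(8, 5) == (Fraction(5, 13), Fraction(8, 13))
# Second example: 6 pennies each -- an even chance for both.
assert fair_ruin(6, 6) == (Fraction(1, 2), Fraction(1, 2))
```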

Unfair coin flipping

In the event of an unfair coin, where player one wins each toss with probability p and player two wins with probability q = 1 − p, the probability of each ending penniless is:

P1 = ((q/p)^n1 − (q/p)^(n1+n2)) / (1 − (q/p)^(n1+n2))
P2 = ((p/q)^n2 − (p/q)^(n1+n2)) / (1 − (p/q)^(n1+n2))
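These unfair-game probabilities can be checked with exact rational arithmetic; the numbers below match the simulation example with p = 0.6, player one starting with 5 pennies and player two with 10 (function name illustrative):

```python
from fractions import Fraction

def unfair_ruin(n1, n2, p):
    """Ruin probabilities when player one wins each toss with
    probability p and player two with q = 1 - p (requires p != 1/2).
    Pass p as a Fraction for exact results."""
    q = 1 - p
    r, s = q / p, p / q
    p1 = (r**n1 - r**(n1 + n2)) / (1 - r**(n1 + n2))
    p2 = (s**n2 - s**(n1 + n2)) / (1 - s**(n1 + n2))
    return p1, p2

p1, p2 = unfair_ruin(5, 10, Fraction(3, 5))
assert p1 + p2 == 1
# Player two is ruined (player one takes everything) with chance
# 59049/67849, roughly 0.8703.
assert p2 == Fraction(59049, 67849)
```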

[Figure: simulations for player 1, with p = 0.6, starting with 5 pennies, and player 2 starting with 10. The probability of this stochastic process hitting level 15 prior to 0 is 59049/67849 ≈ 0.8703, and the sloped line depicts the expected value, around which most of the probability mass is clustered. The variance of a Bernoulli process, i.e. of a binomial distribution, is np(1 − p) = npq, and that of the sample proportion is pq/n.]

One argument is that the expected hitting time is finite, so a martingale can be constructed by associating with each state k the value (q/p)^k, chosen so that the expected value of this quantity is constant from bet to bet; the ruin probability is then the solution of the resulting system of linear equations.

Alternately, this can be shown as follows: consider the probability of player 1 experiencing gambler's ruin having started with amount i of money, P(R_i). Then, using the Law of Total Probability, we have

P(R_i) = P(R_i | W) P(W) + P(R_i | W') P(W'),

where W denotes the event that player 1 wins the first bet and W' its complement. Then clearly P(W) = p and P(W') = 1 − p = q. Also, P(R_i | W) is the probability that player 1 experiences gambler's ruin having started with amount i + 1 of money: P(R_{i+1}); and P(R_i | W') is the probability that player 1 experiences gambler's ruin having started with amount i − 1 of money: P(R_{i−1}).

Denoting q_i = P(R_i), we get the linear homogeneous recurrence relation

q_i = q_{i+1} p + q_{i−1} q,

which we can solve using the facts that q_0 = 1 (the probability of gambler's ruin given that player 1 starts with no money is 1) and q_{n1+n2} = 0 (the probability of gambler's ruin given that player 1 starts with all the money is 0). For a more detailed description of the method see e.g. Feller (1970), An Introduction to Probability Theory and Its Applications, 3rd ed.
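The closed-form solution of this recurrence can be verified directly against the recurrence and its boundary conditions; the helper names below are illustrative:

```python
from fractions import Fraction

def ruin_prob(i, n, p):
    """Closed-form solution q_i of the recurrence with q_0 = 1 and
    q_n = 0, where n = n1 + n2 is the total amount of money in play."""
    q = 1 - p
    if p == q:                      # fair game: q_i is linear in i
        return 1 - Fraction(i, n)
    r = q / p
    return (r**i - r**n) / (1 - r**n)

def satisfies_recurrence(n, p):
    """Check q_i = q_{i+1} * p + q_{i-1} * (1 - p) at every interior state."""
    vals = [ruin_prob(i, n, p) for i in range(n + 1)]
    return all(vals[i] == p * vals[i + 1] + (1 - p) * vals[i - 1]
               for i in range(1, n))

assert satisfies_recurrence(15, Fraction(3, 5))   # unfair game
assert satisfies_recurrence(12, Fraction(1, 2))   # fair game
# Matches the p = 0.6 example: ruin chance 8800/67849 for player one.
assert ruin_prob(5, 15, Fraction(3, 5)) == Fraction(8800, 67849)
```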

N-player ruin problem

The above-described problem (2 players) is a special case of the so-called N-player ruin problem. [7] Here N ≥ 2 players with initial capital x_1, x_2, …, x_N dollars, respectively, play a sequence of (arbitrary) independent games and win and lose certain amounts of dollars from and to each other according to fixed rules. The sequence of games ends as soon as at least one player is ruined. Standard Markov chain methods can be applied to solve this more general problem in principle, but the computations quickly become prohibitive as soon as the number of players or their initial capital increases. For N = 3 and large initial capitals the solution can be well approximated by using two-dimensional Brownian motion. (For N ≥ 4 this is not possible.) In practice the true problem is to find the solution for the typical cases of N ≥ 3 and limited initial capital. Swan (2006) proposed an algorithm based on matrix-analytic methods (the Folding Algorithm for ruin problems) which significantly reduces the order of the computational task in such cases.
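A Monte Carlo sketch of one simple instance: three players, with a fair coin moving one dollar between a randomly chosen pair each round. This transfer rule is only one illustrative choice among the fixed rules the general problem allows, and the function names are ours:

```python
import random

def first_ruined(capitals, rng):
    """Play one game: each round a random pair of players flips a fair
    coin and one dollar moves between them; the game ends as soon as at
    least one player is ruined.  Returns the index of the ruined player."""
    caps = list(capitals)
    while min(caps) > 0:
        i, j = rng.sample(range(len(caps)), 2)
        if rng.random() < 0.5:
            caps[i] += 1; caps[j] -= 1
        else:
            caps[i] -= 1; caps[j] += 1
    return caps.index(min(caps))

def ruin_frequencies(capitals, trials=2000, seed=0):
    """Estimate each player's probability of being the first one ruined."""
    rng = random.Random(seed)
    counts = [0] * len(capitals)
    for _ in range(trials):
        counts[first_ruined(capitals, rng)] += 1
    return [c / trials for c in counts]
```

With equal stakes the estimated frequencies come out near 1/3 each, while with unequal stakes the poorest player is by far the most likely to be ruined first.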

Notes

  1. Coolidge, J. L. (1909). "The Gambler's Ruin". Annals of Mathematics. 10 (4): 181–192. doi:10.2307/1967408. ISSN 0003-486X. JSTOR 1967408.
  2. David, Florence Nightingale (1998). Games, Gods, and Gambling: A History of Probability and Statistical Ideas. Courier Dover Publications. ISBN 978-0486400235.
  3. Edwards, J. W. F. (April 1983). "Pascal's Problem: The 'Gambler's Ruin'". Revue Internationale de Statistique. 51 (1): 73–79. doi:10.2307/1402732. JSTOR 1402732.
  4. Gullberg, Jan. Mathematics from the Birth of Numbers. W. W. Norton & Company. ISBN 978-0-393-04002-9.
  5. Kaigh, W. D. (April 1979). "An attrition problem of gambler's ruin". Mathematics Magazine. 52: 22–25. doi:10.1080/0025570X.1979.11976744.
  6. "12.2: Gambler's Ruin". Statistics LibreTexts. 2018-06-25. Retrieved 2023-10-28.
  7. Rocha, Amy L.; Stern, Frederick (1999-08-01). "The gambler's ruin problem with n players and asymmetric play". Statistics & Probability Letters. 44 (1): 87–95. doi:10.1016/S0167-7152(98)00295-8. ISSN 0167-7152.

Related Research Articles

<span class="mw-page-title-main">Binomial distribution</span> Probability distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success or failure. A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

<span class="texhtml mvar" style="font-style:italic;">e</span> (mathematical constant) Constant value used in mathematics

The number e is a mathematical constant approximately equal to 2.71828 that can be characterized in many ways. It is the base of the natural logarithm function. It is the limit of as n tends to infinity, an expression that arises in the computation of compound interest. It is the value at 1 of the (natural) exponential function, commonly denoted It is also the sum of the infinite series

<span class="mw-page-title-main">Expected value</span> Average value of a random variable

In probability theory, the expected value is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality.

<span class="mw-page-title-main">Cycloid</span> Curve traced by a point on a rolling circle

In geometry, a cycloid is the curve traced by a point on a circle as it rolls along a straight line without slipping. A cycloid is a specific form of trochoid and is an example of a roulette, a curve generated by a curve rolling on another curve.

<span class="mw-page-title-main">Negative binomial distribution</span> Probability distribution

In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of successes occurs. For example, we can define rolling a 6 on some dice as a success, and rolling any other number as a failure, and ask how many failure rolls will occur before we see the third success. In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution.

<span class="mw-page-title-main">Birthday problem</span> Probability of shared birthdays

In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share a birthday. The birthday paradox refers to the counterintuitive fact that only 23 people are needed for that probability to exceed 50%.

In probability theory, odds provide a measure of the likelihood of a particular outcome. When specific events are equally likely, odds are calculated as the ratio of the number of events that produce that outcome to the number that do not. Odds are commonly used in gambling and statistics.

<span class="mw-page-title-main">Hardy–Weinberg principle</span> Principle in genetics

In population genetics, the Hardy–Weinberg principle, also known as the Hardy–Weinberg equilibrium, model, theorem, or law, states that allele and genotype frequencies in a population will remain constant from generation to generation in the absence of other evolutionary influences. These influences include genetic drift, mate choice, assortative mating, natural selection, sexual selection, mutation, gene flow, meiotic drive, genetic hitchhiking, population bottleneck, founder effect,inbreeding and outbreeding depression.

<span class="mw-page-title-main">St. Petersburg paradox</span> Paradox involving a game with repeated coin flipping

The St. Petersburg paradox or St. Petersburg lottery is a paradox involving the game of flipping a coin where the expected payoff of the theoretical lottery game approaches infinity but nevertheless seems to be worth only a very small amount to the participants. The St. Petersburg paradox is a situation where a naïve decision criterion that takes only the expected value into account predicts a course of action that presumably no actual person would be willing to take. Several resolutions to the paradox have been proposed, including the impossible amount of money a casino would need to continue the game indefinitely.

In probability theory, a Chernoff bound is an exponentially decreasing upper bound on the tail of a random variable based on its moment generating function. The minimum of all such exponential bounds forms the Chernoff or Chernoff-Cramér bound, which may decay faster than exponential. It is especially useful for sums of independent random variables, such as sums of Bernoulli random variables.

In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided dice rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

In mathematical statistics, the Kullback–Leibler (KL) divergence, denoted , is a type of statistical distance: a measure of how one probability distribution P is different from a second, reference probability distribution Q. A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model when the actual distribution is P. While it is a measure of how different two distributions are, and in some sense is thus a "distance", it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions, and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence, a generalization of squared distance, and for certain classes of distributions, it satisfies a generalized Pythagorean theorem.

In statistical mechanics, the grand canonical ensemble is the statistical ensemble that is used to represent the possible states of a mechanical system of particles that are in thermodynamic equilibrium with a reservoir. The system is said to be open in the sense that the system can exchange energy and particles with a reservoir, so that various possible states of the system can differ in both their total energy and total number of particles. The system's volume, shape, and other external coordinates are kept the same in all possible states of the system.

In mathematics, an ordinary differential equation is called a Bernoulli differential equation if it is of the form

In geometry, the circumscribed circle or circumcircle of a triangle is a circle that passes through all three vertices. The center of this circle is called the circumcenter of the triangle, and its radius is called the circumradius. The circumcenter is the point of intersection between the three perpendicular bisectors of the triangle's sides, and is a triangle center.

<span class="mw-page-title-main">Kelly criterion</span> Formula for bet sizing that maximizes the expected logarithmic value

In probability theory, the Kelly criterion is a formula for sizing a bet. The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. Assuming that the expected returns are known, the Kelly criterion leads to higher wealth than any other strategy in the long run. J. L. Kelly Jr, a researcher at Bell Labs, described the criterion in 1956.

The mathematics of gambling is a collection of probability applications encountered in games of chance and can get included in game theory. From a mathematical point of view, the games of chance are experiments generating various types of aleatory events, and it is possible to calculate by using the properties of probability on a finite space of possibilities.

<span class="mw-page-title-main">German tank problem</span> Mathematical problem

In the statistical theory of estimation, the German tank problem consists of estimating the maximum of a discrete uniform distribution from sampling without replacement. In simple terms, suppose there exists an unknown number of items which are sequentially numbered from 1 to N. A random sample of these items is taken and their sequence numbers observed; the problem is to estimate N from these observed numbers.

In decision theory, the odds algorithm is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds strategy, and the importance of the odds strategy lies in its optimality, as explained below.

In mathematics, arithmetico-geometric sequence is the result of term-by-term multiplication of a geometric progression with the corresponding terms of an arithmetic progression. Put plainly, the nth term of an arithmetico-geometric sequence is the product of the nth term of an arithmetic sequence and the nth term of a geometric one. Arithmetico-geometric sequences arise in various applications, such as the computation of expected values in probability theory. For instance, the sequence

References