Gambler's ruin

In statistics, gambler's ruin is the fact that a gambler playing a game with negative expected value will eventually go bankrupt, regardless of their betting system.

The concept was initially stated as follows: a persistent gambler who raises his bet to a fixed fraction of his bankroll after a win, but does not reduce it after a loss, will eventually and inevitably go broke, even if each bet has a positive expected value. [1]

Another statement of the concept is that a persistent gambler with finite wealth, playing a fair game (that is, each bet has expected value of zero to both sides) will eventually and inevitably go broke against an opponent with infinite wealth. Such a situation can be modeled by a random walk on the real number line. In that context, the gambler will, with virtual certainty, return to their point of origin, which means going broke, and will be ruined an infinite number of times if the random walk continues forever. This is a corollary of a general theorem by Christiaan Huygens, which is also known as gambler's ruin. That theorem shows how to compute the probability of each player winning a series of bets that continues until one player's entire initial stake is lost, given the initial stakes of the two players and the constant probability of winning. This is the oldest mathematical idea that goes by the name gambler's ruin, but not the first idea to which the name was applied. The term's common usage today is another corollary to Huygens's result.

The concept has specific relevance for gamblers. However, it also leads to mathematical theorems with wide application and many related results in probability and statistics. Huygens's result in particular led to important advances in the mathematical theory of probability.

History

The earliest known mention of the gambler's ruin problem is a letter from Blaise Pascal to Pierre Fermat in 1656 (two years after the more famous correspondence on the problem of points). [2] Pascal's version was summarized in a 1656 letter from Pierre de Carcavi to Huygens:

Let two men play with three dice, the first player scoring a point whenever 11 is thrown, and the second whenever 14 is thrown. But instead of the points accumulating in the ordinary way, let a point be added to a player's score only if his opponent's score is nil, but otherwise let it be subtracted from his opponent's score. It is as if opposing points form pairs, and annihilate each other, so that the trailing player always has zero points. The winner is the first to reach twelve points; what are the relative chances of each player winning? [3]

Huygens reformulated the problem and published it in De ratiociniis in ludo aleae ("On Reasoning in Games of Chance", 1657):

Problem (2-1) Each player starts with 12 points, and a successful roll of the three dice for a player (getting an 11 for the first player or a 14 for the second) adds one to that player's score and subtracts one from the other player's score; the loser of the game is the first to reach zero points. What is the probability of victory for each player? [4]

This is the classic gambler's ruin formulation: two players begin with fixed stakes, transferring points until one or the other is "ruined" by getting to zero points. However, the term "gambler's ruin" was not applied until many years later. [5]

The gambler's ruin problem is often applied to gamblers with finite capital playing against a bookie or casino assumed to have an “infinite” or much larger amount of capital available. It can then be proven that the probability of the gambler's eventual ruin tends to 1 even when the game is fair, that is, when the gambler's wealth process is what is mathematically defined as a martingale. [6]

Reasons for the four results

Let d be the amount of money a gambler has at their disposal at any moment, and let N be any positive integer. Suppose that they raise their stake to d/N when they win, but do not reduce their stake when they lose (a not uncommon pattern among real gamblers). Under this betting scheme, it will take at most N losing bets in a row to bankrupt them. If their probability of winning each bet is less than 1 (if it is 1, then they are no gambler), they are virtually certain to eventually lose N bets in a row, however big N is. It is not necessary that they follow the precise rule, just that they increase their bet fast enough as they win. This is true even if the expected value of each bet is positive.
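As a sanity check, here is a minimal Python sketch of such a scheme. The fraction 1/2, the win probability, and the even-money payout are illustrative assumptions, not part of the statement above; the point is that every run ends in ruin despite the positive edge.

```python
import random

def stake_raising_gambler(bankroll=100.0, fraction=0.5, win_prob=0.55,
                          max_bets=10**6, seed=0):
    """Simulate a gambler who raises the stake to a fixed fraction of the
    bankroll after every win but never reduces it after a loss.
    Returns the number of bets survived before ruin (or max_bets)."""
    rng = random.Random(seed)
    stake = fraction * bankroll
    for bet in range(1, max_bets + 1):
        if rng.random() < win_prob:      # win: bankroll grows ...
            bankroll += stake
            stake = fraction * bankroll  # ... and the stake is raised
        else:                            # loss: the stake is NOT reduced
            bankroll -= stake
        if bankroll <= 0:
            return bet                   # ruined
    return max_bets

# Even with a positive edge (win_prob > 0.5 at even payout), with fraction
# 1/2 two consecutive losses after any win suffice, so every run ends in ruin.
print([stake_raising_gambler(seed=s) for s in range(5)])
```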

The gambler playing a fair game (with probability 1/2 of winning) will eventually either go broke or double their wealth. By symmetry, they have a 1/2 chance of going broke before doubling their money. If they double their money, they repeat this process and they again have a 1/2 chance of doubling their money before going broke. After the second process, they have a (1/2) × (1/2) = 1/4 chance that they have not gone broke yet. Continuing this way, their chance of not going broke after n successive processes is (1/2)^n, which approaches 0, and their chance of going broke after n successive processes is 1 − (1/2)^n, which approaches 1.
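The geometric decay in this argument is easy to tabulate exactly; a short Python check using rational arithmetic:

```python
from fractions import Fraction

# Probability of still being solvent after n "double or go broke" rounds
# of a fair game: each round is survived with probability 1/2.
for n in [1, 2, 5, 10, 20]:
    alive = Fraction(1, 2) ** n
    print(f"after {n:2d} rounds: P(not yet broke) = {alive} = {float(alive):.7f}")
# The survival probability (1/2)^n tends to 0, so ruin is certain in the limit.
```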

Huygens's result is illustrated in the next section.

The eventual fate of a player at a game with negative expected value cannot be better than that of a player at a fair game, so they will go broke as well.

Example of Huygens's result

Fair coin flipping

Consider a coin-flipping game with two players where each player has a 50% chance of winning each flip of the coin. After each flip, the loser transfers one penny to the winner. The game ends when one player has all the pennies.

If there are no other limitations on the number of flips, the probability that the game will eventually end this way is 1. (One way to see this is as follows. Any given finite string of heads and tails will eventually be flipped with certainty: the probability of not seeing this string, while high at first, decays exponentially. In particular, the players would eventually flip a string of heads as long as the total number of pennies in play, by which time the game must have already ended.)
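The run-of-heads argument can be checked empirically; in the toy sketch below, the run length 13 matches the 8 + 5 example further down, and the trial count is arbitrary. Every trial terminates after finitely many flips.

```python
import random

def wait_for_run(k=13, seed=0):
    """Count coin flips until the first run of k consecutive heads appears."""
    rng = random.Random(seed)
    run = flips = 0
    while run < k:
        flips += 1
        run = run + 1 if rng.random() < 0.5 else 0  # extend or reset the run
    return flips

# Longest observed wait over 100 trials: large, but always finite.
print(max(wait_for_run(seed=s) for s in range(100)))
```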

If player one has n1 pennies and player two has n2 pennies, the probabilities P1 and P2 that players one and two, respectively, will end penniless are:

P1 = n2 / (n1 + n2)
P2 = n1 / (n1 + n2)

Two examples of this are if one player has more pennies than the other, and if both players have the same number of pennies. In the first case, say player one has 8 pennies (n1 = 8) and player two has 5 pennies (n2 = 5); then the probability of each losing is:

P1 = 5 / (8 + 5) = 5/13 ≈ 0.3846 (player one ends penniless)
P2 = 8 / (8 + 5) = 8/13 ≈ 0.6154 (player two ends penniless)

It follows that even with equal odds of winning, the player who starts with fewer pennies is more likely to fail.

In the second case, where both players have the same number of pennies (in this case 6), the likelihood of each losing is:

P1 = 6 / (6 + 6) = 1/2
P2 = 6 / (6 + 6) = 1/2
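Both worked examples can be checked against the formula P1 = n2 / (n1 + n2) by simulation; this is an illustrative sketch, with the trial count chosen arbitrarily. Note that every simulated game terminates, as the earlier argument guarantees.

```python
import random

def ruin_prob_fair(n1, n2, trials=100_000, seed=1):
    """Estimate by simulation the probability that player one (starting with
    n1 pennies, against an opponent with n2) ends penniless in the fair game."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x, total = n1, n1 + n2
        while 0 < x < total:                      # play until someone is ruined
            x += 1 if rng.random() < 0.5 else -1  # fair one-penny transfer
        ruined += (x == 0)
    return ruined / trials

for n1, n2 in [(8, 5), (6, 6)]:
    print(n1, n2, ruin_prob_fair(n1, n2), "theory:", n2 / (n1 + n2))
```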

Unfair coin flipping

In the event of an unfair coin, where player one wins each toss with probability p and player two wins with probability q = 1 − p, then the probability of each ending penniless is:

P1 = (1 − (p/q)^n2) / (1 − (p/q)^(n1 + n2))
P2 = (1 − (q/p)^n1) / (1 − (q/p)^(n1 + n2))
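These expressions can be evaluated exactly with rational arithmetic; a minimal sketch, using the numbers from the simulation figure below (p = 0.6, n1 = 5, n2 = 10):

```python
from fractions import Fraction

def ruin_probs(n1, n2, p):
    """Closed-form ruin probabilities for the unfair coin game:
    P1 = probability player one ends penniless, P2 likewise for player two."""
    q = 1 - p
    r = Fraction(p) / Fraction(q)   # the ratio p/q
    p1 = (1 - r**n2) / (1 - r**(n1 + n2))
    p2 = (1 - (1 / r)**n1) / (1 - (1 / r)**(n1 + n2))
    return p1, p2

p1, p2 = ruin_probs(5, 10, Fraction(3, 5))
# Player one reaches 15 pennies before 0 with probability 59049/67849 ≈ 0.8703.
print(p2, float(p2))
print(p1 + p2 == 1)   # sanity check: someone is always ruined
```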

Figure: simulations for player 1, who wins each flip with probability p = 0.6 and starts with 5 pennies, against player 2 starting with 10 pennies. The probability of this stochastic process hitting level 15 before 0 is 59049/67849 ≈ 0.8703, and the sloped line depicts the expected value around which most of the probability mass is clustered. The variance of a Bernoulli process, i.e. of a binomial distribution, is np(1 − p) = npq, and the variance of the sample proportion is pq/n.

An argument is that the expected hitting time of the game is finite, so a martingale can be used: associating with each state the value (q/p)^x, where x is player one's current number of pennies, makes the expected value of the state constant over time. By the optional stopping theorem, player one's ruin probability P1 is then the solution of the equation

(q/p)^n1 = P1 · (q/p)^0 + (1 − P1) · (q/p)^(n1 + n2),

which rearranges to the formula above.
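A minimal numeric check of this optional-stopping equation, reusing the illustrative parameters from the figure above (the variable names are this sketch's own):

```python
from fractions import Fraction

# Optional stopping with the martingale M_t = (q/p)**X_t, where X_t is
# player one's penny count: E[M_0] = E[M_stop] gives one linear equation in P1.
p, n1, n2 = Fraction(3, 5), 5, 10
r = (1 - p) / p                      # the ratio q/p
# Solve (q/p)**n1 = P1 * (q/p)**0 + (1 - P1) * (q/p)**(n1 + n2) for P1:
P1 = (r**n1 - r**(n1 + n2)) / (1 - r**(n1 + n2))
print(P1, float(P1))                 # 8800/67849 ≈ 0.1297 = 1 - 59049/67849
```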

Alternately, this can be shown as follows: Consider the probability of player 1 experiencing gambler's ruin having started with n pennies (0 < n < n1 + n2), denoted P(R_n). Then, using the law of total probability, we have

P(R_n) = P(R_n | W) P(W) + P(R_n | L) P(L),

where W denotes the event that player 1 wins the first bet and L the event that they lose it. Clearly P(W) = p and P(L) = 1 − p = q. Also, P(R_n | W) is the probability that player 1 experiences gambler's ruin having started with n + 1 pennies, i.e. P(R_{n+1}), and P(R_n | L) is the probability that player 1 experiences gambler's ruin having started with n − 1 pennies, i.e. P(R_{n−1}). Denoting q_n = P(R_n), we get the linear homogeneous recurrence relation

q_n = p q_{n+1} + q q_{n−1},

which we can solve using the facts that q_0 = 1 (the probability of gambler's ruin given that player 1 starts with no money is 1) and q_{n1+n2} = 0 (the probability of gambler's ruin given that player 1 starts with all the money is 0). For a more detailed description of the method see e.g. Feller (1970), An Introduction to Probability Theory and Its Applications, 3rd ed.
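The recurrence with its two boundary conditions is also a tridiagonal linear system, which can be solved directly; a short NumPy sketch (parameters again illustrative):

```python
import numpy as np

# Solve q_n = p*q_{n+1} + q*q_{n-1} with boundary values q_0 = 1 and
# q_{n1+n2} = 0 as a linear system A @ sol = b.
p, n1, n2 = 0.6, 5, 10
N = n1 + n2
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, 1.0       # boundary: q_0 = 1
A[N, N], b[N] = 1.0, 0.0       # boundary: q_N = 0
for n in range(1, N):          # interior: q_n - p*q_{n+1} - (1-p)*q_{n-1} = 0
    A[n, n] = 1.0
    A[n, n + 1] = -p
    A[n, n - 1] = -(1 - p)
sol = np.linalg.solve(A, b)
print(sol[n1])                 # ≈ 0.1297, player one's ruin probability
```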

N-player ruin problem

The above-described problem (2 players) is a special case of the so-called N-player ruin problem. [7] Here N ≥ 2 players with initial capitals x1, x2, ..., xN dollars, respectively, play a sequence of (arbitrary) independent games and win and lose certain amounts of dollars from and to each other according to fixed rules. The sequence of games ends as soon as at least one player is ruined. Standard Markov chain methods can be applied to solve this more general problem in principle, but the computations quickly become prohibitive as soon as the number of players or their initial capitals increase. For N = 3 and large initial capitals the solution can be well approximated by using two-dimensional Brownian motion. (For N ≥ 4 this is not possible.) In practice the true problem is to find the solution for the typical cases of N ≥ 3 and limited initial capital. Swan (2006) proposed an algorithm based on matrix-analytic methods (Folding Algorithm for ruin problems) which significantly reduces the order of the computational task in such cases.
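Since the rules of play are left generic here, the following Monte Carlo sketch picks one illustrative 3-player variant (a fair one-penny transfer between two randomly chosen players per round); the transfer rule and the starting capitals are assumptions made for demonstration only.

```python
import random

def three_player_ruin(capitals=(3, 4, 5), trials=50_000, seed=2):
    """Monte Carlo sketch of a simple 3-player ruin game: each round, two
    players are chosen at random and one penny moves between them by a fair
    coin flip; play stops as soon as some player is ruined.
    Returns the empirical probability that each player is the first ruined."""
    rng = random.Random(seed)
    ruin_counts = [0, 0, 0]
    for _ in range(trials):
        c = list(capitals)
        while min(c) > 0:
            i, j = rng.sample(range(3), 2)  # pick two players at random
            if rng.random() < 0.5:          # fair transfer of one penny
                c[i] += 1; c[j] -= 1
            else:
                c[i] -= 1; c[j] += 1
        ruin_counts[c.index(0)] += 1        # exactly one player just hit zero
    return [k / trials for k in ruin_counts]

print(three_player_ruin())
```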

Notes

  1. Coolidge, J. L. (1909). "The Gambler's Ruin". Annals of Mathematics. 10 (4): 181–192. doi:10.2307/1967408. ISSN 0003-486X. JSTOR 1967408.
  2. David, Florence Nightingale (1998). Games, Gods, and Gambling: A History of Probability and Statistical Ideas. Courier Dover Publications. ISBN 978-0486400235.
  3. Edwards, J. W. F. (April 1983). "Pascal's Problem: The 'Gambler's Ruin'". Revue Internationale de Statistique. 51 (1): 73–79. doi:10.2307/1402732. JSTOR 1402732.
  4. Gullberg, Jan. Mathematics from the Birth of Numbers. W. W. Norton & Company. ISBN 978-0-393-04002-9.
  5. Kaigh, W. D. (April 1979). "An attrition problem of gambler's ruin". Mathematics Magazine. 52: 22–25. doi:10.1080/0025570X.1979.11976744.
  6. "12.2: Gambler's Ruin". Statistics LibreTexts. 2018-06-25. Retrieved 2023-10-28.
  7. Rocha, Amy L.; Stern, Frederick (1999-08-01). "The gambler's ruin problem with n players and asymmetric play". Statistics & Probability Letters. 44 (1): 87–95. doi:10.1016/S0167-7152(98)00295-8. ISSN 0167-7152.
