We investigate the classical problem of a gambler repeatedly betting $1 on the flip of a potentially biased coin until he either loses all his money or wins the money of his opponent. This is then extended to the case of his adversary (a casino) having practically unlimited resources and used to derive the inverse Gaussian distribution of the first passage time.

### Probability of Winning the Game

We assume that the probability of winning a single round of this game is $p$, which is usually close, but not necessarily equal, to $\frac{1}{2}$. Thus, for example, when betting \$1 on black in the game of roulette, with 18 black, 18 red, and 1 green number, all with equally likely outcomes, $p$ is equal to $\frac{18}{37}$. We also assume that our player and his adversary start with $a$ dollars and $b$ dollars, respectively, and that they are both determined to continue playing until one of them goes broke.

Our first objective is to find the probability that our player wins the game, ending up with all $a+b$ dollars. To do this, we have to imagine that the game has been going on for some time, and the player has reached the point of having exactly $i$ dollars in his pocket, so that his opponent has $a+b-i$. Given that, we denote our player’s probability of winning the game by $P_i$. If one more round is played, one can see that

$$P_i = p\,P_{i+1} + (1-p)\,P_{i-1} \tag{1}$$

depending on whether the player wins (the first term) or loses (the second one) the next round [1]. The end conditions are $P_{a+b}=1$ (the game is already won) and $P_0=0$ (the game is lost). We thus have $a+b-1$ linear equations for the corresponding probabilities $P_i$, where $0<i<a+b$. The equations are of a rather special type, as each of them involves only three consecutive values of $P_i$, so rather than using the usual `LinearSolve` procedure, it is more appropriate to switch to `RSolve`.

The resulting solution is $P_i=\dfrac{1-\left(\frac{1-p}{p}\right)^{i}}{1-\left(\frac{1-p}{p}\right)^{a+b}}$. When $p=\frac{1}{2}$, this expression is indeterminate, but we get the correct answer, $P_i=\frac{i}{a+b}$, in the limit.

All we have to do now is to evaluate $P_a$ using this solution. To this end, we build the following function.

We apply the function to the cases discussed so far.
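The original Mathematica code is not reproduced here; as an equivalent sketch, the following Python function (the name `win_probability` is ours) implements the closed-form solution of the recurrence, handling the indeterminate case $p=\frac{1}{2}$ by its limit:

```python
def win_probability(a, b, p):
    """Probability that a player starting with a dollars, facing an
    opponent with b dollars, wins the whole game when each round is
    won with probability p (gambler's-ruin closed form)."""
    if p == 0.5:                      # indeterminate case: take the limit
        return a / (a + b)
    r = (1 - p) / p                   # the ratio (1-p)/p
    return (1 - r**a) / (1 - r**(a + b))

# The fair game with equal capital, and the roulette player (p = 18/37):
print(win_probability(100, 100, 0.5))      # 0.5
print(win_probability(100, 100, 18/37))    # well under 1%
```

Comparing `win_probability(10, 10, 18/37)` with `win_probability(100, 100, 18/37)` shows how the disadvantaged player's chances shrink as both capitals grow.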

From these examples we can see that when both players start with the same amount of money and the coin is fair, each of them can win with the same probability of 50%. But as soon as the coin is slightly biased, the probability of the disadvantaged player winning decreases; this disparity becomes more pronounced as the initial capital of both players increases.

In practice, things are even worse than that: the casino starts with practically unlimited resources, which implies that a disadvantaged player does not stand any chance of winning whenever $p\le\frac{1}{2}$. Should $p$ be slightly higher than $\frac{1}{2}$, though, his chances to continue winning indefinitely are computed by taking the limit of $P_a$ as $b\to\infty$, resulting in

$$1-\left(\frac{1-p}{p}\right)^{a}. \tag{2}$$
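Numerically (a small Python sketch, with illustrative values $p=0.51$ and $a=20$ of our choosing), one can watch the finite-$b$ winning probability approach this limit:

```python
p, a = 0.51, 20                 # illustrative values: p slightly above 1/2
r = (1 - p) / p                 # the ratio (1-p)/p
exact_limit = 1 - r**a          # formula (2)
for b in (10, 100, 1000):
    finite = (1 - r**a) / (1 - r**(a + b))   # finite-opponent probability
    print(b, finite)
print(exact_limit)
```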

### Distribution of the Game’s Duration

To find the distribution of the random number of rounds it takes to complete a game with two players, each with finite resources, given that currently the first player has $i$ dollars in his pocket, we introduce the corresponding probability generating function $F_i(z)$. Similarly to how we solved for $P_i$, we can now set up the equations

$$F_i(z) = z\left[\,p\,F_{i+1}(z) + (1-p)\,F_{i-1}(z)\,\right] \tag{3}$$

where the multiplication by $z$ is necessary to account for the extra round played. We also know that $F_0(z)=F_{a+b}(z)=1$ (a probability generating function identically equal to 1 indicates that there are no more rounds to be played).

We now solve these equations.

Substituting specific values for the parameters yields the corresponding result.

Using this function, we can now easily find and display the corresponding distribution of the game’s duration.
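Instead of expanding the probability generating function, the same distribution can be obtained by stepping the walk’s probability vector; a Python sketch (the function name is ours) with absorbing barriers at $0$ and $a+b$:

```python
import numpy as np

def duration_distribution(a, b, p, max_rounds):
    """Return P(game ends exactly at round k) for k = 1..max_rounds."""
    n = a + b
    prob = np.zeros(n + 1)
    prob[a] = 1.0                              # the player holds a dollars
    dist = []
    for _ in range(max_rounds):
        new = np.zeros(n + 1)
        new[0], new[n] = prob[0], prob[n]      # absorbed mass stays put
        new[2:n + 1] += p * prob[1:n]          # winning a round: i -> i+1
        new[0:n - 1] += (1 - p) * prob[1:n]    # losing a round: i -> i-1
        dist.append(new[0] + new[n] - prob[0] - prob[n])   # newly absorbed
        prob = new
    return np.array(dist)

d = duration_distribution(10, 10, 0.5, 2000)
mean = (np.arange(1, 2001) * d).sum()
print(mean)       # close to the classical fair-game answer a*b = 100
```

The mean of roughly $a\,b$ rounds for a fair game with equal capitals illustrates just how long these games can last.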

One can see that these games can easily last hundreds of rounds (things get worse when $p$ approaches $\frac{1}{2}$).

Sometimes it is sufficient to know only the mean and variance of this distribution.

The correctness of these formulas can be verified by recalling that the mean of a distribution is given by $F'(z)$ evaluated at $z=1$, and its variance similarly by $F''(1)+F'(1)-F'(1)^2$.
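The mean can also be checked independently: differentiating (3) at $z=1$ gives the first-step equations $E_i = 1 + p\,E_{i+1} + (1-p)\,E_{i-1}$ with $E_0=E_{a+b}=0$, which the following Python sketch (our own `mean_duration`) solves directly:

```python
import numpy as np

def mean_duration(a, b, p):
    """Expected number of rounds, by solving the linear system
    E_i = 1 + p E_{i+1} + (1-p) E_{i-1},  E_0 = E_{a+b} = 0."""
    n = a + b
    A = np.zeros((n - 1, n - 1))
    rhs = np.ones(n - 1)
    for k in range(n - 1):            # row k corresponds to fortune i = k+1
        A[k, k] = 1.0
        if k + 1 < n - 1:
            A[k, k + 1] = -p          # coefficient of E_{i+1}
        if k - 1 >= 0:
            A[k, k - 1] = -(1 - p)    # coefficient of E_{i-1}
    E = np.linalg.solve(A, rhs)
    return E[a - 1]

# Fair game: the classical answer is a*b rounds on average.
print(mean_duration(10, 10, 0.5))     # ≈ 100
```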

The two formulas can be extended to the case of $p=\frac{1}{2}$ by taking the corresponding limit.

This yields the following.

#### The Case of an Infinitely Rich Adversary

The general solution of (3) is a linear combination of $\lambda_1^{\,i}$ and $\lambda_2^{\,i}$, where $\lambda_{1,2}=\dfrac{1\pm\sqrt{1-4p(1-p)z^2}}{2pz}$ are the roots of the corresponding characteristic equation. By analyzing these two local variables in the definition of $F_i(z)$, we can see that $\lambda_1>1$ and $\lambda_2<1$, for any $0<z<1$. As $b\to\infty$, we thus get

$$F_a(z)=\left(\frac{1-\sqrt{1-4p(1-p)z^{2}}}{2pz}\right)^{\!a}. \tag{4}$$

When $p>\frac{1}{2}$, the probability that the game finishes in finite time is $\left(\frac{1-p}{p}\right)^{a}$, according to (2). The *conditional* probability generating function of the total number of rounds, given that the game does not continue indefinitely, is thus

$$\left(\frac{1-\sqrt{1-4p(1-p)z^{2}}}{2(1-p)z}\right)^{\!a}. \tag{5}$$

The more interesting case is when $p\le\frac{1}{2}$, which implies that the game cannot continue indefinitely. Expanding (4) in powers of $z$ would enable us to plot the corresponding distribution, as was done at the beginning of this section; we leave this as an exercise. Here are the mean and variance of the total number of rounds.
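For $p<\frac{1}{2}$ the mean works out to $\frac{a}{1-2p}$ (the derivative of (4) at $z=1$); the following Python sketch checks it by stepping the walk’s probability vector, with illustrative values $a=5$ and $p=0.45$ of our choosing:

```python
import numpy as np

def ruin_time_distribution(a, p, max_rounds):
    """P(ruin at round k) against an infinitely rich adversary; states
    above a + max_rounds are unreachable, so the truncation is exact."""
    top = a + max_rounds + 1
    prob = np.zeros(top)
    prob[a] = 1.0
    dist = []
    for _ in range(max_rounds):
        new = np.zeros(top)
        new[0] = prob[0]                       # ruined mass stays put
        new[2:] += p * prob[1:-1]              # winning a round: i -> i+1
        new[0:-2] += (1 - p) * prob[1:-1]      # losing a round: i -> i-1
        dist.append(new[0] - prob[0])          # newly ruined
        prob = new
    return np.array(dist)

d = ruin_time_distribution(5, 0.45, 5000)
mean = (np.arange(1, 5001) * d).sum()
print(mean)       # theoretical value: a/(1-2p) = 50
```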

### Brownian Motion

In this last section, we explore yet another interesting limit of the gambler’s ruin problem. First we assume that the game is happening in “real time” and that $n$ rounds are played during each hour (or any other unit of time). We also assume that

$$p=\frac{1}{2}\left(1+\frac{\mu}{\sigma\sqrt{n}}\right) \tag{6}$$

where $\mu$ and $\sigma$ are two constants, and that $\frac{\sigma}{\sqrt{n}}$ dollars are bet in each round instead of the original \$1. In the limit as $n\to\infty$, this results in a new process called Brownian motion that runs in real (i.e. continuous) time $t$ and whose values constitute a continuous, even though nowhere differentiable, function of $t$ [2].

To visualize its possible course, we choose specific values of $\mu$, $\sigma$, and $n$, and display a random realization of the process; that is, we display its current values—the amount of money in the gambler’s pocket—as a function of $t$ during the next 50 hours, taking the initial value to be \$100.
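The article’s Mathematica simulation is not reproduced here; a Python sketch (with our own illustrative choices $\mu=-2$, $\sigma=2$, $n=100$) generates one such realization:

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma, n = -2.0, 2.0, 100      # illustrative drift, sigma, rounds/hour
hours, x0 = 50, 100.0              # 50 hours of play, starting with $100

p = 0.5 * (1 + mu / (sigma * np.sqrt(n)))    # formula (6)
bet = sigma / np.sqrt(n)                     # dollars bet in each round
steps = np.where(rng.random(hours * n) < p, bet, -bet)
path = x0 + np.cumsum(steps)                 # fortune after each round
print(path[-1])    # the gambler's worth after 50 hours (one realization)
```

Plotting `path` against `np.arange(1, hours * n + 1) / n` reproduces the kind of jagged trajectory the article displays.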

Assuming the gambler can borrow money and continue playing even when the value of the process (his current worth) is negative, it is easy to see that this value at time $t$ is a normally distributed random variable with mean $x_0+\mu t$ and variance $\sigma^2 t$, where $x_0$ is the initial value, and $\mu$ and $\sigma^2$ (introduced earlier) are the so-called *drift* and *diffusion* coefficients, respectively. This assertion can be proved by realizing that the moment-generating function of the net win (equal to 1 with probability $p$ and to $-1$ with probability $1-p$) in one round of the original game is $p\,e^{u}+(1-p)\,e^{-u}$. To adjust this to the new game (winning or losing $\frac{\sigma}{\sqrt{n}}$ dollars, instead of \$1), replace each $u$ of the last expression by $\frac{\sigma u}{\sqrt{n}}$. To get the moment-generating function of the total win (or loss, when negative) accumulated during a time interval of length $t$, we have to add the results of $nt$ independent rounds, which corresponds to raising the previous expression to the power of $nt$. Finally, we need to take the limit as $n\to\infty$.

The result, $\exp\!\left(\mu t\,u+\frac{\sigma^{2}t\,u^{2}}{2}\right)$, is the moment-generating function of a normal distribution with expected value $\mu t$ (the exponent’s coefficient of $u$) and variance of $\sigma^2 t$ (twice the exponent’s coefficient of $u^2$). Since it represents only the total net *winnings* up to time $t$, we have to add the initial value of $x_0$ to get the distribution of the gambler’s net worth at time $t$; this will increase the expected value to $x_0+\mu t$.
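This limiting step can be checked numerically; the Python sketch below (with arbitrary test values for $\mu$, $\sigma$, $t$, $u$ of our choosing) raises the one-round moment-generating function to the power $nt$ and compares it with the normal one:

```python
import numpy as np

mu, sigma, t, u = -2.0, 2.0, 1.0, 0.3     # arbitrary test values
for n in (10**2, 10**4, 10**6):
    p = 0.5 * (1 + mu / (sigma * np.sqrt(n)))        # formula (6)
    one_round = p * np.exp(sigma * u / np.sqrt(n)) \
        + (1 - p) * np.exp(-sigma * u / np.sqrt(n))  # one-round MGF
    approx = one_round ** (n * t)                    # nt independent rounds
    print(n, approx)
exact = np.exp(mu * t * u + 0.5 * sigma**2 * t * u**2)
print(exact)       # the limit: MGF of Normal(mu*t, sigma^2*t)
```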

#### Inverse Gaussian Distribution

We now establish the distribution of $T$, the first passage time through the value of zero, which is the time at which the gambler has to stop playing, since he has lost all his money. We assume that he starts with the amount $x_0$ and that $\mu<0$; reaching the value of zero in finite time is thus guaranteed.

We proceed similarly to deriving the distribution of his net worth: we first find the moment-generating function of $T$ in terms of the old game, with $n$ rounds played every hour, and then take the limit as $n\to\infty$.

To get the moment-generating function of the number of rounds of the old game needed to go broke, we simply replace $z$ in (4) by $e^{u}$, and $a$ by $\frac{x_0\sqrt{n}}{\sigma}$, the initial amount expressed in the new monetary units. To convert the result into hours, each consisting of $n$ rounds, we need to replace $u$ by $\frac{u}{n}$.

The resulting moment-generating function can be rewritten in the following simple form, using only two parameters: $\alpha\equiv-\frac{x_0\mu}{\sigma^{2}}$ and $\beta\equiv\frac{x_0^{2}}{\sigma^{2}}$ (since $\mu<0$, both are positive):

$$\exp\!\left(\alpha-\sqrt{\alpha^{2}-2\beta u}\right). \tag{7}$$

The corresponding probability density function is

$$f(t)=\sqrt{\frac{\beta}{2\pi t^{3}}}\;\exp\!\left(-\frac{(\alpha t-\beta)^{2}}{2\beta t}\right) \tag{8}$$

for $t>0$ and $0$ otherwise. To prove this, we compute the moment-generating function of (8), which agrees with (7).
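The symbolic verification was done in Mathematica; as a stand-in, the following numerical Python sketch (with arbitrary positive test values $\alpha$, $\beta$, and a $u<\frac{\alpha^{2}}{2\beta}$) integrates (8) and compares the result with (7), along with the total probability and the mean:

```python
import numpy as np

alpha, beta, u = 2.0, 3.0, 0.2            # arbitrary test values
t = np.linspace(1e-6, 60.0, 600_001)      # fine grid over the support
dt = t[1] - t[0]
pdf = np.sqrt(beta / (2 * np.pi * t**3)) \
    * np.exp(-(alpha * t - beta)**2 / (2 * beta * t))      # density (8)
mgf_numeric = (pdf * np.exp(u * t)).sum() * dt  # integral of e^{ut} f(t)
mgf_exact = np.exp(alpha - np.sqrt(alpha**2 - 2 * beta * u))  # formula (7)
print(pdf.sum() * dt)                 # total probability, close to 1
print(mgf_numeric, mgf_exact)
print((t * pdf).sum() * dt)           # mean, close to beta/alpha = 1.5
```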

The corresponding distribution is called the inverse Gaussian; it has mean $\frac{\beta}{\alpha}$ and variance $\frac{\beta^{2}}{\alpha^{3}}$.

Its distribution function of cumulative probabilities can be expressed in terms of the distribution function of the standard normal distribution (denoted $\Phi$) as:

$$\Phi\!\left(\frac{\alpha t-\beta}{\sqrt{\beta t}}\right)+e^{2\alpha}\,\Phi\!\left(-\frac{\alpha t+\beta}{\sqrt{\beta t}}\right). \tag{9}$$

We verify this by differentiating (9) with respect to $t$, getting an expression that agrees with (8).

Applying the inverse Gaussian distribution to our previous example, we now display the distribution of the time it takes the corresponding Brownian motion to reach zero for the first time.

One last issue: to find the probability density function of $T$ when $\mu=0$, we need to take the limit of (8) as $\alpha\to 0$, which yields $\sqrt{\frac{\beta}{2\pi t^{3}}}\;e^{-\beta/(2t)}$.

This distribution has an infinite mean and variance.

### Conclusion

The study of first passage times and their distributions is an important area of research. The author hopes that this article has provided a useful introduction to the topic.

### References

[1] W. Feller, *An Introduction to Probability Theory and Its Applications*, Vol. I, 3rd ed., New York: Wiley, 1968.

[2] M. S. Bartlett, *An Introduction to Stochastic Processes, with Special Reference to Methods and Applications*, 2nd ed., Cambridge: Cambridge University Press, 1966.

J. Vrbik, “Gambler’s Ruin and First Passage Time,” *The Mathematica Journal*, 2012. dx.doi.org/doi:10.3888/tmj.14-7.

### About the Author

**Jan Vrbik**

*Department of Mathematics, Brock University*
*500 Glenridge Ave., St. Catharines*
*Ontario, Canada, L2S 3A1*

*jvrbik@brocku.ca*