Coupon collector's problem

[Figure: number of coupons n vs. the expected number of trials E(T) needed to collect them all.]

In probability theory, the coupon collector's problem refers to the mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given brand of cereal contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as $\Theta(n\log(n))$.[note 1] For example, when n = 50 it takes about 225[note 2] trials on average to collect all 50 coupons.
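As a quick numerical check of the n·H_n formula derived below, the following Python sketch (illustrative only; the function names and the number of simulated runs are arbitrary choices) compares the exact expectation with a Monte Carlo estimate for n = 50.

```python
import random

def expected_draws(n: int) -> float:
    """Exact expectation n * H_n for the coupon collector's problem."""
    return n * sum(1.0 / i for i in range(1, n + 1))

def simulate_draws(n: int) -> int:
    """Draw coupons uniformly with replacement until all n types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

if __name__ == "__main__":
    n, runs = 50, 10_000
    exact = expected_draws(n)                                  # 224.9603... for n = 50
    estimate = sum(simulate_draws(n) for _ in range(runs)) / runs
    print(f"exact E(T) = {exact:.4f}, simulated E(T) ~ {estimate:.4f}")
```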


Solution

Via generating functions

By definition of Stirling numbers of the second kind, the probability that exactly T draws are needed is

$$\Pr(T = t) = \frac{n!}{n^t} \left\{ {t-1 \atop n-1} \right\}.$$

By manipulating the generating function of the Stirling numbers, we can explicitly calculate all moments of T:

$$f_k(x) = \sum_{t=0}^{\infty} \left\{ {t \atop k} \right\} x^t = \frac{x^k}{(1-x)(1-2x)\cdots(1-kx)}.$$

In general, the k-th moment is $\operatorname{E}(T^k) = n!\,\bigl(D^k g\bigr)\!\left(\tfrac{1}{n}\right)$, where $g(x) = x\,f_{n-1}(x)$ and $D$ is the derivative operator $x\,\frac{d}{dx}$. For example, the 0th moment is

$$\sum_t \Pr(T = t) = n!\, g\!\left(\tfrac{1}{n}\right) = n!\cdot\frac{(1/n)^{n}}{\left(1-\tfrac{1}{n}\right)\left(1-\tfrac{2}{n}\right)\cdots\left(1-\tfrac{n-1}{n}\right)} = n!\cdot\frac{1}{n!} = 1,$$

and the 1st moment is $n!\,(D g)\!\left(\tfrac{1}{n}\right)$, which can be explicitly evaluated to $n H_n$, etc.
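A small numerical cross-check of this distribution (a sketch only; `stirling2` and `prob_T_equals` are ad hoc helpers, and the truncation point `t_max` is chosen loosely) tabulates Pr(T = t) from the Stirling-number formula and confirms that the probabilities sum to 1 and that the mean matches n·H_n:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(t: int, k: int) -> int:
    """Stirling number of the second kind S(t, k) via the standard recurrence."""
    if t == k:
        return 1
    if t == 0 or k == 0:
        return 0
    return k * stirling2(t - 1, k) + stirling2(t - 1, k - 1)

def prob_T_equals(t: int, n: int) -> float:
    """Pr(T = t) = n!/n^t * S(t-1, n-1)."""
    return factorial(n) / n**t * stirling2(t - 1, n - 1)

n, t_max = 6, 200   # truncate the infinite sum; the tail beyond t_max is negligible here
ts = range(n, t_max)
probs = [prob_T_equals(t, n) for t in ts]
mean = sum(t * p for t, p in zip(ts, probs))
harmonic = n * sum(1.0 / i for i in range(1, n + 1))
print(sum(probs))       # ~ 1.0
print(mean, harmonic)   # both ~ 14.7 for n = 6
```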

Calculating the expectation

Let time T be the number of draws needed to collect all n coupons, and let $t_i$ be the time to collect the i-th coupon after i − 1 coupons have been collected. Then $T = t_1 + \cdots + t_n$. Think of T and $t_i$ as random variables. Observe that the probability of collecting a new coupon is $p_i = \frac{n-(i-1)}{n} = \frac{n-i+1}{n}$. Therefore, $t_i$ has geometric distribution with expectation $\frac{1}{p_i} = \frac{n}{n-i+1}$. By the linearity of expectations we have:

$$\operatorname{E}(T) = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) = \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) = n \cdot H_n.$$

Here $H_n$ is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

$$\operatorname{E}(T) = n \cdot H_n = n\log n + \gamma n + \frac{1}{2} + O(1/n),$$

where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.

Using the Markov inequality to bound the desired probability:

$$\operatorname{P}(T \geq c\, n H_n) \le \frac{1}{c}.$$

The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then:

$$\operatorname{E}(T_k) = \operatorname{E}(t_{k+1} + t_{k+2} + \cdots + t_n) = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n-k}\right) = n \cdot H_{n-k}.$$

And when $k = 0$ we get the original result.
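As an illustration of the partial-collection formula (a sketch; `remaining_expected_draws` is an ad hoc name), the expected number of additional draws given k coupons already in hand is n·H_{n−k}:

```python
def remaining_expected_draws(n: int, k: int) -> float:
    """Expected additional draws to finish the collection, given k of n coupon
    types already collected: n * H_{n-k}."""
    return n * sum(1.0 / i for i in range(1, n - k + 1))

# For n = 50: starting from scratch vs. needing only the last coupon.
print(remaining_expected_draws(50, 0))    # ~ 224.96, the full coupon collector expectation
print(remaining_expected_draws(50, 49))   # 50.0: the last coupon alone takes n draws on average
```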

Calculating the variance

Using the independence of the random variables $t_i$, we obtain:

$$\operatorname{Var}(T) = \operatorname{Var}(t_1) + \cdots + \operatorname{Var}(t_n) = \frac{1-p_1}{p_1^2} + \cdots + \frac{1-p_n}{p_n^2} < \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2} = n^2 \sum_{i=1}^{n} \frac{1}{i^2} < \frac{\pi^2 n^2}{6},$$

since $\sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6}$ (see Basel problem).

Bound the desired probability using the Chebyshev inequality:

$$\operatorname{P}\left(|T - n H_n| \geq c\, n\right) \le \frac{\pi^2}{6c^2}.$$
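For concreteness, a short sketch (assuming the formulas above; `coupon_variance` is an ad hoc name) that evaluates the exact variance and compares it with the π²n²/6 bound:

```python
from math import pi

def coupon_variance(n: int) -> float:
    """Exact Var(T) = sum_i (1 - p_i) / p_i^2 with p_i = (n - i + 1) / n,
    which simplifies to n^2 * sum_{j=1}^{n} 1/j^2  -  n * H_n."""
    return (n**2 * sum(1.0 / j**2 for j in range(1, n + 1))
            - n * sum(1.0 / j for j in range(1, n + 1)))

n = 50
print(coupon_variance(n))     # ~ 3837.9 for n = 50
print(pi**2 * n**2 / 6)       # the upper bound, ~ 4112.3
```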

Tail estimates

A stronger estimate for the upper tail can be obtained as follows. Let $Z_i^r$ denote the event that the $i$-th coupon was not picked in the first $r$ trials. Then

$$P\left[Z_i^r\right] = \left(1 - \frac{1}{n}\right)^r \le e^{-r/n}.$$

Thus, for $r = \beta n \log n$, we have $P\left[Z_i^r\right] \le e^{-(\beta n \log n)/n} = n^{-\beta}$. Via a union bound over the coupons, we obtain

$$P\left[T > \beta n \log n\right] = P\left[\bigcup_i Z_i^{\beta n \log n}\right] \le n \cdot P\left[Z_1^{\beta n \log n}\right] \le n \cdot n^{-\beta} = n^{-\beta+1}.$$
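An empirical sanity check of this bound (a sketch only; the values of n, β, and the number of runs are arbitrary) can be done by simulation:

```python
import random
from math import log

def simulate_draws(n: int) -> int:
    """Draw uniformly with replacement until all n coupon types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, beta, runs = 20, 2.0, 50_000
threshold = beta * n * log(n)
tail = sum(simulate_draws(n) > threshold for _ in range(runs)) / runs
print(tail)              # empirical P(T > beta * n * log n), comfortably below the bound
print(n ** (1 - beta))   # the union bound n^(1 - beta) = 0.05 here
```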

Extensions and generalizations

Paul Erdős and Alfréd Rényi proved the limit theorem for the distribution of T:

$$\operatorname{P}(T < n\log n + cn) \to e^{-e^{-c}}, \quad \text{as } n \to \infty,$$

which is a Gumbel distribution. A simple proof by martingales is in the next section.

Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let $T_m$ be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies

$$\operatorname{E}(T_m) = n\log n + (m-1)\, n\log\log n + O(n), \quad \text{as } n \to \infty.$$

Here m is fixed. When m = 1 we get the earlier formula for the expectation.

In the general case of a nonuniform probability distribution, where coupon i is drawn with probability $p_i$, the expected time to complete the collection is, according to Flajolet et al., [2]

$$\operatorname{E}(T) = \int_0^\infty \left(1 - \prod_{i=1}^{n}\left(1 - e^{-p_i t}\right)\right) dt.$$

This is equal to

$$\operatorname{E}(T) = \sum_{q=0}^{n-1} (-1)^{n-1-q} \sum_{|J|=q} \frac{1}{1 - P_J},$$

where the inner sum ranges over all sets J of q coupons and $P_J$ denotes the probability of getting any coupon in the set J.
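To illustrate the nonuniform case (a sketch under the notation above; `expected_time_nonuniform` is an ad hoc name), the inclusion-exclusion sum, written equivalently over nonempty subsets K with sign $(-1)^{|K|+1}$, can be evaluated directly for small n and checked against a simulation:

```python
import random
from itertools import combinations

def expected_time_nonuniform(p: list[float]) -> float:
    """E(T) for coupon probabilities p via inclusion-exclusion:
    sum over nonempty subsets K of (-1)^(|K|+1) / P_K, where P_K = sum of p_i for i in K."""
    n = len(p)
    total = 0.0
    for size in range(1, n + 1):
        for K in combinations(range(n), size):
            total += (-1) ** (size + 1) / sum(p[i] for i in K)
    return total

def simulate(p: list[float]) -> int:
    """Draw coupons with the given probabilities until every type has appeared."""
    seen, draws = set(), 0
    while len(seen) < len(p):
        seen.add(random.choices(range(len(p)), weights=p)[0])
        draws += 1
    return draws

p = [0.5, 0.25, 0.125, 0.125]
print(expected_time_nonuniform(p))                           # exact expectation
print(sum(simulate(p) for _ in range(20_000)) / 20_000)      # Monte Carlo estimate
```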

Martingales

This section is based on [3].

Define a discrete random process by letting $N(t)$ be the number of coupons not yet seen after $t$ draws. The random process $N(0), N(1), N(2), \ldots$ is just a sequence generated by a Markov chain with states $n, n-1, \ldots, 1, 0$, and transition probabilities

$$p_{i \to i-1} = \frac{i}{n}, \qquad p_{i \to i} = 1 - \frac{i}{n}.$$

Now define

$$M(t) := N(t)\left(\frac{n}{n-1}\right)^t,$$

then it is a martingale, since

$$\operatorname{E}[M(t+1) \mid M(t)] = \left(\frac{n}{n-1}\right)^{t+1} \operatorname{E}[N(t+1) \mid N(t)] = \left(\frac{n}{n-1}\right)^{t+1} N(t)\,\frac{n-1}{n} = \left(\frac{n}{n-1}\right)^{t} N(t) = M(t).$$

Consequently, we have $\operatorname{E}[N(t)] = n\left(\frac{n-1}{n}\right)^t$. In particular, we have a limit law $\lim_{n\to\infty}\operatorname{E}[N(n\log n + cn)] = e^{-c}$ for any fixed $c$. This suggests to us a limit law for $T$. More generally, each $\binom{N(t)}{k}\left(\frac{n}{n-k}\right)^t$ is a martingale process, which allows us to calculate all moments of $N(t)$. For example,

$$\operatorname{E}\left[\binom{N(t)}{2}\right] = \binom{n}{2}\left(\frac{n-2}{n}\right)^t,$$

giving another limit law $\lim_{n\to\infty}\operatorname{E}\left[\binom{N(n\log n + cn)}{2}\right] = \frac{e^{-2c}}{2}$. More generally,

$$\operatorname{E}\left[\binom{N(t)}{k}\right] = \binom{n}{k}\left(\frac{n-k}{n}\right)^t, \qquad \lim_{n\to\infty}\operatorname{E}\left[\binom{N(n\log n + cn)}{k}\right] = \frac{e^{-kc}}{k!},$$

meaning that $N(n\log n + cn)$ has all moments converging to constants, so it converges to some probability distribution on $\{0, 1, 2, \ldots\}$. Let $N_\infty$ be the random variable with the limit distribution. We have

$$\operatorname{E}\left[\binom{N_\infty}{k}\right] = \frac{e^{-kc}}{k!}.$$

By introducing a new variable $x$, we can sum up both sides explicitly:

$$\operatorname{E}\left[\sum_{k} \binom{N_\infty}{k} x^k\right] = \sum_{k} \frac{e^{-kc}}{k!}\, x^k = e^{x e^{-c}},$$

giving $\operatorname{E}\left[(1+x)^{N_\infty}\right] = e^{x e^{-c}}$.

At the limit $x \to -1$, we have $\operatorname{P}(N_\infty = 0) = e^{-e^{-c}}$, which is precisely what the limit law states.

By taking the derivative multiple times, we find that $\operatorname{P}(N_\infty = k) = \frac{e^{-kc}}{k!}\, e^{-e^{-c}}$, which is a Poisson distribution with mean $e^{-c}$.
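A simulation sketch of this limit law (the parameter choices here are arbitrary): draw for t = round(n log n + cn) steps and compare the distribution of the number of unseen coupons with a Poisson(e^{−c}) distribution.

```python
import random
from collections import Counter
from math import exp, factorial, log

def unseen_after(n: int, t: int) -> int:
    """Number of coupon types not yet seen after t uniform draws, i.e. N(t)."""
    return n - len({random.randrange(n) for _ in range(t)})

n, c, runs = 1000, 1.0, 2_000
t = round(n * log(n) + c * n)
counts = Counter(unseen_after(n, t) for _ in range(runs))
lam = exp(-c)   # limit law: N(t) is approximately Poisson with mean e^{-c}
for k in range(5):
    empirical = counts[k] / runs
    poisson = lam**k * exp(-lam) / factorial(k)
    print(k, round(empirical, 4), round(poisson, 4))
```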


Notes

  1. Here and throughout this article, "log" refers to the natural logarithm rather than a logarithm to some other base. The use of Θ here invokes big O notation.
  2. E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) ≈ 224.9603, the expected number of trials to collect all 50 coupons. The approximation $n\log n + \gamma n + \tfrac12$ for this expected number gives in this case $50\log 50 + 50\gamma + \tfrac12 \approx 195.6011 + 28.8608 + 0.5 \approx 224.962$.


References

  1. Mitzenmacher, Michael; Upfal, Eli (2017). Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis (2nd ed.). Cambridge, United Kingdom: Cambridge University Press. Theorem 5.13. ISBN 978-1-107-15488-9. OCLC 960841613.
  2. Flajolet, Philippe; Gardy, Danièle; Thimonier, Loÿs (1992). "Birthday paradox, coupon collectors, caching algorithms and self-organizing search". Discrete Applied Mathematics. 39 (3): 207–229. CiteSeerX 10.1.1.217.5965. doi:10.1016/0166-218x(92)90177-c.
  3. Kan, N. D. (2005-05-01). "Martingale approach to the coupon collection problem". Journal of Mathematical Sciences. 127 (1): 1737–1744. doi:10.1007/s10958-005-0134-y. ISSN 1573-8795.