Geometric distribution

Probability mass function (plot: Geometric pmf.svg)
Cumulative distribution function (plot: Geometric cdf.svg)

In the summary below, the first expression in each row applies to the distribution of the number of trials $X$, supported on $\{1, 2, 3, \dots\}$, and the second to the distribution of the number of failures $Y = X - 1$, supported on $\{0, 1, 2, \dots\}$; where a single expression is given, it applies to both.

Parameters: $0 < p \le 1$, success probability (real)
Support: $k \in \{1, 2, 3, \dots\}$ (trials); $k \in \{0, 1, 2, \dots\}$ (failures)
PMF: $(1-p)^{k-1}p$; $(1-p)^k p$
CDF: $1-(1-p)^{\lfloor x \rfloor}$ for $x \ge 1$, $0$ for $x < 1$; $1-(1-p)^{\lfloor x \rfloor + 1}$ for $x \ge 0$, $0$ for $x < 0$
Mean: $\frac{1}{p}$; $\frac{1-p}{p}$
Median: $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil$; $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1$ (not unique if $\frac{-1}{\log_2(1-p)}$ is an integer)
Mode: $1$; $0$
Variance: $\frac{1-p}{p^2}$
Skewness: $\frac{2-p}{\sqrt{1-p}}$
Excess kurtosis: $6 + \frac{p^2}{1-p}$
Entropy: $\frac{-(1-p)\log(1-p) - p\log p}{p}$
MGF: $\frac{pe^t}{1-(1-p)e^t}$ for $t < -\ln(1-p)$; $\frac{p}{1-(1-p)e^t}$ for $t < -\ln(1-p)$
CF: $\frac{pe^{it}}{1-(1-p)e^{it}}$; $\frac{p}{1-(1-p)e^{it}}$
PGF: $\frac{pz}{1-(1-p)z}$; $\frac{p}{1-(1-p)z}$
Fisher information: $\frac{1}{p^2(1-p)}$
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

- the probability distribution of the number $X$ of Bernoulli trials needed to get one success, supported on $\{1, 2, 3, \dots\}$;
- the probability distribution of the number $Y = X - 1$ of failures before the first success, supported on $\{0, 1, 2, \dots\}$.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of $X$); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires $k$ independent trials, each with success probability $p$. If the probability of success on each trial is $p$, then the probability that the $k$-th trial is the first success is

$\Pr(X = k) = (1-p)^{k-1}p$ for $k = 1, 2, 3, 4, \dots$

The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

$\Pr(Y = k) = \Pr(X = k + 1) = (1-p)^k p$ for $k = 0, 1, 2, 3, \dots$
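To make the two conventions concrete, here is a minimal Python sketch (the function names are ours, for illustration only):

```python
def pmf_trials(k: int, p: float) -> float:
    """P(X = k): first success occurs on trial k, for k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

def pmf_failures(k: int, p: float) -> float:
    """P(Y = k): k failures before the first success, for k = 0, 1, 2, ..."""
    return (1 - p) ** k * p

# The two conventions are related by Y = X - 1.
p = 0.25
assert all(abs(pmf_trials(k, p) - pmf_failures(k - 1, p)) < 1e-12 for k in range(1, 50))
```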

The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.[1]:210

Definition

The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. Its probability mass function depends on its parameterization and support. When supported on $\{1, 2, 3, \dots\}$, the probability mass function is $\Pr(X = k) = (1-p)^{k-1}p$, where $k$ is the number of trials and $p$ is the probability of success in each trial.[2]:260–261

The support may also be $\{0, 1, 2, \dots\}$, defining $Y = X - 1$. This alters the probability mass function into $\Pr(Y = k) = (1-p)^k p$, where $k$ is the number of failures before the first success.[3]:66

An alternative parameterization of the distribution gives the probability mass function $\Pr(Y = k) = \frac{q^k}{(1+q)^{k+1}}$, where $q = \frac{1-p}{p}$.[1]:208–209

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a $\tfrac{1}{6}$ chance of success. The number of rolls needed follows a geometric distribution with $p = \tfrac{1}{6}$.
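The example can be checked by simulation; a short sketch, assuming nothing beyond the Python standard library:

```python
import random

random.seed(42)

def rolls_until_one() -> int:
    """Roll a fair six-sided die until a 1 appears; return the total number of rolls."""
    rolls = 1
    while random.randint(1, 6) != 1:
        rolls += 1
    return rolls

samples = [rolls_until_one() for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the theoretical mean 1/p = 6
```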

Properties

Memorylessness

The geometric distribution is the only memoryless discrete probability distribution.[4] It is the discrete version of the same property found in the exponential distribution.[1]:228 The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.

Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables.[5] Expressed in terms of conditional probability, the two definitions are

$\Pr(X > m + n \mid X > n) = \Pr(X > m)$

and

$\Pr(Y > m + n \mid Y \ge n) = \Pr(Y > m),$

where $m$ and $n$ are natural numbers, $X$ is a geometrically distributed random variable defined over $\{1, 2, 3, \dots\}$, and $Y$ is a geometrically distributed random variable defined over $\{0, 1, 2, \dots\}$. Note that these definitions are not equivalent for discrete random variables; $Y$ does not satisfy the first equation and $X$ does not satisfy the second.
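A quick numerical check of both definitions (the helper names are ours):

```python
p, m, n = 0.3, 4, 7

# Survival functions: P(X > k) = (1-p)^k on {1,2,...}; P(Y > k) = (1-p)^(k+1) on {0,1,...}
sf_X = lambda k: (1 - p) ** k
sf_Y = lambda k: (1 - p) ** (k + 1)

# First definition: P(X > m + n | X > n) = P(X > m)
assert abs(sf_X(m + n) / sf_X(n) - sf_X(m)) < 1e-12

# Second definition: P(Y > m + n | Y >= n) = P(Y > m), using P(Y >= n) = (1-p)^n
assert abs(sf_Y(m + n) / (1 - p) ** n - sf_Y(m)) < 1e-12
```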

Moments and cumulants

The expected value and variance of a geometrically distributed random variable $X$ defined over $\{1, 2, 3, \dots\}$ are[2]:261

$\operatorname{E}(X) = \frac{1}{p}, \qquad \operatorname{Var}(X) = \frac{1-p}{p^2}.$

With a geometrically distributed random variable $Y$ defined over $\{0, 1, 2, \dots\}$, the expected value changes into

$\operatorname{E}(Y) = \frac{1-p}{p},$

while the variance stays the same.[6]:114–115

For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is $\frac{1}{1/6} = 6$ and the average number of failures is $\frac{1 - 1/6}{1/6} = 5$.

The moment generating function of the geometric distribution when defined over $\{1, 2, 3, \dots\}$ and $\{0, 1, 2, \dots\}$ respectively is[7][6]:114

$M_X(t) = \frac{pe^t}{1-(1-p)e^t}, \qquad M_Y(t) = \frac{p}{1-(1-p)e^t}, \qquad t < -\ln(1-p).$

The moments for the number of failures before the first success are given by

$\operatorname{E}(Y^n) = \sum_{k=0}^{\infty} k^n (1-p)^k p = p\,\operatorname{Li}_{-n}(1-p) \quad (n \ne 0),$

where $\operatorname{Li}_{-n}(\cdot)$ is the polylogarithm function.[8]
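As a numerical cross-check of the polylogarithm formula, a sketch assuming the mpmath library (polylog and nsum are its standard functions):

```python
import mpmath

p = 0.4
for n in (1, 2, 3):
    closed = p * mpmath.polylog(-n, 1 - p)                      # E[Y^n] = p * Li_{-n}(1-p)
    direct = mpmath.nsum(lambda k: k**n * (1 - p)**k * p, [0, mpmath.inf])
    print(n, closed, direct)                                    # the two columns agree
```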

The cumulant generating function of the geometric distribution defined over $\{0, 1, 2, \dots\}$ is[1]:216

$K(t) = \ln p - \ln\left(1 - (1-p)e^t\right).$

The cumulants $\kappa_r$ satisfy the recursion

$\kappa_{r+1} = q\,\frac{d\kappa_r}{dq}, \qquad q = 1 - p,$

when defined over $\{0, 1, 2, \dots\}$.[1]:216

Proof of expected value

Consider the expected value $\operatorname{E}(X)$ of $X$ as above, i.e. the average number of trials until a success. The first trial either succeeds with probability $p$, or fails with probability $1-p$. If it fails, the remaining mean number of trials until a success is identical to the original mean; this follows from the fact that all trials are independent.

From this we get the formula

$\operatorname{E}(X) = p \cdot 1 + (1-p)\left(1 + \operatorname{E}(X)\right),$

which, when solved for $\operatorname{E}(X)$, gives

$\operatorname{E}(X) = \frac{1}{p}.$

The expected number of failures $Y$ can be found from the linearity of expectation, $\operatorname{E}(Y) = \operatorname{E}(X - 1) = \operatorname{E}(X) - 1 = \frac{1}{p} - 1 = \frac{1-p}{p}$. It can also be shown in the following way:

$\operatorname{E}(Y) = \sum_{k=0}^{\infty} k(1-p)^k p = p(1-p)\sum_{k=1}^{\infty} k(1-p)^{k-1} = p(1-p)\left(-\frac{d}{dp}\sum_{k=0}^{\infty}(1-p)^k\right) = p(1-p)\left(-\frac{d}{dp}\frac{1}{p}\right) = p(1-p)\frac{1}{p^2} = \frac{1-p}{p}.$

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.
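The series manipulation can be mirrored symbolically; a sketch assuming sympy:

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Mirror the series argument: for 0 < p < 1, sum_{k>=0} (1-p)^k = 1/p, so
# E[Y] = p(1-p) * (-d/dp)[1/p] = (1-p)/p.
EY = p * (1 - p) * (-sp.diff(1 / p, p))
print(sp.simplify(EY))   # (1 - p)/p
```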

Summary statistics

The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, $\frac{1}{p}$ or $\frac{1-p}{p}$ when defined over $\{1, 2, 3, \dots\}$ or $\{0, 1, 2, \dots\}$ respectively.

The median of the geometric distribution is $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil$ when defined over $\{1, 2, 3, \dots\}$[9] and $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1$ when defined over $\{0, 1, 2, \dots\}$ (not unique if $\frac{-1}{\log_2(1-p)}$ is an integer).[3]:69

The mode of the geometric distribution is the first value in the support set. This is 1 when defined over $\{1, 2, 3, \dots\}$ and 0 when defined over $\{0, 1, 2, \dots\}$.[3]:69

The skewness of the geometric distribution is $\frac{2-p}{\sqrt{1-p}}$.[6]:115

The kurtosis of the geometric distribution is $9 + \frac{p^2}{1-p}$.[6]:115 The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, 3.[10]:217 Therefore, the excess kurtosis of the geometric distribution is $6 + \frac{p^2}{1-p}$. Since $\frac{p^2}{1-p} \ge 0$, the excess kurtosis is always positive, so the distribution is leptokurtic.[3]:69 In other words, the tail of a geometric distribution decays more slowly than that of a Gaussian.[10]:217
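These closed forms can be compared against scipy.stats.geom, which implements the distribution on $\{1, 2, 3, \dots\}$; a sketch:

```python
from scipy.stats import geom

p = 0.3
mean, var, skew, ekurt = geom(p).stats(moments='mvsk')  # scipy's geom lives on {1, 2, 3, ...}

print(float(mean), 1 / p)                        # mean: 1/p
print(float(var), (1 - p) / p**2)                # variance: (1-p)/p^2
print(float(skew), (2 - p) / (1 - p) ** 0.5)     # skewness: (2-p)/sqrt(1-p)
print(float(ekurt), 6 + p**2 / (1 - p))          # excess kurtosis: 6 + p^2/(1-p)
```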

Entropy and Fisher's information

Entropy (geometric distribution, failures before success)

Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is

$\Pr(Y = k) = (1-p)^k p, \qquad k = 0, 1, 2, \dots$

The entropy $H(Y)$ for this distribution is defined as

$H(Y) = -\sum_{k=0}^{\infty} \Pr(Y = k)\log \Pr(Y = k) = \frac{-(1-p)\log(1-p) - p\log p}{p}.$

The entropy increases as the probability $p$ decreases, reflecting greater uncertainty as success becomes rarer.
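A numerical check of the closed form against direct (truncated) summation; the truncation point is our choice:

```python
import numpy as np

def entropy_closed_form(p: float) -> float:
    """Entropy (in nats) of the geometric distribution on {0, 1, 2, ...}."""
    return (-(1 - p) * np.log(1 - p) - p * np.log(p)) / p

def entropy_by_summation(p: float, kmax: int = 2000) -> float:
    k = np.arange(kmax)
    log_pmf = k * np.log1p(-p) + np.log(p)       # log of (1-p)^k * p
    return -np.sum(np.exp(log_pmf) * log_pmf)    # truncated; the tail mass is negligible

for p in (0.1, 0.5, 0.9):
    print(p, entropy_closed_form(p), entropy_by_summation(p))
```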

Fisher's information (geometric distribution, failures before success)

Fisher information measures the amount of information that an observable random variable carries about an unknown parameter $p$. For the geometric distribution (failures before the first success), the Fisher information with respect to $p$ is given by

$I(p) = \frac{1}{p^2(1-p)}.$

Proof: The log-likelihood of a single observation $k$ is $\ell(p; k) = \ln\left((1-p)^k p\right) = k\ln(1-p) + \ln p$. The score is $\frac{\partial \ell}{\partial p} = \frac{1}{p} - \frac{k}{1-p}$, and the second derivative is $\frac{\partial^2 \ell}{\partial p^2} = -\frac{1}{p^2} - \frac{k}{(1-p)^2}$. Taking the negative expectation, with $\operatorname{E}(Y) = \frac{1-p}{p}$, gives

$I(p) = \frac{1}{p^2} + \frac{(1-p)/p}{(1-p)^2} = \frac{1}{p^2} + \frac{1}{p(1-p)} = \frac{1}{p^2(1-p)}.$

Fisher information increases as $p$ decreases, indicating that rarer successes provide more information about the parameter $p$.
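Fisher information equals the variance of the score, which suggests a Monte Carlo check; a sketch assuming NumPy (note that numpy's geometric sampler counts trials, not failures):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3

# numpy's geometric counts trials on {1, 2, ...}; subtract 1 to get failures Y on {0, 1, ...}
y = rng.geometric(p, size=1_000_000) - 1

score = 1 / p - y / (1 - p)        # d/dp log f(Y; p) for the failures parameterization
print(score.var())                 # sample variance of the score ~ Fisher information
print(1 / (p**2 * (1 - p)))        # closed form: 1/(p^2 (1-p)), about 15.873 here
```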

Entropy (geometric distribution, trials until success)

For the geometric distribution modeling the number of trials until the first success, the probability mass function is

$\Pr(X = k) = (1-p)^{k-1}p, \qquad k = 1, 2, 3, \dots$

The entropy for this distribution is the same as that of the version modeling the number of failures before the first success, since entropy is invariant under the deterministic shift $X = Y + 1$:

$H(X) = \frac{-(1-p)\log(1-p) - p\log p}{p}.$

Fisher's information (geometric distribution, trials until success)

Fisher information for the geometric distribution modeling the number of trials until the first success is given by

$I(p) = \frac{1}{p^2(1-p)}.$

Proof: With log-likelihood $\ell(p; k) = (k-1)\ln(1-p) + \ln p$, the score is $\frac{1}{p} - \frac{k-1}{1-p}$ and the second derivative is $-\frac{1}{p^2} - \frac{k-1}{(1-p)^2}$. Taking the negative expectation, with $\operatorname{E}(X - 1) = \frac{1-p}{p}$, gives

$I(p) = \frac{1}{p^2} + \frac{(1-p)/p}{(1-p)^2} = \frac{1}{p^2(1-p)},$

the same value as for the failures parameterization.

General properties

- The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables. For example, the hundreds digit D has the probability distribution $\Pr(D = d) = \frac{q^{100d}\left(1 - q^{100}\right)}{1 - q^{1000}}$, where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
- Suppose $0 < r < 1$, and for $k = 1, 2, 3, \dots$ the random variable $X_k$ has a Poisson distribution with expected value $r^k/k$. Then $\sum_{k=1}^{\infty} k X_k$ has a geometric distribution taking values in $\{0, 1, 2, \dots\}$, with expected value r/(1 − r).[citation needed]
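The last property can be checked by simulation; a sketch assuming NumPy, with a truncation level kmax of our choosing (the Poisson rates $r^k/k$ decay geometrically, so the truncation error is negligible):

```python
import numpy as np

rng = np.random.default_rng(2)
r, kmax, n = 0.5, 60, 100_000

k = np.arange(1, kmax + 1)
X = rng.poisson(r**k / k, size=(n, kmax))   # X_k ~ Poisson(r^k / k), independent
Z = (k * X).sum(axis=1)                     # Z = sum_k k * X_k

print(Z.mean(), r / (1 - r))                # mean should be r/(1-r) = 1 here
print((Z == 0).mean(), 1 - r)               # P(Z = 0) = p = 1 - r for a geometric on {0,1,...}
```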

Statistical inference

The true parameter $p$ of an unknown geometric distribution can be inferred through estimators and conjugate distributions.

Method of moments

Provided they exist, the first $l$ moments of a probability distribution can be estimated from a sample $x_1, \dots, x_n$ using the formula

$\widehat{m_i} = \frac{1}{n}\sum_{j=1}^{n} x_j^i,$

where $\widehat{m_i}$ is the $i$th sample moment and $1 \le i \le l$.[16]:349–350 Estimating $\operatorname{E}(X)$ with $\widehat{m_1}$ gives the sample mean, denoted $\bar{x}$. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for $p$ gives the estimators $\hat{p} = \frac{1}{\bar{x}}$ and $\hat{p} = \frac{1}{\bar{x}+1}$ when supported on $\{1, 2, 3, \dots\}$ and $\{0, 1, 2, \dots\}$ respectively. These estimators are biased since $\operatorname{E}\left(\frac{1}{\bar{x}}\right) > \frac{1}{\operatorname{E}(\bar{x})} = p$ as a result of Jensen's inequality.[17]:53–54
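The Jensen bias is easy to see by simulation; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.2, 10, 100_000

x = rng.geometric(p, size=(reps, n))   # samples supported on {1, 2, 3, ...}
p_hat = 1 / x.mean(axis=1)             # method-of-moments estimate for each sample

print(p_hat.mean())   # noticeably above 0.2: E[1/x̄] > 1/E[x̄] = p, by Jensen's inequality
```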

Maximum likelihood estimation

The maximum likelihood estimator of $p$ is the value that maximizes the likelihood function given a sample.[16]:308 By finding the zero of the derivative of the log-likelihood function when the distribution is defined over $\{1, 2, 3, \dots\}$, the maximum likelihood estimator can be found to be $\hat{p} = \frac{1}{\bar{x}}$, where $\bar{x}$ is the sample mean.[18] If the domain is $\{0, 1, 2, \dots\}$, then the estimator shifts to $\hat{p} = \frac{1}{\bar{x}+1}$. As previously discussed in § Method of moments, these estimators are biased.

Regardless of the domain, the bias is equal to

$b \equiv \operatorname{E}\left[\hat{p} - p\right] = \frac{p(1-p)}{n},$

which yields the bias-corrected maximum likelihood estimator[citation needed]

$\hat{p}^{*} = \hat{p} - \hat{b} = \hat{p} - \frac{\hat{p}(1-\hat{p})}{n}.$
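A sketch of the bias correction in practice, assuming NumPy (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, reps = 0.2, 10, 100_000

x = rng.geometric(p, size=(reps, n))
p_mle = 1 / x.mean(axis=1)
p_star = p_mle - p_mle * (1 - p_mle) / n   # subtract the plug-in bias estimate p(1-p)/n

print(p_mle.mean(), p_star.mean())         # the corrected estimator sits much closer to p = 0.2
```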

Bayesian inference

In Bayesian inference, the parameter $p$ is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.[17]:167 If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution, and it is called the conjugate distribution. In particular, if a $\mathrm{Beta}(\alpha, \beta)$ prior is selected, then the posterior, after observing samples $k_1, \dots, k_n \in \{1, 2, 3, \dots\}$, is[19]

$p \sim \mathrm{Beta}\left(\alpha + n,\ \beta + \sum_{i=1}^{n}(k_i - 1)\right).$

Alternatively, if the samples are in $\{0, 1, 2, \dots\}$, the posterior distribution is[20]

$p \sim \mathrm{Beta}\left(\alpha + n,\ \beta + \sum_{i=1}^{n} k_i\right).$

Since the expected value of a $\mathrm{Beta}(\alpha, \beta)$ distribution is $\frac{\alpha}{\alpha + \beta}$,[11]:145 as $\alpha$ and $\beta$ approach zero, the posterior mean approaches its maximum likelihood estimate.
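A sketch of the conjugate update, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
p_true, n = 0.3, 50
k = rng.geometric(p_true, size=n)                 # observations in {1, 2, 3, ...}

a0, b0 = 1.0, 1.0                                 # Beta(1, 1), i.e. a uniform prior
posterior = beta(a0 + n, b0 + k.sum() - n)        # Beta(alpha + n, beta + sum(k_i - 1))

print(posterior.mean())                           # concentrates near p_true as n grows
```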

Random variate generation

The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to $p$. However, the number of random variables needed is also geometrically distributed and the algorithm slows as $p$ decreases.[21]:498

Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable $E$ can become geometrically distributed with parameter $p$ through $\left\lceil -E / \ln(1-p) \right\rceil$. In turn, $E$ can be generated from a standard uniform random variable $U$, altering the formula into $\left\lceil \ln(U) / \ln(1-p) \right\rceil$.[21]:499–500[22]
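A sketch of the constant-time generator based on the second formula, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)

def geometric_trials(p: float, size: int) -> np.ndarray:
    """ceil(ln(U)/ln(1-p)) is geometric on {1, 2, 3, ...} (inverse-transform method)."""
    u = rng.random(size)
    return np.ceil(np.log(u) / np.log(1 - p)).astype(np.int64)

samples = geometric_trials(0.2, 100_000)
print(samples.mean())   # close to 1/p = 5
```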

Applications

The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution.[23] In stochastic processes, the Yule–Furry process is geometrically distributed.[24] The distribution also arises when modeling the lifetime of a device in discrete contexts.[25] It has also been used to fit data including modeling patients spreading COVID-19.[26]


References

  1. Johnson, Norman L.; Kemp, Adrienne W.; Kotz, Samuel (2005-08-19). Univariate Discrete Distributions. Wiley Series in Probability and Statistics (1st ed.). Wiley. doi:10.1002/0471715816. ISBN 978-0-471-27246-5.
  2. Nagel, Werner; Steyer, Rolf (2017-04-04). Probability and Conditional Expectation: Fundamentals for the Empirical Sciences. Wiley Series in Probability and Statistics (1st ed.). Wiley. doi:10.1002/9781119243496. ISBN 978-1-119-24352-6.
  3. Chattamvelli, Rajan; Shanmugam, Ramalingam (2020). Discrete Distributions in Engineering and the Applied Sciences. Synthesis Lectures on Mathematics & Statistics. Cham: Springer International Publishing. doi:10.1007/978-3-031-02425-2. ISBN 978-3-031-01297-6.
  4. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. London: Springer London. p. 50. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1.
  5. Weisstein, Eric W. "Memoryless". mathworld.wolfram.com. Retrieved 2024-07-25.
  6. Forbes, Catherine; Evans, Merran; Hastings, Nicholas; Peacock, Brian (2010-11-29). Statistical Distributions (1st ed.). Wiley. doi:10.1002/9780470627242. ISBN 978-0-470-39063-4.
  7. Bertsekas, Dimitri P.; Tsitsiklis, John N. (2008). Introduction to Probability. Optimization and Computation Series (2nd ed.). Belmont: Athena Scientific. p. 235. ISBN 978-1-886529-23-6.
  8. Weisstein, Eric W. "Geometric Distribution". MathWorld. Retrieved 2024-07-13.
  9. Aggarwal, Charu C. (2024). Probability and Statistics for Machine Learning: A Textbook. Cham: Springer Nature Switzerland. p. 138. doi:10.1007/978-3-031-53282-5. ISBN 978-3-031-53281-8.
  10. Chan, Stanley (2021). Introduction to Probability for Data Science (1st ed.). Michigan Publishing. ISBN 978-1-60785-747-1.
  11. Lovric, Miodrag, ed. (2011). International Encyclopedia of Statistical Science (1st ed.). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-04898-2. ISBN 978-3-642-04897-5.
  12. Gallager, R.; van Voorhis, D. (March 1975). "Optimal source codes for geometrically distributed integer alphabets (Corresp.)". IEEE Transactions on Information Theory. 21 (2): 228–230. doi:10.1109/TIT.1975.1055357. ISSN 0018-9448.
  13. Lisman, J. H. C.; Zuylen, M. C. A. van (March 1972). "Note on the generation of most probable frequency distributions". Statistica Neerlandica. 26 (1): 19–23. doi:10.1111/j.1467-9574.1972.tb00152.x. ISSN 0039-0402.
  14. Pitman, Jim (1993). Probability. New York, NY: Springer New York. p. 372. doi:10.1007/978-1-4612-4374-8. ISBN 978-0-387-94594-1.
  15. Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David (1 June 1995). "On the minimum of independent geometrically distributed random variables". Statistics & Probability Letters. 23 (4): 313–326. doi:10.1016/0167-7152(94)00130-Z. hdl:2060/19940028569. S2CID 1505801.
  16. Evans, Michael; Rosenthal, Jeffrey (2023). Probability and Statistics: The Science of Uncertainty (2nd ed.). Macmillan Learning. ISBN 978-1429224628.
  17. Held, Leonhard; Sabanés Bové, Daniel (2020). Likelihood and Bayesian Inference: With Applications in Biology and Medicine. Statistics for Biology and Health. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-60792-3. ISBN 978-3-662-60791-6.
  18. Siegrist, Kyle (2020-05-05). "7.3: Maximum Likelihood". Statistics LibreTexts. Retrieved 2024-06-20.
  19. Fink, Daniel. "A Compendium of Conjugate Priors". CiteSeerX 10.1.1.157.5540.
  20. "3. Conjugate families of distributions" (PDF). Archived (PDF) from the original on 2010-04-08.
  21. Devroye, Luc (1986). Non-Uniform Random Variate Generation. New York, NY: Springer New York. doi:10.1007/978-1-4613-8643-8. ISBN 978-1-4613-8645-2.
  22. Knuth, Donald Ervin (1997). The Art of Computer Programming. Vol. 2 (3rd ed.). Reading, Mass: Addison-Wesley. p. 136. ISBN 978-0-201-89683-1.
  23. Daskin, Mark S. (2021). Bite-Sized Operations Management. Synthesis Lectures on Operations Research and Applications. Cham: Springer International Publishing. p. 127. doi:10.1007/978-3-031-02493-1. ISBN 978-3-031-01365-2.
  24. Madhira, Sivaprasad; Deshmukh, Shailaja (2023). Introduction to Stochastic Processes Using R. Singapore: Springer Nature Singapore. p. 449. doi:10.1007/978-981-99-5601-2. ISBN 978-981-99-5600-5.
  25. Gupta, Rakesh; Gupta, Shubham; Ali, Irfan (2023), Garg, Harish (ed.), "Some Discrete Parametric Markov–Chain System Models to Analyze Reliability", Advances in Reliability, Failure and Risk Analysis, Singapore: Springer Nature Singapore, pp. 305–306, doi:10.1007/978-981-19-9909-3_14, ISBN 978-981-19-9908-6, retrieved 2024-07-13.
  26. Polymenis, Athanase (2021-10-01). "An application of the geometric distribution for assessing the risk of infection with SARS-CoV-2 by location". Asian Journal of Medical Sciences. 12 (10): 8–11. doi:10.3126/ajms.v12i10.38783. ISSN 2091-0576.