Discrete-stable distribution

The discrete-stable distributions [1] are a class of probability distributions with the property that a sum of several random variables drawn from the distribution is, under appropriate scaling, distributed according to the same family. They are the discrete analogue of the continuous-stable distributions.

The discrete-stable distributions have been used in numerous fields, in particular in the study of scale-free networks such as the internet, social networks [2] and semantic networks. [3]

Both the discrete and continuous classes of stable distribution have properties such as infinite divisibility, power law tails and unimodality.

The best-known discrete-stable distribution is the Poisson distribution, which is a special case. [4] It is the only discrete-stable distribution for which the mean and all higher-order moments are finite.

Definition

The discrete-stable distributions are defined [5] through their probability-generating function

G(s | ν, a) = exp(-a(1 - s)^ν), 0 ≤ s ≤ 1.

In the above, a > 0 is a scale parameter and 0 < ν ≤ 1 describes the power-law behaviour such that when 0 < ν < 1,

P(N) ~ 1/N^(ν+1) as N → ∞.

When ν = 1 the distribution becomes the familiar Poisson distribution with mean a.
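Setting ν = 1 in the generating function G(s | ν, a) = exp(-a(1 - s)^ν) makes the Poisson special case explicit:

```latex
G(s \mid 1, a) = e^{-a(1-s)} = e^{-a} e^{as}
             = \sum_{N=0}^{\infty} \frac{a^{N} e^{-a}}{N!}\, s^{N},
```

so the coefficient of s^N is the Poisson probability a^N e^(-a)/N!, i.e. a Poisson distribution with mean a.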

The characteristic function of a discrete-stable distribution has the form: [6]

φ(t; ν, a) = exp(-a(1 - e^(it))^ν), with a > 0 and 0 < ν ≤ 1.

Again, when ν = 1 the distribution becomes the Poisson distribution with mean a.

The original distribution is recovered through repeated differentiation of the generating function:

P(N | ν, a) = (1/N!) [d^N G(s | ν, a) / ds^N] evaluated at s = 0.

A closed-form expression using elementary functions for the probability distribution of the discrete-stable distributions is not known except in the Poisson case, in which

P(N | 1, a) = a^N e^(-a) / N!.

Expressions do exist, however, using special functions for the case ν = 1/2 [7] (in terms of Bessel functions) and ν = 1/3 [8] (in terms of hypergeometric functions).
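Although no elementary closed form exists for general ν, the probabilities can be computed numerically from the generating function G(s) = exp(-a(1 - s)^ν): expanding the exponential as a power series in (1 - s)^ν and extracting the coefficient of s^N via the generalised binomial theorem gives a convergent sum. The following sketch is an illustrative computation, not a routine taken from the references:

```python
import math

def discrete_stable_pmf(N, nu, a, terms=120):
    """P(N | nu, a): coefficient of s**N in exp(-a*(1-s)**nu).

    Uses exp(-a(1-s)^nu) = sum_k (-a)^k (1-s)^(nu*k) / k!, where the
    coefficient of s^N in (1-s)^x is (-1)^N * C(x, N) with C the
    generalised binomial coefficient.
    """
    total = 0.0
    coeff = 1.0                      # running value of (-a)^k / k!
    for k in range(terms):
        x = nu * k
        binom = 1.0                  # generalised binomial coefficient C(x, N)
        for j in range(N):
            binom *= (x - j) / (j + 1)
        total += coeff * (-1) ** N * binom
        coeff *= -a / (k + 1)
    return total

# nu = 1 must reproduce the Poisson pmf a^N e^{-a} / N!
a = 2.0
for N in range(6):
    poisson = a ** N * math.exp(-a) / math.factorial(N)
    assert abs(discrete_stable_pmf(N, 1.0, a) - poisson) < 1e-9

# For any nu: P(0) = G(0) = e^{-a} and P(1) = G'(0) = nu * a * e^{-a}
assert abs(discrete_stable_pmf(0, 0.5, a) - math.exp(-a)) < 1e-9
assert abs(discrete_stable_pmf(1, 0.5, a) - 0.5 * a * math.exp(-a)) < 1e-9
```

The series converges quickly because the k-th term is bounded by a^k/k! times a polynomial in k, so a modest number of terms suffices for small and moderate N.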

As compound probability distributions

The entire class of discrete-stable distributions can be formed as Poisson compound probability distributions where the mean, λ, of a Poisson distribution is itself a random variable with a probability density function (PDF). When the PDF of the mean is a one-sided continuous-stable distribution with stability parameter ν and scale parameter [a cos(πν/2)]^(1/ν), the resultant distribution is [9] discrete-stable with index ν and scale parameter a.

Formally, this is written:

P(N | ν, a) = ∫₀^∞ (λ^N e^(-λ) / N!) p(λ; ν, 1, [a cos(πν/2)]^(1/ν), 0) dλ,

where p(λ; ν, 1, [a cos(πν/2)]^(1/ν), 0) is the PDF of a one-sided continuous-stable distribution with symmetry parameter β = 1 and location parameter δ = 0.
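The compound construction also gives a direct way to simulate discrete-stable variates: draw the Poisson mean from a one-sided stable law, then draw a Poisson count. The sketch below uses Kanter's representation of a positive stable random variable S with Laplace transform E[exp(-tS)] = exp(-t^ν), and a hand-rolled Poisson sampler (the standard library has none); this is an illustrative method chosen here, not one prescribed by the references:

```python
import math
import random

def positive_stable(nu, rng):
    """Kanter's representation: S >= 0 with E[exp(-t*S)] = exp(-t**nu), 0 < nu < 1."""
    u = rng.uniform(0.0, math.pi)
    w = rng.expovariate(1.0)
    return (math.sin(nu * u) / math.sin(u) ** (1.0 / nu)) * \
           (math.sin((1.0 - nu) * u) / w) ** ((1.0 - nu) / nu)

def poisson_variate(lam, rng):
    """Knuth's method, with a normal approximation for rare huge means."""
    if lam > 500.0:
        return max(0, round(rng.gauss(lam, math.sqrt(lam))))
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def discrete_stable_variate(nu, a, rng):
    """N with pgf exp(-a*(1-s)**nu): a Poisson count with a stable-distributed mean."""
    lam = a ** (1.0 / nu) * positive_stable(nu, rng)  # E[exp(-t*lam)] = exp(-a*t**nu)
    return poisson_variate(lam, rng)

rng = random.Random(42)
nu, a, n = 0.6, 1.0, 100_000
counts = [discrete_stable_variate(nu, a, rng) for _ in range(n)]
p0 = counts.count(0) / n   # estimates P(0) = e^(-a)
p1 = counts.count(1) / n   # estimates P(1) = nu * a * e^(-a)
```

Mixing the Poisson mean over the stable law reproduces the generating function exactly, since E[exp(-(1-s)λ)] = exp(-a(1-s)^ν); the estimates p0 and p1 should therefore be close to e^(-a) and νae^(-a) respectively.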

A more general result [8] states that compounding any discrete-stable distribution with index ν with a one-sided continuous-stable distribution with index α results in a discrete-stable distribution with index να, reducing the power-law index of the original distribution by a factor of α.

In other words,

P(N | να, a) = ∫₀^∞ P(N | ν, λ) p(λ; α, 1, [a cos(πα/2)]^(1/α), 0) dλ.
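The composition rule can be seen directly at the level of generating functions. If λ is drawn from a one-sided stable law with Laplace transform E[e^(-tλ)] = exp(-a t^α), then mixing the discrete-stable generating function exp(-λ(1 - s)^ν) over λ gives

```latex
\mathbb{E}_{\lambda}\!\left[ e^{-\lambda (1-s)^{\nu}} \right]
  = \exp\!\left( -a \left[ (1-s)^{\nu} \right]^{\alpha} \right)
  = \exp\!\left( -a (1-s)^{\nu\alpha} \right),
```

which is the generating function of a discrete-stable distribution with index να.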

In the Poisson limit

In the limit ν → 1, the discrete-stable distributions behave [9] like a Poisson distribution with mean a for small N; for sufficiently large N, however, the power-law tail dominates.

The convergence of sums of i.i.d. random variates with power-law tails P(N) ~ 1/N^(ν+1) to a discrete-stable distribution is extraordinarily slow [10] when ν is close to 1, the limit being the Poisson distribution when ν = 1 and the discrete-stable distribution with index ν when ν < 1.
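This behaviour can be checked numerically by extracting the probabilities from the power-series expansion of the generating function exp(-a(1 - s)^ν), an illustrative computation rather than one from the references: for ν close to 1 the small-N probabilities are close to Poisson, while far in the tail they exceed the Poisson probabilities by many orders of magnitude.

```python
import math

def discrete_stable_pmf(N, nu, a, terms=120):
    """P(N | nu, a): coefficient of s**N in exp(-a*(1-s)**nu), via the
    generalised binomial expansion of each (1-s)**(nu*k) term."""
    total, coeff = 0.0, 1.0          # coeff tracks (-a)^k / k!
    for k in range(terms):
        binom = 1.0                  # generalised binomial coefficient C(nu*k, N)
        for j in range(N):
            binom *= (nu * k - j) / (j + 1)
        total += coeff * (-1) ** N * binom
        coeff *= -a / (k + 1)
    return total

nu, a = 0.95, 1.0
poisson = lambda N: a ** N * math.exp(-a) / math.factorial(N)

# Small N: close to the Poisson pmf (e.g. P(1) = nu*a*e^-a vs a*e^-a)
small_gap = max(abs(discrete_stable_pmf(N, nu, a) - poisson(N)) for N in range(4))

# Large N: the power-law tail dwarfs the factorially decaying Poisson tail
tail_ratio = discrete_stable_pmf(20, nu, a) / poisson(20)
```

With these parameters small_gap is on the order of a couple of percent, while tail_ratio is astronomically large, reflecting the dominance of the power-law tail even for ν very close to 1.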


References

  1. Steutel, F. W.; van Harn, K. (1979). "Discrete Analogues of Self-Decomposability and Stability" (PDF). Annals of Probability. 7 (5): 893–899. doi:10.1214/aop/1176994950.
  2. Barabási, Albert-László (2003). Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life. New York, NY: Plume.
  3. Steyvers, M.; Tenenbaum, J. B. (2005). "The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth". Cognitive Science. 29 (1): 41–78. arXiv:cond-mat/0110012. doi:10.1207/s15516709cog2901_3. PMID 21702767. S2CID 6000627.
  4. Renshaw, Eric (2015). Stochastic Population Processes: Analysis, Approximations, Simulations. Oxford: Oxford University Press. ISBN 978-0-19-106039-7.
  5. Hopcraft, K. I.; Jakeman, E.; Matthews, J. O. (2002). "Generation and monitoring of a discrete stable random process". Journal of Physics A. 35 (49): L745–L752. Bibcode:2002JPhA...35L.745H. doi:10.1088/0305-4470/35/49/101.
  6. Slamova, Lenka; Klebanov, Lev. "Modeling financial returns by discrete stable distributions" (PDF). International Conference Mathematical Methods in Economics. Retrieved 2023-07-07.
  7. Matthews, J. O.; Hopcraft, K. I.; Jakeman, E. (2003). "Generation and monitoring of discrete stable random processes using multiple immigration population models". Journal of Physics A. 36 (46): 11585–11603. Bibcode:2003JPhA...3611585M. doi:10.1088/0305-4470/36/46/004.
  8. Lee, W. H. (2010). Continuous and discrete properties of stochastic processes (PhD thesis). The University of Nottingham.
  9. Lee, W. H.; Hopcraft, K. I.; Jakeman, E. (2008). "Continuous and discrete stable processes". Physical Review E. 77 (1): 011109. Bibcode:2008PhRvE..77a1109L. doi:10.1103/PhysRevE.77.011109. PMID 18351820.
  10. Hopcraft, K. I.; Jakeman, E.; Matthews, J. O. (2004). "Discrete scale-free distributions and associated limit theorems". Journal of Physics A. 37 (48): L635–L642. Bibcode:2004JPhA...37L.635H. doi:10.1088/0305-4470/37/48/L01.
