Delaporte distribution

Delaporte
Probability mass function
(plot: DelaportePMF.svg)
When α and β are 0, the distribution is the Poisson.
When λ is 0, the distribution is the negative binomial.
Cumulative distribution function
(plot: DelaporteCDF.svg)
When α and β are 0, the distribution is the Poisson.
When λ is 0, the distribution is the negative binomial.
Parameters   λ > 0 (fixed mean), α > 0, β > 0 (parameters of variable mean)
Support   k ∈ {0, 1, 2, …}
PMF   P(X = k) = Σ_{i=0}^{k} Γ(α + i) β^i λ^{k−i} e^{−λ} / (Γ(α) i! (1 + β)^{α+i} (k − i)!)
CDF   P(X ≤ k) = Σ_{j=0}^{k} P(X = j)
Mean   λ + αβ
Mode
Variance   λ + αβ(1 + β)
Skewness   See Properties
Ex. kurtosis   See Properties
MGF   e^{λ(e^t − 1)} (1 − β(e^t − 1))^{−α}
The Delaporte distribution is a discrete probability distribution that has received attention in actuarial science.[1][2] It can be defined as the convolution of a negative binomial distribution with a Poisson distribution.[2] Just as the negative binomial distribution can be viewed as a Poisson distribution whose mean parameter is itself a random variable with a gamma distribution, the Delaporte distribution can be viewed as a compound distribution based on a Poisson distribution whose mean parameter has two components: a fixed component with parameter λ, and a gamma-distributed variable component with parameters α and β.[3] The distribution is named for Pierre Delaporte, who analyzed it in relation to automobile accident claim counts in 1959,[4] although it appeared in a different form as early as 1934 in a paper by Rolf von Lüders,[5] where it was called the Formel II distribution.[2]
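The compound construction above translates directly into a sampler: draw the gamma-distributed variable component, add the fixed component λ, and use the sum as the rate of a single Poisson draw. A minimal pure-Python sketch of this mixed-Poisson view (function names are illustrative, not from the source):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method; adequate for the small rates used here.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def delaporte_sample(lam, alpha, beta, rng):
    # Mixed-Poisson view: the Poisson rate is the fixed component lam
    # plus a Gamma(shape=alpha, scale=beta) variable component.
    return poisson_sample(lam + rng.gammavariate(alpha, beta), rng)

rng = random.Random(42)
lam, alpha, beta = 4.0, 3.0, 2.0
samples = [delaporte_sample(lam, alpha, beta, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Theory: mean = lam + alpha*beta = 10, variance = lam + alpha*beta*(1+beta) = 22
```

Setting α (or β) to 0 removes the gamma component and the sampler reduces to a plain Poisson(λ); setting λ to 0 leaves the gamma-mixed Poisson, i.e. the negative binomial, matching the limiting cases noted in the infobox.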


Properties

The skewness of the Delaporte distribution is:

  (λ + αβ(1 + 3β + 2β²)) / (λ + αβ(1 + β))^{3/2}

The excess kurtosis of the distribution is:

  (λ + αβ(1 + 7β + 12β² + 6β³)) / (λ + αβ(1 + β))²
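These expressions follow from the cumulants of the convolution: every cumulant of the Poisson component equals λ, and the negative binomial component contributes the αβ(…) terms, so skewness is κ₃/κ₂^{3/2} and excess kurtosis is κ₄/κ₂². A small sketch evaluating them (the function name is illustrative), with the Poisson limit α = 0 as a sanity check:

```python
def delaporte_shape(lam, alpha, beta):
    """Skewness and excess kurtosis of Delaporte(lam, alpha, beta),
    built from the cumulants of the Poisson + negative binomial sum."""
    k2 = lam + alpha * beta * (1 + beta)
    k3 = lam + alpha * beta * (1 + 3 * beta + 2 * beta ** 2)
    k4 = lam + alpha * beta * (1 + 7 * beta + 12 * beta ** 2 + 6 * beta ** 3)
    skew = k3 / k2 ** 1.5
    excess_kurtosis = k4 / k2 ** 2
    return skew, excess_kurtosis

# Poisson limit (alpha = 0): skewness 1/sqrt(lam), excess kurtosis 1/lam.
# Geometric limit (lam = 0, alpha = 1): matches (2-p)/sqrt(1-p) and
# 6 + p^2/(1-p) with p = 1/(1+beta).
```

In the λ = 0, α = 1 case the formulas reduce to the familiar geometric-distribution skewness and excess kurtosis, which is a convenient cross-check of the coefficients.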


References

  1. Panjer, Harry H. (2006). "Discrete Parametric Distributions". In Teugels, Jozef L.; Sundt, Bjørn (eds.). Encyclopedia of Actuarial Science. John Wiley & Sons. doi:10.1002/9780470012505.tad027. ISBN 978-0-470-01250-5.
  2. Johnson, Norman Lloyd; Kemp, Adrienne W.; Kotz, Samuel (2005). Univariate Discrete Distributions (3rd ed.). John Wiley & Sons. pp. 241–242. ISBN 978-0-471-27246-5.
  3. Vose, David (2008). Risk Analysis: A Quantitative Guide (3rd, illustrated ed.). John Wiley & Sons. pp. 618–619. ISBN 978-0-470-51284-5. LCCN 2007041696.
  4. Delaporte, Pierre J. (1960). "Quelques problèmes de statistiques mathématiques posés par l'Assurance Automobile et le Bonus pour non sinistre" [Some problems of mathematical statistics as related to automobile insurance and no-claims bonus]. Bulletin Trimestriel de l'Institut des Actuaires Français (in French). 227: 87–102.
  5. von Lüders, Rolf (1934). "Die Statistik der seltenen Ereignisse" [The statistics of rare events]. Biometrika (in German). 26 (1–2): 108–128. doi:10.1093/biomet/26.1-2.108. JSTOR 2332055.
