Generalised hyperbolic distribution

Generalised hyperbolic
Parameters: $\lambda$ (real), $\alpha$ (real), $\beta$ asymmetry parameter (real), $\delta$ scale parameter (real), $\mu$ location (real); throughout, $\gamma = \sqrt{\alpha^2 - \beta^2}$
Support: $x \in (-\infty, +\infty)$
PDF: $\dfrac{(\gamma/\delta)^{\lambda}}{\sqrt{2\pi}\,K_{\lambda}(\delta\gamma)} \; e^{\beta(x-\mu)} \; \dfrac{K_{\lambda-1/2}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)}{\left(\sqrt{\delta^{2}+(x-\mu)^{2}}/\alpha\right)^{1/2-\lambda}}$
Mean: $\mu + \dfrac{\delta\beta\,K_{\lambda+1}(\delta\gamma)}{\gamma\,K_{\lambda}(\delta\gamma)}$
Variance: $\dfrac{\delta K_{\lambda+1}(\delta\gamma)}{\gamma K_{\lambda}(\delta\gamma)} + \dfrac{\beta^{2}\delta^{2}}{\gamma^{2}}\left(\dfrac{K_{\lambda+2}(\delta\gamma)}{K_{\lambda}(\delta\gamma)} - \dfrac{K_{\lambda+1}^{2}(\delta\gamma)}{K_{\lambda}^{2}(\delta\gamma)}\right)$
MGF: $e^{\mu z}\left(\dfrac{\gamma^{2}}{\alpha^{2}-(\beta+z)^{2}}\right)^{\lambda/2} \dfrac{K_{\lambda}\!\left(\delta\sqrt{\alpha^{2}-(\beta+z)^{2}}\right)}{K_{\lambda}(\delta\gamma)}$
The generalised hyperbolic distribution (GH) is a continuous probability distribution defined as the normal variance-mean mixture where the mixing distribution is the generalized inverse Gaussian distribution (GIG). Its probability density function (see the box) is given in terms of the modified Bessel function of the second kind, denoted by $K_\lambda$. [1] It was introduced by Ole Barndorff-Nielsen, who studied it in the context of the physics of wind-blown sand. [2]
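The mixture representation gives a direct sampling recipe: draw the mixing variable from a GIG distribution, then draw a conditionally normal variate. A minimal sketch in Python, with illustrative parameter values; the rescaling that maps SciPy's standardised `geninvgauss` onto the $(\lambda, \delta, \gamma)$ parameterisation is spelled out in the comments:

```python
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind
from scipy.stats import geninvgauss

# Illustrative parameter values (not taken from the article):
lam, alpha, beta, delta, mu = 1.0, 2.0, 0.5, 1.0, 0.0
gamma = np.sqrt(alpha**2 - beta**2)

rng = np.random.default_rng(0)
n = 200_000

# Step 1: draw the mixing variable W from GIG(lam, delta, gamma).
# SciPy's geninvgauss(p, b) is the standardised GIG; with p = lam,
# b = delta*gamma and scale = delta/gamma its density is proportional to
# w^(lam-1) * exp(-(delta^2/w + gamma^2*w)/2), the desired mixing law.
w = geninvgauss.rvs(lam, delta * gamma, scale=delta / gamma,
                    size=n, random_state=rng)

# Step 2: conditionally on W = w, X is normal with mean mu + beta*w
# and variance w -- the normal variance-mean mixture.
x = mu + beta * w + np.sqrt(w) * rng.standard_normal(n)

# The sample mean should match the theoretical GH mean
# mu + delta*beta*K_{lam+1}(delta*gamma) / (gamma*K_lam(delta*gamma)).
theoretical_mean = mu + delta * beta * kv(lam + 1, delta * gamma) / (
    gamma * kv(lam, delta * gamma))
print(x.mean(), theoretical_mean)
```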

Properties

Linear transformation

This class is closed under affine transformations. [1]
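Concretely, with the parameterisation $(\lambda, \alpha, \beta, \delta, \mu)$ used above, the closure property is usually stated as follows (a standard result, spelled out here for orientation):

```latex
X \sim \mathrm{GH}(\lambda, \alpha, \beta, \delta, \mu),\ a \neq 0
\quad\Longrightarrow\quad
aX + b \sim \mathrm{GH}\!\left(\lambda,\ \tfrac{\alpha}{|a|},\ \tfrac{\beta}{a},\ \delta|a|,\ a\mu + b\right)
```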

Summation

Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible; since the GH distribution arises as a normal variance-mean mixture with a GIG mixing distribution, they showed that the GH distribution is infinitely divisible as well. [3]

Fails to be convolution-closed

An important point about infinitely divisible distributions is their connection to Lévy processes: at any point in time, the value of a Lévy process has an infinitely divisible distribution. Many well-known families of infinitely divisible distributions are so-called convolution-closed: if the distribution of a Lévy process at one point in time belongs to such a family, then its distribution at every point in time belongs to the same family. For example, a Poisson process is Poisson-distributed at all points in time, and a Brownian motion is normally distributed at all points in time. However, a Lévy process that is generalised hyperbolic at one point in time may fail to be generalised hyperbolic at another point in time. In fact, the generalized Laplace distributions and the normal inverse Gaussian distributions are the only subclasses of the generalized hyperbolic distributions that are closed under convolution. [4]
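The NIG case of this closure property can be checked numerically: two independent NIG variables with common $\alpha, \beta$ sum to an NIG variable whose $\delta$ and $\mu$ add. A sketch with hypothetical parameter values; the mapping of SciPy's standardised `norminvgauss(a, b)` onto NIG$(\alpha, \beta, \delta, \mu)$ is an assumption stated in the comments:

```python
import numpy as np
from scipy.stats import norminvgauss, kstest

# Assumed mapping: NIG(alpha, beta, delta, mu) corresponds to SciPy's
# norminvgauss(a, b, loc, scale) with a = alpha*delta, b = beta*delta,
# scale = delta, loc = mu. Parameter values below are purely illustrative.
alpha, beta = 2.0, 0.5
d1, m1 = 1.0, 0.0
d2, m2 = 1.5, 0.3

rng = np.random.default_rng(1)
n = 100_000
x1 = norminvgauss.rvs(alpha * d1, beta * d1, loc=m1, scale=d1,
                      size=n, random_state=rng)
x2 = norminvgauss.rvs(alpha * d2, beta * d2, loc=m2, scale=d2,
                      size=n, random_state=rng)

# Convolution closure: the sum should again be NIG with the same alpha,
# beta, and with delta = d1 + d2, mu = m1 + m2.
d, m = d1 + d2, m1 + m2
stat, pvalue = kstest(x1 + x2,
                      norminvgauss(alpha * d, beta * d, loc=m, scale=d).cdf)
print(stat)   # small KS distance: the sum matches the target NIG law
```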

As the name suggests, the distribution is of a very general form: it is the superclass of, among others, the Student's t-distribution, the Laplace distribution, the hyperbolic distribution, the normal-inverse Gaussian distribution and the variance-gamma distribution.
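A few commonly cited special and limiting cases, in the same parameterisation (the parameter values below are the standard ones, stated here for orientation rather than taken from the text):

```latex
\begin{aligned}
\lambda = 1 &: \ \text{hyperbolic distribution} \\
\lambda = -\tfrac{1}{2} &: \ \text{normal-inverse Gaussian distribution} \\
\lambda > 0,\ \delta \to 0 &: \ \text{variance-gamma distribution} \\
\lambda = -\tfrac{\nu}{2},\ \delta = \sqrt{\nu},\ \alpha \to 0,\ \beta = 0
  &: \ \text{Student's } t\text{-distribution with } \nu \text{ degrees of freedom}
\end{aligned}
```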

Applications

It is mainly applied in settings that require appreciable probability mass far from the centre of the distribution, which it can model due to its semi-heavy tails, a property the normal distribution does not possess. For this reason the generalised hyperbolic distribution is often used in economics, with particular application to modelling financial markets and risk management.


References

  1. Barndorff-Nielsen, Ole E.; Mikosch, Thomas; Resnick, Sidney I. (2001). Lévy Processes: Theory and Applications. Birkhäuser. ISBN 0-8176-4167-X.
  2. Barndorff-Nielsen, Ole (1977). "Exponentially decreasing distributions for the logarithm of particle size". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 353 (1674). The Royal Society: 401–409. Bibcode:1977RSPSA.353..401B. doi:10.1098/rspa.1977.0041. JSTOR   79167.
  3. Barndorff-Nielsen, O.; Halgreen, Christian (1977). "Infinite Divisibility of the Hyperbolic and Generalized Inverse Gaussian Distributions". Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete. 38: 309–311. doi:10.1007/BF00533162.
  4. Podgórski, Krzysztof; Wallin, Jonas (9 February 2015). "Convolution-invariant subclasses of generalized hyperbolic distributions". Communications in Statistics – Theory and Methods. 45 (1): 98–103. doi:10.1080/03610926.2013.821489.