2-EPT probability density function

In probability theory, the 2-EPT probability density functions are a class of probability density functions on the real line. The class contains the density functions of all distributions whose characteristic functions are strictly proper rational functions (i.e., the degree of the numerator is strictly less than the degree of the denominator).

Definition

A 2-EPT probability density function is a probability density function on $\mathbb{R}$ with a strictly proper rational characteristic function. On either $[0,\infty)$ or $(-\infty,0)$ these probability density functions are exponential-polynomial-trigonometric (EPT) functions.
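
For a concrete instance (an illustrative example, not drawn from the article itself), the standard Laplace density is a 2-EPT density: its characteristic function is a ratio of polynomials with numerator degree 0 and denominator degree 2, hence strictly proper:

$$f(x) = \tfrac{1}{2} e^{-|x|}, \qquad \varphi(u) = \int_{-\infty}^{\infty} e^{iux} f(x)\,dx = \frac{1}{1+u^2}.$$

On $(-\infty,0)$ this density is the EPT function $\tfrac{1}{2}e^{x}$, and on $[0,\infty)$ it is $\tfrac{1}{2}e^{-x}$.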

Any EPT density function on the negative real line can be represented as

$$f(x) = c_N e^{A_N x} b_N, \qquad x < 0,$$

where $e^{A_N x}$ denotes a matrix exponential, $A_N$ and $A_P$ are square matrices, $b_N$ and $b_P$ are column vectors, and $c_N$ and $c_P$ are row vectors. Similarly, the EPT density function on the positive real line is expressed as

$$f(x) = c_P e^{A_P x} b_P, \qquad x \geq 0.$$

The parameterization $(A_N, b_N, c_N, A_P, b_P, c_P)$ is the minimal realization [1] of the 2-EPT function.
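
Since a 2-EPT density is given pointwise by matrix exponentials, it is straightforward to evaluate numerically. The following sketch (assuming NumPy and SciPy are available; the parameter values are a hypothetical choice that reproduces the standard Laplace density, not values taken from the references) evaluates $f(x)$ from a minimal realization:

    import numpy as np
    from scipy.linalg import expm

    def ept_density(x, A_N, b_N, c_N, A_P, b_P, c_P):
        """Evaluate a 2-EPT density f(x) = c e^{Ax} b, choosing the
        (A, b, c) triple for the negative or non-negative half-line."""
        if x < 0:
            return float(c_N @ expm(A_N * x) @ b_N)
        return float(c_P @ expm(A_P * x) @ b_P)

    # Hypothetical parameters: the standard Laplace density
    # f(x) = 0.5 * exp(-|x|) as a 2-EPT function with 1x1 matrices.
    A_N, b_N, c_N = np.array([[1.0]]), np.array([1.0]), np.array([0.5])
    A_P, b_P, c_P = np.array([[-1.0]]), np.array([1.0]), np.array([0.5])

    for x in (-2.0, 0.0, 1.5):
        print(f"f({x}) = {ept_density(x, A_N, b_N, c_N, A_P, b_P, c_P):.6f}")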

The general class of probability measures on $\mathbb{R}$ with (proper) rational characteristic functions consists of mixtures of a point mass at zero (the "delta distribution") and 2-EPT densities. Unlike phase-type and matrix-geometric [2] distributions, 2-EPT probability density functions are defined on the whole real line. The class of 2-EPT densities has been shown to be closed under many operations, and using minimal realizations these calculations have been illustrated for the two-sided framework in Sexton and Hanzon. [3] The most involved operation is the convolution of 2-EPT densities using state-space techniques. Much of the work centers on the ability to decompose the rational characteristic function into the sum of two rational functions with poles located in either the open left or the open right half plane, as in the worked example below. The density of the variance-gamma distribution has been shown to be a 2-EPT density under a parameter restriction. [4]
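
To make the decomposition concrete, consider a small worked example (illustrative only, not taken from the cited papers). The two-sided Laplace transform of the standard Laplace density $f(x) = \tfrac{1}{2} e^{-|x|}$ splits, according to pole location, into the transforms of its two one-sided pieces:

$$\frac{1}{1-s^2} = \frac{1}{2(1+s)} + \frac{1}{2(1-s)}, \qquad -1 < \operatorname{Re}(s) < 1.$$

The first term, with its pole at $s = -1$ in the open left half plane, is the transform of the density restricted to $[0,\infty)$; the second, with its pole at $s = +1$ in the open right half plane, is the transform of the density restricted to $(-\infty,0)$.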

Notes

  1. Kailath, T. (1980). Linear Systems. Prentice Hall.
  2. Neuts, M. (1975). "Probability Distributions of Phase Type". In Liber Amicorum Prof. Emeritus H. Florin, pp. 173–206. Department of Mathematics, University of Louvain, Belgium.
  3. Sexton, C. and Hanzon, B. "State Space Calculations for two-sided EPT Densities with Financial Modelling Applications", www.2-ept.com.
  4. Madan, D., Carr, P. and Chang, E. (1998). "The Variance Gamma Process and Option Pricing". European Finance Review 2: 79–105.
