Phase-type distribution

Phase-type
Parameters: S, m × m subgenerator matrix; α, probability row vector
Support: x ∈ [0, ∞)
PDF: α e^{xS} S^0 (see article for details)
CDF: 1 − α e^{xS} 1
Mean: −α S^{−1} 1
Median: no simple closed form
Mode: no simple closed form
Variance: 2 α S^{−2} 1 − (α S^{−1} 1)^2
MGF: α0 + α (−tI − S)^{−1} S^0
CF: α0 + α (−itI − S)^{−1} S^0

A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions. [1] It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases.


It has a discrete-time equivalent: the discrete phase-type distribution.

The set of phase-type distributions is dense in the field of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution.

Definition

Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1,...,m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α0, α), where α0 is a scalar and α is a 1 × m vector.

The continuous phase-type distribution is the distribution of time from the above process's starting until absorption in the absorbing state.

This process can be written in the form of a transition rate matrix,

Q = \begin{pmatrix} 0 & \mathbf{0} \\ \mathbf{S}^{0} & S \end{pmatrix},

where S is an m × m matrix and S0 = −S1. Here 1 represents an m × 1 column vector with every element being 1.
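
As a small illustration of this structure (the numerical values below are arbitrary, chosen only for the example), the full generator Q can be assembled from S and S0 = −S1:

```python
import numpy as np

# Arbitrary 3-phase subgenerator (illustrative values only).
S = np.array([[-4.0, 2.0, 1.0],
              [0.0, -3.0, 1.0],
              [1.0, 0.0, -2.0]])
S0 = -S @ np.ones(3)          # exit-rate vector S^0 = -S 1

# Full (m+1) x (m+1) generator with the absorbing state listed first.
Q = np.zeros((4, 4))
Q[1:, 0] = S0
Q[1:, 1:] = S
print(Q.sum(axis=1))          # every row of a generator sums to 0
```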

Characterization

The distribution of time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α,S).

The distribution function of X is given by

F(x) = 1 - \boldsymbol{\alpha} \exp(Sx) \mathbf{1},

and the density function by

f(x) = \boldsymbol{\alpha} \exp(Sx) \mathbf{S}^{0},

for all x > 0, where exp(·) is the matrix exponential. It is usually assumed that the probability of the process starting in the absorbing state is zero (i.e. α0 = 0). The moments of the distribution are given by

\operatorname{E}[X^{n}] = (-1)^{n} \, n! \, \boldsymbol{\alpha} S^{-n} \mathbf{1}.

The Laplace transform of the phase-type distribution is given by

\operatorname{E}[e^{-sX}] = \alpha_0 + \boldsymbol{\alpha} (sI - S)^{-1} \mathbf{S}^{0},

where I is the identity matrix.
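
These formulas can be evaluated numerically with a matrix-exponential routine. A minimal sketch using SciPy (the two-phase S and α below are arbitrary illustrative values, not taken from the article):

```python
import math
import numpy as np
from scipy.linalg import expm

# Illustrative two-phase example.
S = np.array([[-3.0, 2.0],
              [0.0, -1.5]])        # subgenerator matrix
alpha = np.array([1.0, 0.0])        # initial probability row vector (alpha0 = 0)
S0 = -S @ np.ones(2)                # exit-rate vector S^0 = -S 1

def pdf(x):
    """f(x) = alpha exp(Sx) S^0"""
    return alpha @ expm(S * x) @ S0

def cdf(x):
    """F(x) = 1 - alpha exp(Sx) 1"""
    return 1.0 - alpha @ expm(S * x) @ np.ones(2)

def moment(n):
    """E[X^n] = (-1)^n n! alpha S^{-n} 1"""
    Sn = np.linalg.matrix_power(np.linalg.inv(S), n)
    return (-1) ** n * math.factorial(n) * alpha @ Sn @ np.ones(2)

print(pdf(0.5), cdf(0.5), moment(1))
```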

Special cases

The following probability distributions are all considered special cases of a continuous phase-type distribution:

Degenerate distribution (point mass at zero): the phase-type distribution with 0 phases
Exponential distribution: 1 phase
Erlang distribution: 2 or more identical phases in sequence
Hypoexponential distribution: 2 or more phases in sequence, not necessarily identical; generalises the Erlang distribution
Hyperexponential distribution (mixture of exponentials): 2 or more phases in parallel, each entered with some probability
Coxian distribution: 2 or more phases in sequence, with a probability of moving to the absorbing state after each phase

As the phase-type distribution is dense in the field of all positive-valued distributions, it can be used to represent any positive-valued distribution. However, the phase-type distribution is light-tailed (platykurtic), so the representation of a heavy-tailed (leptokurtic) distribution by a phase type is only an approximation, even though the approximation can be made as precise as desired.

Examples

In all the following examples it is assumed that there is no probability mass at zero, that is α0 = 0.

Exponential distribution

The simplest non-trivial example of a phase-type distribution is the exponential distribution with parameter λ. Its phase-type parameters are S = −λ (a 1 × 1 matrix) and α = 1.

Hyperexponential or mixture of exponential distribution

The mixture of exponentials or hyperexponential distribution with rates λ1, λ2, ..., λn > 0 can be represented as a phase-type distribution with

S = \begin{pmatrix} -\lambda_1 & 0 & \cdots & 0 \\ 0 & -\lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -\lambda_n \end{pmatrix}

and

\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \dots, \alpha_n), \qquad \text{with } \alpha_i \ge 0 \text{ and } \sum_{i=1}^{n} \alpha_i = 1.

This mixture of densities of exponentially distributed random variables can be characterized through

f(x) = \sum_{i=1}^{n} \alpha_i \lambda_i e^{-\lambda_i x},

or its cumulative distribution function

F(x) = 1 - \sum_{i=1}^{n} \alpha_i e^{-\lambda_i x},

with \sum_{i=1}^{n} \alpha_i = 1.
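
A quick numerical sketch of this representation, checking that the phase-type form of the density agrees with the direct mixture form (the rates and weights are arbitrary illustrative values):

```python
import numpy as np
from scipy.linalg import expm

# Three-phase hyperexponential with illustrative rates and mixing weights.
lam = np.array([1.0, 4.0, 10.0])
alpha = np.array([0.5, 0.3, 0.2])    # must sum to 1

S = np.diag(-lam)                     # diagonal subgenerator
S0 = -S @ np.ones(3)                  # exit rates, here simply lam

x = 0.7
ph_density = alpha @ expm(S * x) @ S0                      # phase-type form
mixture_density = np.sum(alpha * lam * np.exp(-lam * x))   # direct mixture form
print(np.isclose(ph_density, mixture_density))             # True
```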

Erlang distribution

The Erlang distribution has two parameters: the shape, an integer k > 0, and the rate λ > 0. It is sometimes denoted E(k,λ). The Erlang distribution can be written in the form of a phase-type distribution by making S a k × k matrix with diagonal elements −λ and super-diagonal elements λ, with the probability of starting in state 1 equal to 1. For example, for E(5,λ),

S = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & 0 \\ 0 & -\lambda & \lambda & 0 & 0 \\ 0 & 0 & -\lambda & \lambda & 0 \\ 0 & 0 & 0 & -\lambda & \lambda \\ 0 & 0 & 0 & 0 & -\lambda \end{pmatrix}

and

\boldsymbol{\alpha} = (1, 0, 0, 0, 0).

For a given number of phases, the Erlang distribution is the phase type distribution with smallest coefficient of variation. [2]

The hypoexponential distribution is a generalisation of the Erlang distribution by having different rates for each transition (the non-homogeneous case).
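
The Erlang representation above is easy to construct programmatically. A small sketch (k and λ below are arbitrary example values) that also checks the phase-type mean −αS^{−1}1 against the known Erlang mean k/λ:

```python
import numpy as np

# Phase-type representation of an Erlang E(k, lam) distribution.
k, lam = 5, 2.0
S = -lam * np.eye(k) + lam * np.eye(k, k=1)   # -lam on diagonal, +lam on super-diagonal
alpha = np.zeros(k); alpha[0] = 1.0            # start in phase 1

# Mean of PH(alpha, S): -alpha S^{-1} 1, which should equal k/lam.
mean = -alpha @ np.linalg.inv(S) @ np.ones(k)
print(mean, k / lam)                            # both 2.5
```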

Mixture of Erlang distribution

The mixture of two Erlang distributions with parameters E(3,β1) and E(3,β2) and mixing probabilities (α1, α2) (such that α1 + α2 = 1 and αi ≥ 0 for each i) can be represented as a phase-type distribution with

S = \begin{pmatrix} -\beta_1 & \beta_1 & 0 & 0 & 0 & 0 \\ 0 & -\beta_1 & \beta_1 & 0 & 0 & 0 \\ 0 & 0 & -\beta_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\beta_2 & \beta_2 & 0 \\ 0 & 0 & 0 & 0 & -\beta_2 & \beta_2 \\ 0 & 0 & 0 & 0 & 0 & -\beta_2 \end{pmatrix}

and

\boldsymbol{\alpha} = (\alpha_1, 0, 0, \alpha_2, 0, 0).
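
A sketch of this block-diagonal construction (the rates β1, β2 and the weights below are arbitrary illustrative values):

```python
import numpy as np
from scipy.linalg import block_diag

def erlang_subgenerator(k, rate):
    """k x k subgenerator of an Erlang(k, rate) distribution."""
    return -rate * np.eye(k) + rate * np.eye(k, k=1)

# Mixture of E(3, beta1) and E(3, beta2) with weights (0.4, 0.6).
beta1, beta2 = 2.0, 5.0
S = block_diag(erlang_subgenerator(3, beta1), erlang_subgenerator(3, beta2))
alpha = np.array([0.4, 0.0, 0.0, 0.6, 0.0, 0.0])   # enter either block at its first phase
print(S.shape, alpha.sum())                          # (6, 6) 1.0
```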

Coxian distribution

The Coxian distribution is a generalisation of the Erlang distribution. Instead of the absorbing state being reachable only from state k, it can be reached from any phase. The phase-type representation is given by

S = \begin{pmatrix} -\lambda_1 & p_1\lambda_1 & 0 & \cdots & 0 \\ 0 & -\lambda_2 & p_2\lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -\lambda_{k-1} & p_{k-1}\lambda_{k-1} \\ 0 & \cdots & \cdots & 0 & -\lambda_k \end{pmatrix}

and

\boldsymbol{\alpha} = (1, 0, \dots, 0),

where 0 < p1, ..., pk−1 ≤ 1. In the case where all pi = 1 we have the Erlang distribution. The Coxian distribution is extremely important as any acyclic phase-type distribution has an equivalent Coxian representation.

The generalised Coxian distribution relaxes the condition that requires starting in the first phase.
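
A small sketch of the Coxian subgenerator described above (the rates and continuation probabilities below are arbitrary example values, and the helper name is only illustrative):

```python
import numpy as np

def coxian_subgenerator(lam, p):
    """Subgenerator of a Coxian distribution with rates lam[0..k-1] and
    continuation probabilities p[0..k-2]."""
    k = len(lam)
    S = np.diag(-np.asarray(lam, dtype=float))
    for i in range(k - 1):
        S[i, i + 1] = p[i] * lam[i]    # continue to the next phase with probability p_i
    return S

# Example: three phases; with all p_i = 1 this reduces to the Erlang-type chain.
S = coxian_subgenerator([3.0, 2.0, 1.0], [0.6, 0.4])
alpha = np.array([1.0, 0.0, 0.0])
print(S)
```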

Properties

Minima of independent PH random variables

Similarly to the exponential distribution, the class of PH distributions is closed under minima of independent random variables.
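
One way to make this concrete (the construction below is the standard Kronecker-sum representation, not spelled out in this article): if X ~ PH(α, S) and Y ~ PH(β, T) are independent, then min(X, Y) is phase-type with initial vector α ⊗ β and subgenerator S ⊕ T = S ⊗ I + I ⊗ T. A short sketch under that assumption:

```python
import numpy as np

def ph_min(alpha, S, beta, T):
    """Phase-type representation of min(X, Y) via the Kronecker sum."""
    n, m = len(alpha), len(beta)
    alpha_min = np.kron(alpha, beta)
    S_min = np.kron(S, np.eye(m)) + np.kron(np.eye(n), T)   # Kronecker sum
    return alpha_min, S_min

# Example: the minimum of two exponentials (rates 2 and 3) is exponential with rate 5.
alpha_min, S_min = ph_min(np.array([1.0]), np.array([[-2.0]]),
                          np.array([1.0]), np.array([[-3.0]]))
print(S_min)   # [[-5.]]
```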

Generating samples from phase-type distributed random variables

BuTools includes methods for generating samples from phase-type distributed random variables. [3]
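
A minimal stand-alone simulation sketch (BuTools provides optimized routines; the code below simply simulates the underlying Markov chain until absorption, assuming α0 = 0):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ph(alpha, S, rng):
    """Draw one sample by simulating the underlying Markov chain until absorption."""
    m = len(alpha)
    S0 = -S @ np.ones(m)                  # exit rates to the absorbing state
    state = rng.choice(m, p=alpha)        # initial phase (assumes alpha0 = 0)
    t = 0.0
    while True:
        rate = -S[state, state]
        t += rng.exponential(1.0 / rate)  # holding time in the current phase
        # Transition probabilities to the other transient states and to absorption.
        probs = np.append(S[state] / rate, S0[state] / rate)
        probs[state] = 0.0
        nxt = rng.choice(m + 1, p=probs)
        if nxt == m:                      # absorbed
            return t
        state = nxt

# Example: Erlang(2, 1) via its phase-type representation; mean should be close to 2.
S = np.array([[-1.0, 1.0], [0.0, -1.0]])
samples = [sample_ph(np.array([1.0, 0.0]), S, rng) for _ in range(10000)]
print(np.mean(samples))
```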

Approximating other distributions

Any distribution can be arbitrarily well approximated by a phase-type distribution. [4] [5] In practice, however, approximations can be poor when the size of the approximating process is fixed. For example, approximating a deterministic distribution of time 1 with an Erlang distribution of 10 phases, each of average length 0.1, gives variance 0.1 (and the Erlang distribution has the smallest variance among phase-type distributions with a given number of phases [2]).
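
A quick check using the standard Erlang moments: with k phases, each of mean 1/k (so rate λ = k),

\operatorname{E}[X] = \frac{k}{\lambda} = 1, \qquad \operatorname{Var}(X) = \frac{k}{\lambda^{2}} = \frac{1}{k},

so the 10-phase approximation above has variance 1/10 = 0.1, and the variance of the approximation decreases only linearly in the number of phases.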

Fitting a phase type distribution to data

Methods to fit a phase type distribution to data can be classified as maximum likelihood methods or moment matching methods. [8] Fitting a phase type distribution to heavy-tailed distributions has been shown to be practical in some situations. [9]

See also

Discrete phase-type distribution

References

  1. Harchol-Balter, M. (2012). "Real-World Workloads: High Variability and Heavy Tails". Performance Modeling and Design of Computer Systems. pp. 347–348. doi:10.1017/CBO9781139226424.026. ISBN   9781139226424.
  2. Aldous, David; Shepp, Larry (1987). "The least variable phase type distribution is erlang" (PDF). Stochastic Models. 3 (3): 467. doi:10.1080/15326348708807067.
  3. Horváth, G. B.; Reinecke, P.; Telek, M. S.; Wolter, K. (2012). "Efficient Generation of PH-Distributed Random Variates" (PDF). Analytical and Stochastic Modeling Techniques and Applications. Lecture Notes in Computer Science. Vol. 7314. p. 271. doi:10.1007/978-3-642-30782-9_19. ISBN   978-3-642-30781-2.
  4. Bolch, Gunter; Greiner, Stefan; de Meer, Hermann; Trivedi, Kishor S. (1998). "Steady-State Solutions of Markov Chains". Queueing Networks and Markov Chains. pp. 103–151. doi:10.1002/0471200581.ch3. ISBN   0471193666.
  5. Cox, D. R. (2008). "A use of complex probabilities in the theory of stochastic processes". Mathematical Proceedings of the Cambridge Philosophical Society . 51 (2): 313–319. doi:10.1017/S0305004100030231.
  6. Osogami, T.; Harchol-Balter, M. (2006). "Closed form solutions for mapping general distributions to quasi-minimal PH distributions". Performance Evaluation. 63 (6): 524. doi:10.1016/j.peva.2005.06.002.
  7. Casale, G.; Zhang, E. Z.; Smirni, E. (2008). "KPC-Toolbox: Simple Yet Effective Trace Fitting Using Markovian Arrival Processes". 2008 Fifth International Conference on Quantitative Evaluation of Systems (PDF). p. 83. doi:10.1109/QEST.2008.33. ISBN   978-0-7695-3360-5. S2CID   252444.
  8. Lang, Andreas; Arthur, Jeffrey L. (1996). "Parameter approximation for Phase-Type distributions". In Chakravarthy, S.; Alfa, Attahiru S. (eds.). Matrix Analytic methods in Stochastic Models. CRC Press. ISBN   0824797663.
  9. Ramaswami, V.; Poole, D.; Ahn, S.; Byers, S.; Kaplan, A. (2005). "Ensuring Access to Emergency Services in the Presence of Long Internet Dial-Up Calls". Interfaces. 35 (5): 411. doi:10.1287/inte.1050.0155.
  10. Horváth, András S.; Telek, Miklós S. (2002). "PhFit: A General Phase-Type Fitting Tool". Computer Performance Evaluation: Modelling Techniques and Tools. Lecture Notes in Computer Science. Vol. 2324. p. 82. doi:10.1007/3-540-46029-2_5. ISBN   978-3-540-43539-6.
  11. Asmussen, Søren; Nerman, Olle; Olsson, Marita (1996). "Fitting Phase-Type Distributions via the EM Algorithm". Scandinavian Journal of Statistics. 23 (4): 419–441. JSTOR   4616418.
  12. Reinecke, P.; Krauß, T.; Wolter, K. (2012). "Cluster-based fitting of phase-type distributions to empirical data". Computers & Mathematics with Applications. 64 (12): 3840. doi:10.1016/j.camwa.2012.03.016.
  13. Pérez, J. F.; Riaño, G. N. (2006). "jPhase: an object-oriented tool for modeling phase-type distributions". Proceeding from the 2006 workshop on Tools for solving structured Markov chains (SMCtools '06) (PDF). doi:10.1145/1190366.1190370. ISBN   1595935061. S2CID   7863948.