Markov renewal process

A Markov renewal process (MRP) is a random process in probability and statistics that generalizes the notion of Markov jump processes. Other random processes, such as Markov chains, Poisson processes, and renewal processes, can be derived as special cases of Markov renewal processes (MRPs).

Definition

Figure: An illustration of a Markov renewal process.

Consider a jump process taking values in a state space $S$. Consider the set of random variables $(X_n, T_n)$, where $T_n$ are the jump times and $X_n$ are the associated states in the sequence of states (see Figure). Let the inter-arrival times be $\tau_n = T_n - T_{n-1}$. In order for the sequence $(X_n, T_n)$ to be considered a Markov renewal process, the following condition should hold:

$$\Pr(\tau_{n+1} \le t,\ X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n, T_n)) = \Pr(\tau_{n+1} \le t,\ X_{n+1} = j \mid X_n) \quad \text{for all } n \ge 1,\ t \ge 0,\ j \in S.$$
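The defining condition lends itself to simulation: at each jump, only the current state is needed to sample the next state and the holding time jointly. The following is a minimal Python sketch, assuming a hypothetical two-state kernel (the states, transition probabilities, and rates are illustrative choices, not from the article or the cited references):

```python
import random

# A minimal sketch: the semi-Markov kernel is a function that, given the
# current state i, jointly samples the next state X_{n+1} and the holding
# time tau_{n+1}. By the defining condition above, this pair may depend on
# i but on nothing earlier in the history.

# Hypothetical transition probabilities: state -> [(next_state, prob), ...]
P = {0: [(0, 0.3), (1, 0.7)], 1: [(0, 0.6), (1, 0.4)]}

def kernel(i):
    states, probs = zip(*P[i])
    j = random.choices(states, weights=probs)[0]   # sample next state
    rate = 1.0 + i + j                             # illustrative positive rate
    tau = random.expovariate(rate)                 # sample holding time
    return j, tau

def simulate_mrp(x0, n_jumps):
    """Return the sequence (X_n, T_n) of states and jump times."""
    x, t = x0, 0.0
    path = [(x, t)]
    for _ in range(n_jumps):
        x, tau = kernel(x)   # depends only on the current state
        t += tau             # T_{n+1} = T_n + tau_{n+1}
        path.append((x, round(t, 3)))
    return path

print(simulate_mrp(x0=0, n_jumps=5))
```

Here the holding-time rate depends on both the current and the next state, which the general definition allows; restricting it recovers the special cases below.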

Relation to other stochastic processes

  1. Let $(X_n, T_n)$ be a Markov renewal process as defined above. Define a new stochastic process $Y_t := X_n$ for $t \in [T_n, T_{n+1})$; then the process $Y_t$ is called a semi-Markov process. Unlike a continuous-time Markov chain (CTMC), it is not Markovian at every instant: the process is Markovian only at the specified jump instants, justifying the name semi-Markov. [1] [2] [3] (See also: hidden semi-Markov model.)
  2. A semi-Markov process (defined in the previous item) in which all the holding times are exponentially distributed is called a continuous-time Markov chain. In other words, if the inter-arrival times are exponentially distributed and the waiting time in a state is independent of the next state reached, we have a continuous-time Markov chain (see the sketch after this list).
  3. The sequence $X_n$ in the Markov renewal process is a discrete-time Markov chain. In other words, if the time variables are ignored in the Markov renewal process equation, we end up with a discrete-time Markov chain: $\Pr(X_{n+1} = j \mid X_0, \ldots, X_n) = \Pr(X_{n+1} = j \mid X_n)$.
  4. If the sequence of inter-arrival times $\tau_n$ is independent and identically distributed, and if their distribution does not depend on the state $X_n$, then the process is a renewal process. So, if the states are ignored and we have a chain of i.i.d. times, then we have a renewal process.
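To make cases 2-4 concrete, here is a hedged Python sketch of two kernels that could replace the `kernel` function in the simulation above; the transition matrix and distribution parameters are again illustrative assumptions, not drawn from the cited references:

```python
import random

# Illustrative embedded transition probabilities (an assumption).
P = {0: [(0, 0.3), (1, 0.7)], 1: [(0, 0.6), (1, 0.4)]}

def next_state(i):
    states, probs = zip(*P[i])
    return random.choices(states, weights=probs)[0]

def ctmc_kernel(i):
    # Case 2: the holding time is exponential with a rate depending only on
    # the current state, independent of the next state reached, so the
    # resulting semi-Markov process is a continuous-time Markov chain.
    return next_state(i), random.expovariate(2.0 if i == 0 else 0.5)

def renewal_kernel(i):
    # Case 4: the inter-arrival times are i.i.d. with a fixed distribution
    # that ignores the state, so the jump times T_n alone form a renewal
    # process (the gamma distribution here is an arbitrary example).
    return next_state(i), random.gammavariate(2.0, 1.0)

# Case 3: discarding the times from either simulation leaves the embedded
# discrete-time Markov chain X_0, X_1, ...
x, t, jumps = 0, 0.0, []
for _ in range(5):
    x, tau = ctmc_kernel(x)
    t += tau
    jumps.append((x, round(t, 3)))
print(jumps)
```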

References

  1. Medhi, J. (1982). Stochastic Processes. New York: Wiley & Sons. ISBN 978-0-470-27000-4.
  2. Ross, Sheldon M. (1996). Stochastic Processes (2nd ed.). New York: Wiley. ISBN 978-0-471-12062-9.
  3. Barbu, Vlad Stefan; Limnios, Nikolaos (2008). Semi-Markov Chains and Hidden Semi-Markov Models toward Applications: Their Use in Reliability and DNA Analysis. New York: Springer. ISBN 978-0-387-73171-1.