In probability theory, a balance equation is an equation that describes the probability flux associated with a Markov chain into and out of states or sets of states. [1]
The global balance equations (also known as full balance equations [2] ) are a set of equations that characterize the equilibrium distribution (or any stationary distribution) of a Markov chain, when such a distribution exists.
For a continuous time Markov chain with state space $S$, transition rate from state $i$ to $j$ given by $q_{ij}$ and equilibrium distribution given by $\pi$, the global balance equations are given by [3]

$$\pi_i \sum_{j \in S \setminus \{i\}} q_{ij} = \sum_{j \in S \setminus \{i\}} \pi_j q_{ji}$$

or equivalently, using the convention $q_{ii} = -\sum_{j \neq i} q_{ij}$,

$$\sum_{j \in S} \pi_j q_{ji} = 0$$

for all $i \in S$. Here $\pi_i q_{ij}$ represents the probability flux from state $i$ to state $j$. So in the first form the left-hand side represents the total flow out of state $i$ into states other than $i$, while the right-hand side represents the total flow out of all other states into state $i$. In general it is computationally intractable to solve this system of equations for most queueing models. [4]
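As an illustration (a minimal sketch, not drawn from the cited sources), the global balance equations $\pi Q = 0$ can be solved numerically for a small chain by pairing them with the normalisation $\sum_i \pi_i = 1$; the three-state generator below is hypothetical:

```python
# A minimal sketch: solving the global balance equations pi Q = 0 for a
# small illustrative 3-state CTMC. The generator Q is hypothetical; each
# diagonal entry is minus the sum of its row's off-diagonal rates.
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])

# Global balance pi Q = 0 together with sum(pi) = 1, solved as a
# least-squares linear system.
n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])          # pi Q = 0  and  sum(pi) = 1
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)        # equilibrium distribution
print(pi @ Q)    # ~0: global balance holds
```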
For a continuous time Markov chain (CTMC) with transition rate matrix $Q$, if $\pi_i$ can be found such that for every pair of states $i$ and $j$

$$\pi_i q_{ij} = \pi_j q_{ji}$$

holds, then by summing over $j$, the global balance equations are satisfied and $\pi$ is the stationary distribution of the process. [5] If such a solution can be found the resulting equations are usually much easier than directly solving the global balance equations. [4]
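For example, in the sketch below (illustrative assumptions only: a birth–death chain modelling an M/M/1 queue truncated at N customers, with hypothetical rates lam and mu), detailed balance determines $\pi$ edge by edge, and summing recovers global balance:

```python
# A minimal sketch: for a birth-death CTMC, detailed balance
# pi_i q_ij = pi_j q_ji fixes pi up to normalisation.
import numpy as np

lam, mu, N = 1.0, 2.0, 4   # hypothetical arrival/service rates, truncation

# Detailed balance across each edge (k, k+1): pi_{k+1} = pi_k * lam / mu.
pi = np.array([(lam / mu) ** k for k in range(N + 1)])
pi /= pi.sum()

# Build the generator and confirm the global balance equations pi Q = 0.
Q = np.zeros((N + 1, N + 1))
for k in range(N):
    Q[k, k + 1] = lam      # birth: k -> k+1
    Q[k + 1, k] = mu       # death: k+1 -> k
np.fill_diagonal(Q, -Q.sum(axis=1))

print(np.allclose(pi @ Q, 0))   # True: detailed balance implies global balance
```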
A CTMC is reversible if and only if the detailed balance conditions are satisfied for every pair of states $i$ and $j$.
A discrete time Markov chain (DTMC) with transition matrix $P$ and equilibrium distribution $\pi$ is said to be in detailed balance if for all pairs $i$ and $j$, [6]

$$\pi_i p_{ij} = \pi_j p_{ji}.$$
When a solution can be found, as in the case of a CTMC, the computation is usually much quicker than directly solving the global balance equations.
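A minimal sketch of the discrete-time condition, using a hypothetical three-state random walk: the chain is in detailed balance exactly when the flux matrix with entries $\pi_i p_{ij}$ is symmetric.

```python
# A minimal sketch: checking DTMC detailed balance pi_i p_ij = pi_j p_ji
# for a hypothetical lazy random walk on {0, 1, 2}.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Stationary distribution via the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

flux = pi[:, None] * P              # flux[i, j] = pi_i * p_ij
print(np.allclose(flux, flux.T))    # True: the chain is in detailed balance
```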
In some situations, terms on either side of the global balance equations cancel. The global balance equations can then be partitioned to give a set of local balance equations (also known as partial balance equations, [2] independent balance equations [7] or individual balance equations [8] ). [1] These balance equations were first considered by Peter Whittle. [8] [9] The resulting equations are somewhere between detailed balance and global balance equations. Any solution to the local balance equations is always a solution to the global balance equations (we can recover the global balance equations by summing the relevant local balance equations), but the converse is not always true. [2] Often, constructing local balance equations is equivalent to removing the outer summations in the global balance equations for certain terms. [1]
During the 1980s it was thought local balance was a requirement for a product-form equilibrium distribution, [10] [11] but Gelenbe's G-network model showed this not to be the case. [12]
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
A continuous-time Markov chain (CTMC) is a continuous stochastic process in which, for each state, the process will change state according to an exponential random variable and then move to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
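This description translates directly into simulation. The sketch below is illustrative only: the exit rates and jump matrix are hypothetical, chosen to match the three-state generator used earlier. It draws an exponential holding time in the current state and then jumps according to the embedded stochastic matrix.

```python
# A minimal sketch: simulating a CTMC via exponential holding times and a
# jump (embedded) stochastic matrix. Rates and probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([3.0, 4.0, 4.0])       # exit rate of each state
jump = np.array([[0.0, 2/3, 1/3],       # jump probabilities given a departure
                 [0.25, 0.0, 0.75],
                 [0.5, 0.5, 0.0]])

state, t = 0, 0.0
for _ in range(5):
    t += rng.exponential(1.0 / rates[state])   # exponential holding time
    state = rng.choice(3, p=jump[state])       # move per the jump matrix
    print(f"t = {t:.3f}, moved to state {state}")
```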
In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes. It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
In queueing theory, a discipline within the mathematical theory of probability, a Jackson network is a class of queueing network where the equilibrium distribution is particularly simple to compute as the network has a product-form solution. It was the first significant development in the theory of networks of queues, and generalising and applying the ideas of the theorem to search for similar product-form solutions in other networks has been the subject of much research, including ideas used in the development of the Internet. The networks were first identified by James R. Jackson and his paper was reprinted in the journal Management Science’s ‘Ten Most Influential Titles of Management Science’s First Fifty Years.’
In probability, a discrete-time Markov chain (DTMC) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not any variables in the past. For instance, a machine may have two states, A and E. When it is in state A, there is a 40% chance of it moving to state E and a 60% chance of it remaining in state A. When it is in state E, there is a 70% chance of it moving to A and a 30% chance of it staying in E. The sequence of states of the machine is a Markov chain. If we denote the chain by $X = (X_0, X_1, X_2, \dots)$ then $X_0$ is the state which the machine starts in and $X_{10}$ is the random variable describing its state after 10 transitions. The process continues forever, indexed by the natural numbers.
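A short sketch of this example: with the transition matrix implied above, the distribution of $X_{10}$ is the start distribution multiplied by the tenth matrix power, and the stationary distribution follows from detailed balance (every two-state chain satisfies it).

```python
# A minimal sketch of the two-state machine described above.
import numpy as np

P = np.array([[0.6, 0.4],    # from A: stay in A 60%, move to E 40%
              [0.7, 0.3]])   # from E: move to A 70%, stay in E 30%

x0 = np.array([1.0, 0.0])                    # start in state A
print(x0 @ np.linalg.matrix_power(P, 10))    # distribution of X_10

# Stationary distribution: detailed balance pi_A * 0.4 = pi_E * 0.7
# gives pi proportional to (7, 4).
print(np.array([7, 4]) / 11)
```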
A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses. In particular, they are used during the calculation of likelihood of a tree and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences.
In queueing theory, a discipline within the mathematical theory of probability, a G-network is an open network of G-queues first introduced by Erol Gelenbe as a model for queueing systems with specific control functions, such as traffic re-routing or traffic destruction, as well as a model for neural networks. A G-queue is a network of queues with several types of novel and useful customers, including positive customers, which obey standard service and routing disciplines; negative customers, which remove (‘kill’) customers in a non-empty queue; and ‘triggers’, which move customers from one queue to another.
In graph theory the conductance of a graph G = (V, E) measures how "well-knit" the graph is: it controls how fast a random walk on G converges to its stationary distribution. The conductance of a graph is often called the Cheeger constant of a graph as the analog of its counterpart in spectral geometry. Since electrical networks are intimately related to random walks with a long history in the usage of the term "conductance", this alternative name helps avoid possible confusion.
In queueing theory, a discipline within the mathematical theory of probability, a BCMP network is a class of queueing network for which a product-form equilibrium distribution exists. It is named after the authors of the paper where the network was first described: Baskett, Chandy, Muntz, and Palacios. The theorem is a significant extension to a Jackson network allowing virtually arbitrary customer routing and service time distributions, subject to particular service disciplines.
In probability theory, Kolmogorov's criterion, named after Andrey Kolmogorov, is a theorem giving a necessary and sufficient condition for a Markov chain or continuous-time Markov chain to be stochastically identical to its time-reversed version.
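The criterion states (in the CTMC case) that the product of transition rates around every closed loop of states must equal the product of the rates along the reversed loop. A minimal sketch for the hypothetical three-state generator used earlier, where the single non-trivial cycle suffices:

```python
# A minimal sketch: Kolmogorov's criterion for a 3-state CTMC reduces to
# comparing the cycle 0 -> 1 -> 2 -> 0 with its reversal.
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])

forward = Q[0, 1] * Q[1, 2] * Q[2, 0]   # 2 * 3 * 2 = 12
reverse = Q[0, 2] * Q[2, 1] * Q[1, 0]   # 1 * 2 * 1 = 2
print(forward == reverse)               # False: this chain is not reversible
```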
In probability theory, a product-form solution is a particularly efficient form of solution for determining some metric of a system with distinct sub-components, where the metric for the collection of components can be written as a product of the metric across the different components. Using capital Pi notation a product-form solution has algebraic form

$$\mathrm{P}(x_1, x_2, \dots, x_n) = B \prod_{i=1}^{n} \mathrm{P}(x_i)$$

where $B$ is a normalizing constant.
In queueing theory, a discipline within the mathematical theory of probability, the Gordon–Newell theorem is an extension of Jackson's theorem from open queueing networks to closed queueing networks of exponential servers where customers cannot leave the network. Jackson's theorem cannot be applied to closed networks because the queue length at a node in the closed network is limited by the population of the network. The Gordon–Newell theorem calculates the open network solution and then eliminates the infeasible states by renormalizing the probabilities. Calculation of the normalizing constant makes the treatment more awkward as the whole state space must be enumerated. Buzen's algorithm or mean value analysis can be used to calculate the normalizing constant more efficiently.
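A minimal sketch of the convolution recursion behind Buzen's algorithm (the per-station loadings rho below are hypothetical; real use would first solve the traffic equations for the visit ratios):

```python
# A minimal sketch of Buzen's convolution recursion for a closed network of
# M single-server queues: G(n, m) = G(n, m-1) + rho_m * G(n-1, m).
import numpy as np

rho = [0.5, 0.8, 1.2]     # hypothetical rho_m = e_m / mu_m for each station
N = 4                     # network population

M = len(rho)
g = np.zeros((N + 1, M + 1))
g[0, :] = 1.0             # G(0, m) = 1; G(n, 0) = 0 for n > 0
for m in range(1, M + 1):
    for n in range(1, N + 1):
        g[n, m] = g[n, m - 1] + rho[m - 1] * g[n - 1, m]

print(g[N, M])            # normalizing constant G(N)
```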
In queueing theory, a discipline within the mathematical theory of probability, the M/M/c queue is a multi-server queueing model. In Kendall's notation it describes a system where arrivals form a single queue and are governed by a Poisson process, there are c servers, and job service times are exponentially distributed. It is a generalisation of the M/M/1 queue which considers only a single server. The model with infinitely many servers is the M/M/∞ queue.
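Because the M/M/c queue is a birth–death chain, its stationary distribution follows from detailed balance, tying back to the conditions above. A sketch under illustrative assumptions (hypothetical rates lam and mu, and a finite truncation K standing in for the infinite state space):

```python
# A minimal sketch: M/M/c stationary distribution from birth-death detailed
# balance, pi_{k+1} = pi_k * lam / (min(k+1, c) * mu), truncated at K.
import numpy as np

lam, mu, c, K = 3.0, 1.0, 4, 40   # hypothetical parameters (lam < c * mu)

pi = np.ones(K + 1)
for k in range(K):
    pi[k + 1] = pi[k] * lam / (min(k + 1, c) * mu)
pi /= pi.sum()

print(pi[:c + 1])                      # probabilities of 0..c jobs present
print((np.arange(K + 1) * pi).sum())   # mean number of jobs in the system
```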
In probability theory, the uniformization method is a technique to compute transient solutions of finite state continuous-time Markov chains by approximating the process with a discrete-time Markov chain. The original chain is scaled by the fastest transition rate γ, so that transitions occur at the same rate in every state, hence the name. The method is simple to program and efficiently calculates an approximation to the transient distribution at a single point in time. The method was first introduced by Winfried Grassmann in 1977.
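A minimal sketch (reusing the hypothetical generator from earlier): with $\gamma \geq \max_i |q_{ii}|$ and $P = I + Q/\gamma$, the transient distribution is the truncated series $\pi(t) \approx \sum_{n=0}^{N} e^{-\gamma t} \frac{(\gamma t)^n}{n!}\, \pi(0) P^n$.

```python
# A minimal sketch of uniformization for a hypothetical 3-state generator.
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])
g = np.max(np.abs(np.diag(Q)))     # uniformization rate gamma
P = np.eye(3) + Q / g              # DTMC transition matrix

def transient(pi0, t, terms=100):
    out = np.zeros_like(pi0, dtype=float)
    v, poisson = pi0.astype(float), np.exp(-g * t)
    for n in range(terms):
        out += poisson * v
        v = v @ P                  # pi(0) P^{n+1}
        poisson *= g * t / (n + 1) # next Poisson weight
    return out

print(transient(np.array([1.0, 0.0, 0.0]), t=0.5))
```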
In queueing theory, a discipline within the mathematical theory of probability, an M/D/1 queue represents the queue length in a system having a single server, where arrivals are determined by a Poisson process and job service times are fixed (deterministic). The model name is written in Kendall's notation. Agner Krarup Erlang first published on this model in 1909, starting the subject of queueing theory. An extension of this model with more than one server is the M/D/c queue.
In probability theory, the matrix analytic method is a technique to compute the stationary probability distribution of a Markov chain which has a repeating structure and a state space which grows unboundedly in no more than one dimension. Such models are often described as M/G/1 type Markov chains because they can describe transitions in an M/G/1 queue. The method is a more complicated version of the matrix geometric method and is the classical solution method for M/G/1 chains.
In probability theory, Kelly's lemma states that for a stationary continuous-time Markov chain, a process defined as the time-reversed process has the same stationary distribution as the forward-time process. The theorem is named after Frank Kelly.
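A minimal sketch (illustrative, again reusing the hypothetical generator from earlier): the reversed process has rates $\hat{q}_{ij} = \pi_j q_{ji} / \pi_i$, and one can check numerically that $\pi$ is stationary for it too.

```python
# A minimal sketch: the time-reversed generator q'_{ij} = pi_j * q_{ji} / pi_i
# shares the stationary distribution pi of the forward chain.
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])

# Stationary distribution of Q (least squares, as in the earlier sketch).
A = np.vstack([Q.T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)

Qrev = (pi[None, :] * Q.T) / pi[:, None]   # q'_{ij} = pi_j * q_{ji} / pi_i
print(np.allclose(Qrev.sum(axis=1), 0))    # True: a valid generator
print(np.allclose(pi @ Qrev, 0))           # True: same stationary distribution
```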
In probability theory, the matrix geometric method is a method for the analysis of quasi-birth–death processes, continuous-time Markov chains whose transition rate matrices have a repetitive block structure. The method was developed "largely by Marcel F. Neuts and his students starting around 1975."
In probability theory, a transition-rate matrix is an array of numbers describing the instantaneous rate at which a continuous-time Markov chain transitions between states.