Harris chain

In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times.[1] Harris chains are regenerative processes and are named after Theodore Harris. The theory of Harris chains and Harris recurrence is useful for treating Markov chains on general (possibly uncountably infinite) state spaces.

Definition

Let {Xn} be a Markov chain on a general state space Ω with stochastic kernel K. The kernel represents a generalized one-step transition probability law, so that P[Xn+1 ∈ C | Xn = x] = K(x, C) for all states x in Ω and all measurable sets C ⊆ Ω. The chain is a Harris chain [2] if there exist A ⊆ Ω, ε > 0, and a probability measure ρ with ρ(Ω) = 1 such that

  1. If τA := inf{n ≥ 0 : Xn ∈ A}, then P(τA < ∞ | X0 = x) = 1 for all x ∈ Ω.
  2. If x ∈ A and C ⊆ Ω (where C is measurable), then K(x, C) ≥ ερ(C).

The first part of the definition ensures that the chain returns to some state within A with probability 1, regardless of where it starts. It follows that it visits the set A infinitely often (with probability 1). The second part implies that once the Markov chain is in a state x ∈ A, its next state can be generated with the help of an independent Bernoulli coin flip. To see this, first note that the parameter ε must be between 0 and 1 (this can be shown by applying the second part of the definition to the set C = Ω). Now let x be a point in A and suppose Xn = x. To choose the next state Xn+1, independently flip a biased coin with success probability ε. If the coin flip is successful, choose the next state Xn+1 ∈ Ω according to the probability measure ρ. Else (and if ε < 1), choose the next state Xn+1 according to the measure P[Xn+1 ∈ C | Xn = x] = (K(x, C) − ερ(C))/(1 − ε) (defined for all measurable subsets C ⊆ Ω).
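
A minimal sketch of this coin-flip ("splitting") construction in Python, assuming hypothetical helpers sample_rho (draws a state from ρ) and sample_residual (draws from the residual law (K(x, ·) − ερ(·))/(1 − ε)); the helpers and the function name are illustrative, not from the article.

```python
import random

def next_state(x, eps, sample_rho, sample_residual):
    """Draw X_{n+1} given X_n = x in A, via the coin-flip construction.

    eps             -- the minorization constant from the definition (0 < eps <= 1)
    sample_rho      -- callable drawing a state from the probability measure rho
    sample_residual -- callable drawing from the residual law
                       (K(x, .) - eps * rho(.)) / (1 - eps)
    """
    if random.random() < eps:
        # Successful coin flip: the next state is drawn from rho,
        # independently of the current state x (a regeneration step).
        return sample_rho()
    # Otherwise draw from the residual measure, which does depend on x.
    return sample_residual(x)
```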

Two random processes {Xn} and {Yn} that have the same probability law and are Harris chains according to the above definition can be coupled as follows: Suppose that Xn = x and Yn = y, where x and y are points in A. Using the same coin flip to decide the next state of both processes, it follows that the next states are the same with probability at least ε.
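
Under the same hypothetical helpers as above, the coupling step can be sketched as one shared coin flip per transition:

```python
import random

def coupled_step(x, y, eps, sample_rho, sample_residual):
    """One coupled transition for two copies of the chain, both currently in A.

    With probability eps the shared coin succeeds and both chains jump to the
    same rho-distributed state, so they coalesce; otherwise each moves
    independently according to its own residual law.
    """
    if random.random() < eps:
        z = sample_rho()
        return z, z                      # the two chains have coupled
    return sample_residual(x), sample_residual(y)
```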

Examples

Example 1: Countable state space

Let Ω be a countable state space. The kernel K is defined by the one-step conditional transition probabilities P[Xn+1 = y | Xn = x] for x, y ∈ Ω. The measure ρ is a probability mass function on the states, so that ρ(x) ≥ 0 for all x ∈ Ω, and the sum of the ρ(x) probabilities is equal to one. Suppose the above definition is satisfied for a given set A ⊆ Ω and a given parameter ε > 0. Then P[Xn+1 = c | Xn = x] ≥ ερ(c) for all x ∈ A and all c ∈ Ω.
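
For a finite state space this condition can be checked numerically. In the sketch below, the transition matrix P, the set A, and the measure ρ are made-up illustrations, not taken from the article.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

A = [0, 1, 2]                     # take A to be the whole state space
rho = np.array([0.4, 0.4, 0.2])   # a probability mass function on the states

# Largest eps with P[x, c] >= eps * rho(c) for all x in A and all states c:
eps = min(P[x, c] / rho[c] for x in A for c in range(3) if rho[c] > 0)
print(eps)  # 0.5 here, so part (2) of the definition holds with eps = 0.5;
            # part (1) is trivial because A is the whole state space.
```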

Example 2: Chains with continuous densities

Let {Xn}, Xn ∈ Rd, be a Markov chain with a kernel that is absolutely continuous with respect to Lebesgue measure:

K(x, dy) = K(x, y) dy

such that K(x, y) is a continuous function.

Pick (x0, y0) such that K(x0, y0) > 0, and let A and Ω be open sets containing x0 and y0 respectively that are sufficiently small so that K(x, y) ≥ ε > 0 on A × Ω. Letting ρ(C) = |Ω ∩ C|/|Ω|, where |Ω| is the Lebesgue measure of Ω, we have that (2) in the above definition holds. If (1) also holds, then {Xn} is a Harris chain.
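
As a concrete illustration (an assumption for this sketch, not an example from the article), take the Gaussian random-walk kernel K(x, y) = φ(y − x) on R, with φ the standard normal density, and small intervals A and Ω around x0 = y0 = 0:

```python
import math

def phi(t):
    """Standard normal density."""
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def K(x, y):
    """Random-walk kernel on R: density of X_{n+1} = y given X_n = x."""
    return phi(y - x)

x0, y0 = 0.0, 0.0          # a point where K(x0, y0) > 0
delta = 0.5                # half-widths of the open intervals A and Omega
# A = (x0 - delta, x0 + delta), Omega = (y0 - delta, y0 + delta)

# For x in A and y in Omega, |y - x| < 2*delta, so K(x, y) >= phi(2*delta) > 0.
eps = phi(2 * delta)
print(eps)                 # about 0.242

# With rho(C) = |Omega ∩ C| / |Omega|, this gives
# K(x, C) >= (eps * |Omega|) * rho(C) for all x in A and measurable C,
# i.e. condition (2) of the definition holds.
```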

Reducibility and periodicity

In the following, R := inf{n ≥ 1 : Xn ∈ A}; i.e. R is the first time after time 0 that the process enters the region A. Let μ denote the initial distribution of the Markov chain, i.e. X0 ∼ μ.

Definition: If P[R < ∞] = 1 for every initial distribution μ, then the Harris chain is called recurrent.

Definition: A recurrent Harris chain Xn is aperiodic if there exists an N such that, for all n ≥ N and every initial distribution μ, P[Xn ∈ A] > 0.

Theorem: Let Xn be an aperiodic recurrent Harris chain with stationary distribution π. If P[R < ∞] = 1, then as n → ∞, distTV(L(Xn), π) → 0, where L(Xn) denotes the distribution of Xn and distTV denotes the total variation distance for signed measures defined on the same measurable space.
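
The convergence in the theorem can be illustrated numerically for a small aperiodic, recurrent chain by iterating the distribution of Xn and tracking its total variation distance to π; the transition matrix below is a made-up example, not from the article.

```python
import numpy as np

# Hypothetical aperiodic, recurrent finite chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

mu = np.array([1.0, 0.0, 0.0])         # initial distribution: start in state 0
for n in range(20):
    tv = 0.5 * np.abs(mu - pi).sum()   # total variation distance at time n
    print(n, tv)                       # decreases toward 0 as n grows
    mu = mu @ P                        # distribution of X_{n+1}
```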

References

  1. Asmussen, Søren (2003). "Further Topics in Renewal Theory and Regenerative Processes". Applied Probability and Queues. Stochastic Modelling and Applied Probability. Vol. 51. pp. 186–219. doi:10.1007/0-387-21525-5_7. ISBN 978-0-387-00211-8.
  2. Durrett, R. (2005). Probability: Theory and Examples. Thomson. ISBN 0-534-42441-4.