Markov property

A single realisation of three-dimensional Brownian motion for times 0 ≤ t ≤ 2. Brownian motion has the Markov property, as the displacement of the particle does not depend on its past displacements.

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. [1] The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model.

A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items. [2] An example of a model for such a field is the Ising model.

A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.

Introduction

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markov or Markovian and known as a Markov process. Two famous classes of Markov process are the Markov chain and Brownian motion.

A subtle and often overlooked point in the plain-English statement of the definition is that the state space of the process is constant through time, so the conditional description involves a fixed "bandwidth". Without this restriction, any process could be augmented to carry its complete history from a given initial condition, and it would thereby be made Markovian; but the state space would then grow in dimensionality over time, which does not meet the definition.
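As an informal illustration (a minimal sketch, not part of the original article; the two weather-like states and the transition probabilities are arbitrary choices), a discrete-time Markov process can be simulated by drawing each new state from a distribution that depends only on the current state, never on the earlier path:

```python
import random

# Hypothetical transition probabilities for a two-state chain.
transition = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    """Draw the next state using only the present state (the Markov property)."""
    probs = transition[state]
    return random.choices(list(probs), weights=list(probs.values()))[0]

state, path = "sunny", ["sunny"]
for _ in range(10):
    state = step(state)   # the earlier entries of `path` are never consulted
    path.append(state)
print(path)
```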

History

Definition

Let $(\Omega, \mathcal{F}, P)$ be a probability space with a filtration $(\mathcal{F}_s,\ s \in I)$, for some (totally ordered) index set $I$; and let $(S, \mathcal{S})$ be a measurable space. A $(S, \mathcal{S})$-valued stochastic process $X = \{X_t : \Omega \to S\}_{t \in I}$, adapted to the filtration, is said to possess the Markov property if, for each $A \in \mathcal{S}$ and each $s, t \in I$ with $s < t$,

$$P(X_t \in A \mid \mathcal{F}_s) = P(X_t \in A \mid X_s). \qquad [3]$$

In the case where $S$ is a discrete set with the discrete sigma algebra and $I = \mathbb{N}$, this can be reformulated as follows:

$$P(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = P(X_n = x_n \mid X_{n-1} = x_{n-1}) \quad \text{for all } n \ge 1.$$
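As a rough empirical check of this reformulated condition (an illustrative sketch with an arbitrary two-state chain, not part of the original article), the conditional frequency of the next state along one long simulated trajectory should be essentially the same whether one conditions on the last state alone or on the last two states:

```python
import random

transition = {"A": {"A": 0.9, "B": 0.1}, "B": {"A": 0.5, "B": 0.5}}

random.seed(0)
state, chain = "A", ["A"]
for _ in range(200_000):
    probs = transition[state]
    state = random.choices(list(probs), weights=list(probs.values()))[0]
    chain.append(state)

# Estimate P(X_n = "A" | X_{n-1} = "B") and P(X_n = "A" | X_{n-1} = "B", X_{n-2} = "A").
given_one = [chain[i] for i in range(1, len(chain)) if chain[i - 1] == "B"]
given_two = [chain[i] for i in range(2, len(chain)) if chain[i - 1] == "B" and chain[i - 2] == "A"]
print(given_one.count("A") / len(given_one))  # ~ 0.5
print(given_two.count("A") / len(given_two))  # ~ 0.5 as well: the extra past adds no information
```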

Alternative formulations

Alternatively, the Markov property can be formulated as follows:

$$\operatorname{E}[f(X_t) \mid \mathcal{F}_s] = \operatorname{E}[f(X_t) \mid \sigma(X_s)]$$

for all $t \ge s \ge 0$ and $f : S \to \mathbb{R}$ bounded and measurable. [4]

Strong Markov property

Suppose that $X = (X_t : t \ge 0)$ is a stochastic process on a probability space $(\Omega, \mathcal{F}, P)$ with natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$. Then for any stopping time $\tau$ on $\Omega$, we can define

$$\mathcal{F}_\tau = \{A \in \mathcal{F} : \{\tau \le t\} \cap A \in \mathcal{F}_t \text{ for all } t \ge 0\}.$$

Then $X$ is said to have the strong Markov property if, for each stopping time $\tau$, conditional on the event $\{\tau < \infty\}$, we have that for each $t \ge 0$, $X_{\tau + t}$ is independent of $\mathcal{F}_\tau$ given $X_\tau$.

The strong Markov property implies the ordinary Markov property, since by taking the stopping time $\tau = t$ the ordinary Markov property can be deduced. [5]
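For intuition, the following simulation sketch (not from the original article; the symmetric ±1 random walk and the particular stopping time are arbitrary illustrative choices) uses a simple random walk, which has the strong Markov property: the increments observed after the stopping time "first exit from the interval (−5, 5)" are distributed like the increments of a fresh walk.

```python
import random

def post_stopping_increment(k, rng):
    """Run a symmetric +/-1 walk until it first leaves (-5, 5) -- a stopping time --
    then return its increment over the next k steps."""
    pos = 0
    while abs(pos) < 5:                 # tau = first exit time of (-5, 5)
        pos += rng.choice((-1, 1))
    start = pos
    for _ in range(k):
        pos += rng.choice((-1, 1))
    return pos - start

rng = random.Random(1)
k, n = 4, 20_000
post_tau = [post_stopping_increment(k, rng) for _ in range(n)]
fresh = [sum(rng.choice((-1, 1)) for _ in range(k)) for _ in range(n)]

# The two empirical distributions should roughly coincide.
for v in sorted(set(fresh)):
    print(v, post_tau.count(v) / n, fresh.count(v) / n)
```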

In forecasting

In the fields of predictive modelling and probabilistic forecasting, the Markov property is considered desirable because it may make it possible to reason about and solve problems that would otherwise be intractable. Such a model is known as a Markov model.

Examples

Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are "without replacement".

Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. That's because the only two remaining outcomes for this random experiment are:

Day       | Outcome 1 | Outcome 2
Yesterday | Red       | Green
Today     | Red       | Red
Tomorrow  | Green     | Red

On the other hand, if you know that both today and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow.

This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value, but is also affected by information about the past. This stochastic process of observed colors doesn't have the Markov property. Using the same experiment above, if sampling "without replacement" is changed to sampling "with replacement," the process of observed colors will have the Markov property. [6]
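The two conditional probabilities above can be checked by enumerating the equally likely orderings of the three balls (a small sketch, not part of the original article):

```python
from fractions import Fraction
from itertools import permutations

# All equally likely orderings of the draws (yesterday, today, tomorrow).
draws = list(permutations(["red", "red", "green"]))

def prob(event, given):
    """P(event | given) over the equally likely orderings."""
    kept = [d for d in draws if given(d)]
    return Fraction(sum(event(d) for d in kept), len(kept))

# Knowing only that today's ball was red:
print(prob(lambda d: d[2] == "red", lambda d: d[1] == "red"))                    # 1/2
# Knowing that both yesterday's and today's balls were red:
print(prob(lambda d: d[2] == "red", lambda d: d[0] == "red" and d[1] == "red"))  # 0
```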

An application of the Markov property in a generalized form is in Markov chain Monte Carlo computations in the context of Bayesian statistics.
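As a minimal illustration of that connection (a hypothetical sketch, not part of the original article; the standard-normal target and the step size are arbitrary), a random-walk Metropolis sampler builds a Markov chain in which every proposal and accept/reject decision uses only the current state:

```python
import math
import random

def log_target(x):
    return -0.5 * x * x                      # unnormalised log-density of a standard normal

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # depends only on the current state x
        accept_prob = math.exp(min(0.0, log_target(proposal) - log_target(x)))
        if rng.random() < accept_prob:
            x = proposal                     # accept; otherwise stay put
        samples.append(x)
    return samples

samples = metropolis(50_000)
print(sum(samples) / len(samples))           # ~ 0, the mean of the target
```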

See also


References

  1. Markov, A. A. (1954). Theory of Algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [Jerusalem, Israel Program for Scientific Translations, 1961; available from Office of Technical Services, United States Department of Commerce] Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]
  2. Dodge, Yadolah (2006). The Oxford Dictionary of Statistical Terms. Oxford University Press. ISBN 0-19-850994-4.
  3. Durrett, Rick (2010). Probability: Theory and Examples (4th ed.). Cambridge University Press.
  4. Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Berlin: Springer. ISBN 3-540-04758-1.
  5. Ethier, Stewart N.; Kurtz, Thomas G. (1986). Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. p. 158.
  6. "Example of a stochastic process which does not have the Markov property". Stack Exchange. Retrieved 2020-07-07.