This article is about the property of a stochastic process. For the class of properties of a finitely presented group, see Adian–Rabin theorem.
Figure: A single realisation of three-dimensional Brownian motion for times 0 ≤ t ≤ 2. Brownian motion has the Markov property, as the particle's future displacements do not depend on its past displacements.
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that, given its present state, its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.
The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model.
A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items.[1] An example of a model for such a field is the Ising model.
A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.
Introduction
A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markov or Markovian and known as a Markov process. Two famous classes of Markov process are the Markov chain and Brownian motion.
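To make the definition concrete, the following minimal sketch (in Python; the two-state "weather" chain and its transition matrix are illustrative assumptions, not taken from any source) simulates a Markov chain: each new state is drawn using only the current state, never the earlier path.

    import random

    # Illustrative two-state Markov chain: 0 = "sunny", 1 = "rainy".
    # P[i][j] is the probability of moving from state i to state j.
    P = [[0.9, 0.1],
         [0.5, 0.5]]

    def step(state):
        # The next state depends only on the current state -- the Markov property.
        return 0 if random.random() < P[state][0] else 1

    path = [0]
    for _ in range(10):
        path.append(step(path[-1]))
    print(path)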
Note that there is a subtle but very important point that is often missed in the plain-English statement of the definition: the state space of the process must be constant through time, so the conditional description involves a fixed "bandwidth". Without this restriction, any process could be made Markovian by augmenting its state to include the complete history from a given initial condition; but the state space of such a process would grow in dimensionality over time, which does not meet the definition.
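To illustrate the distinction (a hypothetical example, not from the article): a process whose next value depends on its last two values is not Markov on its own, but the pair of the last two values is a Markov process on a state space of fixed dimension, which is the legitimate kind of augmentation.

    import random

    # Hypothetical second-order recursion: X_{n+1} depends on X_n AND X_{n-1},
    # so the sequence (X_n) alone is not Markov.
    def next_x(x_now, x_prev):
        return 0.5 * x_now - 0.3 * x_prev + random.gauss(0, 1)

    # Augmented state Y_n = (X_n, X_{n-1}) lives in a FIXED two-dimensional
    # state space, and Y_{n+1} depends only on Y_n: the pair process is Markov.
    def next_y(y):
        x_now, x_prev = y
        return (next_x(x_now, x_prev), x_now)

    y = (0.0, 0.0)
    for _ in range(5):
        y = next_y(y)
        print(y)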
Definition

Formally, let $X = (X_t)_{t \ge 0}$ be a stochastic process on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$ with values in a measurable state space $(S, \mathcal{S})$, adapted to a filtration $(\mathcal{F}_t)_{t \ge 0}$. Then $X$ has the (elementary) Markov property if for all $s, t \ge 0$ and all $B \in \mathcal{S}$:

$$\mathbb{P}(X_{t+s} \in B \mid \mathcal{F}_s) = \mathbb{P}(X_{t+s} \in B \mid X_s) \quad \text{almost surely}.$$

In the case where $S$ is a discrete set with the discrete sigma algebra and the time index is $\mathbb{N}$, this can be reformulated as follows:

$$\mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = \mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1}).$$

$X$ is called time-homogeneous if for all $s, t \ge 0$, $x \in S$ and $B \in \mathcal{S}$ the weak Markov property holds:[3]

$$\mathbb{P}(X_{t+s} \in B \mid X_s = x) = \mathbb{P}(X_t \in B \mid X_0 = x) =: P_t(x, B).$$

The newly introduced probability measure $P_t(x, \cdot)$, $B \mapsto P_t(x, B)$, has the following intuition: it gives the probability that the process lies in some set $B$ at time $t$, when it was started in $x$ at time zero. The function $P \colon [0, \infty) \times S \times \mathcal{S} \to [0, 1]$, $(t, x, B) \mapsto P_t(x, B)$, is also called the transition function of $X$.
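For a time-homogeneous chain on a finite state space, the transition function reduces to powers of a one-step transition matrix, $P_n(x, \{y\}) = (P^n)_{xy}$. A minimal numerical sketch (the matrix is an illustrative assumption):

    import numpy as np

    # One-step transition matrix of an illustrative two-state chain.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # Time-homogeneity: the n-step transition function is the n-th matrix power,
    # P_n(x, {y}) = (P^n)[x, y], regardless of the starting time.
    P3 = np.linalg.matrix_power(P, 3)
    print(P3[0, 1])  # probability of being in state 1 at time 3, started in 0

    # Chapman-Kolmogorov relation: P^(s+t) = P^s @ P^t.
    assert np.allclose(np.linalg.matrix_power(P, 5),
                       P3 @ np.linalg.matrix_power(P, 2))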
Alternative formulations
There exist multiple alternative formulations of the elementary Markov property described above. The following are all equivalent, with each equality understood to hold almost surely:[4][5]
- For all $t \ge 0$ the $\sigma$-algebras $\mathcal{F}_t$ and $\sigma(X_u : u \ge t)$ are conditionally independent given $X_t$. In other words, for all $A \in \mathcal{F}_t$, $B \in \sigma(X_u : u \ge t)$:

  $$\mathbb{P}(A \cap B \mid X_t) = \mathbb{P}(A \mid X_t)\,\mathbb{P}(B \mid X_t).$$

- For all $s, t \ge 0$, $B \in \mathcal{S}$:

  $$\mathbb{P}(X_{t+s} \in B \mid \mathcal{F}_t) = \mathbb{P}(X_{t+s} \in B \mid X_t).$$

- For all $s, t \ge 0$, $B \in \mathcal{S}$:

  $$\mathbb{P}(X_{t+s} \in B \mid \sigma(X_u : u \le t)) = \mathbb{P}(X_{t+s} \in B \mid X_t).$$

- For all $s, t \ge 0$ and bounded and $\mathcal{S}$-measurable $f \colon S \to \mathbb{R}$:

  $$\mathbb{E}[f(X_{t+s}) \mid \mathcal{F}_t] = \mathbb{E}[f(X_{t+s}) \mid X_t].$$

- For all $s, t \ge 0$ and bounded and measurable $f \colon S \to \mathbb{R}$:

  $$\mathbb{E}[f(X_{t+s}) \mid \sigma(X_u : u \le t)] = \mathbb{E}[f(X_{t+s}) \mid X_t].$$

- For all $s, t \ge 0$ and continuous $f \colon S \to \mathbb{R}$ with compact support (for a topological state space such as $S = \mathbb{R}^d$):

  $$\mathbb{E}[f(X_{t+s}) \mid \mathcal{F}_t] = \mathbb{E}[f(X_{t+s}) \mid X_t].$$

- For all $s, t \ge 0$ and continuous $f \colon S \to \mathbb{R}$ with compact support:

  $$\mathbb{E}[f(X_{t+s}) \mid \sigma(X_u : u \le t)] = \mathbb{E}[f(X_{t+s}) \mid X_t].$$
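These formulations lend themselves to a simulation check. The sketch below (a simple ±1 random walk chosen for illustration; the walk and parameters are assumptions of the example) estimates $\mathbb{E}[f(X_{t+s}) \mid \mathcal{F}_t]$ with $f$ the identity by grouping sample paths on their full prefix up to time $t$, and compares it with grouping on $X_t$ alone; for a Markov process the two agree up to Monte Carlo error.

    import random
    from collections import defaultdict

    # Simple +/-1 random walk; f is the identity, t = 3, s = 2.
    t, s, n_paths = 3, 2, 100_000
    by_prefix, by_state = defaultdict(list), defaultdict(list)

    for _ in range(n_paths):
        steps = [random.choice((-1, 1)) for _ in range(t + s)]
        prefix = tuple(steps[:t])            # the whole history up to time t
        x_t = sum(prefix)                    # the present state X_t
        x_ts = sum(steps)                    # X_{t+s}
        by_prefix[prefix].append(x_ts)
        by_state[x_t].append(x_ts)

    # For each full history, the conditional mean matches the one computed
    # from the current state alone (both are ~ X_t for this walk).
    for prefix, vals in sorted(by_prefix.items()):
        x_t = sum(prefix)
        mean_hist = sum(vals) / len(vals)
        mean_state = sum(by_state[x_t]) / len(by_state[x_t])
        print(prefix, round(mean_hist, 3), round(mean_state, 3))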
Strong Markov property

If there exists a so-called shift-semigroup, i.e., functions $\theta_s \colon \Omega \to \Omega$, $s \ge 0$, such that

$$X_t \circ \theta_s = X_{t+s} \quad \text{for all } s, t \ge 0,$$

then $X$ is said to have the strong Markov property if, for each stopping time $\tau$, conditional on the event $\{\tau < \infty\}$, we have that for each $t \ge 0$, $X_{\tau + t}$ is independent of $\mathcal{F}_\tau$ given $X_\tau$, where $\mathcal{F}_\tau := \{A \in \mathcal{A} : \{\tau \le s\} \cap A \in \mathcal{F}_s \text{ for all } s \ge 0\}$ is the $\sigma$-algebra of events observable up to time $\tau$.
The strong Markov property implies the ordinary Markov property, since taking the deterministic stopping time $\tau = t$ recovers it.[6]
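What the strong Markov property asserts can be visualised with a simple random walk (an illustrative sketch, not a proof): after the stopping time $\tau = \inf\{n : X_n = a\}$, the walk restarts from $a$ with the same law, regardless of how long it took to get there.

    import random

    a, length, n_paths = 3, 60, 100_000
    early, late = [], []

    for _ in range(n_paths):
        steps = [random.choice((-1, 1)) for _ in range(length)]
        x, tau = 0, None
        for n, s in enumerate(steps):
            x += s
            if x == a:
                tau = n + 1          # stopping time: first hitting of level a
                break
        if tau is None or tau + 1 > length:
            continue                 # never hit a, or no step left after tau
        step_after = steps[tau]      # X_{tau+1} - X_tau
        (early if tau <= 5 else late).append(step_after)

    # Strong Markov property: the law of the walk after the stopping time tau
    # depends only on X_tau = a, not on how long the walk took to get there,
    # so both averages are ~0.
    print(sum(early) / len(early), sum(late) / len(late))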
Examples
Intuitive example
Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are "without replacement".
Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. That's because the only two remaining outcomes for this random experiment are:
    Day         Outcome 1   Outcome 2
    Yesterday   Red         Green
    Today       Red         Red
    Tomorrow    Green       Red
On the other hand, if you know that both today's and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow.
This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value but also on information about the past. The stochastic process of observed colors therefore does not have the Markov property. If, in the same experiment, sampling "without replacement" is changed to sampling "with replacement," the process of observed colors will have the Markov property.[7]
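The urn probabilities are easy to verify by enumerating the equally likely draw orders (a small illustrative script):

    from itertools import permutations

    balls = ["red", "red", "green"]
    orders = list(set(permutations(balls)))   # equally likely draw orders

    # Without replacement: condition on today's draw (index 1) being red.
    red_today = [o for o in orders if o[1] == "red"]
    p = sum(o[2] == "red" for o in red_today) / len(red_today)
    print(p)  # 0.5 -- knowing only the present

    # Also condition on yesterday's draw being red: the past changes the answer.
    red_both = [o for o in red_today if o[0] == "red"]
    p = sum(o[2] == "red" for o in red_both) / len(red_both)
    print(p)  # 0.0 -- tomorrow is certainly green, so the process is not Markov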
As a further example, the solution $X = (X_t)_{t \ge 0}$ of a stochastic differential equation

$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t,$$

where $W$ is an $m$-dimensional Brownian motion and $b, \sigma$ are autonomous (i.e., they do not depend on time) Lipschitz functions, is time-homogeneous and has the strong Markov property. If $b, \sigma$ are not autonomous, i.e. the equation reads $dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t$, then $X$ still has the elementary Markov property.[3]
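Such a diffusion can be simulated with the Euler–Maruyama scheme. The sketch below uses illustrative autonomous Lipschitz coefficients $b(x) = -x$ and $\sigma(x) = 1$ (an Ornstein–Uhlenbeck process); these choices are assumptions of the example.

    import math
    import random

    def b(x):
        return -x      # drift: autonomous (no dependence on t) and Lipschitz

    def sigma(x):
        return 1.0     # diffusion coefficient: autonomous and Lipschitz

    # Euler-Maruyama discretisation of dX_t = b(X_t) dt + sigma(X_t) dW_t.
    def simulate(x0, T=2.0, n=1000):
        dt = T / n
        x = x0
        for _ in range(n):
            dw = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
            x = x + b(x) * dt + sigma(x) * dw
        return x

    # Because the coefficients do not depend on time, the law of X_T given
    # X_0 = x0 is the same whenever the simulation is started: time-homogeneity.
    print(simulate(1.0))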
Applications
Forecasting
In the fields of predictive modelling and probabilistic forecasting, the Markov property is considered desirable because it can make reasoning about and solving problems tractable that would otherwise be intractable. Such a model is known as a Markov model.
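The tractability gain can be made concrete: forecasting by conditioning on the full history over $t$ steps must contend with on the order of $|S|^t$ possible histories, while under the Markov assumption the state distribution is propagated with one matrix-vector product per step. A minimal sketch (the transition matrix and horizon are illustrative):

    import numpy as np

    P = np.array([[0.9, 0.1],     # illustrative one-step transition matrix
                  [0.5, 0.5]])
    pi0 = np.array([1.0, 0.0])    # today's state distribution

    # Markov forecast: t matrix-vector products, O(t * |S|^2) work,
    # instead of enumerating the |S|**t possible histories.
    t = 30
    pi = pi0
    for _ in range(t):
        pi = pi @ P
    print(pi)          # forecast distribution t steps ahead
    print(2 ** t)      # number of length-t histories a non-Markov model faces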
References

- Protter, Philip (1992). Stochastic Integration and Differential Equations (2nd ed.). Springer-Verlag Berlin Heidelberg. pp. 235–242. ISBN 978-3-662-02619-9.
- Chung, Kai Lai; Walsh, John B. (2005). Markov Processes, Brownian Motion, and Time Symmetry (2nd ed.). Springer Science+Business Media. pp. 1–5. ISBN 978-0-387-22026-0.
- Ethier, Stewart N.; Kurtz, Thomas G. (1986). Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. p. 158.