Hunt process

In probability theory, a Hunt process is a type of Markov process, named for the mathematician Gilbert A. Hunt, who first defined them in 1957. Hunt processes were important in the study of probabilistic potential theory until they were superseded by right processes in the 1970s.

History

Background

In the 1930s through the 1950s, the work of mathematicians such as Joseph Doob, William Feller, Mark Kac, and Shizuo Kakutani developed connections between Markov processes and potential theory. [1]

In 1957 and 1958 Gilbert A. Hunt published a series of three papers [2] [3] [4] which deepened that connection. The impact of these papers on the probabilist community of the time was significant. Joseph Doob said that "Hunt's great papers on the potential theory generated by Markov transition functions revolutionized potential theory." [5] Ronald Getoor described them as "a monumental work of nearly 170 pages that contained an enormous amount of truly original mathematics." [6] Gustave Choquet wrote that Hunt's papers were "fundamental memoirs which were renewing at the same time potential theory and the theory of Markov processes by establishing a precise link, in a very general framework, between an important class of Markov processes and the class of kernels in potential theory which French probabilists had just been studying." [7]

One of Hunt's contributions was to group together several properties that a Markov process should have in order to be studied via potential theory, which he called "hypothesis (A)". A stochastic process satisfies hypothesis (A) if the following three assumptions hold: [2]

First assumption: X is a Markov process on a Polish space with càdlàg paths.
Second assumption: X satisfies the strong Markov property.
Third assumption: X is quasi-left continuous on [0, ∞).

Processes satisfying hypothesis (A) soon became known as Hunt processes. If the third assumption is slightly weakened so that quasi-left continuity holds only on [0, ζ), where ζ denotes the lifetime of X, then X is called a "standard process", a term that was introduced by Eugene Dynkin. [8] [9]

Rise and fall

The book "Markov Processes and Potential Theory" [10] (1968) by Blumenthal and Getoor codified standard and Hunt processes as the archetypal Markov processes. [11] Over the next few years probabilistic potential theory was concerned almost exclusively with these processes.

Of the three assumptions contained in Hunt's hypothesis (A), the most restrictive is quasi-left continuity. Getoor and Glover write: "In proving many of his results, Hunt assumed certain additional regularity hypotheses about his processes. ... It slowly became clear that it was necessary to remove many of these regularity hypotheses in order to advance the theory." [12] Already in the 1960s attempts were being made to assume quasi-left continuity only when necessary. [13]

In 1970, Chung-Tuo Shih extended two of Hunt's fundamental results, [note 1] completely removing the need for left limits (and thus also quasi-left continuity). [14] This led to the definition of right processes as the new class of Markov processes for which potential theory could work. [15] Already in 1975, Getoor wrote that Hunt processes were "mainly of historical interest". [16] By the time that Michael Sharpe published his book "General Theory of Markov Processes" in 1988, Hunt and standard processes were considered obsolete in probabilistic potential theory. [15]

Hunt processes are still studied by mathematicians, most often in relation to Dirichlet forms. [17] [18] [19]

Definition

Brief definition

A Hunt process X is a strong Markov process on a Polish space that is càdlàg and quasi-left continuous; that is, if (T_n) is an increasing sequence of stopping times with limit T, then

lim_{n→∞} X_{T_n} = X_T   almost surely on the event {T < ∞}.
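As an illustration (a standard textbook-style example, not taken from Hunt's papers), quasi-left continuity fails for a process that jumps at a fixed, predictable time, whereas processes that jump only at totally inaccessible times, such as the Poisson process, satisfy it:

```latex
% Deterministic unit jump at the fixed time t = 1:
X_t = t \,\mathbf{1}_{\{t < 1\}} + (t + 1)\,\mathbf{1}_{\{t \ge 1\}} .
% Take the stopping times T_n = 1 - 1/n, so that T_n \uparrow T = 1.  Then
\lim_{n \to \infty} X_{T_n} = 1 \neq 2 = X_T ,
% so X is not quasi-left continuous, hence not a Hunt process.  A Poisson
% process, by contrast, jumps only at totally inaccessible times and is
% quasi-left continuous (indeed it is a Lévy, hence Hunt, process).
```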

Verbose definition

Let E be a Radon space and ℰ the σ-algebra of universally measurable subsets of E, and let (P_t)_{t≥0} be a Markov semigroup on (E, ℰ) that preserves ℰ-measurability. A Hunt process is a collection (Ω, ℱ, (ℱ_t), (X_t), (θ_t), (P^x)) satisfying the following conditions: [20]

(i) (Ω, ℱ, (ℱ_t)) is a filtered measurable space, and each P^x is a probability measure on (Ω, ℱ).
(ii) For every x ∈ E, (X_t)_{t≥0} is an E-valued stochastic process on (Ω, ℱ, P^x), and is adapted to (ℱ_t).
(iii) (normality) For every x ∈ E, P^x(X_0 = x) = 1.
(iv) (Markov property) For every x ∈ E, all s, t ≥ 0, and every bounded ℰ-measurable function f, E^x[f(X_{t+s}) | ℱ_t] = (P_s f)(X_t) P^x-almost surely.
(v) (θ_t)_{t≥0} is a collection of maps Ω → Ω such that θ_t ∘ θ_s = θ_{t+s} for each s, t ≥ 0, and X_s ∘ θ_t = X_{s+t}.
(vi) (ℱ_t) is augmented and right continuous.
(vii) (right-continuity) For every x ∈ E, every α ≥ 0, and every α-excessive (with respect to (P_t)) function f, the map t ↦ f(X_t) is almost surely right continuous under P^x.
(viii) (quasi-left continuity) For every x ∈ E, if (T_n) is an increasing sequence of (ℱ_t)-stopping times with limit T, then X_{T_n} → X_T P^x-almost surely on the event {T < ∞}.

Sharpe [20] shows in Lemma 2.6 that conditions (i)-(v) imply measurability of the map x ↦ E^x[f(X_t)] for all t ≥ 0 and all bounded ℰ-measurable f, and in Theorem 7.4 that (vi)-(vii) imply the strong Markov property with respect to (ℱ_t).

Connection to other Markov processes

The following inclusions hold among various classes of Markov process: [21] [22]

{Lévy} ⊂ {Itô} ⊂ {Feller} ⊂ {Hunt} ⊂ {special standard} ⊂ {standard} ⊂ {right} ⊂ {strong Markov}

Time-changed Itô processes

In 1980 Çinlar et al. [23] proved that a real-valued Hunt process is a semimartingale if and only if it is a random time-change of an Itô process. More precisely, [24] a Hunt process X on ℝ (equipped with the Borel σ-algebra) is a semimartingale if and only if there exist an Itô process Y and a strictly positive measurable function f such that X is obtained from Y by the random time change τ_t = ∫_0^t f(X_s) ds, i.e. X_t = Y_{τ_t}. Itô processes were in fact first named for their role in this theorem, [25] though Itô had previously studied them. [26]
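The representation can be sketched numerically. In the following simulation the drift of Y, the clock-speed function a, and the Euler discretization are all illustrative choices, not taken from Çinlar et al.: an Itô process (here simply a Brownian motion with drift) is run with a random clock whose speed depends on the current state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ito process Y: a Brownian motion with drift, Y_t = 0.1 t + W_t,
# on a fine grid over [0, T].
T, n = 10.0, 10_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
Y = np.concatenate([[0.0], np.cumsum(0.1 * dt + dW)])

def a(x):
    """Hypothetical strictly positive 'clock speed', here 0 < a(x) <= 1."""
    return 1.0 / (1.0 + x * x)

# Build the time-changed process X_t = Y_{tau_t}, where the random clock
# tau solves d tau / dt = a(X_t), via a forward Euler scheme.
tau = np.zeros(n + 1)
X = np.zeros(n + 1)
X[0] = Y[0]
for i in range(n):
    tau[i + 1] = tau[i] + a(X[i]) * dt
    j = min(int(tau[i + 1] / dt), n)   # nearest grid point of Y
    X[i + 1] = Y[j]

# The clock is strictly increasing, and runs no faster than real time here
# because a <= 1.
assert np.all(np.diff(tau) > 0)
assert tau[-1] <= T
```

Since a is strictly positive, the clock τ is strictly increasing, so X traverses the same path as Y at a state-dependent rate; this is the sense in which the time-changed process inherits the semimartingale property.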

Notes

  1. These are Propositions 2.1 and 2.2 of "Markoff Processes and Potentials I". Blumenthal and Getoor had previously extended these from Hunt processes to standard processes in Theorem III.6.1 of their 1968 book.


References

  1. Blumenthal, Getoor (1968), vii
  2. Hunt, G.A. (1957). "Markoff Processes and Potentials I". Illinois J. Math. 1: 44–93.
  3. Hunt, G.A. (1957). "Markoff Processes and Potentials II". Illinois J. Math. 1: 313–369.
  4. Hunt, G.A. (1958). "Markoff Processes and Potentials III". Illinois J. Math. 2: 151–213.
  5. Snell, J. Laurie (1997). "A Conversation with Joe Doob". Statistical Science. 12 (4): 301–311. doi: 10.1214/ss/1030037961 .
  6. Getoor, Ronald (1980). "Review: Probabilities and potential, by C. Dellacherie and P. A. Meyer". Bull. Amer. Math. Soc. (N.S.). 2 (3): 510–514. doi: 10.1090/s0273-0979-1980-14787-4 .
  7. As quoted by Marc Yor in Yor, Marc (2006). "The Life and Scientific Work of Paul André Meyer (August 21st, 1934 - January 30th, 2003) "Un modèle pour nous tous"". Memoriam Paul-André Meyer. Lecture Notes in Mathematics. Vol. 1874. doi:10.1007/978-3-540-35513-7_2.
  8. Blumenthal, Getoor (1968), 296
  9. Dynkin, E.B. (1960). "Transformations of Markov Processes Connected with Additive Functionals" (PDF). Berkeley Symp. on Math. Statist. and Prob. 4 (2): 117–142.
  10. Blumenthal, Robert K.; Getoor, Ronald K. (1968). Markov Processes and Potential Theory. New York: Academic Press.
  11. "Ever since the publication of the book by Blumenthal and Getoor, standard processes have been the central class of Markov processes in probabilistic potential theory", p277, Chung, Kai Lai; Walsh, John B. (2005). Markov Processes, Brownian Motion, and Time Symmetry. Grundlehren der mathematischen Wissenschaften. New York, NY: Springer. doi:10.1007/0-387-28696-9. ISBN   978-0-387-22026-0.
  12. Getoor, R.K.; Glover, J. (September 1984). "Riesz decompositions in Markov process theory". Transactions of the American Mathematical Society. 285 (1): 107–132.
  13. Chung, K.L.; Walsh, John B. (1969), "To reverse a Markov process", Acta Mathematica, 123: 225–251, doi:10.1007/BF02392389
  14. Shih, Chung-Tuo (1970). "On extending potential theory to all strong Markov processes". Ann. Inst. Fourier (Grenoble). 20 (1): 303–415. doi: 10.5802/aif.343 .
  15. Meyer, Paul André (1989). "Review: "General theory of Markov processes" by Michael Sharpe". Bull. Amer. Math. Soc. (N.S.). 20 (21): 292–296. doi: 10.1090/S0273-0979-1989-15833-3 .
  16. p56, Getoor, Ronald K. (1975). Markov Processes: Ray Processes and Knight Processes. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer. ISBN   978-3-540-07140-2.
  17. Fukushima, Masatoshi; Oshima, Yoichi; Takeda, Masayoshi (1994). Dirichlet Forms and Symmetric Markov Processes. De Gruyter. doi:10.1515/9783110889741.
  18. Applebaum, David (2009), Lévy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, Cambridge University Press, p. 196, ISBN   9780521738651
  19. Krupka, Demeter (2000), Introduction to Global Variational Geometry, North-Holland Mathematical Library, vol. 23, Elsevier, pp. 87ff, ISBN   9780080954295
  20. Sharpe, Michael (1988). General Theory of Markov Processes. Academic Press, San Diego. ISBN 0-12-639060-6.
  21. p55, Getoor, Ronald K. (1975). Markov Processes: Ray Processes and Knight Processes. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer. ISBN   978-3-540-07140-2.
  22. p515, Çinlar, Erhan (2011). Probability and Stochastics. Graduate Texts in Mathematics. New York, NY: Springer. ISBN   978-0-387-87858-4.
  23. Çinlar, E.; Jacod, J.; Protter, P.; Sharpe, M.J. (1980). "Semimartingales and Markov processes". Z. Wahrscheinlichkeitstheorie verw. Gebiete. 54 (2): 161–219. doi:10.1007/BF00531446.
  24. Theorem 3.35, Çinlar, E.; Jacod, J. (1981). "Representation of Semimartingale Markov Processes in Terms of Wiener Processes and Poisson Random Measures". Seminar on Stochastic Processes, 1981. pp. 159–242. doi:10.1007/978-1-4612-3938-3_8.
  25. p164-5, "Thus, the processes whose extended generators have the form (1.1) are of central importance among semimartingale Markov processes, and deserve a name of their own. We call them Itô processes." Çinlar, E.; Jacod, J.; Protter, P.; Sharpe, M.J. (1980). "Semimartingales and Markov processes". Z. Wahrscheinlichkeitstheorie verw. Gebiete. 54 (2): 161–219. doi:10.1007/BF00531446.
  26. Itô, Kiyosi (1951). On stochastic differential equations. Memoirs of the American Mathematical Society. American Mathematical Society. doi:10.1090/memo/0004. ISBN   978-0-8218-1204-4.
