In probability theory, a Hunt process is a type of Markov process, named for the mathematician Gilbert A. Hunt, who first defined them in 1957. Hunt processes were important in the study of probabilistic potential theory until they were superseded by right processes in the 1970s.
In the 1930s–1950s, the work of mathematicians such as Joseph Doob, William Feller, Mark Kac, and Shizuo Kakutani developed connections between Markov processes and potential theory. [1]
In 1957-8 Gilbert A. Hunt published a triplet of papers [2] [3] [4] which deepened that connection. The impact of these papers on the probabilist community of the time was significant. Joseph Doob said that "Hunt’s great papers on the potential theory generated by Markov transition functions revolutionized potential theory." [5] Ronald Getoor described them as "a monumental work of nearly 170 pages that contained an enormous amount of truly original mathematics." [6] Gustave Choquet wrote that Hunt's papers were "fundamental memoirs which were renewing at the same time potential theory and the theory of Markov processes by establishing a precise link, in a very general framework, between an important class of Markov processes and the class of kernels in potential theory which French probabilists had just been studying." [7]
One of Hunt's contributions was to group together several properties that a Markov process should have in order to be studied via potential theory, which he called "hypothesis (A)". A stochastic process satisfies hypothesis (A) if the following three assumptions hold: it is a strong Markov process, its sample paths are almost surely right continuous with left limits, and it is quasi-left continuous. [2]
Processes satisfying hypothesis (A) soon became known as Hunt processes. If the third assumption is slightly weakened, so that quasi-left continuity is required to hold only up to the lifetime of the process, then the process is called a "standard process", a term that was introduced by Eugene Dynkin. [8] [9]
The book "Markov Processes and Potential Theory" [10] (1968) by Blumenthal and Getoor codified standard and Hunt processes as the archetypal Markov processes. [11] Over the next few years probabilistic potential theory was concerned almost exclusively with these processes.
Of the three assumptions contained in Hunt's hypothesis (A), the most restrictive is quasi-left continuity. Getoor and Glover write: "In proving many of his results, Hunt assumed certain additional regularity hypotheses about his processes. ... It slowly became clear that it was necessary to remove many of these regularity hypotheses in order to advance the theory." [12] Already in the 1960s attempts were being made to assume quasi-left continuity only when necessary. [13]
In 1970, Chung-Tuo Shih extended two of Hunt's fundamental results, completely removing the need for left limits (and thus also quasi-left continuity). [14] This led to the definition of right processes as the new class of Markov processes for which potential theory could work. [15] Already in 1975, Getoor wrote that Hunt processes were "mainly of historical interest". [16] By the time that Michael Sharpe published his book "General Theory of Markov Processes" in 1988, Hunt and standard processes were considered obsolete in probabilistic potential theory. [15]
Hunt processes are still studied by mathematicians, most often in relation to Dirichlet forms. [17] [18] [19]
A Hunt process is a strong Markov process on a Polish space that is càdlàg and quasi-left continuous; that is, if (T_n) is an increasing sequence of stopping times with limit T, then X_{T_n} → X_T almost surely on the event {T < ∞}.
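The role of quasi-left continuity is clarified by a classical counterexample (not taken from the sources cited here, but standard in the literature): a process with a jump at a fixed, deterministic time is càdlàg but fails quasi-left continuity, because the jump is "announced" by an increasing sequence of stopping times.

```latex
% A classical counterexample: a deterministic jump at time 1.
X_t = \mathbf{1}_{\{t \ge 1\}} .
% Take the stopping times T_n = 1 - \tfrac{1}{n}, which increase to T = 1. Then
\lim_{n \to \infty} X_{T_n} = 0 \neq 1 = X_T ,
% so X is c\`adl\`ag but not quasi-left continuous. By contrast, a Poisson
% process jumps only at totally inaccessible times and is quasi-left continuous.
```

This is why quasi-left continuity is sometimes phrased as the absence of jumps at predictable times.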
Let E be a Radon space, let 𝓔 be the σ-algebra of universally measurable subsets of E, and let (P_t) be a Markov semigroup on (E, 𝓔) that preserves 𝓔. A Hunt process is a collection X = (Ω, 𝓕, 𝓕_t, X_t, θ_t, P^x) satisfying the following conditions: [20]
Sharpe [20] shows in Lemma 2.6 that conditions (i)-(v) imply measurability of the map x ↦ P^x(X_t ∈ B) for all t and B, and in Theorem 7.4 that (vi)-(vii) imply the strong Markov property with respect to the filtration (𝓕_t).
The following inclusions hold among various classes of Markov process: [21] [22]
{Lévy} ⊂ {Itô} ⊂ {Feller} ⊂ {Hunt} ⊂ {special standard} ⊂ {standard} ⊂ {right} ⊂ {strong Markov}
In 1980 Çinlar et al. [23] proved that a real-valued Hunt process is a semimartingale if and only if it is a random time-change of an Itô process. More precisely, [24] a Hunt process on ℝ^n (equipped with the Borel σ-algebra) is a semimartingale if and only if it can be written as X_t = Y_{τ_t}, where Y is an Itô process and (τ_t) is the right-continuous inverse of an additive functional of Y.
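The time-change construction can be illustrated numerically. The sketch below is a minimal illustration, not the construction from Çinlar et al.: it assumes a hypothetical positive rate function g, uses plain Brownian motion as the Itô process Y, builds the additive functional A_s = ∫₀^s g(Y_r) dr by a Riemann sum, and evaluates the time-changed process X_t = Y_{τ_t} with τ_t = inf{s : A_s > t}.

```python
import numpy as np

rng = np.random.default_rng(0)

# An Ito process Y: here simply a discretized Brownian motion on [0, S].
S, n = 10.0, 100_000
ds = S / n
s_grid = np.linspace(0.0, S, n + 1)
Y = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(ds), n))])

# A hypothetical positive rate function on the state space (illustrative choice).
def g(y):
    return 1.0 + y**2

# Additive functional A_s = integral of g(Y_r) dr, via a left-endpoint Riemann sum.
A = np.concatenate([[0.0], np.cumsum(g(Y[:-1]) * ds)])

# Time change tau_t = inf{s : A_s > t}, evaluated on a coarse grid of times t.
t_grid = np.linspace(0.0, 0.9 * A[-1], 50)
idx = np.minimum(np.searchsorted(A, t_grid, side="right"), n)
tau = s_grid[idx]

# The time-changed process X_t = Y_{tau_t}.
X = Y[idx]

# tau is nondecreasing (A is strictly increasing since g > 0),
# so X traverses the states of Y in order, just at a different speed.
assert np.all(np.diff(tau) >= 0)
```

Since g > 0, the functional A is strictly increasing and the inverse τ is well defined; `np.searchsorted` on the nondecreasing array A is what plays the role of the infimum here.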
Itô processes were first named due to their role in this theorem, [25] though Itô had previously studied them. [26]