Hitting time

In the study of stochastic processes in mathematics, a hitting time (or first hit time) is the first time at which a given process "hits" a given subset of the state space. Exit times and return times are also examples of hitting times.

Definitions

Let T be an ordered index set, such as the natural numbers ℕ, the non-negative real numbers [0, +∞), or a subset of these; its elements can be thought of as "times". Given a probability space (Ω, Σ, Pr) and a measurable state space S, let X : T × Ω → S be a stochastic process, and let A be a measurable subset of the state space S. Then the first hit time τ_A : Ω → [0, +∞] is the random variable defined by

τ_A(ω) = inf { t ∈ T : X_t(ω) ∈ A },

with the usual convention that the infimum of the empty set is +∞, so that τ_A(ω) = +∞ if the process never hits A.

The first exit time (from A) is defined to be the first hit time for S \ A, the complement of A in S. Confusingly, this is also often denoted by τ_A.[1]

The first return time is defined to be the first hit time for the singleton set {X_0(ω)}, which is usually a given deterministic element of the state space, such as the origin of the coordinate system.

Examples
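
As a concrete illustration (an added sketch, not one of the article's original examples), consider a simple symmetric random walk on the integers started at 0. The Python snippet below simulates one path and computes a first hit time, a first exit time (as the first hit time of the complement), and a first return time; all function names and parameter choices here are illustrative, not a standard API.

    import random

    def first_hit_time(path, in_target):
        # First index t with X_t in the target set; None plays the role of +infinity
        # (the path never hits the set within the simulated horizon).
        for t, x in enumerate(path):
            if in_target(x):
                return t
        return None

    def simulate_walk(steps, start=0):
        # Simple symmetric random walk on the integers, started at 'start'.
        path = [start]
        for _ in range(steps):
            path.append(path[-1] + random.choice((-1, 1)))
        return path

    random.seed(0)
    path = simulate_walk(10_000)

    # First hit time of A = {5}.
    tau_A = first_hit_time(path, lambda x: x == 5)

    # First exit time from A = {-2, ..., 2}: the first hit time of the complement S \ A.
    tau_exit = first_hit_time(path, lambda x: abs(x) > 2)

    # First return time to X_0 = 0: the infimum is taken over t > 0, otherwise it
    # would trivially be 0.
    tau_return = first_hit_time(path[1:], lambda x: x == path[0])
    tau_return = None if tau_return is None else tau_return + 1

    print(tau_A, tau_exit, tau_return)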

Début theorem

The hitting time of a set F is also known as the début of F. The Début theorem says that the hitting time of a measurable set F, for a progressively measurable process, is a stopping time. Progressively measurable processes include, in particular, all right-continuous and left-continuous adapted processes. The proof that the début is measurable is rather involved and relies on properties of analytic sets. The theorem requires the underlying probability space to be complete or, at least, universally complete.
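
In discrete time the measurability question is elementary: {τ_A ≤ n} is the union over k ≤ n of the events {X_k ∈ A}, so whether A has been hit by time n can be decided from X_0, …, X_n alone, which is exactly the stopping-time property. The short Python sketch below (an added illustration of this discrete-time observation, not the proof of the theorem itself) makes the point explicit.

    def hit_by_time_n(path_prefix, in_target):
        # Decides the event {tau_A <= n} from X_0, ..., X_n only: in discrete time,
        # {tau_A <= n} is the union over k <= n of {X_k in A}, an event in F_n.
        return any(in_target(x) for x in path_prefix)

    path = [0, 1, 2, 1, 2, 3, 4]                        # one sample path, times 0..6
    print(hit_by_time_n(path[:5], lambda x: x >= 3))    # {tau <= 4}: False
    print(hit_by_time_n(path[:6], lambda x: x >= 3))    # {tau <= 5}: True (hit at t = 5)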

The converse of the Début theorem states that every stopping time defined with respect to a filtration over a real-valued time index can be represented by a hitting time. In particular, for essentially any such stopping time there exists an adapted, non-increasing process with càdlàg (RCLL) paths that takes only the values 0 and 1, such that the hitting time of the set {0} by this process is the considered stopping time. The proof is very simple.[2]
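
A minimal sketch of the idea in discrete time (an added illustration with invented names, not the construction from the cited paper verbatim): given a stopping time τ, the process that equals 1 before τ and 0 from τ onwards is adapted, non-increasing and {0, 1}-valued, and its first hit time of {0} is exactly τ.

    def indicator_process(tau, horizon):
        # V_t = 1 while t < tau and V_t = 0 for t >= tau: non-increasing, {0,1}-valued,
        # and adapted, since whether t < tau is known by time t for a stopping time tau.
        return [1 if t < tau else 0 for t in range(horizon + 1)]

    def first_hit_time(path, in_target):
        for t, x in enumerate(path):
            if in_target(x):
                return t
        return None

    tau = 7                                   # a realised value of some stopping time
    V = indicator_process(tau, horizon=20)
    assert first_hit_time(V, lambda v: v == 0) == tau   # hitting {0} recovers tau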

Related Research Articles

In mathematical analysis and in probability theory, a σ-algebra on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The pair (X, Σ) is called a measurable space.

In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.

In probability theory, a martingale is a sequence of random variables for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values.

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

In mathematics, a filtration is an indexed family (S_i)_{i∈I} of subobjects of a given algebraic structure S, with the index i running over some totally ordered index set I, subject to the condition that S_i ⊆ S_j whenever i ≤ j.

In probability theory, in particular in the study of stochastic processes, a stopping time is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.

A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.

In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressively measurable is a strictly stronger property than the notion of being an adapted process. Progressively measurable processes are important in the theory of Itô integrals.

In mathematics, the Kolmogorov extension theorem is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. It is credited to the English mathematician Percy John Daniell and the Russian mathematician Andrey Nikolaevich Kolmogorov.

In probability theory relating to stochastic processes, a Feller process is a particular kind of Markov process.

In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance. A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming.

In mathematics, a local martingale is a type of stochastic process, satisfying the localized version of the martingale property. Every martingale is a local martingale; every bounded local martingale is a martingale; in particular, every local martingale that is bounded from below is a supermartingale, and every local martingale that is bounded from above is a submartingale; however, in general a local martingale is not a martingale, because its expectation can be distorted by large values of small probability. In particular, a driftless diffusion process is a local martingale, but not necessarily a martingale.

In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time.

In probability theory, a standard probability space, also called a Lebesgue–Rokhlin probability space or just a Lebesgue space, is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Informally, it is a probability space consisting of an interval and/or a finite or countable number of atoms.

Carathéodory's criterion is a result in measure theory, formulated by the Greek mathematician Constantin Carathéodory, that characterizes when a set is Lebesgue measurable.

In probability theory, the optional stopping theorem says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far. Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies.

In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times. Harris chains are regenerative processes and are named after Theodore Harris. The theory of Harris chains and Harris recurrence is useful for treating Markov chains on general state spaces.

In the theory of stochastic processes in discrete time, a part of the mathematical theory of probability, the Doob decomposition theorem gives a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process starting at zero. The theorem was proved by, and is named for, Joseph L. Doob.

In mathematics, especially measure theory, a set function is a function whose domain is a family of subsets of some given set and that (usually) takes its values in the extended real number line, which consists of the real numbers together with +∞ and −∞.

The Engelbert–Schmidt zero–one law is a theorem that gives a mathematical criterion for an event associated with a continuous, non-decreasing additive functional of Brownian motion to have probability either 0 or 1, without the possibility of an intermediate value. This zero-one law is used in the study of questions of finiteness and asymptotic behavior for stochastic differential equations. This 0-1 law, published in 1981, is named after Hans-Jürgen Engelbert and the probabilist Wolfgang Schmidt.

References

  1. Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. ISBN 978-3-540-04758-2.
  2. Fischer, Tom (2013). "On simple representations of stopping times and stopping time sigma-algebras". Statistics and Probability Letters. 83 (1): 345–349. arXiv:1112.1603. doi:10.1016/j.spl.2012.09.024.