Reflection principle (Wiener process)

Figure: Simulation of a Wiener process (black curve). When the process reaches the crossing point a = 50 at t ≈ 3000, both the original process and its reflection (red curve) about the line a = 50 (blue line) are shown. After the crossing point, the black and red curves have the same distribution.

In probability theory, the reflection principle for a Wiener process states that if the path of a Wiener process W(t) reaches a value W(s) = a at time t = s, then the subsequent path after time s has the same distribution as the reflection of the subsequent path about the value a. [1] More formally, the reflection principle refers to a lemma concerning the distribution of the supremum of the Wiener process, or Brownian motion. The result relates the distribution of the supremum of Brownian motion up to time t to the distribution of the process at time t. It is a corollary of the strong Markov property of Brownian motion.


Statement

If $(W(t) : t \ge 0)$ is a Wiener process and $a > 0$ is a threshold (also called a crossing point), then the lemma states:

$$\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right) = 2\,\mathbb{P}\left(W(t) \ge a\right).$$

Assuming $a > 0$, due to the continuity of Wiener processes, each path (one sampled realization) of the Wiener process on $(0,t)$ which finishes at or above the value (level, threshold, crossing point) $a$ at time $t$, i.e. with $W(t) \ge a$, must have reached the threshold, $W(t_a) = a$, at some earlier time $t_a \le t$ for the first time. (It can cross the level $a$ multiple times on the interval $(0,t)$; we take the earliest such time.)

For every such path, one can define another path $W'$ on $(0,t)$ that is reflected (vertically flipped) about the level $a$ on the sub-interval $(t_a, t)$, namely $W'(s) = W(s)$ for $s \le t_a$ and $W'(s) = 2a - W(s)$ for $s > t_a$. These reflected paths are also sample paths of the Wiener process reaching the value $a$ on the interval $(0,t)$, but they finish below $a$. Thus, of all the paths that reach $a$ on the interval $(0,t)$, half finish below $a$ and half finish above. Hence, the probability of finishing above $a$ is half that of reaching $a$.
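This counting argument can be checked by simulation. Below is a minimal Monte Carlo sketch using discretized Wiener paths; the horizon $t$, threshold $a$, grid size, and sample count are arbitrary illustrative choices, and the discrete grid slightly underestimates the true supremum.

```python
import numpy as np

rng = np.random.default_rng(0)

t, a = 1.0, 1.0                    # time horizon and threshold (illustrative choices)
n_steps, n_paths = 500, 20_000
dt = t / n_steps

# Discretized increments of a standard Wiener process, one row per path.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

sup_w = paths.max(axis=1)          # discretized running maximum over (0, t]
w_t = paths[:, -1]                 # terminal value W(t)

lhs = np.mean(sup_w >= a)          # estimate of P(sup_{0 <= s <= t} W(s) >= a)
rhs = 2 * np.mean(w_t >= a)        # estimate of 2 P(W(t) >= a)
print(f"P(sup >= a) ~ {lhs:.4f}   2 P(W(t) >= a) ~ {rhs:.4f}")
```

Both estimates should agree up to Monte Carlo and discretization error.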

In a stronger form, the reflection principle says that if $\tau$ is a stopping time then the reflection of the Wiener process starting at $\tau$, denoted $(W^{\tau}(t) : t \ge 0)$, is also a Wiener process, where:

$$W^{\tau}(t) = W(t)\,\mathbf{1}_{\{t \le \tau\}} + \left(2 W(\tau) - W(t)\right)\mathbf{1}_{\{t > \tau\}},$$

and the indicator function $\mathbf{1}_{\{t \le \tau\}}$ equals $1$ if $t \le \tau$ and $0$ otherwise, and $\mathbf{1}_{\{t > \tau\}}$ is defined similarly. The stronger form implies the original lemma by choosing $\tau = \inf\{s \ge 0 : W(s) = a\}$.
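A minimal sketch of this construction on a discretized path: the first grid index at which the simulated path reaches $a$ plays the role of the stopping time $\tau$, and the path is flipped about $W(\tau)$ from that point on. The grid size and threshold below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

t, a = 1.0, 0.5                        # illustrative horizon and crossing point
n_steps = 10_000
dt = t / n_steps

# One discretized Wiener path on [0, t], starting from W(0) = 0.
w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))

hits = np.flatnonzero(w >= a)
if hits.size > 0:
    k = hits[0]                        # discrete analogue of the stopping time tau
    w_reflected = w.copy()
    # W^tau(s) = W(s) for s <= tau and 2 W(tau) - W(s) for s > tau.
    # (On a grid, w[k] may slightly overshoot the level a.)
    w_reflected[k:] = 2.0 * w[k] - w[k:]
else:
    w_reflected = w.copy()             # the path never reached a on [0, t]
```

Plotting `w` and `w_reflected` against the same time grid reproduces the kind of picture shown in the figure above.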

Proof

The earliest stopping time for reaching the crossing point $a$, $\tau_a := \inf\{t : W(t) = a\}$, is an almost surely finite stopping time. Then we can apply the strong Markov property to deduce that the relative path subsequent to $\tau_a$, given by $X(t) := W(t + \tau_a) - a$, is also a standard Brownian motion independent of $\mathcal{F}^{W}_{\tau_a}$. Then the probability that $W$ reaches the level $a$ at some time in the interval $[0, t]$ can be decomposed as

$$\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right) = \mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a,\ W(t) \ge a\right) + \mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a,\ W(t) < a\right) = \mathbb{P}\left(W(t) \ge a\right) + \mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a,\ W(t) < a\right).$$

By the tower property for conditional expectations, the second term reduces to

$$\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a,\ W(t) < a\right) = \mathbb{E}\left[\mathbf{1}_{\{\tau_a \le t\}}\,\mathbb{P}\left(W(t) < a \mid \mathcal{F}^{W}_{\tau_a}\right)\right] = \frac{1}{2}\,\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right),$$

since $X(t) := W(t + \tau_a) - a$ is a standard Brownian motion independent of $\mathcal{F}^{W}_{\tau_a}$, so on the event $\{\tau_a \le t\}$ the value $W(t) = a + X(t - \tau_a)$ is less than $a$ with probability $1/2$. The proof of the lemma is completed by substituting this into the decomposition above, which gives [2]

$$\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right) = \mathbb{P}\left(W(t) \ge a\right) + \frac{1}{2}\,\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right), \qquad\text{i.e.}\qquad \mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right) = 2\,\mathbb{P}\left(W(t) \ge a\right).$$
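For concreteness, since $W(t)$ is normally distributed with mean $0$ and variance $t$, the lemma can be written explicitly in terms of the standard normal distribution function $\Phi$:

$$\mathbb{P}\left(\sup_{0\le s\le t} W(s) \ge a\right) = 2\,\mathbb{P}\left(W(t) \ge a\right) = 2\left(1 - \Phi\!\left(\frac{a}{\sqrt{t}}\right)\right), \qquad a \ge 0.$$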

Consequences

The reflection principle is often used to simplify distributional properties of Brownian motion. Considering Brownian motion on the restricted interval $[0, 1]$, the reflection principle allows us to prove that the location of its maximum $t_{\max}$, satisfying $W(t_{\max}) = \sup_{0\le s\le 1} W(s)$, has the arcsine distribution. This is one of the Lévy arcsine laws. [3]
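A minimal simulation sketch of this law: record where each of a sample of discretized Brownian paths on $[0, 1]$ attains its maximum and compare the empirical distribution with the arcsine distribution function $\tfrac{2}{\pi}\arcsin\sqrt{x}$. The grid size and sample count below are arbitrary, and the discretization introduces a small bias.

```python
import numpy as np

rng = np.random.default_rng(2)

n_steps, n_paths = 1_000, 10_000   # illustrative grid and sample sizes
dt = 1.0 / n_steps

# Discretized Brownian paths on [0, 1], starting from 0.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

# Location in [0, 1] of the maximum of each path.
t_max = paths.argmax(axis=1) * dt

# Compare the empirical distribution with the arcsine CDF (2/pi) * arcsin(sqrt(x)).
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    empirical = np.mean(t_max <= x)
    arcsine = (2.0 / np.pi) * np.arcsin(np.sqrt(x))
    print(f"x = {x:.2f}:  empirical {empirical:.3f}   arcsine {arcsine:.3f}")
```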


References

  1. Jacobs, Kurt (2010). Stochastic Processes for Physicists. Cambridge University Press. pp. 57–59. ISBN 9781139486798.
  2. Mörters, Peter; Peres, Yuval (2010). Brownian Motion. Cambridge University Press. ISBN 978-0-521-76018-8.
  3. Lévy, Paul (1940). "Sur certains processus stochastiques homogènes". Compositio Mathematica. 7: 283–339. Retrieved 15 February 2013.