Stochastic processes and boundary value problems

In mathematics, some boundary value problems can be solved using the methods of stochastic analysis. Perhaps the most celebrated example is Shizuo Kakutani's 1944 solution of the Dirichlet problem for the Laplace operator using Brownian motion. More generally, for a large class of semi-elliptic second-order partial differential equations, the associated Dirichlet boundary value problem can be solved using an Itō process that solves an associated stochastic differential equation.

Introduction: Kakutani's solution to the classical Dirichlet problem

Let $D$ be a domain (an open and connected set) in $\mathbb{R}^n$. Let $\Delta$ be the Laplace operator, let $g$ be a bounded function on the boundary $\partial D$, and consider the problem:

$$\begin{cases} - \Delta u(x) = 0, & x \in D, \\ u(x) = g(x), & x \in \partial D. \end{cases}$$

It can be shown that if a solution $u$ exists, then $u(x)$ is the expected value of $g$ at the (random) first exit point from $D$ for a canonical Brownian motion starting at $x$:

$$u(x) = \mathbb{E}^x \left[ g \big( B_{\tau_{\partial D}} \big) \right],$$

where $\tau_{\partial D}$ is the first exit time. See theorem 3 in Kakutani 1944, p. 710.
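This representation lends itself to direct simulation. The following minimal sketch (not from the article; the boundary data $g$, the domain, the step size, and the sample count are all illustrative choices) estimates $u$ at an interior point of the unit disk by averaging $g$ over simulated Brownian exit points:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(points):
    # Illustrative boundary data on the unit circle: g(x, y) = x,
    # whose harmonic extension to the disk is u(x, y) = x.
    return points[:, 0]

def dirichlet_mc(x0, n_paths=4000, dt=1e-3):
    """Estimate u(x0) = E^x0[ g(B_tau) ] by simulating Brownian paths
    from x0 until they first leave the unit disk."""
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    exit_pts = np.zeros_like(pos)
    while alive.any():
        idx = np.flatnonzero(alive)
        # One Euler step of Brownian motion for all still-running paths.
        pos[idx] += rng.normal(scale=np.sqrt(dt), size=(idx.size, 2))
        r = np.linalg.norm(pos[idx], axis=1)
        out = r >= 1.0
        exited = idx[out]
        # Project the overshoot back onto the circle (crude correction).
        exit_pts[exited] = pos[exited] / r[out, None]
        alive[exited] = False
    return g(exit_pts).mean()

print(dirichlet_mc((0.3, 0.2)))  # should be close to u(0.3, 0.2) = 0.3
```

The time-stepping here is the naive scheme; the walk-on-spheres method avoids the discretization bias by jumping directly to the boundary of inscribed spheres.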

The Dirichlet–Poisson problem

Let $D$ be a domain in $\mathbb{R}^n$ and let $L$ be a semi-elliptic differential operator on $C^2(\mathbb{R}^n; \mathbb{R})$ of the form:

$$L = \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} + \sum_{i, j = 1}^n a_{ij}(x) \frac{\partial^2}{\partial x_i \, \partial x_j},$$

where the coefficients $b_i$ and $a_{ij}$ are continuous functions and all the eigenvalues of the symmetric matrix $a(x) = (a_{ij}(x))$ are non-negative. Let $f \in C(D; \mathbb{R})$ and $g \in C(\partial D; \mathbb{R})$. Consider the Poisson problem:

$$\begin{cases} - L u(x) = f(x), & x \in D, \\ \displaystyle \lim_{y \to x} u(y) = g(x), & x \in \partial D. \end{cases} \qquad \text{(P1)}$$

The idea of the stochastic method for solving this problem is as follows. First, one finds an Itō diffusion $X$ whose infinitesimal generator $A$ coincides with $L$ on compactly-supported $C^2$ functions $f : \mathbb{R}^n \to \mathbb{R}$. For example, $X$ can be taken to be the solution to the stochastic differential equation:

$$\mathrm{d} X_t = b(X_t) \, \mathrm{d} t + \sigma(X_t) \, \mathrm{d} B_t,$$

where $B$ is $n$-dimensional Brownian motion, $b$ has components $b_i$ as above, and the matrix field $\sigma$ is chosen so that:

$$\tfrac{1}{2} \sigma(x) \sigma(x)^\top = a(x) \quad \text{for all } x \in \mathbb{R}^n.$$

For a point $x \in \mathbb{R}^n$, let $\mathbb{P}^x$ denote the law of $X$ given the initial datum $X_0 = x$, and let $\mathbb{E}^x$ denote expectation with respect to $\mathbb{P}^x$. Let $\tau_D$ denote the first exit time of $X$ from $D$.
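The construction of the diffusion can be sketched numerically. In this illustrative example (not from the article; the function names and the coefficient choices are assumptions), $\sigma$ is obtained from $a(x)$ by taking the Cholesky factor of $2a(x)$, valid when $a(x)$ is positive-definite, and the diffusion is simulated with the Euler–Maruyama scheme:

```python
import numpy as np

def sigma_from_a(a):
    """Return a matrix sigma with (1/2) sigma sigma^T = a, using the
    Cholesky factor of 2a (valid for symmetric positive-definite a)."""
    return np.linalg.cholesky(2.0 * a)

def euler_maruyama(b, a, x0, dt, n_steps, rng):
    """Simulate dX_t = b(X_t) dt + sigma(X_t) dB_t with the
    Euler-Maruyama scheme, rebuilding sigma from a at each step."""
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    sqrt_dt = np.sqrt(dt)
    for _ in range(n_steps):
        dB = rng.normal(scale=sqrt_dt, size=x.shape)  # Brownian increment
        x = x + b(x) * dt + sigma_from_a(a(x)) @ dB
        path.append(x.copy())
    return np.array(path)

# Example: an Ornstein-Uhlenbeck-type diffusion with b(x) = -x and
# a(x) = I/2 (so sigma is the identity), started at (1, 0).
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 0.5 * np.eye(2),
                      (1.0, 0.0), 0.01, 500, rng)
```

For degenerate (merely semi-elliptic) $a$, the Cholesky factor would be replaced by, e.g., a symmetric square root of $2a$.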

In this notation, the candidate solution for (P1) is:

$$u(x) = \mathbb{E}^x \left[ g \big( X_{\tau_D} \big) \cdot \chi_{\{ \tau_D < + \infty \}} \right] + \mathbb{E}^x \left[ \int_0^{\tau_D} f(X_t) \, \mathrm{d} t \right],$$

provided that $g$ is a bounded function and that:

$$\mathbb{E}^x \left[ \int_0^{\tau_D} \big| f(X_t) \big| \, \mathrm{d} t \right] < + \infty.$$

It turns out that one further condition is required: for all $x \in D$, the process $X$ starting at $x$ almost surely leaves $D$ in finite time, i.e.

$$\mathbb{P}^x \big( \tau_D < + \infty \big) = 1 \quad \text{for all } x \in D.$$

Under this assumption, the candidate solution above reduces to:

$$u(x) = \mathbb{E}^x \left[ g \big( X_{\tau_D} \big) \right] + \mathbb{E}^x \left[ \int_0^{\tau_D} f(X_t) \, \mathrm{d} t \right]$$

and solves (P1) in the sense that if $\mathcal{A}$ denotes the characteristic operator for $X$ (which agrees with $A$ on $C^2$ functions), then:

$$\begin{cases} - \mathcal{A} u(x) = f(x), & x \in D, \\ \displaystyle \lim_{t \uparrow \tau_D} u(X_t) = g \big( X_{\tau_D} \big), & \mathbb{P}^x \text{-a.s., for all } x \in D. \end{cases} \qquad \text{(P2)}$$

Moreover, if $v \in C^2(D; \mathbb{R})$ satisfies (P2) and there exists a constant $C$ such that, for all $x \in D$:

$$| v(x) | \leq C \left( 1 + \mathbb{E}^x \left[ \int_0^{\tau_D} \big| f(X_s) \big| \, \mathrm{d} s \right] \right),$$

then $v = u$.
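The reduced candidate solution lends itself to Monte Carlo estimation. The sketch below (illustrative, not from the article) specializes to $L = \tfrac{1}{2} \Delta$, i.e. $b \equiv 0$ and $\sigma$ the identity, on the unit disk, where $-\tfrac{1}{2} \Delta u = 1$ with zero boundary data has the exact solution $u(x) = (1 - |x|^2)/2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_mc(x0, f, g, n_paths=3000, dt=1e-3):
    """Monte Carlo estimate of
        u(x0) = E^x0[ g(X_tau) ] + E^x0[ int_0^tau f(X_t) dt ]
    for X a standard 2-d Brownian motion (so L = (1/2) Laplacian)
    and D the open unit disk."""
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    running = np.zeros(n_paths)    # accumulates int f(X_t) dt per path
    boundary = np.zeros(n_paths)   # g at each path's exit point
    while alive.any():
        idx = np.flatnonzero(alive)
        running[idx] += f(pos[idx]) * dt
        pos[idx] += rng.normal(scale=np.sqrt(dt), size=(idx.size, 2))
        r = np.linalg.norm(pos[idx], axis=1)
        out = r >= 1.0
        exited = idx[out]
        # Project the overshoot back onto the circle (crude correction).
        boundary[exited] = g(pos[exited] / r[out, None])
        alive[exited] = False
    return (boundary + running).mean()

# f = 1, g = 0: the exact solution is u(x) = (1 - |x|^2)/2, so u(0) = 0.5.
est = poisson_mc((0.0, 0.0),
                 f=lambda p: np.ones(len(p)),
                 g=lambda p: np.zeros(len(p)))
```

The estimate carries both Monte Carlo error and a discretization bias from detecting the exit only at step boundaries; both vanish as the sample count grows and the step size shrinks.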

References

Kakutani, Shizuo (1944). "Two-dimensional Brownian motion and harmonic functions". Proceedings of the Imperial Academy. 20 (10): 706–714.