Brownian sheet

In mathematics, a Brownian sheet or multiparametric Brownian motion is a multiparametric generalization of Brownian motion to a Gaussian random field. This means we generalize the "time" parameter of a Brownian motion from $\mathbb{R}_+$ to $\mathbb{R}_+^n$.

The exact dimension of the space of the new time parameter varies between authors. We follow John B. Walsh and define the $(n,d)$-Brownian sheet, while some authors define the Brownian sheet only for $n = 2$, what we call the $(2,d)$-Brownian sheet. [1]

This definition is due to Nikolai Chentsov; there exists a slightly different version due to Paul Lévy.

(n,d)-Brownian sheet

A $d$-dimensional Gaussian process $B = (B_t,\; t \in \mathbb{R}_+^n)$ is called an $(n,d)$-Brownian sheet if

- it has zero mean, i.e. $\mathbb{E}[B_t] = 0$ for all $t \in \mathbb{R}_+^n$, and
- its covariance function is given by

$$\operatorname{Cov}\big(B_s^{(i)}, B_t^{(j)}\big) = \delta_{ij} \prod_{l=1}^{n} \min(s_l, t_l)$$

for $1 \le i, j \le d$. [2]
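As an illustration (not part of the cited sources), a $(2,1)$-Brownian sheet can be approximated on a grid by summing i.i.d. Gaussian increments over rectangles, so that the sample at $(s_1, s_2)$ collects all increments in $[0, s_1] \times [0, s_2]$ and the covariance $\mathbb{E}[B_s B_t] = \min(s_1, t_1)\min(s_2, t_2)$ holds at the grid points. A minimal NumPy sketch, with the function name `brownian_sheet` chosen here for illustration:

```python
import numpy as np

def brownian_sheet(n, rng):
    """Discrete approximation of a (2,1)-Brownian sheet on [0,1]^2.

    Grid increments are i.i.d. N(0, ds*dt) with ds = dt = 1/n; cumulative
    sums along both axes make B[i, j] the sum of all increments in the
    rectangle [0, i/n] x [0, j/n].
    """
    ds = 1.0 / n
    incr = rng.normal(scale=ds, size=(n, n))  # sd = sqrt(ds * dt) = ds
    sheet = np.cumsum(np.cumsum(incr, axis=0), axis=1)
    # pad with zeros so the sheet vanishes on the axes s = 0 and t = 0
    return np.pad(sheet, ((1, 0), (1, 0)))

rng = np.random.default_rng(0)
B = brownian_sheet(200, rng)  # B[i, j] approximates B(i/200, j/200)
```

Averaging over many runs, the sample variance of `B[-1, -1]` approaches $\min(1,1)\min(1,1) = 1$, consistent with the covariance formula above.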

Properties

From the definition it follows that

$$B_t = 0 \qquad \text{almost surely}$$

for every $t \in \mathbb{R}_+^n$ with at least one coordinate equal to zero, since then $\operatorname{Var}\big(B_t^{(i)}\big) = \prod_{l=1}^{n} t_l = 0$.
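A quick Monte Carlo sketch (illustrative, not from the cited sources) of the resulting variance identity $\operatorname{Var}(B_t) = t_1 t_2$ for a $(2,1)$-Brownian sheet, using the same grid-increment construction:

```python
import numpy as np

# Monte Carlo check of Var(B_t) = t1 * t2 for a (2,1)-Brownian sheet
# built from i.i.d. grid increments with step ds = 1/n, evaluated at
# the interior point t = (0.5, 0.75).
n, reps = 40, 4000
ds = 1.0 / n
rng = np.random.default_rng(1)
vals = []
for _ in range(reps):
    incr = rng.normal(scale=ds, size=(n, n))  # sd = sqrt(ds * ds) = ds
    B = np.pad(np.cumsum(np.cumsum(incr, axis=0), axis=1), ((1, 0), (1, 0)))
    vals.append(B[n // 2, 3 * n // 4])  # sample of B(0.5, 0.75)
est = np.var(vals)  # theory: 0.5 * 0.75 = 0.375
```

The padded row and column stay identically zero, matching the almost-sure vanishing of the sheet on the axes.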

Examples

- The $(1,d)$-Brownian sheet is a $d$-dimensional Brownian motion.
- The $(2,1)$-Brownian sheet is what is usually simply called the Brownian sheet.

Lévy's definition of the multiparametric Brownian motion

In Lévy's definition one replaces the covariance condition above with the condition

$$\operatorname{Cov}(B_s, B_t) = \frac{d(s,0) + d(t,0) - d(s,t)}{2},$$

where $d$ is the Euclidean metric on $\mathbb{R}^n$. [3]
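Since this covariance is given in closed form, the field can be sampled jointly at any finite set of points by a Cholesky factorization of the covariance matrix. A minimal sketch (the point set and variable names are illustrative choices, not from the sources):

```python
import numpy as np

# Sample Lévy's multiparametric Brownian motion at a few points of R^2
# by assembling the covariance (d(s,0) + d(t,0) - d(s,t)) / 2, with d
# the Euclidean metric, and applying a Cholesky factorization.
pts = np.array([[0.2, 0.1], [0.5, 0.5], [1.0, 0.3], [0.7, 0.9]])
norm = np.linalg.norm
C = np.array([[(norm(s) + norm(t) - norm(s - t)) / 2 for t in pts] for s in pts])
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(pts)))  # tiny jitter for numerics
rng = np.random.default_rng(2)
X = L @ rng.standard_normal(len(pts))  # one joint sample of the field at pts
```

Note that $\operatorname{Cov}(B_t, B_t) = d(t,0)$, so the variance grows linearly with the distance of $t$ from the origin, in contrast to the product form of the sheet covariance.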

Existence of abstract Wiener measure

Consider the space $\Theta(\mathbb{R}^2;\mathbb{R})$ of continuous functions $\theta : \mathbb{R}^2 \to \mathbb{R}$ satisfying

$$\lim_{|t| \to \infty} (1 + |t|)^{-1} |\theta(t)| = 0.$$

This space becomes a separable Banach space when equipped with the norm

$$\|\theta\|_{\Theta} := \sup_{t \in \mathbb{R}^2} (1 + |t|)^{-1} |\theta(t)|.$$
Notice that this space contains, as a dense subspace, the space $C_0(\mathbb{R}^2;\mathbb{R})$ of continuous functions vanishing at infinity equipped with the uniform norm, since one can bound the uniform norm from above by the norm of the Sobolev space introduced below through the Fourier inversion theorem.
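As a small numerical illustration (not from the cited sources), the weighted sup-norm $\|\theta\|_{\Theta} = \sup_t (1+|t|)^{-1}|\theta(t)|$ can be approximated on a finite grid; the test function $\theta(t_1, t_2) = \sin(t_1)\sin(t_2)$ is a hypothetical choice, bounded and hence of finite $\Theta$-norm:

```python
import numpy as np

# Approximate ||theta||_Theta = sup_t |theta(t)| / (1 + |t|) on a grid
# for the illustrative bounded function theta(t1, t2) = sin(t1) * sin(t2).
t = np.linspace(-50.0, 50.0, 1001)
T1, T2 = np.meshgrid(t, t)
theta = np.sin(T1) * np.sin(T2)
weight = 1.0 / (1.0 + np.hypot(T1, T2))  # the factor 1 / (1 + |t|)
theta_norm = float(np.max(np.abs(theta) * weight))
```

The weight makes distant behavior negligible, which is what allows unbounded Brownian-sheet paths (which grow slower than $|t|$) to lie in $\Theta$.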

Let $\mathcal{S}'(\mathbb{R}^2;\mathbb{R})$ denote the space of tempered distributions. One can then show that there exists a suitable separable Hilbert space (and Sobolev space)

$$H \subseteq \mathcal{S}'(\mathbb{R}^2;\mathbb{R})$$

that is continuously embedded as a dense subspace in $C_0(\mathbb{R}^2;\mathbb{R})$, and thus also in $\Theta(\mathbb{R}^2;\mathbb{R})$, and that there exists a probability measure $\mathcal{W}$ on $\Theta(\mathbb{R}^2;\mathbb{R})$ such that the triple

$$\big(H,\; \Theta(\mathbb{R}^2;\mathbb{R}),\; \mathcal{W}\big)$$

is an abstract Wiener space.

A path $\theta \in \Theta(\mathbb{R}^2;\mathbb{R})$ is $\mathcal{W}$-almost surely nowhere differentiable, although locally Hölder continuous of every order $\alpha < 1/2$.

This handles the case of a Brownian sheet with $d = 1$; for higher-dimensional $d$, the construction is similar. [4]

References

  1. Walsh, John B. (1986). An Introduction to Stochastic Partial Differential Equations. Springer Berlin Heidelberg. p. 269. ISBN 978-3-540-39781-6.
  2. Khoshnevisan, Davar; Xiao, Yimin (2004). Images of the Brownian Sheet. arXiv:math/0409491.
  3. Ossiander, Mina; Pyke, Ronald (1985). "Lévy's Brownian motion as a set-indexed process and a related central limit theorem". Stochastic Processes and their Applications. 21 (1): 133–145. doi:10.1016/0304-4149(85)90382-5.
  4. Stroock, Daniel (2011). Probability Theory: An Analytic View (2nd ed.). Cambridge University Press. pp. 349–352.