In mathematics, the Ornstein–Uhlenbeck operator is a generalization of the Laplace operator to an infinite-dimensional setting. The Ornstein–Uhlenbeck operator plays a significant role in the Malliavin calculus.
Consider the gradient operator ∇ acting on scalar functions f : Rn → R; the gradient of a scalar function is a vector field v = ∇f : Rn → Rn. The divergence operator div, acting on vector fields to produce scalar fields, is the negative of the adjoint operator to ∇: integration by parts gives ∫ (∇f · v) dx = −∫ f (div v) dx for compactly supported f and v. The Laplace operator Δ is then the composition of the divergence and gradient operators:

Δ = div ∘ ∇,
acting on scalar functions to produce scalar functions. Note that A = −Δ is a positive operator, whereas Δ is a dissipative operator.
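As a concrete sanity check, the composition Δ = div ∘ ∇ can be verified symbolically. The following is a minimal sketch assuming Python with sympy; the test function f is an arbitrary illustrative choice:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(-(x**2 + y**2) / 2)                     # illustrative scalar function on R^2

grad_f = [sp.diff(f, v) for v in (x, y)]           # gradient: a vector field
div_grad_f = sum(sp.diff(g, v) for g, v in zip(grad_f, (x, y)))   # divergence of the gradient
laplacian_f = sp.diff(f, x, 2) + sp.diff(f, y, 2)  # sum of second partial derivatives

assert sp.simplify(div_grad_f - laplacian_f) == 0  # Δf = div(∇f)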
Using spectral theory, one can define a square root (1 − Δ)^{1/2} for the operator (1 − Δ). This square root satisfies the following relation involving the Sobolev H1-norm and L2-norm for suitable scalar functions f:

‖f‖_{H^1} = ‖(1 − Δ)^{1/2} f‖_{L^2}.
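One way to see this identity concretely is on the Fourier side, where (1 − Δ)^{1/2} acts as the multiplier (1 + 4π²k²)^{1/2} under sympy's transform convention F(k) = ∫ f(x) e^{−2πikx} dx (so that d/dx corresponds to multiplication by 2πik). A sketch in one dimension, with an illustrative Gaussian test function:

import sympy as sp

x, k = sp.symbols('x k', real=True)
f = sp.exp(-x**2 / 2)                              # illustrative test function

# ‖f‖_{H^1}^2 computed directly from the definition
h1_norm_sq = sp.integrate(f**2 + sp.diff(f, x)**2, (x, -sp.oo, sp.oo))

# ‖(1 − Δ)^{1/2} f‖_{L^2}^2 computed as ∫ (1 + 4π²k²) |F(k)|² dk;
# F is real-valued here, so |F|² = F².
F = sp.fourier_transform(f, x, k)
l2_norm_sq = sp.integrate((1 + 4 * sp.pi**2 * k**2) * F**2, (k, -sp.oo, sp.oo))

assert sp.simplify(h1_norm_sq - l2_norm_sq) == 0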
Often, when working on Rn, one works with respect to Lebesgue measure, which has many nice properties. However, remember that the aim is to work in infinite-dimensional spaces, and there is no infinite-dimensional analogue of Lebesgue measure: on an infinite-dimensional separable Banach space, every translation-invariant Borel measure that assigns finite mass to some open set is the zero measure. Instead, if one is studying some separable Banach space E, what does make sense is a notion of Gaussian measure; in particular, the abstract Wiener space construction makes sense.
To get some intuition about what can be expected in the infinite-dimensional setting, consider standard Gaussian measure γn on Rn: for Borel subsets A of Rn,

γn(A) := ∫_A (2π)^{−n/2} exp(−|x|^2 / 2) dx.
This makes (Rn, B(Rn), γn) into a probability space; E will denote expectation with respect to γn.
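For example, in dimension one the density integrates to 1, and γ1 of an interval can be computed in closed form. A quick sympy sketch (the interval [0, 1] is an arbitrary illustrative choice):

import sympy as sp

x = sp.symbols('x', real=True)
density = (2 * sp.pi) ** sp.Rational(-1, 2) * sp.exp(-x**2 / 2)   # standard Gaussian density on R

total_mass = sp.integrate(density, (x, -sp.oo, sp.oo))
assert total_mass == 1                    # γ1 is a probability measure

gamma_01 = sp.integrate(density, (x, 0, 1))
print(gamma_01)                           # erf(sqrt(2)/2)/2, approximately 0.3413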
The gradient operator ∇ acts on a (differentiable) function φ : Rn → R to give a vector field ∇φ : Rn → Rn.
The divergence operator δ (to be more precise, δn, since it depends on the dimension) is now defined to be the adjoint of ∇ in the Hilbert space sense, in the Hilbert space L2(Rn, B(Rn), γn; R). In other words, δ acts on a vector field v : Rn → Rn to give a scalar function δv : Rn → R, and satisfies the formula

E[∇f · v] = E[f δv].
On the left, the product is the pointwise Euclidean dot product of two vector fields; on the right, it is just the pointwise multiplication of two functions. Using integration by parts, one can check that δ acts on a vector field v with components vi, i = 1, ..., n, as follows:

δv(x) = ∑_{i=1}^{n} ( x_i v_i(x) − ∂v_i/∂x_i(x) ).
The change of notation from “div” to “δ” is for two reasons: first, δ is the notation used in infinite dimensions (the Malliavin calculus); second, δ is not the usual divergence — as the formula above shows, it is the negative of the usual divergence plus the correction term x · v coming from the Gaussian density.
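In dimension one the formula reads δv(x) = x v(x) − v′(x), and the adjoint relation E[f′ v] = E[f δv] can be checked directly against the Gaussian density. A minimal sympy sketch with illustrative polynomial test functions:

import sympy as sp

x = sp.symbols('x', real=True)
gamma = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)     # standard Gaussian density on R

f = x**3                                           # illustrative scalar function
v = x**2                                           # illustrative "vector field" on R

delta_v = x * v - sp.diff(v, x)                    # δv = x·v − v′

lhs = sp.integrate(sp.diff(f, x) * v * gamma, (x, -sp.oo, sp.oo))   # E[f′ · v]
rhs = sp.integrate(f * delta_v * gamma, (x, -sp.oo, sp.oo))         # E[f δv]

assert sp.simplify(lhs - rhs) == 0                 # both sides equal 9 here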
The (finite-dimensional) Ornstein–Uhlenbeck operator L (or, to be more precise, Ln, since it too depends on the dimension) is defined by

L := −δ ∘ ∇,
with the useful formula that for any f and g smooth enough for all the terms to make sense,

E[(Lf) g] = −E[∇f · ∇g].
The Ornstein–Uhlenbeck operator L is related to the usual Laplacian Δ by

(Lf)(x) = (Δf)(x) − x · (∇f)(x).
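The relation follows by expanding −δ(∇f) with the coordinate formula for δ given above; the cancellation can be checked symbolically. A sketch in R² with an illustrative test function:

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 * x2 + sp.exp(x1)                        # illustrative test function

grad = [sp.diff(f, v) for v in (x1, x2)]
# δ applied to the gradient: δv = Σ_i ( x_i v_i − ∂v_i/∂x_i )
delta_grad = sum(xi * gi - sp.diff(gi, xi) for xi, gi in zip((x1, x2), grad))
L_via_adjoint = -delta_grad                        # L f = −δ(∇f)

laplacian = sum(sp.diff(f, v, 2) for v in (x1, x2))
L_via_formula = laplacian - (x1 * grad[0] + x2 * grad[1])   # Δf − x·∇f

assert sp.simplify(L_via_adjoint - L_via_formula) == 0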
Consider now an abstract Wiener space E with Cameron–Martin Hilbert space H and Wiener measure γ. Let D denote the Malliavin derivative. The Malliavin derivative D is an unbounded operator from L2(E, γ; R) into L2(E, γ; H) – in some sense, it measures “how random” a function on E is. The domain of D is not the whole of L2(E, γ; R), but is a dense linear subspace, the Watanabe–Sobolev space, often denoted by D^{1,2} (once differentiable in the sense of Malliavin, with derivative in L2).
Again, δ is defined to be the adjoint of the gradient operator (in this case, the Malliavin derivative is playing the role of the gradient operator). The operator δ is also known as the Skorokhod integral, which is an anticipating stochastic integral; it is this set-up that gives rise to the slogan “stochastic integrals are divergences”. δ satisfies the identity

E[⟨DF, v⟩_H] = E[F δv]
for all F in D^{1,2} and v in the domain of δ.
Then the Ornstein–Uhlenbeck operator for E is the operator L defined by

L := −δ ∘ D.
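Although this operator lives on an infinite-dimensional space, its spectral behaviour is already visible in dimension one, where L = d²/dx² − x·d/dx has the probabilists' Hermite polynomials He_n as eigenfunctions with eigenvalue −n; this mirrors the fact that the infinite-dimensional L acts as multiplication by −n on the n-th Wiener chaos. A sympy sketch:

import sympy as sp

x = sp.symbols('x', real=True)

for n in range(6):
    # probabilists' Hermite polynomial He_n via the Rodrigues formula
    He_n = sp.simplify((-1)**n * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2 / 2), x, n))
    Lf = sp.diff(He_n, x, 2) - x * sp.diff(He_n, x)          # L = d²/dx² − x·d/dx
    assert sp.simplify(Lf + n * He_n) == 0                   # L He_n = −n He_n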