Feller process

In probability theory relating to stochastic processes, a Feller process is a particular kind of Markov process, named after William Feller.

Definitions

Let X be a locally compact Hausdorff space with a countable base. Let C0(X) denote the space of all real-valued continuous functions on X that vanish at infinity, equipped with the sup-norm ||f|| = sup{|f(x)| : x ∈ X}. From analysis, we know that C0(X) with the sup norm is a Banach space.

A Feller semigroup on C0(X) is a collection {Tt}t ≥ 0 of positive linear maps from C0(X) to itself such that

  1. ||Ttf|| ≤ ||f|| for all t ≥ 0 and all f in C0(X), i.e. each Tt is a contraction (in the weak sense);
  2. T0 = I, the identity map, and Tt+s = Tt ∘ Ts for all s, t ≥ 0 (the semigroup property);
  3. ||Ttf − f|| → 0 as t → 0+ for every f in C0(X), i.e. the semigroup is strongly continuous.

Warning: this terminology is not uniform across the literature. In particular, some authors replace the assumption that Tt maps C0(X) into itself with the condition that it maps Cb(X), the space of bounded continuous functions, into itself. The reason for this is twofold: first, it allows one to include processes that enter "from infinity" in finite time; second, it is better suited to spaces that are not locally compact, for which the notion of "vanishing at infinity" makes no sense.

A Feller transition function is a probability transition function associated with a Feller semigroup.

A Feller process is a Markov process with a Feller transition function.
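As a concrete illustration (an example chosen here, not taken from the article), the heat semigroup of standard Brownian motion on X = R, Ttf(x) = E[f(x + Bt)], is the prototypical Feller semigroup. The following Python sketch approximates Tt by Gauss–Hermite quadrature and checks the defining properties numerically; the function names, the test function and the quadrature order are arbitrary choices.

    # A minimal numerical sketch (illustrative, not part of the article):
    # the heat semigroup T_t f(x) = E[f(x + B_t)] of Brownian motion on R.
    import numpy as np

    nodes, weights = np.polynomial.hermite.hermgauss(80)   # Gauss-Hermite rule

    def T(t, f, x):
        """Approximate (T_t f)(x) = E[f(x + sqrt(t) Z)] with Z ~ N(0, 1)."""
        x = np.asarray(x, dtype=float)
        if t == 0.0:
            return f(x)
        shifted = x.ravel()[:, None] + np.sqrt(2.0 * t) * nodes[None, :]
        return ((f(shifted) @ weights) / np.sqrt(np.pi)).reshape(x.shape)

    f = lambda y: np.exp(-y ** 2)              # a nonnegative function in C0(R)
    x = np.linspace(-10.0, 10.0, 2001)
    sup = lambda g: np.max(np.abs(g))

    print(np.all(T(1.0, f, x) >= 0.0))         # positivity: f >= 0 implies T_t f >= 0
    print(sup(T(1.0, f, x)) <= sup(f(x)))      # contraction: ||T_t f|| <= ||f||
    for t in (1.0, 0.1, 0.01, 1e-4):           # strong continuity: ||T_t f - f|| -> 0
        print(t, sup(T(t, f, x) - f(x)))
    # semigroup property: T_{s+t} f = T_s (T_t f), here with s = 0.2, t = 0.3
    print(sup(T(0.5, f, x) - T(0.2, lambda y: T(0.3, f, y), x)))

The printed checks correspond to the positivity of the maps and to the three conditions in the definition above.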

Generator

Feller processes (or transition semigroups) can be described by their infinitesimal generator. A function f in C0(X) is said to be in the domain of the generator if the uniform limit

  Af = lim t↓0 (Ttf − f)/t

exists in C0(X). The operator A is the generator of Tt, and the space of functions on which it is defined is written as DA.
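For the Brownian-motion semigroup used in the sketch above (again an illustrative choice, not taken from the article), the generator acts on smooth, rapidly decaying functions as Af = (1/2)f''. The following sketch, under the same assumptions, checks that the difference quotient (Ttf − f)/t converges uniformly to (1/2)f'' as t → 0.

    # Illustrative check that (T_t f - f)/t -> (1/2) f'' uniformly as t -> 0,
    # i.e. that the generator of the heat semigroup is A f = (1/2) f''.
    import numpy as np

    nodes, weights = np.polynomial.hermite.hermgauss(80)

    def T(t, f, x):
        """Approximate (T_t f)(x) = E[f(x + sqrt(t) Z)] by Gauss-Hermite quadrature."""
        shifted = x[:, None] + np.sqrt(2.0 * t) * nodes[None, :]
        return (f(shifted) @ weights) / np.sqrt(np.pi)

    f   = lambda y: np.exp(-y ** 2)                           # a function in D_A
    fpp = lambda y: (4.0 * y ** 2 - 2.0) * np.exp(-y ** 2)    # its exact second derivative

    x = np.linspace(-10.0, 10.0, 2001)
    for t in (1e-1, 1e-2, 1e-3, 1e-4):
        quotient = (T(t, f, x) - f(x)) / t                    # difference quotient
        print(t, np.max(np.abs(quotient - 0.5 * fpp(x))))     # sup-norm error, decreases with t

The printed sup-norm errors shrink roughly linearly in t, consistent with f lying in the domain DA.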

A characterization of operators that can occur as the infinitesimal generator of Feller processes is given by the Hille–Yosida theorem. This uses the resolvent of the Feller semigroup, defined below.

Resolvent

The resolvent of a Feller process (or semigroup) is a collection of maps (Rλ)λ > 0 from C0(X) to itself defined by

  Rλf = ∫0∞ e−λt Ttf dt.

It can be shown that it satisfies the resolvent identity

  RλRμ = RμRλ = (Rμ − Rλ)/(λ − μ)   for λ, μ > 0 with λ ≠ μ.

Furthermore, for any fixed λ > 0, the image of Rλ is equal to the domain DA of the generator A, and

  Rλ = (λ − A)−1; in particular, (λ − A)Rλf = f for every f in C0(X).
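Continuing the illustrative Brownian-motion example (not part of the article): for f(x) = exp(−x²) the heat semigroup acts in closed form as Ttf(x) = exp(−x²/(1 + 2t))/√(1 + 2t), so Rλf can be computed by quadrature of the Laplace transform in t. The sketch below, with ad hoc names Tf and Rf, checks that (λ − A)Rλf = f, approximating Af = (1/2)f'' by central differences, and then verifies the resolvent identity using Ts Tt = Ts+t.

    # Illustrative numerical check of the resolvent of the heat semigroup.
    import numpy as np

    l_nodes, l_weights = np.polynomial.laguerre.laggauss(80)   # Gauss-Laguerre rule

    def Tf(t, x):
        """Closed form of (T_t f)(x) for f(x) = exp(-x^2) under the heat semigroup."""
        return np.exp(-x ** 2 / (1.0 + 2.0 * t)) / np.sqrt(1.0 + 2.0 * t)

    def Rf(lam, x):
        """(R_lam f)(x) = int_0^inf exp(-lam t) (T_t f)(x) dt, via the substitution s = lam t."""
        return sum(w * Tf(s / lam, x) for s, w in zip(l_nodes, l_weights)) / lam

    f = lambda y: np.exp(-y ** 2)
    lam, mu = 1.5, 4.0
    x = np.linspace(-5.0, 5.0, 1001)
    h = x[1] - x[0]

    # (lam - A) R_lam f = f, with A u = (1/2) u'' approximated by central differences.
    u = Rf(lam, x)
    u2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2
    print(np.max(np.abs(lam * u[1:-1] - 0.5 * u2 - f(x[1:-1]))))          # small residual

    # Resolvent identity R_lam - R_mu = (mu - lam) R_mu R_lam, using T_s T_t = T_{s+t}.
    RmuRlam = sum(wi * wj * Tf(si / mu + sj / lam, x)
                  for si, wi in zip(l_nodes, l_weights)
                  for sj, wj in zip(l_nodes, l_weights)) / (mu * lam)
    print(np.max(np.abs(Rf(lam, x) - Rf(mu, x) - (mu - lam) * RmuRlam)))  # small residual

Both residuals are limited only by the quadrature and finite-difference error, illustrating that Rλ inverts (λ − A).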

Examples

Brownian motion and the Poisson process are Feller processes. More generally, every Lévy process is a Feller process.

References

  1. Rogers, L. C. G. and Williams, David, Diffusions, Markov Processes and Martingales, Volume One: Foundations, second edition, John Wiley and Sons Ltd, 1979. (Page 247, Theorem 8.3.)