G-expectation

In probability theory, the g-expectation is a nonlinear expectation based on a backward stochastic differential equation (BSDE), originally developed by Shige Peng. [1]

Definition

Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which $(W_t)_{t \ge 0}$ is a ($d$-dimensional) Wiener process, let $(\mathcal{F}_t)_{t \ge 0}$ be the filtration generated by $(W_t)$, i.e. $\mathcal{F}_t = \sigma(W_s : s \in [0, t])$, and let $X$ be an $\mathcal{F}_T$-measurable random variable for some fixed terminal time $T > 0$. Consider the BSDE given by:

    $Y_t = X + \int_t^T g(s, Y_s, Z_s) \, ds - \int_t^T Z_s \, dW_s.$

Then the g-expectation for $X$ is given by $\mathbb{E}^g[X] := Y_0$. Note that if $X$ is an $m$-dimensional vector, then $Y_t$ (for each time $t$) is an $m$-dimensional vector and $Z_t$ is an $m \times d$ matrix.
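
As a simple sanity check (a standard observation, not taken from the cited papers), the trivial driver $g \equiv 0$ recovers the classical expectation: the BSDE reduces to

    $Y_t = X - \int_t^T Z_s \, dW_s,$

whose solution $(Y_t)$ is the martingale $Y_t = \mathbb{E}[X \mid \mathcal{F}_t]$, so that $\mathbb{E}^g[X] = Y_0 = \mathbb{E}[X]$. Any nonlinearity of the g-expectation therefore comes entirely from the driver $g$.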

In fact the conditional g-expectation is given by $\mathbb{E}^g[X \mid \mathcal{F}_t] := Y_t$, and, much like the formal definition of conditional expectation, it follows that $\mathbb{E}^g[1_A X] = \mathbb{E}^g[1_A \, \mathbb{E}^g[X \mid \mathcal{F}_t]]$ for any $A \in \mathcal{F}_t$ (where $1_A$ is the indicator function of $A$). [1]
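
Outside of special cases the BSDE has no closed-form solution, so $\mathbb{E}^g[X]$ is usually approximated numerically. The sketch below (an illustration under stated assumptions, not a method from the cited papers) treats the one-dimensional case $d = m = 1$ by backward induction on a recombining random walk approximating $W$; the driver $g(t, y, z) = \mu |z|$ and terminal value $X = W_T$ are hypothetical choices made because $Y_t = W_t + \mu (T - t)$ then solves the BSDE exactly, giving $\mathbb{E}^g[W_T] = \mu T$ as a check.

    import numpy as np

    # Approximate E^g[X] for the BSDE Y_t = X + int_t^T g ds - int_t^T Z dW
    # on a recombining random walk: each step of W is +/- sqrt(dt), prob 1/2.
    # Illustrative (hypothetical) choices: g(t, y, z) = mu*|z| and X = W_T.
    T, N, mu = 1.0, 2000, 0.3
    dt = T / N
    sq = np.sqrt(dt)

    def g(t, y, z):
        return mu * np.abs(z)

    # Terminal layer: W_T takes the values (2k - N)*sqrt(dt), k = 0, ..., N.
    y = (2.0 * np.arange(N + 1) - N) * sq      # Y_N = X = W_T

    for n in range(N - 1, -1, -1):             # step backwards in time
        y_up, y_down = y[1:], y[:-1]           # the two successors of each node
        z = (y_up - y_down) / (2.0 * sq)       # finite-difference estimate of Z
        cond = 0.5 * (y_up + y_down)           # E[Y_{n+1} | current node]
        y = cond + g(n * dt, cond, z) * dt     # explicit backward Euler step

    print(y[0])    # approximate E^g[W_T]
    print(mu * T)  # exact value 0.3 for this driver

For this particular driver the backward scheme reproduces $\mu T$ exactly (up to floating-point error), because $Z$ is constant along the tree; for general drivers the scheme only converges as the step size shrinks.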

Existence and uniqueness

Let $g : [0, T] \times \mathbb{R}^m \times \mathbb{R}^{m \times d} \to \mathbb{R}^m$ satisfy:

  1. $g(t, y, z)$ is an $\mathcal{F}_t$-adapted process for every $(y, z) \in \mathbb{R}^m \times \mathbb{R}^{m \times d}$
  2. $g(\cdot, 0, 0) \in L^2_M(0, T; \mathbb{R}^m)$, the space of square-integrable adapted processes (where $\| \cdot \|$ denotes the norm on $\mathbb{R}^m$)
  3. $g$ is Lipschitz continuous in $(y, z)$, i.e. for every $y_1, y_2 \in \mathbb{R}^m$ and $z_1, z_2 \in \mathbb{R}^{m \times d}$ it follows that $\| g(t, y_1, z_1) - g(t, y_2, z_2) \| \le C \left( \| y_1 - y_2 \| + \| z_1 - z_2 \| \right)$ for some constant $C$

Then for any terminal random variable $X \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^m)$ there exists a unique pair of $\mathcal{F}_t$-adapted processes $(Y, Z)$ which satisfies the BSDE above. [2]
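
For instance (an illustrative check, not an example from the cited papers), the driver $g(t, y, z) = \mu |z|$ used above satisfies all three conditions with $m = d = 1$: it is deterministic and hence trivially $\mathcal{F}_t$-adapted; $g(\cdot, 0, 0) = 0$ lies in the required $L^2$ space; and by the reverse triangle inequality

    $| g(t, y_1, z_1) - g(t, y_2, z_2) | = \mu \, \big| |z_1| - |z_2| \big| \le \mu \, | z_1 - z_2 |,$

so it is Lipschitz with constant $C = \mu$. Every $X \in L^2(\Omega, \mathcal{F}_T, \mathbb{P})$ therefore has a well-defined g-expectation under this driver.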

In particular, if $g$ additionally satisfies:

  1. $g$ is continuous in time ($t$)
  2. $g(t, y, 0) = 0$ for all $(t, y)$

then for the terminal random variable $X \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^m)$ it follows that the solution processes $(Y, Z)$ are square integrable. Therefore $\mathbb{E}^g[X \mid \mathcal{F}_t]$ is square integrable for all times $t$. [3]
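
The same driver also meets the two additional conditions (again an illustrative check, not taken from the cited papers): $g(t, y, z) = \mu |z|$ does not depend on $t$, so it is continuous in time, and

    $g(t, y, 0) = \mu \, |0| = 0 \quad \text{for all } (t, y),$

so the conditional g-expectations $\mathbb{E}^g[X \mid \mathcal{F}_t]$ built from it are square integrable for every square-integrable terminal value $X$.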

See also

  - Nonlinear expectation
  - Choquet expectation
  - Expected value

References

  1. Briand, Philippe; Coquet, François; Hu, Ying; Mémin, Jean; Peng, Shige (2000). "A Converse Comparison Theorem for BSDEs and Related Properties of g-Expectation". Electronic Communications in Probability. 5 (13): 101–117.
  2. Peng, S. (2004). "Nonlinear Expectations, Nonlinear Evaluations and Risk Measures". Stochastic Methods in Finance. Lecture Notes in Mathematics. Vol. 1856. pp. 165–253. doi:10.1007/978-3-540-44644-6_4. ISBN 978-3-540-22953-7.
  3. Chen, Z.; Chen, T.; Davison, M. (2005). "Choquet expectation and Peng's g-expectation". The Annals of Probability. 33 (3): 1179–1199. arXiv:math/0506598. doi:10.1214/009117904000001053.
  4. Rosazza Gianin, E. (2006). "Risk measures via g-expectations". Insurance: Mathematics and Economics. 39 (1): 19–34. doi:10.1016/j.insmatheco.2006.01.002.