G-expectation

Definition

Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which $(W_t)_{t \ge 0}$ is a ($d$-dimensional) Wiener process, let $(\mathcal{F}_t)_{t \ge 0}$ be the filtration generated by $(W_t)$, i.e. $\mathcal{F}_t = \sigma(W_s : s \in [0,t])$, and let $X$ be $\mathcal{F}_T$-measurable. Consider the backward stochastic differential equation (BSDE) given by:
$$
\begin{aligned}
dY_t &= g(t, Y_t, Z_t)\,dt - Z_t\,dW_t \\
Y_T &= X
\end{aligned}
$$

Then the g-expectation for $X$ is given by $\mathbb{E}^g[X] := Y_0$. Note that if $X$ is an $m$-dimensional vector, then $Y_t$ (for each time $t$) is an $m$-dimensional vector and $Z_t$ is an $m \times d$ matrix.
In fact the conditional expectation is given by $\mathbb{E}^g[X \mid \mathcal{F}_t] := Y_t$, and much as in the formal definition of conditional expectation it follows that $\mathbb{E}^g[1_A\,\mathbb{E}^g[X \mid \mathcal{F}_t]] = \mathbb{E}^g[1_A X]$ for any $A \in \mathcal{F}_t$ (where $1_A$ is the indicator function of $A$).[1]
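A standard special case: with the trivial driver $g \equiv 0$ the BSDE reduces to $dY_t = -Z_t\,dW_t$, $Y_T = X$, so that

$$
Y_t = X + \int_t^T Z_s\,dW_s .
$$

Taking conditional expectations (the stochastic integral has zero conditional mean) gives $\mathbb{E}^g[X \mid \mathcal{F}_t] = Y_t = \mathbb{E}[X \mid \mathcal{F}_t]$, and in particular $\mathbb{E}^g[X] = \mathbb{E}[X]$; the g-expectation thus generalizes the classical (conditional) expectation.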
Existence and uniqueness

Let $g : [0,T] \times \mathbb{R}^m \times \mathbb{R}^{m \times d} \to \mathbb{R}^m$ satisfy:
- $g(\cdot, y, z)$ is an $\mathcal{F}_t$-adapted process for every $(y,z) \in \mathbb{R}^m \times \mathbb{R}^{m \times d}$;
- $\int_0^T |g(t,0,0)|\,dt \in L^2(\Omega, \mathcal{F}_T, \mathbb{P})$, the L2 space (where $|\cdot|$ is a norm on $\mathbb{R}^m$);
- $g$ is Lipschitz continuous in $(y,z)$, i.e. for every $y_1, y_2 \in \mathbb{R}^m$ and $z_1, z_2 \in \mathbb{R}^{m \times d}$ it holds that $|g(t,y_1,z_1) - g(t,y_2,z_2)| \le C(|y_1 - y_2| + |z_1 - z_2|)$ for some constant $C$.

Then for any terminal random variable $X \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^m)$ there exists a unique pair of $\mathcal{F}_t$-adapted processes $(Y, Z)$ which satisfy the BSDE above.[2]
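As a concrete example (illustrative, not one singled out by the source), in the scalar case $m = 1$ the driver $g(t,y,z) = \mu|z|$ with a constant $\mu > 0$ satisfies all three conditions: it is deterministic (hence adapted), $\int_0^T |g(t,0,0)|\,dt = 0$, and the reverse triangle inequality gives $\big|\mu|z_1| - \mu|z_2|\big| \le \mu|z_1 - z_2|$, so it is Lipschitz with constant $C = \mu$.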
In particular, if $g$ additionally satisfies:
- $g$ is continuous in time ($t$);
- $g(t,y,0) \equiv 0$ for all $(t,y) \in [0,T] \times \mathbb{R}^m$;

then for the terminal random variable $X \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^m)$ the solution processes $(Y,Z)$ are square integrable. Therefore $\mathbb{E}^g[X \mid \mathcal{F}_t]$ is square integrable for all times $t$.[3]
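In general $\mathbb{E}^g[X] = Y_0$ admits no closed form and is approximated numerically. The following is a minimal sketch (not a method given in this article) of a standard approach: backward Euler time-stepping with least-squares regression on a polynomial basis of $W_{t_i}$ to estimate the conditional expectations, written for the scalar case $d = m = 1$. The driver $g(t,y,z) = \mu|z|$, the terminal function $\varphi(w) = w^2$, and all function names and numerical parameters (`g`, `phi`, `g_expectation`, step and sample counts, basis degree) are assumptions made for the illustration.

```python
import numpy as np

def g(t, y, z, mu):
    # Illustrative Lipschitz driver g(t, y, z) = mu * |z| (an assumption for this sketch).
    return mu * np.abs(z)

def phi(w):
    # Terminal condition X = phi(W_T); here phi(w) = w**2 (an assumption for this sketch).
    return w ** 2

def g_expectation(T=1.0, mu=0.3, n_steps=50, n_paths=100_000, basis_degree=4, seed=0):
    """Approximate E^g[phi(W_T)] = Y_0 for the BSDE
        dY_t = g(t, Y_t, Z_t) dt - Z_t dW_t,   Y_T = phi(W_T),
    by backward Euler with regression-based conditional expectations."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

    Y = phi(W[:, -1])                      # terminal values Y_T on each path
    for i in range(n_steps - 1, -1, -1):
        t_i = i * dt
        if i == 0:
            # F_0 is trivial, so conditional expectations are plain averages.
            EY = np.full(n_paths, Y.mean())
            Z = np.full(n_paths, -(Y * dW[:, 0]).mean() / dt)
        else:
            # Regress on a polynomial basis in W_{t_i} to estimate E[. | F_{t_i}].
            basis = np.vander(W[:, i], basis_degree + 1)
            coef_y, *_ = np.linalg.lstsq(basis, Y, rcond=None)
            EY = basis @ coef_y
            coef_z, *_ = np.linalg.lstsq(basis, Y * dW[:, i], rcond=None)
            Z = -(basis @ coef_z) / dt     # minus sign from dY = g dt - Z dW
        Y = EY - g(t_i, EY, Z, mu) * dt    # explicit backward Euler step
    return Y.mean()

if __name__ == "__main__":
    print(g_expectation(mu=0.0))  # ~ E[W_T^2] = T = 1 (classical expectation)
    print(g_expectation(mu=0.3))  # g-expectation under the driver mu * |z|
```

Setting $\mu = 0$ reduces the recursion to nested conditional expectations, so the first printed value should be close to $\mathbb{E}[W_T^2] = T = 1$, which provides a quick sanity check of the scheme.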