arXiv:[math.PR], Sep

NUMERICAL SCHEMES FOR G-EXPECTATIONS

YAN DOLINSKY
DEPARTMENT OF MATHEMATICS, ETH ZURICH, SWITZERLAND
Abstract. We consider a discrete time analog of G-expectations and prove that as the time step goes to 0 the corresponding values converge to the original G-expectation. Furthermore, we provide error estimates for the convergence rate. This paper is a continuation of [4]. Our main tool is a strong approximation theorem which we derive for general discrete time martingales.

1. Introduction
In this paper we study numerical schemes for G-expectations, which were introduced recently by Peng (see [7] and [8]). A G-expectation is a sublinear function which maps random variables on the canonical space $\Omega := C([0,T]; \mathbb{R}^d)$ to the real numbers. The motivation to study G-expectations comes from mathematical finance, in particular from risk measures (see [6] and [9]) and pricing under volatility uncertainty (see [2], [6] and [12]).

Our starting point is the dual view on G-expectations via volatility uncertainty (see [1]), which yields the representation $\xi \to \sup_{P \in \mathcal{P}} E_P[\xi]$, where $\mathcal{P}$ is the set of probabilities on $C([0,T]; \mathbb{R}^d)$ such that under any $P \in \mathcal{P}$ the canonical process $B$ is a martingale with volatility $d\langle B\rangle_t/dt$ taking values in a compact convex subset $\mathbb{D} \subset \mathbb{S}^d_+$ of positive definite matrices. Thus the set $\mathbb{D}$ can be understood as the domain of (Knightian) volatility uncertainty, and the functional above represents the super-hedging price of a European option with reward $\xi$. For details see [2] and [6].

In the current work we assume that $\xi$ is of the form $F(B, \langle B\rangle)$, where $F$ is a path-dependent functional which satisfies some regularity conditions. In particular, $\xi$ can represent the payoff of a path-dependent European contingent claim. In this case the reward is a functional of the stock price, which is equal to the Doléans exponential of the canonical process, and so the quadratic variation appears naturally.

In [4], the authors introduced volatility uncertainty in discrete time and an analog of Peng's G-expectation. They proved that the discrete time values converge to the continuous time G-expectation. The main tools used there are the weak convergence machinery together with a randomization technique. The main disadvantage of the weak convergence approach is that it cannot provide error estimates. In order to obtain error estimates we should consider all the market models on the same probability space, and so methods based on strong approximation theorems come into the picture. In this paper we consider a slightly different (than in [4]) discrete time analog of G-expectations and prove that as the time step goes to 0 the corresponding values converge to the original G-expectation. Furthermore, by deriving a strong invariance principle for general discrete time martingales, we are able to provide error estimates for the convergence rate.

The paper is organized as follows. In the next section we introduce the setup and formulate the main results. In Section 3 we present the main machinery which we use, namely a strong approximation theorem for general martingales. In Section 4 we derive auxiliary lemmas that are used in the proof of the main results. In Section 5 we complete the proof of Theorems 2.2-2.3.

Date: November 11, 2018.
2000 Mathematics Subject Classification.
Key words and phrases. G-expectations, volatility uncertainty, strong approximation theorems.
I am very grateful to M. Nutz, H. M. Soner and O. Zeitouni for valuable discussions.

2. Preliminaries and main results
We fix the dimension $d \in \mathbb{N}$ and denote by $\|\cdot\|$ the Euclidean norm on $\mathbb{R}^d$. Moreover, we denote by $\mathbb{S}^d$ the space of $d \times d$ symmetric matrices and by $\mathbb{S}^d_+$ its subset of nonnegative definite matrices. We consider the space $\mathbb{S}^d$ with the operator norm $\|A\| = \sup_{\|v\|=1} \|A(v)\|$. We fix a nonempty, convex and compact set $\mathbb{D} \subset \mathbb{S}^d_+$; the elements of $\mathbb{D}$ will be the possible values of our volatility process. Denote by $\Omega = C([0,T]; \mathbb{R}^d)$ and $\Gamma = C([0,T]; \mathbb{S}^d)$ the spaces of continuous functions with values in $\mathbb{R}^d$ and $\mathbb{S}^d$, respectively. We consider these spaces with the sup norm $\|x\| = \sup_{0 \le t \le T} \|x_t\|$. Let $F : \Omega \times \Gamma \to \mathbb{R}$ be a function which satisfies the following assumption: there exist constants $H_1, H_2 > 0$ such that

(2.1) $|F(u_1, v_1) - F(u_2, v_2)| \le H_1 \exp\big(H_2(\|u_1\| + \|u_2\| + \|v_1\| + \|v_2\|)\big) \times (\|u_1 - u_2\| + \|v_1 - v_2\|)$, $u_1, u_2 \in \Omega$, $v_1, v_2 \in \Gamma$.

Without loss of generality we assume that the maturity date is $T = 1$. We denote by $B = (B_t)_{0 \le t \le 1}$ the canonical process on the space $\Omega$, $B_t(\omega) = \omega_t$, $\omega \in \Omega$, and by $\mathcal{F}_t := \sigma(B_s, 0 \le s \le t)$ the canonical filtration. A probability measure $P$ on $\Omega$ is called a martingale law if $B$ is a $P$-martingale (with respect to the filtration $\mathcal{F}_t$) and $B_0 = 0$ $P$-a.s. (all our martingales start at the origin). We set

(2.2) $\mathcal{P}_{\mathbb{D}} = \{P \text{ martingale law on } \Omega : d\langle B\rangle_t/dt \in \mathbb{D}, \ P \times dt \text{ a.s.}\}$;

observe that under any measure $P \in \mathcal{P}_{\mathbb{D}}$ the stochastic processes $B$ and $\langle B\rangle$ are random elements in $\Omega$ and $\Gamma$, respectively. Consider the G-expectation

(2.3) $V = \sup_{P \in \mathcal{P}_{\mathbb{D}}} E_P F(B, \langle B\rangle)$,

where $E_P$ denotes the expectation with respect to $P$. A measure $P \in \mathcal{P}_{\mathbb{D}}$ will be called $\epsilon$-optimal if

(2.4) $V < \epsilon + E_P F(B, \langle B\rangle)$.

Our goal is to find discrete time approximations of $V$. The advantage of discrete time approximations is that the corresponding values can be calculated by dynamic programming. Furthermore, we will apply these approximations in order to find $\epsilon$-optimal measures in the continuous time setting.

Remark 2.1.
Let $S = \{(S^1_t, \ldots, S^d_t)\}_{t=0}^1$ be the Doléans exponential $\mathcal{E}(B)$ of the canonical process $B$, namely $S^i_t = S^i_0 \exp\big(B^i_t - \frac{1}{2}\langle B^i\rangle_t\big)$, $1 \le i \le d$, $t \in [0,1]$. The stochastic process $S$ represents the stock prices in a financial model with volatility uncertainty. Clearly any random variable of the form $g(S)$, where $g : C([0,T]; \mathbb{R}^d) \to \mathbb{R}_+$ is a Lipschitz continuous function, can be written in the form $g(S) = F(B, \langle B\rangle)$ for a suitable $F$ which satisfies (2.1). Thus we see that our setup includes payoffs which correspond to path-dependent European options.

Next, we formulate the main approximation results. Let $\nu$ be a distribution on $\mathbb{R}^d$ which satisfies

(2.5) $\int_{\mathbb{R}^d} x \, d\nu(x) = 0$ and $\int_{\mathbb{R}^d} x_i x_j \, d\nu(x) = \delta_{ij}$, $1 \le i, j \le d$,

where $\delta_{ij}$ is the Kronecker delta. Furthermore, we assume that the moment generating function $\psi_\nu(y) := \int_{\mathbb{R}^d} \exp\big(\sum_{i=1}^d x_i y_i\big) d\nu(x) < \infty$ exists for any $y \in \mathbb{R}^d$ and, for any compact set $K \subset \mathbb{R}^d$,

(2.6) $\sup_{n \in \mathbb{N}} \sup_{y \in K} \psi^n_\nu\Big(\frac{y}{\sqrt{n}}\Big) < \infty$.

Observe that the standard $d$-dimensional normal distribution $\nu = N(0, I)$ satisfies the assumptions (2.5)-(2.6).

Let $n \in \mathbb{N}$ and let $Y_1, \ldots, Y_n$ be a sequence of i.i.d. random vectors with $\mathcal{L}(Y_1) = \nu$, i.e., the distribution of the random vectors is $\nu$. We denote by $\mathcal{A}^\nu_n$ the set of all $d$-dimensional stochastic processes $M = (M_0, \ldots, M_n)$ of the form $M_0 = 0$ and

(2.7) $M_i = \sum_{j=1}^i \frac{1}{\sqrt{n}} \phi_j(Y_1, \ldots, Y_{j-1}) Y_j$, $1 \le i \le n$,

where $\phi_j : (\mathbb{R}^d)^{j-1} \to \sqrt{\mathbb{D}} := \{\sqrt{A} : A \in \mathbb{D}\}$ and $Y_1, \ldots, Y_n$ are column vectors. As usual, for a matrix $A \in \mathbb{S}^d_+$ we denote by $\sqrt{A}$ its unique square root in $\mathbb{S}^d_+$. Observe that $M$ is a martingale under the filtration which is generated by $Y_1, \ldots, Y_n$. Let $\langle M\rangle$ be the ($\mathbb{S}^d_+$ valued) predictable variation of $M$. In view of (2.5) we get

(2.8) $\langle M\rangle_k = \frac{1}{n} \sum_{j=1}^k \phi^2_j(Y_1, \ldots, Y_{j-1})$, $1 \le k \le n$,

and we set $\langle M\rangle_0 = 0$.
Let $\mathcal{W}_n : (\mathbb{R}^d)^{n+1} \times (\mathbb{S}^d)^{n+1} \to \Omega \times \Gamma$ be the linear interpolation operator given by

$\mathcal{W}_n(u, v)(t) := ([nt] + 1 - nt)(u_{[nt]}, v_{[nt]}) + (nt - [nt])(u_{[nt]+1}, v_{[nt]+1})$, $t \in [0,1]$,

where $u = (u_0, u_1, \ldots, u_n)$, $v = (v_0, v_1, \ldots, v_n)$ and $[z]$ denotes the integer part of $z$. Set

(2.9) $V^\nu_n = \sup_{M \in \mathcal{A}^\nu_n} E F(\mathcal{W}_n(M, \langle M\rangle))$,

where we denote by $E$ the expectation with respect to the underlying probability measure. The following theorem, which will be proved in Section 5, is the main result of the paper.
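For orientation, the objects (2.7)-(2.9) can be sketched in a few lines of Python for $d = 1$. We take $\nu = N(0,1)$ and a constant control $\phi_j \equiv \gamma \in \sqrt{\mathbb{D}}$ (a hypothetical choice of ours); any such admissible control yields a Monte Carlo lower bound $E F(\mathcal{W}_n(M, \langle M\rangle)) \le V^\nu_n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 100, 0.9          # gamma in sqrt(D); both values illustrative

def W_n(path, t):
    """Evaluate the piecewise-linear interpolation of a discrete path at time t,
    as in the operator W_n above (one coordinate of it)."""
    k = min(int(n * t), n - 1)
    return (k + 1 - n * t) * path[k] + (n * t - k) * path[k + 1]

Y = rng.standard_normal(n)                                          # nu = N(0, 1)
M = np.concatenate([[0.0], np.cumsum(gamma * Y / np.sqrt(n))])      # (2.7)
QV = np.concatenate([[0.0], np.cumsum(np.full(n, gamma**2 / n))])   # (2.8)

# Example payoff F(u, v) = v(1): the terminal quadratic variation.
print(W_n(QV, 1.0))   # equals gamma**2 = 0.81 for this control, up to rounding
```

For this constant control the terminal quadratic variation is deterministic, which makes the sketch easy to check by hand.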
For any $\epsilon > 0$ there exists a constant $C_\epsilon = C_\epsilon(\nu)$ which depends only on the distribution $\nu$ such that

(2.10) $|V^\nu_n - V| \le C_\epsilon n^{\epsilon - 1/8}$, $\forall n \in \mathbb{N}$.

Furthermore, if the function $F$ is bounded, then there exists a constant $C = C(\nu)$ for which

(2.11) $|V^\nu_n - V| \le C n^{-1/8}$, $\forall n \in \mathbb{N}$.
Next, we describe a dynamic programming algorithm for $V^\nu_n$ and for the optimal control, which in general need not be unique. Fix $n \in \mathbb{N}$ and define a sequence of functions $J^{\nu,n}_k : (\mathbb{R}^d)^{k+1} \times (\mathbb{S}^d)^{k+1} \to \mathbb{R}$, $k = 0, 1, \ldots, n$, by the backward recursion $J^{\nu,n}_n(u_0, \ldots, u_n, v_0, \ldots, v_n) = F(\mathcal{W}_n(u, v))$ and

(2.12) $J^{\nu,n}_k(u_0, \ldots, u_k, v_0, \ldots, v_k) = \sup_{\gamma \in \sqrt{\mathbb{D}}} E\Big(J^{\nu,n}_{k+1}\Big(u_0, \ldots, u_k, u_k + \frac{\gamma Y_{k+1}}{\sqrt{n}}, v_0, \ldots, v_k, v_k + \frac{\gamma^2}{n}\Big)\Big) = \sup_{\gamma \in \sqrt{\mathbb{D}}} \int_{\mathbb{R}^d} J^{\nu,n}_{k+1}\Big(u_0, \ldots, u_k, u_k + \frac{\gamma x}{\sqrt{n}}, v_0, \ldots, v_k, v_k + \frac{\gamma^2}{n}\Big) d\nu(x)$

for $k = 0, 1, \ldots, n-1$. From (2.1) and (2.6) it follows that there exists a constant $\hat{H}$ such that

$J^{\nu,n}_k(u_0, \ldots, u_k, v_0, \ldots, v_k) \le \hat{H} \exp\Big((H_2 + 1)\sum_{i=0}^k (\|u_i\| + \|v_i\|)\Big)$, $\forall k, u_0, \ldots, u_k, v_0, \ldots, v_k$.

Fix $k$. By applying (2.6) again we conclude that for any compact sets $K_1 \subset \mathbb{R}^d$ and $K_2 \subset \mathbb{S}^d_+$, the family of random variables

$J^{\nu,n}_{k+1}\Big(u_0, \ldots, u_k, u_k + \frac{\gamma Y_{k+1}}{\sqrt{n}}, v_0, \ldots, v_k, v_k + \frac{\gamma^2}{n}\Big)$, $\gamma \in \sqrt{\mathbb{D}}$, $u_0, \ldots, u_k \in K_1$, $v_0, \ldots, v_k \in K_2$,

is uniformly integrable. This, together with the fact that the set $\mathbb{D}$ is compact, gives (by backward induction) that for any $k$ the function $J^{\nu,n}_k$ is continuous. Thus we can introduce the functions $h^{\nu,n}_k : (\mathbb{R}^d)^{k+1} \times (\mathbb{S}^d)^{k+1} \to \sqrt{\mathbb{D}}$, $k = 0, 1, \ldots, n-1$,

(2.13) $h^{\nu,n}_k(u_0, \ldots, u_k, v_0, \ldots, v_k) = \arg\max_{\gamma \in \sqrt{\mathbb{D}}} \int_{\mathbb{R}^d} J^{\nu,n}_{k+1}\Big(u_0, \ldots, u_k, u_k + \frac{\gamma x}{\sqrt{n}}, v_0, \ldots, v_k, v_k + \frac{\gamma^2}{n}\Big) d\nu(x)$.

Finally, define by induction the stochastic processes $\{M^{\nu,n}_k\}_{k=0}^n$ and $\{N^{\nu,n}_k\}_{k=0}^n$, with values in $\mathbb{R}^d$ and $\mathbb{S}^d$, respectively, by $M^{\nu,n}_0 = 0$, $N^{\nu,n}_0 = 0$ and, for $k < n$,

(2.14) $N^{\nu,n}_{k+1} = N^{\nu,n}_k + \frac{1}{n}\big(h^{\nu,n}_k(M^{\nu,n}_0, \ldots, M^{\nu,n}_k, N^{\nu,n}_0, \ldots, N^{\nu,n}_k)\big)^2$ and $M^{\nu,n}_{k+1} = M^{\nu,n}_k + \frac{1}{\sqrt{n}} h^{\nu,n}_k(M^{\nu,n}_0, \ldots, M^{\nu,n}_k, N^{\nu,n}_0, \ldots, N^{\nu,n}_k) Y_{k+1}$.

Observe that $M^{\nu,n} \in \mathcal{A}^\nu_n$ and $N^{\nu,n} = \langle M^{\nu,n}\rangle$.
From the dynamic programming principle it follows that

(2.15) $V^\nu_n = J^{\nu,n}_0(0, 0) = E F(\mathcal{W}_n(M^{\nu,n}, \langle M^{\nu,n}\rangle))$.

In the following theorem (which will be proved in Section 5) we provide an explicit construction of $\epsilon$-optimal measures for the G-expectation which is defined in (2.3).
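To make the recursion (2.12)-(2.14) concrete, here is a minimal Python sketch for $d = 1$ with a payoff depending only on the terminal values, so that $J_k$ collapses to a function of $(k, u_k, v_k)$. We take $\nu$ to be the Rademacher law (which satisfies (2.5)-(2.6)) and replace the sup over $\sqrt{\mathbb{D}}$ by a finite grid; both are our simplifications, not part of the scheme above.

```python
import math
from functools import lru_cache

n = 6
SQRT_D_GRID = [0.5, 0.75, 1.0]            # hypothetical discretisation of sqrt(D)
NU_SUPPORT = [(-1.0, 0.5), (1.0, 0.5)]    # Rademacher law: mean 0, variance 1

def payoff(u, v):
    return u * u          # F = (terminal value)^2, a Markovian special case

@lru_cache(maxsize=None)
def J(k, u, v):
    """Value function J_k^{nu,n} restricted to the state (u_k, v_k)."""
    if k == n:
        return payoff(u, v)
    best = -math.inf
    for g in SQRT_D_GRID:                 # sup over gamma in sqrt(D), as in (2.12)
        e = sum(p * J(k + 1, u + g * x / math.sqrt(n), v + g * g / n)
                for x, p in NU_SUPPORT)   # expectation under nu
        best = max(best, e)
    return best

V_n = J(0, 0.0, 0.0)
print(V_n)   # -> 1.0 up to rounding: the largest admissible variance
```

For this payoff one can check by backward induction that $J_k(u, v) = u^2 + (n-k)\gamma_{\max}^2/n$, so the recursion returns $\gamma_{\max}^2 = 1.0$, the largest variance in $\mathbb{D}$, as expected.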
Let $(\Omega^W, \mathcal{F}^W, P^W)$ be a complete probability space together with a standard $d$-dimensional Brownian motion $\{W_t\}_{t \in [0,1]}$ and its natural filtration $\mathcal{F}^W_t = \sigma\{W(s) \mid s \le t\}$. Consider the standard normal distribution $\nu_g = N(0, I)$. For any $n \in \mathbb{N}$, let $f_n : (\mathbb{R}^d)^n \to \mathbb{R}^d$ be a function which satisfies $f_n(Y^g_1, \ldots, Y^g_n) = M^{\nu_g,n}_n$, where $Y^g_1, \ldots, Y^g_n$ are i.i.d. and $\mathcal{L}(Y^g_1) = \nu_g$. Observe that $f_n$ can be calculated from (2.12)-(2.14). Define the continuous stochastic process $\{M^n_t\}_{t=0}^1$ by

(2.16) $M^n_t = E_W\Big(f_n\big(\sqrt{n} W_{1/n}, \sqrt{n}(W_{2/n} - W_{1/n}), \ldots, \sqrt{n}(W_1 - W_{(n-1)/n})\big) \mid \mathcal{F}^W_t\Big)$, $t \in [0,1]$,

where $E_W$ denotes the expectation with respect to $P^W$. Let $P^n$ be the distribution of $M^n$ on the canonical space $\Omega$. Then $P^n \in \mathcal{P}_{\mathbb{D}}$, and for any $\epsilon > 0$ there exists a constant $\tilde{C}_\epsilon$ such that

(2.17) $V < E_n F(B, \langle B\rangle) + \tilde{C}_\epsilon n^{\epsilon - 1/8}$, $\forall n$,

where $E_n$ denotes the expectation with respect to $P^n$. If the function $F$ is bounded, then there exists a constant $\tilde{C}$ for which

(2.18) $V < E_n F(B, \langle B\rangle) + \tilde{C} n^{-1/8}$, $\forall n$.

3. The main tool
In this section we derive a strong approximation theorem (Lemma 3.2) which is the main tool in the proof of Theorems 2.2-2.3. This theorem is an extension of the main result in [11]. For any two distributions $\nu_1, \nu_2$ on the same measurable space $(X, \mathcal{B})$ we define the distance in variation

(3.1) $\rho(\nu_1, \nu_2) = \sup_{B \in \mathcal{B}} |\nu_1(B) - \nu_2(B)|$.

First we state some results (without proof) from [11] (Lemmas 4.5 and 7.2 in [11]) that will be used in the proof of Lemma 3.2.
Lemma 3.1.
i. There exists a distribution $\mu$ on $\mathbb{R}^d$ which is supported on the set $(-1/2, 1/2)^d$ and has the following property. There exists a constant $C_1 > 0$ such that for any distributions $\nu_1, \nu_2$ on $\mathbb{R}^d$ which satisfy $\int_{\mathbb{R}^d} x \, d\nu_1(x) = \int_{\mathbb{R}^d} x \, d\nu_2(x)$ and, for $1 \le i, j \le d$,

(3.2) $\int_{\mathbb{R}^d} x_i x_j \, d\nu_1(x) = \int_{\mathbb{R}^d} x_i x_j \, d\nu_2(x)$,

we have

(3.3) $\rho(\nu_1 * \mu, \nu_2 * \mu) \le C_1\Big(\int_{\mathbb{R}^d} \|x\|^3 d\nu_1(x) + \int_{\mathbb{R}^d} \|x\|^3 d\nu_2(x)\Big)$,

where $\nu * \mu$ denotes the convolution of the measures $\nu$ and $\mu$.
ii. Let $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ be a probability space together with a $d$-dimensional random vector $Y$, an $m$-dimensional random vector $Z$ ($m$ is some natural number), and a random variable $\alpha$ which is distributed uniformly on the interval $[0,1]$ and independent of $Y$ and $Z$. Let $\nu$ be a distribution on $\mathbb{R}^d$ and let $\hat\nu$ be a distribution on $\mathbb{R}^m \times \mathbb{R}^d$ such that $\hat\nu(A \times \mathbb{R}^d) = \tilde{P}(Z \in A)$ for any $A \in \mathcal{B}(\mathbb{R}^m)$, i.e. the marginal distribution of $\hat\nu$ on $\mathbb{R}^m$ equals $\mathcal{L}(Z)$. There exists a measurable function $\Phi = \Phi_{\nu, \hat\nu, \mathcal{L}(Z,Y)} : \mathbb{R}^m \times \mathbb{R}^d \times [0,1] \to \mathbb{R}^d \times \mathbb{R}^d$ such that for the vector

(3.4) $(U, X) := \Phi(Z, Y, \alpha)$

we have the following: $\mathcal{L}(U) = \nu$, $\mathcal{L}(Z, X) = \hat\nu$, $U$ is independent of $X, Z$, and

(3.5) $\tilde{P}(U + X \ne Y \mid Z) = \rho\big(\mathcal{L}(U) * \mathcal{L}(X \mid Z), \mathcal{L}(Y \mid Z)\big)$.

Now we are ready to prove the main result of this section. For any stochastic process $Z = \{Z_k\}_{k=0}^n$ we denote $\Delta Z_k := Z_k - Z_{k-1}$ for $k \ge 0$, where we set $Z_{-1} = 0$. Fix $n \in \mathbb{N}$ and consider a $d$-dimensional martingale $\{M_k\}_{k=0}^n$ with respect to its natural filtration, which satisfies $M_0 = 0$. For any $1 \le k \le n$, let $\phi_k : (\mathbb{R}^d)^k \to \mathbb{S}^d$ be a measurable map such that

(3.6) $\sqrt{\Delta\langle M\rangle_k} = \sqrt{E\big(\Delta M_k \Delta M_k' \mid \sigma\{M_0, M_1, \ldots, M_{k-1}\}\big)} = \phi_k(\Delta M_0, \Delta M_1, \ldots, \Delta M_{k-1})$,

where $\{\langle M\rangle_k\}_{k=0}^n$ is the ($\mathbb{S}^d_+$ valued) predictable variation of $M$ and the symbol $'$ denotes transposition. We assume that there exists a constant $H$ for which

(3.7) $E\big(\|\Delta M_k\|^3 \mid \sigma\{M_0, \ldots, M_{k-1}\}\big) + \big\|\sqrt{\Delta\langle M\rangle_k}\big\|^3 \le H$ a.s. $\forall k$.
Let $\nu$ be a distribution on $\mathbb{R}^d$ such that

(3.8) $\int_{\mathbb{R}^d} x \, d\nu(x) = 0$, $\int_{\mathbb{R}^d} x_i x_j \, d\nu(x) = \delta_{ij}$ $\forall 1 \le i, j \le d$, and $\int_{\mathbb{R}^d} \|x\|^3 d\nu(x) < \infty$.

For any $\Theta > 0$ it is possible to construct the martingale $\{M_k\}_{k=0}^n$ on some probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ together with a sequence of i.i.d. random vectors $Y_1, \ldots, Y_n$ with the following properties:
i. $\mathcal{L}(Y_1) = \nu$.
ii. For any $k$, the random vectors $M_0, \ldots, M_{k-1}$ are independent of $Y_k$.
iii. There exists a constant $C_2 = C_2(\nu)$ which depends only on the distribution $\nu$ such that

(3.9) $\tilde{P}\Big(\max_{0 \le k \le n} \Big\|M_k - \sum_{j=1}^k \sqrt{\Delta\langle M\rangle_j}\, Y_j\Big\| > \Theta\Big) \le \frac{C_2 H n}{\Theta^3}$.

Proof.
Fix $\Theta > 0$. For any $k$ let $\nu_k$ be the distribution of the random vector $(\Delta M_0, \ldots, \Delta M_k)$. Let $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ be a probability space which contains a sequence of i.i.d. random vectors $Y_1, \ldots, Y_n$ such that $\mathcal{L}(Y_1) = \nu$, a sequence of i.i.d. random variables $\alpha_1, \ldots, \alpha_n$ which are distributed uniformly on the interval $[0,1]$ and independent of $Y_1, \ldots, Y_n$, and a random vector $U_0$ which is independent of $Y_1, \ldots, Y_n, \alpha_1, \ldots, \alpha_n$ and satisfies $\mathcal{L}(U_0) = \mu$, where the distribution $\mu$ is defined in the first part of Lemma 3.1. Define the sequences $\{X_i\}_{i=0}^n$ and $\{U_i\}_{i=1}^n$ by the following recursive relations: $X_0 = 0$ and

(3.10) $(U_k, X_k) = \Phi_{\mu, \hat\nu_k, \mathcal{L}(Z,Y)}\Big(X_0, \ldots, X_{k-1},\; U_{k-1} + \tfrac{1}{\Theta}\phi_k(\Theta X_0, \ldots, \Theta X_{k-1}) Y_k,\; \alpha_k\Big)$, $1 \le k \le n$,

where $\Phi$ is the map of Lemma 3.1(ii) applied with $\nu = \mu$ and with $\hat\nu_k$ the distribution of $\tfrac{1}{\Theta}(\Delta M_0, \ldots, \Delta M_k)$ (so that, by induction, its marginal on the first $k$ coordinates equals $\mathcal{L}(X_0, \ldots, X_{k-1})$). From the definition of the map $\Phi$ it follows (by induction) that $\mathcal{L}(\Theta X_0, \ldots, \Theta X_n) = \mathcal{L}(\Delta M_0, \ldots, \Delta M_n)$. We conclude that the stochastic process $\Theta\sum_{i=0}^k X_i$, $0 \le k \le n$, is distributed as $\{M_k\}_{k=0}^n$, and so we set

(3.11) $M_k = \Theta\sum_{i=0}^k X_i$, $0 \le k \le n$.

Let $1 \le k \le n$. From (3.10)-(3.11) and the fact that $Y_k$ is independent of $Y_1, \ldots, Y_{k-1}, \alpha_1, \ldots, \alpha_{k-1}$, it follows that $Y_k$ is independent of $M_0, \ldots, M_{k-1}$. Thus, in order to complete the proof it remains to establish (3.9). Set $\delta_k = U_k + X_k - U_{k-1} - \tfrac{1}{\Theta}\phi_k(\Theta X_0, \ldots, \Theta X_{k-1}) Y_k$ and

(3.12) $\rho_k(x_0, \ldots, x_{k-1}) = \tilde{P}(\delta_k \ne 0 \mid X_0 = x_0, \ldots, X_{k-1} = x_{k-1})$, $x_0, \ldots, x_{k-1} \in \mathbb{R}^d$, $1 \le k \le n$.

From the properties of the map $\Phi$ it follows that for any $k$, $U_k$ is independent of $X_0, \ldots, X_k$ and $\mathcal{L}(U_k) = \mu$. This together with (3.5) and (3.10) gives

(3.13) $\rho_k(x_0, \ldots, x_{k-1}) = \rho\big(\mathcal{L}(X_k \mid X_0 = x_0, \ldots, X_{k-1} = x_{k-1}) * \mu,\; \mathcal{L}\big(\tfrac{1}{\Theta}\phi_k(\Theta x_0, \ldots, \Theta x_{k-1}) Y_k\big) * \mu\big)$, $x_0, \ldots, x_{k-1} \in \mathbb{R}^d$, $1 \le k \le n$.

From (3.3), (3.6)-(3.7), (3.11) and (3.13),

(3.14) $\rho_k(x_0, \ldots, x_{k-1}) \le \frac{C_2 H}{\Theta^3}$, $x_0, \ldots, x_{k-1} \in \mathbb{R}^d$, $1 \le k \le n$,

for some constant $C_2 = C_2(\nu)$ which depends only on the distribution $\nu$. From (3.11)-(3.12), (3.14) and the fact that $\max_{0 \le k \le n} \|U_k\| < 1/2$ a.s. we obtain

$\tilde{P}\Big(\max_{0 \le k \le n} \big\|M_k - \sum_{j=1}^k \sqrt{\Delta\langle M\rangle_j}\, Y_j\big\| > \Theta\Big) = \tilde{P}\Big(\max_{0 \le k \le n} \big\|M_k - \sum_{j=0}^{k-1} \phi_{j+1}(\Delta M_0, \ldots, \Delta M_j) Y_{j+1}\big\| > \Theta\Big) = \tilde{P}\Big(\max_{0 \le k \le n} \Theta\big\|\sum_{i=1}^k \delta_i + U_0 - U_k\big\| > \Theta\Big) \le \sum_{i=1}^n \tilde{P}(\delta_i \ne 0) \le \frac{C_2 H n}{\Theta^3}$,

and we conclude the proof. $\square$

4. Auxiliary lemmas
In this section we derive several estimates which are essential for the proof of Theorems 2.2-2.3. We start with the following general result.
Lemma 4.1.
Let $\{M_t\}_{t=0}^1$ be a one-dimensional continuous martingale which satisfies $\frac{d\langle M\rangle_t}{dt} \le H$ a.s. for some constant $H$. Consider the discrete time martingale $N_k = M_{k/n}$, $0 \le k \le n$, together with its predictable variation process $\{\langle N\rangle_k\}_{k=0}^n$, which is given by $\langle N\rangle_0 = 0$ and

$\langle N\rangle_k = \sum_{i=1}^k E\big((\Delta N_i)^2 \mid \sigma\{N_0, \ldots, N_{i-1}\}\big)$, $1 \le k \le n$.

There exist constants $C_1, C_2$ (which depend only on $H$) such that

(4.1) $E\Big(\max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |M_t - N_k|^4\Big) \le \frac{C_1}{n}$

and

(4.2) $E\Big(\max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |\langle M\rangle_t - \langle N\rangle_k|^2\Big) \le \frac{C_2}{\sqrt{n}}$.

Proof. From the Burkholder-Davis-Gundy inequality it follows that there exists a constant $c_1$ such that

(4.3) $E\Big(\max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |M_t - N_k|^4\Big) \le \sum_{k=0}^{n-1} E\Big(\max_{k/n \le t \le (k+1)/n} |M_t - M_{k/n}|^4\Big) \le c_1 \sum_{k=0}^{n-1} E\big(|\langle M\rangle_{(k+1)/n} - \langle M\rangle_{k/n}|^2\big) \le c_1 n \frac{H^2}{n^2} = \frac{c_1 H^2}{n}$;

this completes the proof of (4.1). Next, we prove (4.2). Define the optional variation of the martingale $\{N_k\}_{k=0}^n$ by $[N]_0 = 0$ and

(4.4) $[N]_k = \sum_{i=1}^k (\Delta N_i)^2$, $1 \le k \le n$.

From the relation $E(\Delta[N]_k \mid \sigma\{N_0, \ldots, N_{k-1}\}) = \Delta\langle N\rangle_k$ and the Doob-Kolmogorov inequality we obtain

(4.5) $E\big(\max_{0 \le k \le n} |[N]_k - \langle N\rangle_k|^2\big) \le 4 E\big(|[N]_n - \langle N\rangle_n|^2\big) = 4 E\Big(\big|\sum_{i=1}^n \Delta[N]_i - \Delta\langle N\rangle_i\big|^2\Big) = 4 \sum_{i=1}^n E\big(|\Delta[N]_i - \Delta\langle N\rangle_i|^2\big) \le 4 \sum_{i=1}^n E\big((\Delta[N]_i)^2\big) = 4 \sum_{i=1}^n E\big(|M_{i/n} - M_{(i-1)/n}|^4\big) \le \frac{c_2 H^2}{n}$,

where the last inequality follows from the Burkholder-Davis-Gundy inequality. Next, observe that

(4.6) $[N]_k = N_k^2 - 2\sum_{i=0}^{k-1} N_i(N_{i+1} - N_i) = N_k^2 - 2\int_0^{k/n} N_{[nt]}\, dM_t$, $0 \le k \le n$.

From the Doob-Kolmogorov inequality and Itô's isometry we get

(4.7) $E\Big(\sup_{0 \le u \le 1} \Big|\int_0^u (M_t - N_{[nt]})\, dM_t\Big|^2\Big) \le 4 E\Big(\Big|\int_0^1 (M_t - N_{[nt]})\, dM_t\Big|^2\Big) = 4 E\Big(\int_0^1 (M_t - N_{[nt]})^2\, d\langle M\rangle_t\Big) \le 4 H E\Big(\max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |M_t - N_k|^2\Big) \le \frac{4 H \sqrt{C_1}}{\sqrt{n}}$,

where the last inequality follows from (4.1) and Jensen's inequality. From (4.6)-(4.7) and the equality $2\int_0^{k/n} M_t\, dM_t = N_k^2 - \langle M\rangle_{k/n}$ it follows that

$E\Big(\max_{0 \le k \le n} |[N]_k - \langle M\rangle_{k/n}|^2\Big) \le \frac{16 H \sqrt{C_1}}{\sqrt{n}}$.

This together with (4.5) and the inequality $(a + b + c)^2 \le 3(a^2 + b^2 + c^2)$ yields

(4.8) $E\Big(\max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |\langle M\rangle_t - \langle N\rangle_k|^2\Big) \le 3\Big(\frac{H^2}{n^2} + \frac{16 H \sqrt{C_1}}{\sqrt{n}} + \frac{c_2 H^2}{n}\Big)$,

and the proof is completed. $\square$

Next, we apply the above lemma in order to derive some estimates in our setup.
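The summation-by-parts identity in (4.6) is purely algebraic and holds path by path; the following throwaway Python check (our own, with an arbitrary simulated path) illustrates it:

```python
import numpy as np

# Check the identity (4.6): [N]_k = N_k^2 - 2 * sum_{i<k} N_i (N_{i+1} - N_i),
# on a single simulated discrete path with N_0 = 0.
rng = np.random.default_rng(1)
N = np.concatenate([[0.0], np.cumsum(rng.standard_normal(50) / np.sqrt(50))])
k = 50

opt_var = np.sum(np.diff(N[:k + 1]) ** 2)               # optional variation [N]_k
rhs = N[k] ** 2 - 2 * np.sum(N[:k] * np.diff(N[:k + 1]))
print(abs(opt_var - rhs) < 1e-10)  # True: the identity holds path by path
```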
Lemma 4.2.
Let $n \in \mathbb{N}$ and $P \in \mathcal{P}_{\mathbb{D}}$. Consider the $d$-dimensional martingale $N_k = B_{k/n}$, $0 \le k \le n$, together with its predictable variation $\{\langle N\rangle_k\}_{k=0}^n$, under the measure $P$. There exists a constant $C$ (which is independent of $n$ and $P$) such that

(4.9) $E_P\big(\|\mathcal{W}_n(N) - B\|^2\big) \le \frac{C}{\sqrt{n}}$

and

(4.10) $E_P\big(\|\mathcal{W}_n(\langle N\rangle) - \langle B\rangle\|^2\big) \le \frac{C}{\sqrt{n}}$.

In (4.9) and (4.10), $\mathcal{W}_n$ is the linear interpolation operator defined on the spaces $(\mathbb{R}^d)^{n+1}$ and $(\mathbb{S}^d)^{n+1}$, respectively.

Proof. Inequality (4.9) follows immediately from (4.1) and the relation

$\|\mathcal{W}_n(N) - B\| \le \sum_{i=1}^d \max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |N^i_k - B^i_t|$.

Next, we prove (4.10). For any $1 \le i, j \le d$ denote by $\langle N\rangle^{i,j}_k$ and $\langle B\rangle^{i,j}_t$ the entries in the $i$-th row and the $j$-th column of the matrices $\langle N\rangle_k$ and $\langle B\rangle_t$, respectively. Notice that $\langle B\rangle^{i,j}_t = \frac{1}{2}\big(\langle B^i + B^j\rangle_t - \langle B^i\rangle_t - \langle B^j\rangle_t\big)$ and $\langle N\rangle^{i,j}_k = \frac{1}{2}\big(\langle N^i + N^j\rangle_k - \langle N^i\rangle_k - \langle N^j\rangle_k\big)$. Thus (4.10) follows from (4.2) and the inequality

$\|\mathcal{W}_n(\langle N\rangle) - \langle B\rangle\| \le \sum_{i=1}^d \sum_{j=1}^d \max_{0 \le k \le n-1} \max_{k/n \le t \le (k+1)/n} |\langle N\rangle^{i,j}_k - \langle B\rangle^{i,j}_t|$. $\square$

We conclude this section with the following technical lemma.
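As an aside, the polarisation identity used in the proof of Lemma 4.2 can be sanity-checked numerically on arbitrary increment sequences (a standalone snippet of ours, not part of the proof):

```python
import numpy as np

# Polarisation: sum dX*dY = (1/2)(sum (dX+dY)^2 - sum dX^2 - sum dY^2),
# the discrete analogue of <B^i, B^j> = (1/2)(<B^i + B^j> - <B^i> - <B^j>).
rng = np.random.default_rng(2)
dX, dY = rng.standard_normal(30), rng.standard_normal(30)

cross = np.sum(dX * dY)
polar = 0.5 * (np.sum((dX + dY) ** 2) - np.sum(dX ** 2) - np.sum(dY ** 2))
print(abs(cross - polar) < 1e-10)  # True
```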
Lemma 4.3.
Let $A > 0$. Then we have:
i.

(4.11) $\sup_{P \in \mathcal{P}_{\mathbb{D}}} E_P \exp\big(A \sup_{0 \le t \le 1} \|B_t\|\big) < \infty$.

ii. Let $n \in \mathbb{N}$ and let $\nu$ be a distribution which satisfies (2.5)-(2.6). Consider a filtered probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \{\tilde{\mathcal{F}}_k\}_{k=0}^n, \tilde{P})$ together with a sequence of i.i.d. random vectors $Y_1, \ldots, Y_n$ which satisfy $\mathcal{L}(Y_1) = \nu$. Assume that for any $i$, $Y_i$ is $\tilde{\mathcal{F}}_i$ measurable and independent of $\tilde{\mathcal{F}}_{i-1}$. Let $\{M_k\}_{k=0}^n$ be a $d$-dimensional stochastic process of the following form: $M_0 = 0$ and

(4.12) $M_k = \sqrt{\frac{1}{n}} \sum_{i=1}^k \gamma_i Y_i$, $1 \le k \le n$,

where for any $i$, $\gamma_i$ is an $\tilde{\mathcal{F}}_{i-1}$ measurable random matrix which takes values in $\sqrt{\mathbb{D}}$. There exists a constant $C_3$ (which may depend on $A$ and $\nu$) such that

(4.13) $E \exp\big(A \max_{0 \le k \le n} \|M_k\|\big) < C_3$.

Proof. i. Let $P \in \mathcal{P}_{\mathbb{D}}$. From the Novikov condition it follows that for any $1 \le i \le d$ and $a \in \mathbb{R}$, $E_P \exp\big(a B^i_1 - \frac{a^2}{2}\langle B^i\rangle_1\big) = 1$. Thus

$E_P\big(\exp(a|B^i_1|)\big) \le E_P(\exp(a B^i_1)) + E_P(\exp(-a B^i_1)) \le 2\exp\Big(\frac{a^2 \|\mathbb{D}\|}{2}\Big)$,

where $\|\mathbb{D}\| = \sup_{D \in \mathbb{D}} \|D\|$. This, together with Doob's inequality applied to the submartingale $\exp\big(\frac{a}{2}|B^i_t|\big)$ and the Cauchy-Schwarz inequality, completes the proof of (4.11).

ii. Consider the compact set $K := \{x \in \mathbb{R}^d : \|x\| \le \|\sqrt{\mathbb{D}}\|\}$. Clearly, the rows of the matrices $\gamma_j$, $1 \le j \le n$, are in $K$. Fix $1 \le i \le d$ and consider the $i$-th component of the process $M$, namely the process $(M^i_0, \ldots, M^i_n)$. From (4.12) we get that for any $a \in \mathbb{R}$,

$E\big(\exp(a(M^i_k - M^i_{k-1})) \mid \tilde{\mathcal{F}}_{k-1}\big) \le \sup_{y \in K} \psi_\nu\Big(\frac{ay}{\sqrt{n}}\Big)$,

where $\psi_\nu$ is the function defined below (2.5). This together with (2.6) gives

(4.14) $E\big(\exp(a M^i_n)\big) \le \sup_{n \in \mathbb{N}} \sup_{y \in K} \psi^n_\nu\Big(\frac{ay}{\sqrt{n}}\Big) < \infty$.

From the inequality $E \exp(|a M^i_n|) \le E(\exp(a M^i_n)) + E(\exp(-a M^i_n))$ and the Cauchy-Schwarz inequality it follows that there exists a constant $c_1$ (which may depend on $A$ and $\nu$) such that

(4.15) $E(\exp(A\|M_n\|)) < c_1$.

Finally, since for any $i$ the process $M^i_k$, $0 \le k \le n$, is a martingale with respect to the filtration $\{\tilde{\mathcal{F}}_k\}_{k=0}^n$, we conclude that the stochastic process $\{\exp(A\|M_k\|/2)\}_{k=0}^n$ is a submartingale, and so, from (4.15) and the Doob-Kolmogorov inequality,

$E \exp\big(A \max_{0 \le k \le n} \|M_k\|\big) \le 4 c_1$,

and the proof is completed. $\square$

5. Proof of the main results
In this section we complete the proof of Theorems 2.2-2.3. Let $\nu$ be a distribution which satisfies (2.5)-(2.6). Fix $\epsilon > 0$. We start with proving the following statements:

(5.1) $V^\nu_n > V - C_\epsilon n^{\epsilon - 1/8}$, $\forall n \in \mathbb{N}$,

and, for a bounded $F$,

(5.2) $V^\nu_n > V - C n^{-1/8}$, $\forall n \in \mathbb{N}$.

Choose $n \in \mathbb{N}$ and $\delta > 0$. There exists a measure $Q \in \mathcal{P}_{\mathbb{D}}$ for which

(5.3) $V < \delta + E_Q F(B, \langle B\rangle)$.

Consider the stochastic process $N_k = B_{k/n}$, $0 \le k \le n$, together with its predictable variation $\{\langle N\rangle_k\}_{k=0}^n$. From the fact that $\mathbb{D}$ is a convex compact set we obtain that there exists a sequence of functions $\psi_j : (\mathbb{R}^d)^j \to \sqrt{\mathbb{D}}$, $1 \le j \le n$, such that

(5.4) $\sqrt{\Delta\langle N\rangle_k} = \sqrt{E\big(\Delta N_k \Delta N_k' \mid \sigma\{N_0, N_1, \ldots, N_{k-1}\}\big)} = \frac{1}{\sqrt{n}}\psi_k(N_0, \ldots, N_{k-1})$, $\forall k$ a.s.

From the Burkholder-Davis-Gundy inequality it follows that there exists a constant $c_1$ for which

(5.5) $E_Q\big(\|\Delta N_k\|^3 \mid \sigma\{N_0, \ldots, N_{k-1}\}\big) \le c_1 n^{-3/2}$, $\forall k$ a.s.

By applying (2.1), Lemmas 4.2-4.3 and the Cauchy-Schwarz inequality we get

(5.6) $E_Q|F(B, \langle B\rangle) - F(\mathcal{W}_n(N), \mathcal{W}_n(\langle N\rangle))| \le c_2 n^{-1/4}$

for some constant $c_2$ (which depends only on the distribution $\nu$). From (5.5) and Lemma 3.2 (applied with $\Theta = n^{-1/8}$) we obtain that there exists a probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ which contains the martingale $N$ and a sequence of i.i.d. random vectors $Y_1, \ldots, Y_n$ such that $\mathcal{L}(Y_1) = \nu$, for any $k$ the random vectors $N_0, \ldots, N_{k-1}$ are independent of $Y_k$, and

(5.7) $\tilde{P}\Big(\max_{0 \le k \le n} \Big\|N_k - \sum_{j=1}^k \sqrt{\Delta\langle N\rangle_j}\, Y_j\Big\| > n^{-1/8}\Big) \le c_3 \frac{n^{-3/2}\, n}{n^{-3/8}} = c_3 n^{-1/8}$

for some constant $c_3$ which depends only on the distribution $\nu$. Denote $M_k = \sum_{j=1}^k \sqrt{\Delta\langle N\rangle_j}\, Y_j$, $1 \le k \le n$, and $A = \{\max_{0 \le k \le n} \|N_k - M_k\| > n^{-1/8}\}$. From (2.5) and the fact that $N_0, \ldots, N_{k-1}$ are independent of $Y_k$ we obtain that $M$ is a martingale and $\langle M\rangle = \langle N\rangle$. Thus from (2.1), (5.7), Lemma 4.3, the Markov inequality and the Hölder inequality (for $p = \frac{1}{1-\epsilon}$ and $q = \frac{1}{\epsilon}$) we get that there exist constants $c_4, c_5$ which depend on $\epsilon$ and $\nu$ such that

(5.8) $\tilde{E}|F(\mathcal{W}_n(N), \mathcal{W}_n(\langle N\rangle)) - F(\mathcal{W}_n(M), \mathcal{W}_n(\langle M\rangle))| \le H_1 \tilde{E}\Big(\exp\big(H_2(\max_{0 \le k \le n}\|M_k\| + \max_{0 \le k \le n}\|N_k\| + 2\|\mathbb{D}\|)\big) \times \big(n^{-1/8} + \mathbb{I}_A(\|\mathcal{W}_n(N)\| + \|\mathcal{W}_n(M)\|)\big)\Big) \le c_4\big(n^{-1/8} + \tilde{P}(A)^{1-\epsilon}\big) \le c_5 n^{\epsilon - 1/8}$,

where we set $\mathbb{I}_A = 1$ if the event $A$ occurs and $\mathbb{I}_A = 0$ if not, and $\tilde{E}$ denotes the expectation with respect to $\tilde{P}$. If the function $F$ is bounded, say $F \le R$, then we have

(5.9) $\tilde{E}|F(\mathcal{W}_n(N), \mathcal{W}_n(\langle N\rangle)) - F(\mathcal{W}_n(M), \mathcal{W}_n(\langle M\rangle))| \le 2R\tilde{P}(A) + H_1 n^{-1/8}\, \tilde{E}\big(\exp(H_2(\max_{0 \le k \le n}\|M_k\| + \max_{0 \le k \le n}\|N_k\| + 2\|\mathbb{D}\|))\big) \le c_6 n^{-1/8}$

for some constant $c_6$ which depends only on $\nu$. Since $\delta > 0$ was arbitrary, in view of (5.3) and (5.6) it remains to show that

(5.10) $V^\nu_n \ge \tilde{E} F(\mathcal{W}_n(M), \mathcal{W}_n(\langle M\rangle))$.

Define a sequence of functions $L_k : (\mathbb{R}^d)^{k+1} \times (\mathbb{S}^d)^{k+1} \to \mathbb{R}$, $k = 0, 1, \ldots, n$, by the backward recursion $L_n(u_0, \ldots, u_n, v_0, \ldots, v_n) = F(\mathcal{W}_n(u, v))$ and

(5.11) $L_k(u_0, \ldots, u_k, v_0, \ldots, v_k) = \tilde{E} L_{k+1}\Big(u_0, \ldots, u_k, u_k + \tfrac{1}{\sqrt{n}}\psi_{k+1}(u_0, \ldots, u_k) Y_{k+1}, v_0, \ldots, v_k, v_k + \tfrac{1}{n}\psi^2_{k+1}(u_0, \ldots, u_k)\Big)$

for $k = 0, 1, \ldots, n-1$. From the fact that $Y_{k+1}$ is independent of $Y_1, \ldots, Y_k, N_0, \ldots, N_{k-1}$ it follows (by backward induction) that for any $k$,

(5.12) $\tilde{E}\big(F(\mathcal{W}_n(M), \mathcal{W}_n(\langle M\rangle)) \mid \sigma\{N_0, \ldots, N_{k-1}, Y_1, \ldots, Y_k\}\big) = L_k(M_0, \ldots, M_k, \langle N\rangle_0, \ldots, \langle N\rangle_k)$.

Finally, from (2.12), (5.11)-(5.12) and the fact that $\psi_k$ takes values in $\sqrt{\mathbb{D}}$ for any $k$, we obtain (by backward induction) that $L_k \le J^{\nu,n}_k$, $0 \le k \le n$, and in particular

(5.13) $V^\nu_n = J^{\nu,n}_0(0, 0) \ge L_0(0, 0) = \tilde{E} F(\mathcal{W}_n(M), \mathcal{W}_n(\langle M\rangle))$.

This completes the proof of (5.1)-(5.2). Next, fix $n \in \mathbb{N}$ and a distribution $\nu$ which satisfies (2.5)-(2.6), and consider the optimal control $M^{\nu,n}$ which is given by (2.12)-(2.14). By applying Lemma 3.2 for the standard normal distribution $\nu_g$ it follows that there exists a probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ which contains the martingale $M^{\nu,n}$ and a sequence of i.i.d. standard Gaussian ($d$-dimensional) random vectors $Y^g_1, \ldots, Y^g_n$ such that for any $k$ the random vectors $M^{\nu,n}_0, \ldots, M^{\nu,n}_{k-1}$ are independent of $Y^g_k$, and

(5.14) $\tilde{P}\Big(\max_{0 \le k \le n} \Big\|M^{\nu,n}_k - \sum_{j=1}^k \sqrt{\Delta\langle M^{\nu,n}\rangle_j}\, Y^g_j\Big\| > n^{-1/8}\Big) \le c_7 n^{-1/8}$

for some constant $c_7$. Denote $\hat{M}_k = \sum_{j=1}^k \sqrt{\Delta\langle M^{\nu,n}\rangle_j}\, Y^g_j$, $1 \le k \le n$. Observe that $\langle \hat{M}\rangle = \langle M^{\nu,n}\rangle$. Thus by using similar arguments to those in (5.8)-(5.9) we obtain that there exist constants $c_8, c_9$ such that

(5.15) $|\tilde{E}F(\mathcal{W}_n(\hat{M}), \mathcal{W}_n(\langle\hat{M}\rangle)) - V^\nu_n| \le \tilde{E}|F(\mathcal{W}_n(\hat{M}), \mathcal{W}_n(\langle\hat{M}\rangle)) - F(\mathcal{W}_n(M^{\nu,n}), \mathcal{W}_n(\langle M^{\nu,n}\rangle))| \le c_8 n^{\epsilon - 1/8}$,

and if the function $F$ is bounded,

(5.16) $|\tilde{E}F(\mathcal{W}_n(\hat{M}), \mathcal{W}_n(\langle\hat{M}\rangle)) - V^\nu_n| \le \tilde{E}|F(\mathcal{W}_n(\hat{M}), \mathcal{W}_n(\langle\hat{M}\rangle)) - F(\mathcal{W}_n(M^{\nu,n}), \mathcal{W}_n(\langle M^{\nu,n}\rangle))| \le c_9 n^{-1/8}$.

By applying similar arguments to those in (5.11)-(5.13) we conclude that

(5.17) $V^{\nu_g}_n = J^{\nu_g,n}_0(0, 0) \ge \tilde{E}F(\mathcal{W}_n(\hat{M}), \mathcal{W}_n(\langle\hat{M}\rangle))$.

Next, let $z_k : (\mathbb{R}^d)^k \to \sqrt{\mathbb{D}}$, $0 \le k \le n-1$, be such that

$z_k(Y^g_1, \ldots, Y^g_k) = h^{\nu_g,n}_k(M^{\nu_g,n}_0, \ldots, M^{\nu_g,n}_k, N^{\nu_g,n}_0, \ldots, N^{\nu_g,n}_k)$,

where the terms $M^{\nu_g,n}, N^{\nu_g,n}$ are given by (2.12)-(2.14). From the martingale representation theorem it follows that the martingale $M^n$ which is defined by (2.16) equals

$M^n_t = h^{\nu_g,n}_0(0, 0) W_{t \wedge 1/n} + \mathbb{I}_{t > 1/n} \int_{1/n}^t z_{[nu]}\big(\sqrt{n} W_{1/n}, \sqrt{n}(W_{2/n} - W_{1/n}), \ldots, \sqrt{n}(W_{[nu]/n} - W_{([nu]-1)/n})\big)\, dW_u$, $t \in [0,1]$,

and so $P^n \in \mathcal{P}_{\mathbb{D}}$. As in (5.6) we have

(5.18) $E_n|F(B, \langle B\rangle) - F(\mathcal{W}_n(N), \mathcal{W}_n(\langle N\rangle))| \le c_{10} n^{-1/4}$,

where $N_k = B_{k/n}$, $0 \le k \le n$. Finally, observe that the distribution of $N$ under $P^n$ equals the distribution of the martingale $M^{\nu_g,n}$. Thus from (2.15) and (5.18) we conclude that

$V \ge E_{P^n} F(B, \langle B\rangle) \ge V^{\nu_g}_n - c_{11} n^{-1/4}$.

This together with (5.1)-(5.2) and (5.15)-(5.17) completes the proof of Theorems 2.2-2.3. $\square$
References

[1] L. Denis, M. Hu and S. Peng, Function spaces and capacity related to a sublinear expectation: applications to G-Brownian motion paths, Potential Anal. (2011), 139–161.
[2] L. Denis and C. Martini, A theoretical framework for the pricing of contingent claims in the presence of model uncertainty, Ann. Appl. Probab. (2006), 827–852.
[3] S. Deparis and C. Martini, Superhedging strategies and balayage in discrete time, Proceedings of the 4th Ascona Conference on Stochastic Analysis, Random Fields and Applications, Birkhäuser (2004), 205–219.
[4] Y. Dolinsky, M. Nutz and H. M. Soner, Weak approximations of G-expectations, submitted.
[5] Y. Hu and S. Peng, Some estimates for martingale representation under G-expectation, preprint.
[6] M. Nutz and H. M. Soner, Superhedging and dynamic risk measures under volatility uncertainty, submitted.
[7] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, Stochastic Analysis and Applications, Abel Symp. 2, Springer, Berlin (2007), 541–567.
[8] S. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stochastic Process. Appl. (2008), 2223–2253.
[9] S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, preprint.
[10] S. Peng, Tightness, weak compactness of nonlinear expectations and application to CLT, preprint.
[11] A. I. Sakhanenko, A new way to obtain estimates in the invariance principle, High Dimensional Probability II (2000), 221–243.
[12] H. M. Soner, N. Touzi and J. Zhang, Martingale representation theorem for the G-expectation, Stochastic Process. Appl. (2011), 265–287.