Doubly Reflected BSDEs with Integrable Parameters and Related Dynkin Games
Erhan Bayraktar†‡, Song Yao§

Abstract
We study a doubly reflected backward stochastic differential equation (BSDE) with integrable parameters and the related Dynkin game. When the lower obstacle $L$ and the upper obstacle $U$ of the equation are completely separated, we construct a unique solution of the doubly reflected BSDE by pasting local solutions, and show that the $Y$-component of the unique solution represents the value process of the corresponding Dynkin game under the $g$-evaluation, a nonlinear expectation induced by BSDEs with the same generator $g$ as the doubly reflected BSDE concerned. In particular, the first time $\tau_*$ when the process $Y$ meets $L$ and the first time $\gamma_*$ when the process $Y$ meets $U$ form a saddle point of the Dynkin game.

Keywords:
BSDEs, reflected BSDEs, doubly reflected BSDEs, $g$-evaluation/expectation, penalization, optimal stopping problems, pasting local solutions, Dynkin games, saddle points.

1 Introduction

In this paper, we study a doubly reflected backward stochastic differential equation with generator $g$, integrable terminal datum $\xi$ and two integrable obstacles $L$, $U$:

$$\begin{cases} Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + K_T - K_t - J_T + J_t - \int_t^T Z_s\,dB_s, & t \in [0,T], \\ L_t \le Y_t \le U_t, & t \in [0,T], \\ \int_0^T (Y_t - L_t)\,dK_t = \int_0^T (U_t - Y_t)\,dJ_t = 0 & \text{(flat-off conditions).} \end{cases} \tag{1.1}$$

A solution of such an equation consists of four adapted processes: a continuous process $Y$, a locally square-integrable process $Z$ and two continuous increasing processes $K$ and $J$. Klimsiak [38] studied the same problem but assumed an extended Mokobodzki condition: there exists a semimartingale between $L$ and $U$, which is practically difficult to verify. Instead, we only require the two obstacles $L$, $U$ to be completely separated, i.e. $L_t < U_t$, $\forall t \in [0,T]$.

Backward stochastic differential equations (BSDEs) were introduced in the linear case by Bismut [9] as the adjoint equations for the stochastic Pontryagin maximum principle in control theory. Later, Pardoux and Peng [42] extended them to a fully nonlinear version

$$Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \quad t \in [0,T], \tag{1.2}$$

and showed that the BSDE admits a unique solution $(Y, Z)$ when the generator $g$ is Lipschitz continuous in $(y,z)$ and the terminal datum $\xi$ is square-integrable. Since then, the theory of BSDEs has rapidly grown and been applied in many

∗ We would like to thank the referees for their careful reading and helpful comments which helped us improve our paper.
† Department of Mathematics, University of Michigan, Ann Arbor, MI 48109; email: [email protected].
‡ E. Bayraktar is supported in part by the National Science Foundation under Career grant DMS-0955463 and Applied Mathematics Research grant DMS-1118673, and in part by the Susan M. Smith Professorship.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
§ Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260; email: [email protected].

areas such as mathematical finance, theoretical economics, stochastic control, stochastic differential games, and partial differential equations (see e.g. the references in [21] or in [15]).

As a variation of BSDEs, a BSDE with one reflecting obstacle (say, a lower obstacle $L$),

$$\begin{cases} L_t \le Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s, & t \in [0,T], \\ \int_0^T (Y_t - L_t)\,dK_t = 0 & \text{(flat-off condition),} \end{cases} \tag{1.3}$$

was first studied by El Karoui et al. [20]. If $g$ is Lipschitz continuous in $(y,z)$ and if both the terminal datum $\xi$ and the lower obstacle $L$ are square-integrable, these authors showed that the reflected BSDE has a unique solution $(Y, Z, K)$ and that the $Y$-component of the unique solution is the Snell envelope of the reward process $L$ in the related optimal stopping problem under $g$-evaluation (for a more general statement, see e.g. Appendix A of [13] and Section 7 of [7]). As a nonlinear expectation induced by BSDEs with the same generator $g$ as the reflected BSDE, the $g$-evaluation possesses many (martingale) properties of the classic linear expectation and has thus become a very useful tool in nonlinear analysis.
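To fix ideas, when $g \equiv 0$ the Snell-envelope representation just mentioned takes its classical linear form; the display below is our own illustrative specialization (with reward $L$ before time $T$ and $\xi$ at time $T$), not a formula quoted from [20]:

```latex
% Classical Snell envelope: the g \equiv 0 case of the representation
% of the Y-component of the reflected BSDE (1.3).
Y_t \;=\; \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{t,T}}
  E\Big[\, \mathbf{1}_{\{\tau < T\}}\, L_\tau
        + \mathbf{1}_{\{\tau = T\}}\, \xi \;\Big|\; \mathcal{F}_t \,\Big],
\qquad t \in [0,T].
```

For a general generator $g$, the conditional expectation above is replaced by the nonlinear $g$-evaluation discussed below.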
In particular, the $g$-evaluation is closely related to risk measures in mathematical finance.

Based on [20], Cvitanić and Karatzas [14] extended the study of reflected BSDEs to those with two reflecting obstacles. They showed that a doubly reflected BSDE with a Lipschitz generator, a square-integrable terminal datum and square-integrable obstacles admits a unique solution under Mokobodzki's condition (there exists a quasimartingale between the two obstacles) or a certain regularity condition on one of the obstacles (see assumption (H) of [28] for a simplified form). Cvitanić and Karatzas also found that the $Y$-component of the unique solution is exactly the value process of the related Dynkin game, a zero-sum stochastic differential game of optimal stopping, under $g$-evaluation (for a more general statement, see e.g. [17]). From the perspective of mathematical finance, this discovery is significant for the evaluation of American game options or Israeli options; see e.g. Hamadène [24]. Later, Hamadène et al. [29, 27, 24] added controls to a doubly reflected BSDE and to the drift coefficient of the associated state process in order to analyze a mixed zero-sum controller-and-stopper game as well as the corresponding saddle point problem. For the literature and recent advances on Dynkin games, see e.g. [36, 48, 32, 4]. As to the history and latest developments of controller-and-stopper games, see e.g. [35, 37, 5, 6, 3, 2, 18, 41, 8].

Among other developments in doubly reflected BSDEs, Lepeltier and San Martín [39] obtained the existence result when $g$ is only continuous and has linear growth in the variables $(y,z)$; Xu [49] obtained the wellposedness result when the Lipschitz continuity of $g$ in the $y$-variable is relaxed to a monotonicity condition; and Bahlali et al.
[1] and Essaky et al. [23, 22] analyzed the existence of a maximal solution when $g$ has quadratic growth in the $z$-variable.

All the above articles on doubly reflected BSDEs, except [24], assumed either the (extended) Mokobodzki condition or the aforementioned regularity condition. Following [24]'s observation that the existence of local solutions of a doubly reflected BSDE relies on neither of these two conditions, Hamadène and Hassani [25] pasted local solutions to form a unique solution of a doubly reflected BSDE with two completely separated obstacles. Since then, the complete separation of obstacles has been postulated by most of the subsequent papers, including [12, 19, 26, 31] as well as the present one.

During the evolution of BSDE theory, efforts were made to weaken the square integrability of the terminal data so as to match the fact that linear BSDEs are well-posed for integrable terminal data: El Karoui et al. [21] demonstrated that for any $p$-integrable terminal datum with $p \in (1, \infty)$, a BSDE with a Lipschitz generator admits a unique $p$-integrable solution. This wellposedness result was later upgraded by Briand et al. [10, 11], who relaxed the Lipschitz condition of the generator $g$ in the $y$-variable to a monotonicity condition in $y$. After Hamadène and Popier [30] extended [11]'s results to reflected BSDEs, Hamadène et al. [19] made a further generalization to doubly reflected BSDEs with two completely separated obstacles.

We dedicate this paper to the solvability of the doubly reflected BSDE (1.1) with integrable parameters and will discuss the related Dynkin game. Besides the monotonicity condition in the $y$-variable and the Lipschitz condition in the $z$-variable, if the generator $g$ additionally satisfies a growth condition in the $z$-variable of order $\alpha \in (0,1)$ (see (H7) of [11] or (H5) in the current paper), then the BSDE with integrable terminal datum admits a unique solution $(Y, Z)$ such that both $Y$ and $Z$ are $p$-integrable processes for any $p \in (0,1)$ and that $Y$ is of class (D). So the corresponding $g$-evaluation is well-defined for each integrable random variable. Under the same hypotheses on the generator $g$ as in Section 6 of [11], we will demonstrate a similar wellposedness result for doubly reflected BSDEs with integrable parameters. Compared with the case of $L^p$-solutions, $p > 1$, new difficulties arise in the $p = 1$ or class (D) case; we managed to derive some novel estimates and an approximation scheme.

To construct a unique solution of a reflected BSDE with integrable terminal datum $\xi$ and integrable lower obstacle $L$, we use the penalization method introduced in [20] together with a localization technique. This is because the approximating solutions are only $p$-integrable ($\forall p \in (0,1)$): for each $n \in \mathbb{N}$, we compensate the generator $g$ by $n$ times the distance by which the $y$-variable falls below $L_t$, i.e. $g_n(t,y,z) := g(t,y,z) + n(y - L_t)^-$. The BSDE with generator $g_n$ and terminal datum $\xi$,

$$Y^n_t = \xi + \int_t^T g(s, Y^n_s, Z^n_s)\,ds + n \int_t^T \big( Y^n_s - L_s \big)^-\,ds - \int_t^T Z^n_s\,dB_s, \quad t \in [0,T], \tag{1.4}$$

has a unique $p$-integrable ($\forall p \in (0,1)$) solution $(Y^n, Z^n)$ such that $Y^n$ is of class (D). The monotonicity of $\{g_n\}_{n \in \mathbb{N}}$ implies that of $\{Y^n\}_{n \in \mathbb{N}}$, thanks to a general comparison result (Proposition 3.2). Then we can find stopping times $\tau_\ell$ such that $|Y^n|$ is uniformly bounded by $\ell$ over the stochastic interval $[[0, \tau_\ell]]$. By a local estimate (Lemma A.2), the local $L^2$-norms of the $Z^n$'s are uniformly bounded by a multiple of $\ell$. So, up to a subsequence, $Z^n$ converges weakly to some $Z^\ell$. Consequently, we can deduce that $K^n_t := n \int_0^t \big( Y^n_s - L_s \big)^-\,ds$ converges to $K^\ell_t := Y_0 - Y_t - \int_0^t g(s, Y_s, Z^\ell_s)\,ds + \int_0^t Z^\ell_s\,dB_s$ uniformly over $[[0, \tau_\ell]]$. Letting $n \to \infty$ in (1.4) shows that $(Y, Z^\ell, K^\ell)$ is a local solution of (1.3) over $[[0, \tau_\ell]]$. Pasting the $(Y, Z^\ell, K^\ell)$'s over the stochastic intervals $]]\tau_{\ell-1}, \tau_\ell]]$, we obtain a global $p$-integrable ($\forall p \in (0,1)$) solution $(Y, Z, K)$ of (1.3).
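As a quick sanity check on the penalization scheme, here is our own heuristic for why the limit stays above the obstacle, stated under the additional assumption $\sup_n E[K^n_T] < \infty$ (which holds in the classical square-integrable setting but is not claimed here for merely integrable data):

```latex
% Since Y^n \uparrow Y and x \mapsto (x - L_s)^- is nonincreasing, with
% C := \sup_n E[K^n_T] = \sup_n \, n \, E\!\int_0^T (Y^n_s - L_s)^-\, ds < \infty,
E\!\int_0^T (Y_s - L_s)^-\, ds
  \;\le\; \inf_{n} E\!\int_0^T (Y^n_s - L_s)^-\, ds
  \;\le\; \inf_{n} \frac{C}{n} \;=\; 0,
% hence (Y - L)^- = 0, \; dt \otimes dP\text{-a.s., i.e. } Y \ge L.
```

The flat-off condition is then recovered in the limit from the convergence of the $K^n$'s; see Section 6 for the actual argument under integrable data.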
The uniqueness of such a solution follows from a comparison result (Proposition 5.3) for reflected BSDEs, which is a corollary of Proposition 3.2.

Applying Proposition 3.2 again shows that, with respect to the corresponding $g$-evaluation, the $Y$-component of the unique solution of (1.3) is a supermartingale, and even a martingale up to the first time when the process $Y$ meets the lower obstacle $L$. Consequently, $Y$ is the Snell envelope of the reward process $L$ in the related optimal stopping problem, in which the player tries to select a best exit time from the game so as to maximize her expected reward under $g$-expectation.

Based on the wellposedness result for reflected BSDEs with integrable parameters, we next take [25]'s approach of pasting local solutions to construct a global solution of (1.1): Let $(Y^n, Z^n, K^n)$ be the unique $p$-integrable ($\forall p \in (0,1)$) solution of the reflected BSDE with generator $g_n$ and upper obstacle $U$. We first show that the increasing limit $Y$ of the $Y^n$'s, together with some processes $(Z^\ell, K^\ell)$, solves (1.3) over stochastic intervals $[[\nu_\ell, \nu'_\ell]]$ for any $\ell \in \mathbb{N}$. A reverse conclusion can be obtained for the limit $\widetilde{Y}$ of a decreasing scheme that involves reflected BSDEs with generator $\widetilde{g}_n(t,y,z) := g(t,y,z) - n(y - U_t)^+$ and lower obstacle $L$: for some processes $(\widetilde{Z}^\ell, \widetilde{J}^\ell)$, the triplet $(\widetilde{Y}, \widetilde{Z}^\ell, \widetilde{J}^\ell)$ solves a reflected BSDE with upper obstacle $U$ over the stochastic interval $[[\nu'_\ell, \nu_{\ell+1}]]$ for any $\ell \in \mathbb{N}$. Then pasting $(Y, Z^\ell, K^\ell, 0)$ and $(\widetilde{Y}, \widetilde{Z}^\ell, 0, \widetilde{J}^\ell)$ alternately over $[[\nu_\ell, \nu'_\ell]]$ and $[[\nu'_\ell, \nu_{\ell+1}]]$ yields a global $p$-integrable ($\forall p \in (0,1)$) solution of (1.1). With respect to the corresponding $g$-evaluation, the $Y$-component of the solution of (1.1) just constructed is a submartingale up to the first time $\tau_*$ when $Y$ meets the lower obstacle $L$, and a supermartingale up to the first time $\gamma_*$ when $Y$ meets the upper obstacle $U$. Consequently, $Y$ is the value process of the related Dynkin game under $g$-evaluation, in which $L$ (resp. $U$) is the amount a player will receive from her opponent when she stops the game earlier (resp. not earlier) than her opponent. The uniqueness result for (1.1) then easily follows. Moreover, the pair $(\tau_*, \gamma_*)$ forms a saddle point of such a Dynkin game.

Since we deal mostly with $p$-integrable ($\forall p \in (0,1)$) processes, several classical arguments are no longer available. For instance, to obtain the $p$-integrability ($\forall p \in (0,1)$) of the limit process $Y$ in the penalization scheme, we appropriately exploit the Tanaka–Itô formula, Hypothesis (H5) and other tricks; see in particular the proof of (6.14).

The rest of the paper is organized as follows: After listing the necessary notation, we give the definition of doubly reflected BSDEs and make some assumptions on their generators $g$ in Section 1. We present in Section 2 the main result of our paper, a wellposedness result for doubly reflected BSDEs with integrable parameters, as well as the $g$-martingale characterization of the $Y$-component of the unique solution; the latter implies that $Y$ is the value process of the related Dynkin game under $g$-evaluation. Section 3 recalls a wellposedness result for BSDEs with integrable terminal data and gives a general comparison result for BSDEs over stochastic intervals, which plays an important role in our analysis. The unique solutions of BSDEs with generator $g$ and integrable terminal data induce a widely-defined nonlinear expectation, called the "$g$-evaluation/expectation", whose properties are discussed in Section 4.
In Section 5, as a preparation for our main result, we construct a unique solution of a reflected BSDE with integrable parameters via the penalization method, which involves two auxiliary monotonicity results, and we show that the $Y$-component of the unique solution of the reflected BSDE is exactly the Snell envelope in the related optimal stopping problem under $g$-evaluation. Section 6 contains the proofs of our results, while the demonstrations of some technical claims are deferred to the Appendix.

Throughout this paper, we fix a time horizon $T \in (0, \infty)$ and let $B$ be a $d$-dimensional standard Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$. The augmented filtration generated by $B$,

$$\mathbf{F} = \big\{ \mathcal{F}_t := \sigma\big( \sigma(B_s;\, s \in [0,t]) \cup \mathcal{N} \big) \big\}_{t \in [0,T]},$$

satisfies the usual hypotheses, where $\mathcal{N}$ collects all $P$-null sets in $\mathcal{F}$.

Let $\mathcal{T}$ be the set of all $\mathbf{F}$-stopping times $\tau$ taking values in $[0,T]$. For any $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$, we set $\mathcal{T}_{\nu,\tau} := \{ \gamma \in \mathcal{T} : \nu \le \gamma \le \tau \}$. An increasing sequence $\{\tau_n\}_{n \in \mathbb{N}}$ in $\mathcal{T}$ is called "stationary" if for $P$-a.s. $\omega \in \Omega$, $T = \tau_n(\omega)$ for some $n = n(\omega) \in \mathbb{N}$. As usual, we say that a $\mathcal{B}([0,T]) \otimes \mathcal{F}$-measurable process $X$ is of class (D), with respect to $(\mathcal{T}, P)$, if $\{X_\tau\}_{\tau \in \mathcal{T}}$ is $P$-uniformly integrable. Moreover, we let $\mathscr{P}$ denote the $\mathbf{F}$-progressively measurable $\sigma$-field on $[0,T] \times \Omega$ and will use the convention $\inf \emptyset := \infty$.

Let $p \in (0, \infty)$. It holds for any finite subset $\{a_1, \cdots, a_n\}$ of $(0, \infty)$ that

$$\big( 1 \wedge n^{p-1} \big) \sum_{i=1}^n a_i^p \;\le\; \Big( \sum_{i=1}^n a_i \Big)^p \;\le\; \big( 1 \vee n^{p-1} \big) \sum_{i=1}^n a_i^p. \tag{1.5}$$

And for any $p' \in (p, \infty)$, one has $x^p \le 1 + x^{p'}$, $\forall x \in (0, \infty)$.
(1.6)

The following spaces will be frequently used in the sequel.

1) For any sub-$\sigma$-field $\mathcal{G}$ of $\mathcal{F}$, let $L^0(\mathcal{G})$ be the space of all real-valued, $\mathcal{G}$-measurable random variables $\xi$ and set $L^p(\mathcal{G}) := \big\{ \xi \in L^0(\mathcal{G}) : \|\xi\|_{L^p(\mathcal{G})} := \{ E[|\xi|^p] \}^{1 \wedge \frac{1}{p}} < \infty \big\}$.

2) We need the following subspaces of $\mathbb{S}^0$, which denotes the space of all real-valued, $\mathbf{F}$-adapted continuous processes:
• $\mathbb{S}^p := \big\{ X \in \mathbb{S}^0 : \|X\|_{\mathbb{S}^p} := \{ E[(X_*)^p] \}^{1 \wedge \frac{1}{p}} < \infty \big\}$, where $X_* := \sup_{t \in [0,T]} |X_t|$;
• $\mathbb{S}^p_+ := \big\{ X \in \mathbb{S}^0 : X^+ = X \vee 0 \in \mathbb{S}^p \big\}$ and $\mathbb{S}^p_- := \big\{ X \in \mathbb{S}^0 : X^- = (-X) \vee 0 \in \mathbb{S}^p \big\}$;
• $\mathbb{V}^0 := \big\{ X \in \mathbb{S}^0 : X \text{ is of finite variation} \big\}$;
• $\mathbb{K}^0 := \big\{ X \in \mathbb{S}^0 : X \text{ is an increasing process with } X_0 = 0 \big\} \subset \mathbb{V}^0$;
• $\mathbb{K}^p := \big\{ X \in \mathbb{K}^0 : X_T \in L^p(\mathcal{F}_T) \big\}$.

3) Let $\widetilde{\mathbb{H}}^{2,0}$ (resp. $\mathbb{H}^{2,0}$) denote the space of all $\mathbb{R}^d$-valued, $\mathbf{F}$-progressively measurable (resp. $\mathbf{F}$-predictable) processes $X$ with $\int_0^T |X_t|^2\,dt < \infty$, $P$-a.s., and set $\mathbb{H}^{2,p} := \Big\{ X \in \mathbb{H}^{2,0} : \|X\|_{\mathbb{H}^{2,p}} := \big\{ E\big[ \big( \int_0^T |X_t|^2\,dt \big)^{p/2} \big] \big\}^{1 \wedge (1/p)} < \infty \Big\}$.

In the above notation, if $p \ge 1$, $\|\cdot\|_{\Xi^p}$ is a norm on $\Xi^p = L^p(\mathcal{G}), \mathbb{S}^p, \mathbb{H}^{2,p}$. And if $p \in (0,1)$, $(X, X') \to \|X - X'\|_{\Xi^p}^p$ defines a distance on $\Xi^p$, under which $\Xi^p$ is a complete metric space.

1.1 Notation and Definitions

Let us recall the notions of backward stochastic differential equations (BSDEs), reflected BSDEs and doubly reflected BSDEs: A (basic) parameter pair $(\xi, g)$ consists of a real-valued, $\mathcal{F}_T$-measurable random variable $\xi$ and a function $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ that is $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable.

Definition 1.1.
Given a parameter pair $(\xi, g)$, let $L, U \in \mathbb{S}^0$ be such that $P\{L_t \le U_t, \forall t \in [0,T]\} = 1$ and $L_T \le \xi \le U_T$, $P$-a.s. We say that

1) $(Y, Z) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0}$ is a solution of the BSDE with terminal datum $\xi$ and generator $g$ (BSDE$(\xi, g)$ for short) if (1.2) holds $P$-a.s.;

2) a triplet $(Y, Z, K) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0} \times \mathbb{K}^0$ is a solution of the reflected BSDE with terminal datum $\xi$, generator $g$ and (lower) obstacle $L$ (RBSDE$(\xi, g, L)$ for short) if (1.3) holds $P$-a.s.;

3) a quadruplet $(Y, Z, K, J) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0} \times \mathbb{K}^0 \times \mathbb{K}^0$ is a solution of the doubly reflected BSDE with terminal datum $\xi$, generator $g$, lower obstacle $L$ and upper obstacle $U$ (DRBSDE$(\xi, g, L, U)$ for short) if (1.1) holds $P$-a.s.

Remark 1.1.
Given a parameter pair $(\xi, g)$,

$$g^-(t, \omega, y, z) := -g(t, \omega, -y, -z), \quad \forall (t, \omega, y, z) \in [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \tag{1.7}$$

clearly defines a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function. For any $L \in \mathbb{S}^0$ with $L_T \le \xi$, $P$-a.s., $(Y, Z, K) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0} \times \mathbb{K}^0$ solves RBSDE$(\xi, g, L)$ if and only if $(\widetilde{Y}, \widetilde{Z}, \widetilde{J}) = (-Y, -Z, K) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0} \times \mathbb{K}^0$ is a solution of the following reflected BSDE with terminal datum $\widetilde{\xi} = -\xi$, generator $g^-$ and upper obstacle $U = -L$:

$$\begin{cases} U_t \ge \widetilde{Y}_t = \widetilde{\xi} + \int_t^T g^-(s, \widetilde{Y}_s, \widetilde{Z}_s)\,ds - \widetilde{J}_T + \widetilde{J}_t - \int_t^T \widetilde{Z}_s\,dB_s, & t \in [0,T], \\ \int_0^T (U_t - \widetilde{Y}_t)\,d\widetilde{J}_t = 0 & \text{(flat-off condition).} \end{cases} \tag{1.8}$$

Let $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function. To study doubly reflected BSDEs with generator $g$ and integrable parameters $(\xi, L, U)$, we will make the following assumptions on the function $g$:

Standing assumptions on $g$. Let $\kappa > 0$, $\lambda \in \mathbb{R}$, $\alpha \in (0,1)$ and let $\{h_t\}_{t \in [0,T]}$ be a non-negative integrable process (i.e. $h \in L^1([0,T] \times \Omega, \mathcal{B}([0,T]) \otimes \mathcal{F}, dt \otimes P)$). It holds $dt \otimes dP$-a.s. that

(H1) $|g(t, \omega, y, z) - g(t, \omega, y, z')| \le \kappa |z - z'|$, $\forall y \in \mathbb{R}$, $\forall z, z' \in \mathbb{R}^d$;
(H2) $\mathrm{sgn}(y - y') \cdot \big( g(t, \omega, y, z) - g(t, \omega, y', z) \big) \le \lambda |y - y'|$, $\forall y, y' \in \mathbb{R}$, $\forall z \in \mathbb{R}^d$;
(H3) $y \to g(t, \omega, y, z)$ is continuous, $\forall z \in \mathbb{R}^d$;
(H4) $|g(t, \omega, y, 0)| \le h_t(\omega) + \kappa |y|$, $\forall y \in \mathbb{R}$;
(H5) $|g(t, \omega, y, z) - g(t, \omega, y, 0)| \le \kappa \big( h_t(\omega) + |y| + |z| \big)^\alpha$, $\forall (y, z) \in \mathbb{R} \times \mathbb{R}^d$.

From now on, for any $p \in [0, \infty)$ we let $C_p$ be a generic constant depending on $p, \kappa, \lambda^+, T$ and $E \int_0^T h_t\,dt$ (in particular, $C$ will denote a generic constant depending on $\kappa, \lambda^+, T$ and $E \int_0^T h_t\,dt$), whose form may vary from line to line. For convenience, we will call a function $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ a "generator" if it is $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable and satisfies (H1)–(H5).

Remark 1.2.
If a function $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ is Lipschitz continuous in $y$ (i.e. for some $\widetilde{\kappa} > 0$, it holds $dt \otimes dP$-a.s. that $|g(t, \omega, y, z) - g(t, \omega, y', z)| \le \widetilde{\kappa} |y - y'|$, $\forall y, y' \in \mathbb{R}$, $\forall z \in \mathbb{R}^d$), then (H2) automatically holds and (H4) can be replaced by $|g(t, \omega, 0, 0)| \le h_t(\omega)$, $dt \otimes dP$-a.s.

Remark 1.3.
Let $g$ be a generator.

1) The function $g^-$ defined in (1.7) is also a generator.

2) Given $\tau \in \mathcal{T}$, since $\{\mathbf{1}_{\{t \le \tau\}}\}_{t \in [0,T]}$ is an $\mathbf{F}$-adapted càglàd process (and thus $\mathbf{F}$-predictable), the measurability of $g$ implies that

$$g_\tau(t, \omega, y, z) := \mathbf{1}_{\{t \le \tau(\omega)\}}\, g(t, \omega, y, z), \quad \forall (t, \omega, y, z) \in [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \tag{1.9}$$

defines a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function. And one can deduce that $g_\tau$ also satisfies (H1)–(H5) (actually, it satisfies (H2) with $\widetilde{\lambda} = \lambda \vee 0$).

3) If $g'$ is another generator, so is $a g + b g'$ for any $a, b > 0$.

4) Given $L \in \mathbb{S}^1_+$, $g_L(t, \omega, y) := (y - L_t(\omega))^-$, $(t, \omega, y) \in [0,T] \times \Omega \times \mathbb{R}$, is clearly a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) / \mathcal{B}(\mathbb{R})$-measurable function that is Lipschitz continuous in $y$ and satisfies $E \int_0^T g_L(t, 0)\,dt = E \int_0^T L_t^+\,dt \le T \|L^+\|_{\mathbb{S}^1} < \infty$. By Remark 1.2, $g_L$ satisfies (H2)–(H4). Then part 3) shows that for any $n \in \mathbb{N}$,

$$g_n(t, \omega, y, z) := g(t, \omega, y, z) + n \big( y - L_t(\omega) \big)^-, \quad \forall (t, \omega, y, z) \in [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \tag{1.10}$$

defines a generator.
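For concreteness, here is a simple generator of our own devising (it does not appear in the paper) which satisfies (H1)–(H5) with $h_t \equiv 1$, $\kappa = 1$, $\lambda = 0$ and any $\alpha \in (0,1)$, while being neither Lipschitz in $y$ nor differentiable in $z$:

```latex
g(t,\omega,y,z) \;:=\; -\,\mathrm{sgn}(y)\,|y|^{1/3} \;+\; |z| \wedge |z|^{\alpha}.
% (H2), (H3): x \mapsto \mathrm{sgn}(x)|x|^{1/3} is continuous and increasing, so
%   \mathrm{sgn}(y-y')\,\big(g(t,\omega,y,z)-g(t,\omega,y',z)\big) \le 0 = \lambda\,|y-y'|;
% (H1): r \mapsto r \wedge r^{\alpha} has slope 1 on (0,1) and slope
%   \alpha r^{\alpha-1} \le 1 on (1,\infty), so z \mapsto |z| \wedge |z|^{\alpha}
%   is 1-Lipschitz;
% (H4): |g(t,\omega,y,0)| = |y|^{1/3} \le 1 + |y| = h_t + \kappa|y|;
% (H5): |g(t,\omega,y,z) - g(t,\omega,y,0)| = |z| \wedge |z|^{\alpha}
%   \le |z|^{\alpha} \le \big(h_t + |y| + |z|\big)^{\alpha}.
```

Remark 1.2 does not apply to this $g$ (its $y$-part is not Lipschitz near $0$), which illustrates why the monotonicity condition (H2) is the natural hypothesis here.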
The contribution of this paper is the following wellposedness result for a doubly reflected BSDE with integrable parameters, in which the $Y$-component of the unique solution represents the value of the related Dynkin game under a so-called "$g$-evaluation" (see Section 4), a nonlinear expectation induced by BSDEs with the same generator $g$ as the doubly reflected BSDE. Like [25], we assume the complete separation of the lower and upper obstacles in the doubly reflected BSDE instead of the traditional Mokobodzki condition, which is quite difficult to check in practice.

Theorem 2.1.
Let $g$ be a generator. For any $\xi \in L^1(\mathcal{F}_T)$, $L \in \mathbb{S}^1_+$ and $U \in \mathbb{S}^1_-$ such that $P\{L_T \le \xi \le U_T\} = P\{L_t < U_t, \forall t \in [0,T]\} = 1$, DRBSDE$(\xi, g, L, U)$ admits a unique solution $(Y, Z, K, J) \in \cap_{p \in (0,1)} \big( \mathbb{S}^p \times \mathbb{H}^{2,p} \times \mathbb{K}^p \times \mathbb{K}^p \big)$ such that $Y$ is of class (D).
Proposition 3.1.
Let $g$ be a generator. For any $\xi \in L^1(\mathcal{F}_T)$, BSDE$(\xi, g)$ admits a unique solution $(Y, Z) \in \cap_{p \in (0,1)} \big( \mathbb{S}^p \times \mathbb{H}^{2,p} \big)$ such that $Y$ is of class (D). This wellposedness result leads to a general martingale representation theorem:
Corollary 3.1.
For any $\xi \in L^1(\mathcal{F}_T)$, there exists a unique $Z \in \cap_{p \in (0,1)} \mathbb{H}^{2,p}$ such that $P$-a.s.

$$E[\xi | \mathcal{F}_t] = E[\xi] + \int_0^t Z_s\,dB_s, \quad t \in [0,T]. \tag{3.1}$$

The unique solutions of BSDEs with integrable terminal data induce a nonlinear expectation, the "$g$-evaluation/expectation" (see next section), under which the value of the optimal stopping problem (resp. Dynkin game) solves the corresponding reflected BSDE (resp. doubly reflected BSDE) with generator $g$; see (5.2) (resp. (2.3)).

To derive a comparison result corresponding to Proposition 3.1 (which is crucial for the penalty method in solving reflected BSDEs with integrable parameters), we need the following slight generalization of Lemma 2.2 of [11] (cf. Corollary 1 of [30]):

Lemma 3.1.
Given $V \in \mathbb{V}^0$, if $(Y, Z) \in \mathbb{S}^0 \times \widetilde{\mathbb{H}}^{2,0}$ satisfies that $P$-a.s. $Y_t = Y_0 + V_t - V_0 + \int_0^t Z_s\,dB_s$, $t \in [0,T]$, then it holds for any $p \in (1, \infty)$ that $P$-a.s.

$$|Y_t|^p = |Y_0|^p + p \int_0^t \mathrm{sgn}(Y_s) |Y_s|^{p-1}\,dV_s + p \int_0^t \mathrm{sgn}(Y_s) |Y_s|^{p-1} Z_s\,dB_s + \frac{p(p-1)}{2} \int_0^t \mathbf{1}_{\{Y_s \neq 0\}} |Y_s|^{p-2} |Z_s|^2\,ds, \quad t \in [0,T].$$

With the help of Lemma 3.1, we can deduce a general comparison result for BSDEs over stochastic intervals, which is critical in proving Theorem 5.1 and our main result, Theorem 2.1:
Proposition 3.2.
Given $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$, for $i = 1, 2$, let $g_i : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function and let $(Y^i, Z^i, V^i) \in \mathbb{S}^0 \times \mathbb{H}^{2,0} \times \mathbb{V}^0$ be such that $\{Y^i_\gamma\}_{\gamma \in \mathcal{T}_{\nu,\tau}}$ is uniformly integrable, that $E\big[ \big( \int_\nu^\tau |Z^i_t|^2\,dt \big)^{p/2} \big] < \infty$ for some $p \in (\alpha, 1)$, and that $P$-a.s.

$$Y^i_t = Y^i_\tau + \int_t^\tau g_i(s, Y^i_s, Z^i_s)\,ds + V^i_\tau - V^i_t - \int_t^\tau Z^i_s\,dB_s, \quad \forall t \in [\nu, \tau]. \tag{3.2}$$

Assume that $Y^1_\tau \le Y^2_\tau$, $P$-a.s. and that $P$-a.s.

$$\int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, \big( dV^1_r - dV^2_r \big) \le 0, \quad \forall t, s \in [\nu, \tau] \text{ with } t < s. \tag{3.3}$$

For either $i = 1$ or $i = 2$, if $g_i$ satisfies (H1), (H2), (H5) and if $g_1(t, Y^{3-i}_t, Z^{3-i}_t) \le g_2(t, Y^{3-i}_t, Z^{3-i}_t)$, $dt \otimes dP$-a.s. on the stochastic interval $[[\nu, \tau]] := \big\{ (t, \omega) \in [0,T] \times \Omega : \nu(\omega) \le t \le \tau(\omega) \big\}$, then it holds $P$-a.s. that $Y^1_t \le Y^2_t$ for any $t \in [\nu, \tau]$.
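To see why hypothesis (3.3) is natural, consider the reflected setting of Proposition 5.3 below; the following sketch is ours, with $V^i = K^i$ the reflection processes and $P\{L^1_t \le L^2_t, \forall t \in [0,T]\} = 1$ as assumed there:

```latex
% On \{Y^1_r > Y^2_r\} we have Y^1_r > Y^2_r \ge L^2_r \ge L^1_r, so Y^1_r > L^1_r.
% The flat-off condition \int_0^T (Y^1_r - L^1_r)\, dK^1_r = 0 says that K^1
% increases only on \{Y^1 = L^1\}; hence \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, dK^1_r = 0 and
\int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}} \big( dK^1_r - dK^2_r \big)
  \;=\; -\int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, dK^2_r \;\le\; 0,
% which is exactly (3.3) with V^i = K^i.
```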
0, we obtain the following comparison result forBSDEs whose Y − solutions are of class (D) and whose Z − solutions are of H ,p for some p ∈ ( α, Proposition 3.3.
For $i = 1, 2$, given parameter pairs $(\xi^i, g_i)$ with $\xi^1 \le \xi^2$, $P$-a.s., let $(Y^i, Z^i)$ be a solution of BSDE$(\xi^i, g_i)$ such that $Y^i$ is of class (D) and $Z^i \in \cup_{p \in (\alpha,1)} \mathbb{H}^{2,p}$. For either $i = 1$ or $i = 2$, if $g_i$ satisfies (H1), (H2), (H5) and if $g_1(t, Y^{3-i}_t, Z^{3-i}_t) \le g_2(t, Y^{3-i}_t, Z^{3-i}_t)$, $dt \otimes dP$-a.s., then it holds $P$-a.s. that $Y^1_t \le Y^2_t$ for any $t \in [0,T]$.

4 $g$-Evaluations and $g$-Expectations
Let $g$ be a generator. For any $\tau \in \mathcal{T}$, since the function $g_\tau$ defined in (1.9) is a generator, Proposition 3.1 shows that for any $\xi \in L^1(\mathcal{F}_T)$, BSDE$(\xi, g_\tau)$ admits a unique solution

$$\big( Y^{\tau,\xi}, Z^{\tau,\xi} \big) \in \cap_{p \in (0,1)} \big( \mathbb{S}^p \times \mathbb{H}^{2,p} \big) \tag{4.1}$$

such that $Y^{\tau,\xi}$ is of class (D). Then we can introduce the notion of "$g$-evaluation/expectation", which slightly generalizes the one initiated in [43] and [45]:

Definition 4.1.
A family of operators $\mathcal{E}^g_{\nu,\tau} : L^0(\mathcal{F}_\tau) \to L^0(\mathcal{F}_\nu)$, $\nu \in \mathcal{T}$, $\tau \in \mathcal{T}_{\nu,T}$, is called a "$g$-evaluation" if for any $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$ and any $\xi \in L^0(\mathcal{F}_\tau)$,

$$\mathcal{E}^g_{\nu,\tau}[\xi] := \begin{cases} Y^{\tau,\xi}_\nu \in L^0(\mathcal{F}_\nu), & \text{if } \xi \in L^1(\mathcal{F}_\tau); \\ -\infty, & \text{if } E[\xi^-] = \infty; \\ \infty, & \text{if } E[\xi^-] < \infty \text{ and } E[\xi^+] = \infty. \end{cases}$$
In particular, for any $\nu \in \mathcal{T}$ and $\xi \in L^0(\mathcal{F}_T)$ we refer to $\mathcal{E}^g[\xi | \mathcal{F}_\nu] := \mathcal{E}^g_{\nu,T}[\xi]$ as the "$g$-expectation" of $\xi$ conditional on the $\sigma$-field $\mathcal{F}_\nu$.

Remark 4.1. If $g$ is independent of $(y, z)$, i.e. if $\{g_t\}_{t \in [0,T]}$ is an $\mathbf{F}$-progressively measurable process with $E \int_0^T |g_t|\,dt < \infty$, then for any $\nu \in \mathcal{T}$, $\tau \in \mathcal{T}_{\nu,T}$,

$$\mathcal{E}^g_{\nu,\tau}[\xi] = E\bigg[ \xi + \int_\nu^\tau g_t\,dt \,\bigg|\, \mathcal{F}_\nu \bigg], \quad P\text{-a.s.}, \quad \forall \xi \in L^1(\mathcal{F}_\tau). \tag{4.2}$$

When $g \equiv 0$, the $g$-expectation degenerates into the classic linear expectation, i.e. for any $\nu \in \mathcal{T}$ and $\xi \in L^1(\mathcal{F}_T)$, $\mathcal{E}^g[\xi | \mathcal{F}_\nu] = E[\xi | \mathcal{F}_\nu]$, $P$-a.s.

In light of Proposition 3.3 and the uniqueness result in Proposition 3.1, one can deduce that the $g$-evaluation with domain $L^1(\mathcal{F}_T)$ inherits the following basic properties from the classic linear expectation. Let $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$:

(1) "Monotonicity": For any $\xi, \eta \in L^1(\mathcal{F}_\tau)$ with $\xi \le \eta$, $P$-a.s., we have $\mathcal{E}^g_{\nu,\tau}[\xi] \le \mathcal{E}^g_{\nu,\tau}[\eta]$, $P$-a.s.;
(2) "Time-consistency": For any $\gamma \in \mathcal{T}_{\nu,\tau}$ and $\xi \in L^1(\mathcal{F}_\tau)$, $\mathcal{E}^g_{\nu,\gamma}\big[ \mathcal{E}^g_{\gamma,\tau}[\xi] \big] = \mathcal{E}^g_{\nu,\tau}[\xi]$, $P$-a.s.;
(3) "Constant-preserving": If it holds $dt \otimes dP$-a.s. that $g(t, y, 0) = 0$, $\forall y \in \mathbb{R}$, then $\mathcal{E}^g_{\nu,\tau}[\xi] = \xi$, $P$-a.s. for any $\xi \in L^1(\mathcal{F}_\nu)$;
(4) "Zero-one law": For any $\xi \in L^1(\mathcal{F}_\tau)$ and $A \in \mathcal{F}_\nu$, we have $\mathbf{1}_A\, \mathcal{E}^g_{\nu,\tau}[\mathbf{1}_A \xi] = \mathbf{1}_A\, \mathcal{E}^g_{\nu,\tau}[\xi]$, $P$-a.s.; in addition, if $g(t, 0, 0) = 0$, $dt \otimes dP$-a.s., then $\mathcal{E}^g_{\nu,\tau}[\mathbf{1}_A \xi] = \mathbf{1}_A\, \mathcal{E}^g_{\nu,\tau}[\xi]$, $P$-a.s.;
(5) "Translation invariance": If $g$ is independent of $y$, then $\mathcal{E}^g_{\nu,\tau}[\xi + \eta] = \mathcal{E}^g_{\nu,\tau}[\xi] + \eta$, $P$-a.s. for any $\xi \in L^1(\mathcal{F}_\tau)$ and $\eta \in L^1(\mathcal{F}_\nu)$.

We can define the corresponding $g$-martingales as usual: a $\mathcal{B}([0,T]) \otimes \mathcal{F}$-measurable process $X$ of class (D) is called a $g$-submartingale (resp. $g$-supermartingale or $g$-martingale) if for any $0 \le t \le s \le T$,

$$\mathcal{E}^g_{t,s}[X_s] \ge (\text{resp. } \le \text{ or } =)\; X_t, \quad P\text{-a.s.} \tag{4.3}$$

The $g$-martingales possess many classic martingale properties, such as the upcrossing inequality, the optional sampling theorem, the Doob–Meyer decomposition, etc., which relate the $g$-evaluation closely to risk measures in mathematical finance (see [46], [47] for the case of Lipschitz $g$-evaluations with domain $L^2(\mathcal{F}_T)$ and see [40], [34] for the case of quadratic $g$-evaluations with domain $L^\infty(\mathcal{F}_T)$). Due to page limitations, we will elaborate neither on the martingale properties of our $g$-evaluation with domain $L^1(\mathcal{F}_T)$ nor on the connection of this $g$-evaluation to risk measures in the present paper.

With Proposition 3.1 and Proposition 3.3, we can employ the penalization method to obtain, as an intermediate step towards our goal (Theorem 2.1), the following wellposedness result for a reflected BSDE with integrable parameters, in which the $Y$-component of the unique solution stands for the value of the related optimal stopping problem under $g$-evaluation.

Theorem 5.1.
Let $g$ be a generator. For any $\xi \in L^1(\mathcal{F}_T)$ and $L \in \mathbb{S}^1_+$ with $L_T \le \xi$, $P$-a.s., RBSDE$(\xi, g, L)$ admits a unique solution $(Y, Z, K) \in \cap_{p \in (0,1)} \big( \mathbb{S}^p \times \mathbb{H}^{2,p} \times \mathbb{K}^p \big)$ such that $Y$ is of class (D). Define $R_t := \mathbf{1}_{\{t < T\}} L_t + \mathbf{1}_{\{t = T\}}\, \xi$, $t \in [0,T]$.

Proposition 5.1. Let $L \in \mathbb{S}^1_+$ and let $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function satisfying (H1), (H4) and (H5). For any $n \in \mathbb{N}$, consider the function $g_n$ defined in (1.10) and let $(Y^n, Z^n, J^n) \in \big( \cap_{p \in (0,1)} \mathbb{S}^p \big) \times \mathbb{H}^{2,0} \times \mathbb{K}^0$ be such that $Y^n$ is of class (D) and that $P$-a.s.

$$Y^n_t = Y^n_T + \int_t^T g_n(s, Y^n_s, Z^n_s)\,ds - J^n_T + J^n_t - \int_t^T Z^n_s\,dB_s, \quad t \in [0,T].$$

If $\{Y^n\}_{n \in \mathbb{N}}$ is an increasing sequence of processes, then its limit $Y_t := \lim_{n \to \infty} \uparrow Y^n_t$, $t \in [0,T]$, is an $\mathbf{F}$-predictable process of class (D) that satisfies $E\big[ \sup_{t \in [0,T]} |Y_t|^p \big] < \infty$, $\forall p \in (0,1)$.

Proposition 5.2. Let $L \in \mathbb{S}^1_+$, let $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a $\mathscr{P} \otimes \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable function satisfying (H1)–(H4), and let $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$. For any $n \in \mathbb{N}$, consider the function $g_n$ defined in (1.10) and let $(Y^n, Z^n) \in \mathbb{S}^0 \times \mathbb{H}^{2,0}$ satisfy that $P$-a.s.

$$Y^n_t = Y^n_\tau + \int_t^\tau g_n(s, Y^n_s, Z^n_s)\,ds - \int_t^\tau Z^n_s\,dB_s, \quad \forall t \in [\nu, \tau]. \tag{5.3}$$

If $\big\{ \mathbf{1}_{\{t \ge \nu\}} Y^n_{\tau \wedge t} \big\}_{t \in [0,T]}$, $n \in \mathbb{N}$, is an increasing sequence of processes whose limit $Y_t := \lim_{n \to \infty} \uparrow \mathbf{1}_{\{t \ge \nu\}} Y^n_{\tau \wedge t}$, $t \in [0,T]$, satisfies $P\{Y_\tau \ge L_\tau\} = P\big\{ \sup_{t \in [\nu,\tau]} \big( (Y_t)^- + Y^+_t \big) < \infty \big\} = 1$, then the process $\{Y_{\nu \vee t}\}_{t \in [0,T]}$ has $P$-a.s. continuous paths and there exist $(Z, K) \in \mathbb{H}^{2,0} \times \mathbb{K}^0$ such that $P$-a.s.

$$L_t \le Y_t = Y_\tau + \int_t^\tau g(s, Y_s, Z_s)\,ds + K_\tau - K_t - \int_t^\tau Z_s\,dB_s, \quad \forall t \in [\nu, \tau], \qquad \int_\nu^\tau (Y_t - L_t)\,dK_t = 0. \tag{5.4}$$

On the other hand, the uniqueness result in Theorem 5.1 follows from the following comparison result for reflected BSDEs whose $Y$-solutions are of class (D) and whose $Z$-solutions belong to $\mathbb{H}^{2,p}$ for some $p \in (\alpha, 1)$:

Proposition 5.3.
For $i = 1, 2$, given parameter pairs $(\xi^i, g_i)$ and $L^i \in \mathbb{S}^0$ such that $P\{L^i_T \le \xi^i\} = P\{\xi^1 \le \xi^2\} = P\{L^1_t \le L^2_t, \forall t \in [0,T]\} = 1$, let $(Y^i, Z^i, K^i)$ be a solution of RBSDE$(\xi^i, g_i, L^i)$ such that $Y^i$ is of class (D) and $Z^i \in \cup_{p \in (\alpha,1)} \mathbb{H}^{2,p}$. For either $i = 1$ or $i = 2$, if $g_i$ satisfies (H1), (H2), (H5) and if $g_1(t, Y^{3-i}_t, Z^{3-i}_t) \le g_2(t, Y^{3-i}_t, Z^{3-i}_t)$, $dt \otimes dP$-a.s., then it holds $P$-a.s. that $Y^1_t \le Y^2_t$ for any $t \in [0,T]$.

Remark 5.2. By Remark 1.3(1), one can apply Theorem 5.1, Proposition 5.2 and Proposition 5.3 to $g^-$ (defined in (1.7)) to obtain versions of them for reflected BSDEs with upper obstacles, like (1.8).

6 Proofs

Proof of Proposition 3.1: As condition (H7) of [11] is automatically satisfied, it suffices to verify condition (H5) therein, i.e. that, given $r \ge 0$, $\psi^r_t(\omega) := \sup_{|y| \le r} |g(t, \omega, y, 0) - g(t, \omega, 0, 0)|$, $(t, \omega) \in [0,T] \times \Omega$, is integrable. By (H3), it holds $dt \otimes dP$-a.s. that $\psi^r_t(\omega) = \sup_{y \in [-r,r] \cap \mathbb{Q}} |g(t, \omega, y, 0) - g(t, \omega, 0, 0)|$, which implies that $\psi^r$ is $\mathbf{F}$-progressively measurable. Also, (H4) shows that $dt \otimes dP$-a.s., $\psi^r_t(\omega) \le |g(t, \omega, 0, 0)| + \sup_{|y| \le r} |g(t, \omega, y, 0)| \le 2 h_t(\omega) + \kappa r$. It follows that $\psi^r$ belongs to $L^1([0,T] \times \Omega, \mathcal{B}([0,T]) \otimes \mathcal{F}, dt \otimes P)$. □

Proof of Corollary 3.1: Clearly, $g(t, \omega, y, z) := 0$, $\forall (t, \omega, y, z) \in [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d$, is a generator. In light of Proposition 3.1, BSDE$(\xi, 0)$ admits a unique solution $(Y, Z) \in \cap_{p \in (0,1)} \big( \mathbb{S}^p \times \mathbb{H}^{2,p} \big)$ such that $Y$ is of class (D). For any $n \in \mathbb{N}$, we define the stopping time $\tau_n := \inf\big\{ t \in [0,T] : \int_0^t |Z_s|^2\,ds > n \big\} \wedge T \in \mathcal{T}$, and see from $Z \in \cap_{p \in (0,1)} \mathbb{H}^{2,p} \subset \mathbb{H}^{2,0}$ that $\{\tau_n\}_{n \in \mathbb{N}}$ is stationary. Let $t \in [0,T]$ and $n \in \mathbb{N}$.
Since $Y_{\tau_n \wedge t} = Y_{\tau_n} - \int_{\tau_n \wedge t}^{\tau_n} Z_s\, dB_s$, $P$-a.s., taking the conditional expectation $E[\cdot|\mathcal{F}_t]$ yields that
\[ Y_{\tau_n \wedge t} = E[Y_{\tau_n}|\mathcal{F}_t], \quad P\text{-a.s.} \tag{6.1} \]
As $\{\tau_n\}_{n \in \mathbb{N}}$ is stationary, letting $n \to \infty$ in (6.1), we can deduce from the continuity of $Y$ and the uniform integrability of $\{Y_\tau\}_{\tau \in \mathcal{T}}$ that $Y_t = E[Y_T|\mathcal{F}_t] = E[\xi|\mathcal{F}_t]$, $P$-a.s. In particular, $Y_0 = E[\xi]$. Then
\[ E[\xi|\mathcal{F}_t] = Y_t = Y_0 + \int_0^t Z_s\, dB_s = E[\xi] + \int_0^t Z_s\, dB_s, \quad P\text{-a.s.} \]
This together with the continuity of the processes $\{E[\xi|\mathcal{F}_t]\}_{t \in [0,T]}$ and $\{\int_0^t Z_s\, dB_s\}_{t \in [0,T]}$ leads to (3.1), while the uniqueness of the process $Z$ is clear. $\square$

Proof of Proposition 3.2: Without loss of generality, suppose that $g^1$ satisfies (H1), (H2), (H5) and that
\[ g^1(t, Y^2_t, Z^2_t) \le g^2(t, Y^2_t, Z^2_t), \quad dt \otimes dP\text{-a.s. on } [[\nu, \tau]]. \tag{6.2} \]
Set $(\mathscr{Y}, \mathscr{Z}) := (Y^1 - Y^2, Z^1 - Z^2)$ and $q := p/\alpha \in (1, 1/\alpha)$.

(1) We first show that $E\big[ \sup_{t \in [\nu,\tau]} (\mathscr{Y}^+_t)^q \big] < \infty$.

Since $E[|\mathscr{Y}_\nu|] \le E[|Y^1_\nu|] + E[|Y^2_\nu|] < \infty$ by the uniform integrability of $\{Y^i_\gamma\}_{\gamma \in \mathcal{T}_{\nu,\tau}}$, $i = 1, 2$, Corollary 3.1 implies that there exists a unique $\widetilde{Z} \in \bigcap_{p' \in (0,1)} \mathbb{H}^{2,p'}$ such that $P\big\{ E[\mathscr{Y}_\nu|\mathcal{F}_t] = E[\mathscr{Y}_\nu] + \int_0^t \widetilde{Z}_s\, dB_s, \ \forall t \in [0,T] \big\} = 1$. This together with (3.2) shows that $P$-a.s.
\begin{align*}
\widetilde{Y}_t :=\ & E[\mathscr{Y}_\nu|\mathcal{F}_{\nu \wedge t}] + \mathscr{Y}_{\nu \vee (\tau \wedge t)} - \mathscr{Y}_\nu \\
=\ & E[\mathscr{Y}_\nu] + \int_0^{\nu \wedge t} \widetilde{Z}_s\, dB_s - \int_\nu^{\nu \vee (\tau \wedge t)} \Delta g_s\, ds - V^1_{\nu \vee (\tau \wedge t)} + V^1_\nu + V^2_{\nu \vee (\tau \wedge t)} - V^2_\nu + \int_\nu^{\nu \vee (\tau \wedge t)} \mathscr{Z}_s\, dB_s \\
=\ & E[\mathscr{Y}_\nu] - \int_0^t \mathbf{1}_{\{\nu < s \le \tau\}} \big( \Delta g_s\, ds + dV^1_s - dV^2_s \big) + \int_0^t \big( \mathbf{1}_{\{s \le \nu\}} \widetilde{Z}_s + \mathbf{1}_{\{s > \nu\}} \mathscr{Z}_s \big)\, dB_s, \quad t \in [0,T].
\end{align*}
Fix a suitable constant $a > 0$ (depending only on $\kappa$ and $q$) and let $n \in \mathbb{N}$. We define a stopping time
\[ \gamma_n := \inf\Big\{ t \in [\nu,\tau] : \sup_{s \in [\nu,t]} \mathscr{Y}^+_s + \int_\nu^t |\mathscr{Z}_s|^2\, ds > n \Big\} \wedge \tau \in \mathcal{T}_{\nu,\tau}, \]
and integrate by parts the process $\big\{ e^{aq(\gamma_n \wedge t)} \big( \widetilde{Y}^+_{\gamma_n \wedge t} \big)^q \big\}_{t \in [0,T]}$ to obtain that $P$-a.s.
e aq ( γ n ∧ t ) (cid:16) e Y + γ n ∧ t (cid:17) q + q ( q − Z γ n γ n ∧ t { e Y s > } e aqs (cid:16) e Y + s (cid:17) q − (cid:16) { s ≤ ν } (cid:12)(cid:12) e Z s (cid:12)(cid:12) + { s>ν } |Z s | (cid:17) ds = e aqγ n (cid:0) Y + γ n (cid:1) q + q Z γ n γ n ∧ t {Y s > }∩{ s>ν } e aqs (cid:0) Y + s (cid:1) q − ∆ g s ds − aq Z γ n γ n ∧ t e aqs (cid:16) e Y + s (cid:17) q ds − q Z γ n γ n ∧ t e aqs (cid:16) e Y + s (cid:17) q − d L s + q Z γ n γ n ∧ t {Y s > }∩{ s>ν } e aqs (cid:0) Y + s (cid:1) q − ( dV s − dV s ) − q Z γ n γ n ∧ t { e Y s > } e aqs (cid:16) e Y + s (cid:17) q − (cid:16) { s ≤ ν } e Z s + { s>ν } Z s (cid:17) dB s , t ∈ [0 , T ] . Then (6.6), (6.2), (6.7) and (3.3) imply that P − a.s. e aqt (cid:0) Y + t (cid:1) q + q ( q − Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds = e aqγ n (cid:0) Y + γ n (cid:1) q + q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − ∆ g s ds − aq Z γ n t e aqs (cid:0) Y + s (cid:1) q ds − q Z γ n t e aqs (cid:0) Y + s (cid:1) q − d L s + q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − ( dV s − dV s ) − q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − Z s dB s ≤ e aqγ n (cid:0) Y + γ n (cid:1) q + q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − (cid:0) g ( s, Y s , Z s ) − g ( s, Y s , Z s ) (cid:1) ds − qκ ∧ ( q − Z γ n t e aqs (cid:0) Y + s (cid:1) q ds − q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − Z s dB s , ∀ t ∈ [ ν, γ n ] . Since {Y t > } κ (cid:0) Y + t (cid:1) q − |Z t | ≤ q − {Y t > } (cid:0) Y + t (cid:1) q − |Z t | + κ q − (cid:0) Y + t (cid:1) q , ∀ t ∈ [ ν, τ ], we can deduce from (H1) that P − a.s. e aqt (cid:0) Y + t (cid:1) q + q ( q − Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds ≤ e aqγ n (cid:0) Y + γ n (cid:1) q − q Z γ n t {Y s > } e aqs (cid:0) Y + s (cid:1) q − Z s dB s , ∀ t ∈ [ ν, γ n ] . Taking expectation for t = ν shows that q ( q − E Z γ n ν {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds ≤ E h e aqγ n (cid:0) Y + γ n (cid:1) q i . 
(6.12) .2 Proofs of the results in Section 5 E " sup t ∈ [ ν,γ n ] (cid:0) e at Y + t (cid:1) q ≤ E h e aqγ n (cid:0) Y + γ n (cid:1) q i + q E " sup t ∈ [0 ,T ] (cid:12)(cid:12)(cid:12)(cid:12) Z Tt { ν ≤ s ≤ γ n } {Y s > } e aqs (cid:0) Y + s (cid:1) q − Z s dB s (cid:12)(cid:12)(cid:12)(cid:12) ≤ E h e aqγ n (cid:0) Y + γ n (cid:1) q i + C q E " sup t ∈ [ ν,γ n ] (cid:0) e at Y + t (cid:1) q/ ! · (cid:18)Z γ n ν {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds (cid:19) / ≤ E h e aqγ n (cid:0) Y + γ n (cid:1) q i + 12 E " sup t ∈ [ ν,γ n ] (cid:0) e at Y + t (cid:1) q + C q E Z γ n ν {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds. As E " sup t ∈ [ ν,γ n ] (cid:0) e at Y + t (cid:1) q ≤ e aqT E " sup t ∈ [ ν,τ ] (cid:0) Y + t (cid:1) q < ∞ by (6.11), it follows from (6.12) that E " sup t ∈ [ ν,γ n ] (cid:0) Y + t (cid:1) q ≤ E " sup t ∈ [ ν,γ n ] (cid:0) e at Y + t (cid:1) q ≤ E h e aqγ n (cid:0) Y + γ n (cid:1) q i + C q E Z γ n ν {Y s > } e aqs (cid:0) Y + s (cid:1) q − |Z s | ds ≤ C q E h e aqγ n (cid:0) Y + γ n (cid:1) q i . Because of (6.11) and E h(cid:0) R τν | Z it | dt (cid:1) p/ i < ∞ , i = 1 , 2, it holds for P − a.s. ω ∈ Ω that τ ( ω ) = γ N ′ ω ( ω ) for some N ′ ω ∈ N . Letting n → ∞ in the above inequality, we can deduce from the monotone convergence theorem, (6.11) anddominated convergence theorem that E " sup t ∈ [ ν,τ ] (cid:0) Y + t (cid:1) q = lim n →∞ ↑ E " sup t ∈ [ ν,γ n ] (cid:0) Y + t (cid:1) q ≤ C q lim n →∞ E h e aqγ n (cid:0) Y + γ n (cid:1) q i = C q E h e aqτ (cid:0) ( Y τ − Y τ ) + (cid:1) q i = 0 . (cid:3) Proof of Remark 4.1: Let ν ∈ T , τ ∈ T ν,T . It suffices to show (4.2) for ξ ∈ L ( F τ ). Given n ∈ N , we still definethe stopping time γ n as in (A.3). 
As $Y^{\tau,\xi}_{\nu \wedge \gamma_n} = Y^{\tau,\xi}_{\gamma_n} + \int_{\nu \wedge \gamma_n}^{\gamma_n} \mathbf{1}_{\{s \le \tau\}}\, g_s\, ds - \int_{\nu \wedge \gamma_n}^{\gamma_n} Z^{\tau,\xi}_s\, dB_s$, $P$-a.s., similarly to (A.4), taking the conditional expectation $E[\cdot|\mathcal{F}_{\nu \wedge \gamma_n}]$ yields that
\[ Y^{\tau,\xi}_{\nu \wedge \gamma_n} = \mathbf{1}_{\{\nu \le \gamma_n\}}\, E\Big[ Y^{\tau,\xi}_{\gamma_n} + \int_{\nu \wedge \gamma_n}^{\tau \wedge \gamma_n} g_s\, ds \,\Big|\, \mathcal{F}_\nu \Big] + \mathbf{1}_{\{\nu > \gamma_n\}} \Big( Y^{\tau,\xi}_{\gamma_n} + \int_{\nu \wedge \gamma_n}^{\tau \wedge \gamma_n} g_s\, ds \Big), \quad P\text{-a.s.} \]
Since $\{\gamma_n\}_{n \in \mathbb{N}}$ is stationary, letting $n \to \infty$, we can deduce from the uniform integrability of $\{Y^{\tau,\xi}_\gamma\}_{\gamma \in \mathcal{T}}$ that
\[ \mathcal{E}^g_{\nu,\tau}[\xi] = Y^{\tau,\xi}_\nu = \mathbf{1}_{\{\nu \le T\}}\, E\Big[ Y^{\tau,\xi}_T + \int_\nu^\tau g_s\, ds \,\Big|\, \mathcal{F}_\nu \Big] + \mathbf{1}_{\{\nu > T\}} \Big( Y^{\tau,\xi}_T + \int_\nu^\tau g_s\, ds \Big) = E\Big[ \xi + \int_\nu^\tau g_s\, ds \,\Big|\, \mathcal{F}_\nu \Big], \quad P\text{-a.s.} \quad \square \]

Proof of Proposition 5.1: (1) We first show that $E[(Y_*)^p] < \infty$ for all $p \in (0,1)$.

As the limit of the $\mathbf{F}$-adapted continuous processes $Y^n$ (which are thus $\mathbf{F}$-predictable), $Y$ is also an $\mathbf{F}$-predictable process. For any $(t,\omega) \in [0,T] \times \Omega$, $Y_t(\omega) = \lim_{n \to \infty} \uparrow Y^n_t(\omega)$ implies $Y^+_t(\omega) = \lim_{n \to \infty} \uparrow Y^{n,+}_t(\omega)$. Then one can deduce that
\[ Y^+_*(\omega) = \sup_{t \in [0,T]} Y^+_t(\omega) = \sup_{t \in [0,T]} \sup_{n \in \mathbb{N}} Y^{n,+}_t(\omega) = \sup_{n \in \mathbb{N}} \sup_{t \in [0,T]} Y^{n,+}_t(\omega) = \sup_{n \in \mathbb{N}} Y^{n,+}_*(\omega) = \lim_{n \to \infty} \uparrow Y^{n,+}_*(\omega), \quad \forall \omega \in \Omega. \tag{6.13} \]
For any $n \in \mathbb{N}$, the continuity of the process $Y^n$ shows that $P$-a.s., $Y^{n,+}_* = \sup_{t \in [0,T]} Y^{n,+}_t = \sup_{t \in [0,T] \cap \mathbb{Q}} Y^{n,+}_t \in \mathcal{F}_T$, which implies that $Y^{n,+}_*$ is $\mathcal{F}_T$-measurable. Then we see from (6.13) that $Y^+_*$ is also $\mathcal{F}_T$-measurable.

Let $p \in (\alpha, 1)$ and set $\eta := \xi^+ + \int_0^T h_s\, ds + L^+_* \in L^1(\mathcal{F}_T)$. Given $n \in \mathbb{N}$, we claim that $P$-a.s.
\[ (Y^n_t)^+ \le C_\alpha\, E\big[ \eta + (Y^{n,+}_*)^\alpha \,\big|\, \mathcal{F}_t \big], \quad t \in [0,T], \tag{6.14} \]
(which will be shown in the last part of this proof).
Since $M^n_t := E\big[ \eta + (Y^{n,+}_*)^\alpha \big| \mathcal{F}_t \big]$, $t \in [0,T]$, is a uniformly integrable martingale, applying Lemma 6.1 of [11], we can deduce from (6.14), (1.5), (1.6), H\"older's inequality and Young's inequality that
\begin{align*}
E\big[ (Y^{n,+}_*)^p \big] &\le C_\alpha^p\, E\Big[ \sup_{t \in [0,T]} (M^n_t)^p \Big] \le \frac{C_\alpha^p}{1-p} \big( E[M^n_T] \big)^p \le \frac{C_\alpha^p}{1-p} \Big\{ \big( E[\eta] \big)^p + \big( E\big[ (Y^{n,+}_*)^\alpha \big] \big)^p \Big\} \\
&\le \frac{C_\alpha^p}{1-p} \Big\{ 1 + E[\eta] + \big( E\big[ (Y^{n,+}_*)^p \big] \big)^\alpha \Big\} \le C_{\alpha,p} \big\{ 1 + E[\eta] \big\} + \frac12\, E\big[ (Y^{n,+}_*)^p \big].
\end{align*}
As $E[(Y^{n,+}_*)^p] \le E[(Y^n_*)^p] < \infty$, we see that $E[(Y^{n,+}_*)^p] \le C_{\alpha,p}\{1 + E[\eta]\}$. When $n \to \infty$, (6.13) and the monotone convergence theorem yield that $E[(Y^+_*)^p] \le C_{\alpha,p}\{1 + E[\eta]\}$. Since
\[ |Y_t| = Y^-_t + Y^+_t \le (Y^1_t)^- + Y^+_t \le |Y^1_t| + Y^+_t, \quad \forall t \in [0,T], \tag{6.15} \]
(1.5) implies that $E[(Y_*)^p] \le E[(Y^1_*)^p] + E[(Y^+_*)^p] < \infty$. Moreover, for any $\widetilde{p} \in (0, \alpha]$, (1.6) shows that $E\big[ (Y_*)^{\widetilde{p}} \big] \le 1 + E\big[ (Y_*)^{\frac{\alpha+1}{2}} \big] < \infty$. Hence, $E[(Y_*)^p] < \infty$ for all $p \in (0,1)$.

(2) Next, let us show that $Y$ is of class (D).

Since $E[(Y^+_*)^\alpha] < \infty$, letting $n \to \infty$ in (6.14), one can deduce from (6.13) and the monotone convergence theorem that for any $t \in [0,T]$, $Y^+_t \le C_\alpha E\big[ \eta + (Y^+_*)^\alpha \big| \mathcal{F}_t \big]$, $P$-a.s. Using the continuity of the process $Y^1$ and of the process $\big\{ E[\eta + (Y^+_*)^\alpha|\mathcal{F}_t] \big\}_{t \in [0,T]}$, we see from (6.15) that $P$-a.s.
\[ |Y_t| \le |Y^1_t| + Y^+_t \le |Y^1_t| + C_\alpha\, E\big[ \eta + (Y^+_*)^\alpha \big| \mathcal{F}_t \big], \quad t \in [0,T]. \]
This implies that $Y$ is of class (D) as $Y^1$ is of class (D).

(3) It remains to demonstrate claim (6.14).
For any t ∈ [0 , T ], the continuity of process L shows that P − a.s., Γ t := sup s ∈ [0 ,t ] L + s = sup s ∈ [0 ,t ] ∩ Q L + s ∈ F t , which impliesthat Γ is an F − adapted, continuous increasing process with E [Γ T ] < ∞ .Let n ∈ N . Since R T { Y nt > Γ t } ( Y nt − L t ) − dt = 0, applying Itˆo − Tanaka’s formula to process ( Y n − Γ) + yields that( Y nt − Γ t ) + = ( Y nT − Γ T ) + + Z Tt { Y ns > Γ s } ( g ( s, Y ns , Z ns ) ds − dJ ns − Z ns dB s )+ Z Tt { Y ns > Γ s } d Γ s − 12 ( L nT − L nt ) , t ∈ [0 , T ] , where L n is the “local time” of Y n − Γ at 0.Set a := 2( κ + κ ). Given j ∈ N , we define a stopping time γ j = γ nj := inf { t ∈ [0 , T ] : R t | Z ns | ds > j } ∧ T ∈ T , andintegrate by parts the process (cid:8) e a ( γ j ∧ t ) ( Y nγ j ∧ t − Γ γ j ∧ t ) + (cid:9) t ∈ [0 ,T ] to obtain that P − a.s. e a ( γ j ∧ t ) (cid:16) Y nγ j ∧ t − Γ γ j ∧ t (cid:17) + + a Z γ j γ j ∧ t e as ( Y ns − Γ s ) + ds = e aγ j ( Y nγ j − Γ γ j ) + + Z γ j γ j ∧ t { Y ns > Γ s } e as ( g ( s, Y ns , Z ns ) ds − dJ ns − Z ns dB s )+ Z γ j γ j ∧ t { Y ns > Γ s } e as d Γ s − Z γ j γ j ∧ t e as d L ns , t ∈ [0 , T ] . (6.16)Since (H4), (H5), (1.5) and (1.6) imply that | g ( t, Y nt , Z nt ) | ≤ | g ( t, Y nt , | + | g ( t, Y nt , Z nt ) − g ( t, Y nt , | ≤ h t + κ | Y nt | + κ ( h t + | Y nt | + | Z nt | ) α ≤ h t + κ | Y nt | + κ ( h t + | Y nt | ) α + κ | Z nt | α ≤ h t + κ | Y nt | + κ (1 + h t + | Y nt | ) + κ | Z nt | α (6.17) ≤ κ + (1 + κ ) h t + 2 κ | Y nt − Γ t | + 2 κ Γ t + κ | Z nt | α , dt ⊗ d P − a.s. , taking conditional expectation E [ ·|F t ] in (6.16), we can deduce from H¨older’s inequality that for any t ∈ [0 , T ] e a ( γ j ∧ t ) (cid:16) Y nγ j ∧ t − Γ γ j ∧ t (cid:17) + ≤ κT e aT + E (cid:20) e aγ j ( Y nγ j − Γ γ j ) + +(1+ κ ) e aT Z T h s ds +(1+2 κT ) e aT Γ T + κT − α/ e (1 − α ) aT Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ! α/ (cid:12)(cid:12)(cid:12)(cid:12) F t (cid:21) , P − a.s. 
, (6.18)where we used the fact that { Y nt > Γ t } | Y nt − Γ t | = ( Y nt − Γ t ) + .Applying Itˆo’s formula to process n e a ( γ j ∧ t ) (cid:16) ( Y nγ j ∧ t − Γ γ j ∧ t ) + (cid:17) o t ∈ [0 ,T ] in (6.16) yields that e a ( γ j ∧ t ) (cid:16) ( Y nγ j ∧ t − Γ γ j ∧ t ) + (cid:17) + 2 a Z γ j γ j ∧ t e as (cid:0) ( Y ns − Γ s ) + (cid:1) ds + Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds = e aγ j (cid:16) ( Y nγ j − Γ γ j ) + (cid:17) +2 Z γ j γ j ∧ t { Y ns > Γ s } e as ( Y ns − Γ s ) + ( g ( s, Y ns , Z ns ) ds − dJ ns − Z ns dB s )+2 Z γ j γ j ∧ t { Y ns > Γ s } e as ( Y ns − Γ s ) + d Γ s − Z γ j γ j ∧ t e as ( Y ns − Γ s ) + d L ns , t ∈ [0 , T ] . (6.19) .2 Proofs of the results in Section 5 dt ⊗ d P − a.s. | g ( t, Y nt , Z nt ) | ≤ | g ( t, Y nt , | + | g ( t, Y nt , Z nt ) − g ( t, Y nt , | ≤ h t + κ | Y nt | + κ | Z nt | ≤ h t + κ | Y nt − Γ t | + κ Γ t + κ | Z nt | , it holds dt ⊗ d P − a.s. that { Y nt > Γ t } ( Y nt − Γ t ) + g ( t, Y nt , Z nt ) ≤ ( Y nt − Γ t ) + h t +( κ +2 κ ) (cid:0) ( Y nt − Γ t ) + (cid:1) + 14 { Y nt > Γ t } Γ T + 14 { Y nt > Γ t } | Z nt | . Set Ψ nt := sup s ∈ [ t,T ] ( Y ns − Γ s ) + , t ∈ [0 , T ]. It then follows from (6.19) that e a ( γ j ∧ t ) (cid:18)(cid:16) Y nγ j ∧ t − Γ γ j ∧ t (cid:17) + (cid:19) + 12 Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ≤ e aT (cid:16) ( Y nγ j ) + (cid:17) +2 e aT Ψ nt Z Tt h s ds + 12 T e aT Γ T + 2 e aT Ψ nt Γ T − Z γ j γ j ∧ t { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s ≤ e aT (cid:16) ( Y nγ j ) + (cid:17) +(Ψ nt ) + C Z T h s ds ! + C Γ T + (cid:12)(cid:12)(cid:12)(cid:12) Z γ j γ j ∧ t { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s (cid:12)(cid:12)(cid:12)(cid:12) , t ∈ [0 , T ] , Taking powers of order α/ α/ − e αa ( γ j ∧ t ) (cid:18)(cid:16) Y nγ j ∧ t − Γ γ j ∧ t (cid:17) + (cid:19) α + 12 Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ! α/ ≤ e αaT (cid:16) ( Y nγ j ) + (cid:17) α +(Ψ nt ) α + C α Z T h s ds ! 
α + C α Γ αT + (cid:12)(cid:12)(cid:12)(cid:12) Z Tt { s ≤ γ j } { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s (cid:12)(cid:12)(cid:12)(cid:12) α/ , t ∈ [0 , T ] . (6.20)Let t ∈ [0 , T ]. For any A ∈ F t , since A (cid:12)(cid:12)(cid:12)(cid:12) Z Tt { s ≤ γ j } { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s (cid:12)(cid:12)(cid:12)(cid:12) α/ = (cid:12)(cid:12)(cid:12)(cid:12) Z Tt A { s ≤ γ j } { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s (cid:12)(cid:12)(cid:12)(cid:12) α/ = (cid:12)(cid:12)(cid:12)(cid:12) Z T A { t ≤ s ≤ γ j } { Y ns > Γ s } e as ( Y ns − Γ s ) + Z ns dB s (cid:12)(cid:12)(cid:12)(cid:12) α/ , multiplying A to (6.20) and taking expectation, we can deduce from the Burkholder-Davis-Gundy inequality and(1.6)12 E A Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ! α/ ≤ C α E " A (cid:16) ( Y nγ j ) + (cid:17) α + A (Ψ nt ) α + A Z T h s ds ! α + A Γ αT + (cid:18) Z T A { t ≤ s ≤ γ j } { Y ns > Γ s } e as (cid:0) ( Y ns − Γ s ) + (cid:1) | Z ns | ds (cid:19) α/ ≤ C α E A + A ( Y nγ j ) + + A (Ψ nt ) α + A Z T h s ds + A Γ T + A (Ψ nt ) α/ · Z Tt { s ≤ γ j } { Y ns > Γ s } e as | Z ns | ds ! α/ ≤ C α E " A + A ( Y nγ j ) + + A (Ψ nt ) α + A Z T h s ds + A Γ T + 14 E A Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ! α/ . Since E (cid:20)(cid:16)R γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds (cid:17) α/ (cid:21) ≤ e αaT j α/ and since E [(Ψ n ) α ] = E " sup t ∈ [0 ,T ] (cid:0) ( Y nt − Γ t ) + (cid:1) α ≤ E " sup t ∈ [0 ,T ] (cid:0) ( Y nt ) + (cid:1) α ≤ k Y n k S α < ∞ , RBSDEs with Integrable Parameters 16letting A vary over F t yields that E Z γ j γ j ∧ t { Y ns > Γ s } e as | Z ns | ds ! α/ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) F t ≤ C α E " Y nγ j ) + +(Ψ nt ) α + Z T h s ds +Γ T (cid:12)(cid:12)(cid:12)(cid:12) F t . 
Then we see from (6.18) that
\[ \big( Y^n_{\gamma_j \wedge t} - \Gamma_{\gamma_j \wedge t} \big)^+ \le e^{a(\gamma_j \wedge t)} \big( Y^n_{\gamma_j \wedge t} - \Gamma_{\gamma_j \wedge t} \big)^+ \le C_\alpha\, E\Big[ (Y^n_{\gamma_j})^+ + (\Psi^n_0)^\alpha + \int_0^T h_s\, ds + \Gamma_T \,\Big|\, \mathcal{F}_t \Big], \quad P\text{-a.s.} \tag{6.21} \]
The uniform integrability of $\{Y^n_\gamma\}_{\gamma \in \mathcal{T}}$ implies that of $\{(Y^n_\gamma)^+\}_{\gamma \in \mathcal{T}}$. As $Z^n \in \bigcap_{p \in (0,1)} \mathbb{H}^{2,p} \subset \mathbb{H}^{2,0}$, $\{\gamma_j\}_{j \in \mathbb{N}}$ is stationary. So letting $j \to \infty$ in (6.21), one can deduce from the continuity of the process $Y^n$ that
\[ (Y^n_t)^+ \le \Gamma_t + (Y^n_t - \Gamma_t)^+ \le \Gamma_t + C_\alpha\, E\Big[ \xi^+ + (\Psi^n_0)^\alpha + \int_0^T h_s\, ds + \Gamma_T \,\Big|\, \mathcal{F}_t \Big] \le C_\alpha\, E\big[ \eta + (\Psi^n_0)^\alpha \big| \mathcal{F}_t \big], \quad P\text{-a.s.} \]
Then claim (6.14) follows from the continuity of the process $Y^{n,+}$ and of the process $\big\{ E[\eta + (\Psi^n_0)^\alpha|\mathcal{F}_t] \big\}_{t \in [0,T]}$. $\square$

Proof of Proposition 5.2: The proof is relatively lengthy; see our introduction for a sketch. We will defer the demonstration of some technicalities (those equations with starred labels) to the appendix.

(1) For any $n \in \mathbb{N}$, set $K^n_t := n \int_0^t \mathbf{1}_{\{\nu < s \le \tau\}} \big( Y^n_s - L_s \big)^-\, ds$, $t \in [0,T]$.

Proof of Proposition 5.3: The flat-off condition of reflected BSDEs implies that $P$-a.s.
\[ 0 \le \int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, dK^1_r = \int_t^s \mathbf{1}_{\{L^1_r = Y^1_r > Y^2_r\}}\, dK^1_r \le \int_t^s \mathbf{1}_{\{L^1_r > L^2_r\}}\, dK^1_r = 0, \quad \forall\, 0 \le t < s \le T. \]
It follows that $P$-a.s.
\[ \int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, \big( dK^1_r - dK^2_r \big) = - \int_t^s \mathbf{1}_{\{Y^1_r > Y^2_r\}}\, dK^2_r \le 0, \quad \forall\, 0 \le t < s \le T. \]
Then we can apply Proposition 3.2 over the period $[0,T]$ with $V^i = K^i$, $i = 1, 2$. $\square$

Proof of Theorem 5.1: (1) (existence) For any $n \in \mathbb{N}$, we define the function $g_n$ as in (1.10), which satisfies (H1)--(H5) since $L \in \mathfrak{S}$. In light of Proposition 3.1, BSDE$(\xi, g_n)$ admits a unique solution $(Y^n, Z^n) \in \bigcap_{p \in (0,1)} (\mathbb{S}^p \times \mathbb{H}^{2,p})$ such that $Y^n$ is of class (D). Also, Proposition 3.3 shows that for any $\omega \in \Omega$ except on a $P$-null set $\mathcal{N}$,
\[ Y^n_t(\omega) \le Y^{n+1}_t(\omega), \quad \forall t \in [0,T], \ \forall n \in \mathbb{N}. \]
(6.51)

We can let (6.51) hold for every $\omega \in \Omega$ by setting $Y^n_t(\omega) := \mathbf{1}_{\{\omega \in \mathcal{N}^c\}} Y^n_t(\omega)$, $(t,\omega) \in [0,T] \times \Omega$, $n \in \mathbb{N}$ (each modified $Y^n$ still belongs to $\bigcap_{p \in (0,1)} \mathbb{S}^p$, is of class (D) and satisfies BSDE$(\xi, g_n)$ with $Z^n$).

Applying Proposition 5.1 with $(Y^n, Z^n, J^n) = (Y^n, Z^n, 0)$, $n \in \mathbb{N}$, shows that the limit process $Y_t := \lim_{n \to \infty} \uparrow Y^n_t$, $t \in [0,T]$, is an $\mathbf{F}$-predictable process of class (D) satisfying $E\big[ \sup_{t \in [0,T]} |Y_t|^p \big] < \infty$ for all $p \in (0,1)$ and $\sup_{t \in [0,T]} \big( (Y^1_t)^- + Y^+_t \big) \le Y^1_* + Y_* < \infty$, $P$-a.s. As $Y_T = \lim_{n \to \infty} \uparrow Y^n_T = \xi \ge L_T$, $P$-a.s., applying Proposition 5.2 with $(\nu, \tau) = (0, T)$ yields that $Y \in \bigcap_{p \in (0,1)} \mathbb{S}^p$ solves RBSDE$(\xi, g, L)$ with some $(Z, K) \in \mathbb{H}^{2,0} \times \mathbb{K}$. Moreover, applying Lemma A.2 with $(\nu, \tau) = (0, T)$ and using H\"older's inequality show that
\[ E\Big[ \Big( \int_0^T |Z_s|^2\, ds \Big)^{p/2} \Big] + E[K_T^p] \le C_p\, E[(Y_*)^p] + C_p\, E\Big[ \Big( \int_0^T h_t\, dt \Big)^p \Big] < \infty, \quad \forall p \in (0,1). \]
Namely, $(Z, K) \in \bigcap_{p \in (0,1)} (\mathbb{H}^{2,p} \times \mathbb{K}^p)$.

(2) (uniqueness) Let $(Y^1, Z^1, K^1), (Y^2, Z^2, K^2) \in \bigcap_{p \in (0,1)} (\mathbb{S}^p \times \mathbb{H}^{2,p} \times \mathbb{K}^p)$ be two solutions of RBSDE$(\xi, g, L)$ such that $Y^1$, $Y^2$ are of class (D). We know from Proposition 5.3 that $P\{Y^1_t = Y^2_t, \forall t \in [0,T]\} = 1$, so it holds $P$-a.s. that
\[ \int_t^T g(s, Y^1_s, Z^1_s)\, ds + K^1_T - K^1_t - \int_t^T Z^1_s\, dB_s = \int_t^T g(s, Y^2_s, Z^2_s)\, ds + K^2_T - K^2_t - \int_t^T Z^2_s\, dB_s, \quad t \in [0,T]. \]
Comparing the martingale parts on both sides shows that $Z^1_t = Z^2_t$, $dt \otimes dP$-a.s. Then it follows that $P$-a.s.
\[ K^1_t = Y^1_0 - Y^1_t - \int_0^t g(s, Y^1_s, Z^1_s)\, ds + \int_0^t Z^1_s\, dB_s = Y^2_0 - Y^2_t - \int_0^t g(s, Y^2_s, Z^2_s)\, ds + \int_0^t Z^2_s\, dB_s = K^2_t, \quad t \in [0,T]. \]

(3) (proof of (5.1) and (5.2)) Fix $\nu \in \mathcal{T}$ and $\gamma \in \mathcal{T}_{\nu,T}$. We will simply denote $\tau^\sharp(\nu)$ by $\widehat{\tau}$. The uniform integrability of $\{Y_\gamma\}_{\gamma \in \mathcal{T}}$ implies that $Y_\gamma \in L^1(\mathcal{F}_\gamma)$, so we see from (A.2) that $P$-a.s.
\[ Y^{\gamma, Y_\gamma}_t = Y_\gamma + \int_t^\gamma g\big( s, Y^{\gamma, Y_\gamma}_s, Z^{\gamma, Y_\gamma}_s \big)\, ds - \int_t^\gamma Z^{\gamma, Y_\gamma}_s\, dB_s, \quad \forall t \in [\nu, \gamma]. \tag{6.52} \]
Since it holds $P$-a.s.
that Y t = Y γ + Z γt g (cid:0) s, Y s , Z s (cid:1) ds + K γ − K t − Z γt Z s dB s , ∀ t ∈ [ ν, γ ] , .3 Proof of Theorem 2.1 Y , Z , V ) = (cid:0) Y γ,Y γ , Z γ,Y γ , (cid:1) and ( Y , Z , V ) = ( Y, Z, K ) yields that P − a.s., Y γ,Y γ t ≤ Y t for any t ∈ [ ν, γ ]. In particular, E gν,γ [ Y γ ] = Y γ,Y γ ν ≤ Y ν , P − a.s. (6.53)As Y γ ≥ { γ For n ∈ N , we define function g n as in (1.10) which satisfies (H1) − (H5) since L ∈ S . Theorem 5.1 and Remark5.2 show that the following reflected BSDE with generator g n and upper obstacle U U t ≥ Y t = ξ + Z Tt g n ( s, Y s , Z s ) ds − J T + J t − Z Tt Z s dB s , t ∈ [0 , T ] , Z T ( U t − Y t ) dJ t = 0 . (6.56)admits a unique solution ( Y n , Z n , J n ) ∈ ∩ p ∈ (0 , ( S p × H ,p × K p ) such that Y n is of class (D). In light of Proposition5.3 and Remark 5.2, it holds for any ω ∈ Ω except on a P − null set N that Y nt ( ω ) ≤ Y n +1 t ( ω ) , ∀ t ∈ [0 , T ] , ∀ n ∈ N . (6.57)We can let (6.57) hold for any ω ∈ Ω by setting Y nt ( ω ) := { ω ∈N c } Y nt ( ω ), ( t, ω ) ∈ [0 , T ] × Ω, n ∈ N (cid:16) each modified Y n still belongs to ∩ p ∈ (0 , S p , of class (D) and satisfies (6.56) with ( Z n , J n ) (cid:17) . By Proposition 5.1, the limit process Y t := lim n →∞ ↑ Y nt , t ∈ [0 , T ] is an F − predictable process of class (D) that satisfies E " sup t ∈ [0 ,T ] | Y t | p < ∞ , ∀ p ∈ (0 , . (6.58)RBSDEs with Integrable Parameters 22Let ν ∈ T . For any n ∈ N , define a stopping time γ nν := inf { t ∈ [ ν, T ] : Y nt = U t }∧ T ∈ T . As it holds P − a.s. that Y nt < U t for any t ∈ (cid:2) ν, γ nν (cid:1) , we can deduce from the flat-off condition in (6.56) that P (cid:8) J nt = J nν , ∀ t ∈ [ ν, γ nν ] (cid:9) = 1. Itthen follows that P − a.s. 0 = J nγ nν − J nt = Y nγ nν − Y nt + Z γ nν t g n ( s, Y ns , Z ns ) ds − Z γ nν t Z ns dB s . (6.59)Clearly, γ nν is decreasing in n , and their limit γ ν := lim n →∞ ↓ γ nν ≥ ν is still a stopping time thanks to the right continuityof filtration F . 
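The two penalization schemes used in this section (an increasing one, with penalty $n(y - L_t)^-$ for the lower obstacle, and a decreasing one, with penalty $-n(y - U_t)^+$ for the upper obstacle as in (6.56)) can be illustrated numerically. The sketch below is only a hedged toy illustration, not the construction in the proof: it takes driver $g \equiv 0$ and no noise, so the penalized equation for a lower obstacle reduces to a backward ODE whose implicit Euler step has a closed form, and the penalized values increase in the penalization parameter toward the Snell envelope, mirroring the monotone convergence used above. The time grid, obstacle and terminal value are hypothetical choices.

```python
import math

def penalized_value(xi, L, T, n_pen):
    """Backward implicit Euler pass for the penalized equation
    Y_t = xi + int_t^T n (L_s - Y_s)^+ ds   (driver g = 0, deterministic toy case).
    Each implicit step solves y = y_next + n*dt*(L_k - y)^+ in closed form."""
    N = len(L) - 1
    dt = T / N
    y = xi
    for k in range(N - 1, -1, -1):
        # if y >= L_k the penalty vanishes; otherwise solve the linear equation
        y = max(y, (y + n_pen * dt * L[k]) / (1.0 + n_pen * dt))
    return y

T, N = 1.0, 200
grid = [k * T / N for k in range(N + 1)]
L = [math.sin(2 * math.pi * t) for t in grid]    # hypothetical lower obstacle
xi = 0.0                                         # hypothetical terminal value, xi >= L_T

values = [penalized_value(xi, L, T, n) for n in (10, 100, 10**4)]
snell = max([xi] + L)   # reflected solution at t = 0 for g = 0: the Snell envelope
# values increases in the penalization parameter and approaches snell from below
```

The monotonicity of `values` in the penalization parameter is the deterministic shadow of the comparison result (Proposition 5.3) that drives the monotone limits taken in the proofs above.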
We claim that Y γ ν = { γ ν = T } ξ + { γ ν 1) and since it holds P − a.s. that Y nt = Y nγ ν + Z γ ν t g n ( s, Y ns , Z ns ) ds − Z γ ν t Z ns dB s , ∀ t ∈ [ ν, γ ν ] (6.61)for any n ∈ N by (6.59), applying Proposition 5.2 to (cid:8) ( Y n , Z n ) (cid:9) n ∈ N yields that process (cid:8) Y ν ∨ ( γ ν ∧ t ) (cid:9) t ∈ [0 ,T ] has P − a.s. continuous paths and there exist ( Z ν , K ν ) ∈ H , × K such that P − a.s. L t ≤ Y t = Y γ ν + Z γ ν t g ( s, Y s , Z νs ) ds + K νγ ν − K νt − Z γ ν t Z νs dB s , ∀ t ∈ [ ν, γ ν ] , Z γ ν ν ( Y t − L t ) dK νt = 0 . (6.62)Since E [ | Y ν | ] < ∞ by the uniform integrability of { Y ζ } ζ ∈T , Lemma A.2, H¨older’s inequality and (6.58) show that E "(cid:18)Z τν | Z t | dt (cid:19) p/ ≤ C p E (cid:20) sup t ∈ [ ν,τ ] | Y t | p (cid:21) + C p (cid:18) E Z τν h t dt (cid:19) p < ∞ , ∀ p ∈ (0 , . (6.63) (1b) (cid:0) decreasing penalization scheme (cid:1) Similar to g L discussed in Remark 1.3 (4), g U ( t, ω, y ) := ( y − U t ( ω )) + , ( t, ω, y ) ∈ [0 , T ] × Ω × R is clearly a P ⊗ B ( R ) / B ( R ) − measurable function satisfying (H2) − (H4). For any n ∈ N , we see from Remark 1.3 (3) that e g n ( t, ω, y, z ) := g ( t, ω, y, z ) − n ( y − U t ( ω )) + , ∀ ( t, ω, y, z ) ∈ [0 , T ] × Ω × R × R d defines a generator, and Theorem 5.1 shows that RBSDE (cid:0) ξ, e g n , L (cid:1) admits a unique solution (cid:0) e Y n , e Z n , e K n (cid:1) ∈ ∩ p ∈ (0 , ( S p × H ,p × K p ) such that e Y n is of class (D). Since e g n is decreasing in n , Proposition 5.3 shows that P − a.s. e Y nt ≥ e Y n +1 t , ∀ t ∈ [0 , T ] , ∀ n ∈ N . (6.64)As in (6.57), we can assume that (6.64) holds everywhere on Ω.Set (cid:0)e L, e U (cid:1) := ( − U, − L ) ∈ S × S − . For any n ∈ N , (cid:0) b Y n , b Z n , b J n (cid:1) := (cid:0) − e Y n , − e Z n , − e K n (cid:1) satisfies that P − a.s. 
e U t = − L t ≥ b Y nt = − ξ − Z Tt g (cid:0) s, e Y ns , e Z ns (cid:1) ds + n Z Tt (cid:0) e Y ns − U s (cid:1) + ds − e K nT + e K nt + Z Tt e Z ns dB s = b Y nT + Z Tt g − (cid:0) s, b Y ns , b Z ns (cid:1) ds + n Z Tt (cid:0) b Y ns − e L s (cid:1) − ds + b J nT − b J nt − Z Tt b Z ns dB s , t ∈ [0 , T ] . (6.65)Since g − is a generator by Remark 1.3 (1), applying Proposition 5.1 to n(cid:0) b Y n , b Z n , b J n (cid:1)o n ∈ N yields that b Y t := lim n →∞ ↑ b Y nt , t ∈ [0 , T ] is an F − predictable process of class (D) that satisfies E " sup t ∈ [0 ,T ] (cid:12)(cid:12) b Y t (cid:12)(cid:12) p < ∞ , ∀ p ∈ (0 , .3 Proof of Theorem 2.1 ν ∈ T . The stopping times τ nν := inf (cid:8) t ∈ [ ν, T ] : b Y nt = e U t (cid:9) ∧ T = inf (cid:8) t ∈ [ ν, T ] : e Y nt = L t (cid:9) ∧ T ∈ T is decreasing in n . Analogous to (6.60), τ ν := lim n →∞ ↓ τ nν ≥ ν is still a stopping time that satisfies b Y τ ν = − { τ ν = T } ξ + { τ ν 0) locallysolves the doubly reflected BSDE over the stochastic interval [[ ν, γ ν ]] and ( Y, e Z ν , , e J ν ) = ( e Y , e Z ν , , e J ν ) locally solvesthe doubly reflected BSDE over the stochastic interval [[ ν, τ ν ]]. (1d) (cid:0) construction of a solution via pasting (cid:1) For any n ∈ N and t ∈ [0 , T ], set I nt := [( t − − n ) ∨ , ( t + 2 − n ) ∧ T ]. Similar to (A.19), we can deduce from thecontinuity of Y n ’s, e Y n ’s and (6.70) that P − a.s.lim n →∞ ↑ inf s ∈I nt Y s = lim n →∞ ↑ inf s ∈I nt lim m →∞ ↑ Y ms ≥ lim m →∞ ↑ lim n →∞ ↑ inf s ∈I nt Y ms = lim m →∞ ↑ Y mt = Y t = e Y t = lim m →∞ ↓ e Y mt = lim m →∞ ↓ lim n →∞ ↓ sup s ∈I nt e Y ms ≥ lim n →∞ ↓ sup s ∈I nt lim m →∞ ↓ e Y ms = lim n →∞ ↓ sup s ∈I nt e Y s = lim n →∞ ↓ sup s ∈I nt Y s ≥ lim n →∞ ↑ inf s ∈I nt Y s , ∀ t ∈ [0 , T ] , which shows that Y is a continuous process. So Y ∈ ∩ p ∈ (0 , S p by (6.58).Let ν := 0, we recursively set stopping times ν ′ ℓ := γ ν ℓ , ν ℓ +1 := τ ν ′ ℓ , ℓ ∈ N , and define processes Z t := X ℓ ∈ N { ν ℓ 2. 
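Part (2) of the main proof identifies $\tau^*_\nu$ and $\gamma^*_\nu$ (the first times $Y$ meets $L$, respectively $U$) as a saddle point of the Dynkin game. In a deterministic toy case with $g \equiv 0$, the dynamic-programming recursion $y_k = \min\big(U_k, \max(L_k, y_{k+1})\big)$ plays the role of the doubly reflected equation, and the saddle-point property can be checked by brute force over all (deterministic) stopping times. This is a hedged illustration only; the obstacles and terminal value below are hypothetical, and the payoff convention gives the lower obstacle priority on ties, matching $R(\tau,\gamma) = L_\tau \mathbf{1}_{\{\tau \le \gamma, \tau < T\}} + U_\gamma \mathbf{1}_{\{\gamma < \tau\}} + \xi \mathbf{1}_{\{\tau \wedge \gamma = T\}}$.

```python
import math

def game_value(xi, L, U, N):
    """Backward recursion for the deterministic Dynkin game with g = 0:
    y_k = min(U_k, max(L_k, y_{k+1})), y_N = xi  (assumes L_k < U_k, L_N <= xi <= U_N)."""
    y = [0.0] * (N + 1)
    y[N] = xi
    for k in range(N - 1, -1, -1):
        y[k] = min(U[k], max(L[k], y[k + 1]))
    return y

def payoff(tau, gam, xi, L, U, N):
    """R(tau, gam): L_tau if tau <= gam and tau < N; U_gam if gam < tau; xi at (N, N)."""
    if tau <= gam:
        return L[tau] if tau < N else xi
    return U[gam]

N = 12
L = [math.sin(k) - 0.2 for k in range(N + 1)]   # hypothetical lower obstacle
U = [math.cos(k) + 1.2 for k in range(N + 1)]   # hypothetical upper obstacle, L < U
xi = 0.0                                        # satisfies L[N] <= xi <= U[N]
y = game_value(xi, L, U, N)

tau_star = next(k for k in range(N + 1) if k == N or y[k] <= L[k])  # first time y = L
gam_star = next(k for k in range(N + 1) if k == N or y[k] >= U[k])  # first time y = U
# saddle point: R(tau, gam_star) <= R(tau_star, gam_star) = y[0] <= R(tau_star, gam)
```

A brute-force scan over all pairs $(\tau, \gamma)$ of deterministic stopping times confirms that the upper and lower values coincide with `y[0]`, the discrete analogue of (2.1).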
Similar to (6.48), we can deduce from (6.69), (6.62) and (6.70) that P − a.s. K t − J t = ℓ − X i =1 (cid:16) K ν i ν ′ i ∧ t − K ν i ν i ∧ t (cid:17) − ℓ − X i =1 (cid:16) e J ν ′ i ν i +1 ∧ t − e J ν ′ i ν ′ i ∧ t (cid:17) = ℓ − X i =1 − Y ν ′ i ∧ t + Y ν i ∧ t − Z ν ′ i ∧ tν i ∧ t g (cid:0) s, Y s , Z ν i s (cid:1) ds + Z ν ′ i ∧ tν i ∧ t Z ν i s dB s − e Y ν i +1 ∧ t + e Y ν ′ i ∧ t − Z ν i +1 ∧ tν ′ i ∧ t g (cid:0) s, e Y s , e Z ν ′ i s (cid:1) ds + Z ν i +1 ∧ tν ′ i ∧ t e Z ν ′ i s dB s ! = ℓ − X i =1 (cid:18) − Y ν i +1 ∧ t + Y ν i ∧ t − Z ν i +1 ∧ tν i ∧ t g (cid:0) s, Y s , Z s (cid:1) ds + Z ν i +1 ∧ tν i ∧ t Z s dB s (cid:19) = − Y t + Y − Z t g (cid:0) s, Y s , Z s (cid:1) ds + Z t Z s dB s , ∀ t ∈ [0 , ν ℓ ] . It follows that P − a.s. Y t = Y ν ℓ + Z ν ℓ t g (cid:0) s, Y s , Z s (cid:1) ds + K ν ℓ − K t − J ν ℓ + J t − Z ν ℓ t Z s dB s , ∀ t ∈ [0 , ν ℓ ] . (6.77)Since the increment of K over [ ν i , ν ′ i ] is that of K ν i over [ ν i , ν ′ i ] ( K is constant over [ ν ′ i , ν i +1 ]) and since the incrementof J over [ ν ′ i , ν i +1 ] is that of J ν ′ i over [ ν ′ i , ν i +1 ] ( J is constant over [ ν i , ν ′ i ]), (6.69), (6.62) and (6.70) again imply that Z ν ℓ ( Y t − L t ) dK t = ℓ − X i =1 Z ν ′ i ν i ( Y t − L t ) dK t = ℓ − X i =1 Z ν ′ i ν i ( Y t − L t ) dK ν i t = 0 , (6.78)and Z ν ℓ ( U t − Y t ) dJ t = Z ν ℓ (cid:0) U t − e Y t (cid:1) dJ t = ℓ − X i =1 Z ν i +1 ν ′ i (cid:0) U t − e Y t (cid:1) dJ t = ℓ − X i =1 Z ν i +1 ν ′ i (cid:0) U t − e Y t (cid:1) dJ ν ′ i t = 0 , P − a.s. (6.79)Clearly, Y T = lim n →∞ ↑ Y nT = ξ , P − a.s. Letting ℓ → ∞ in (6.77), (6.78) and (6.79), we see from (6.75) and (6.70) that( Y, Z, K, J ) solves DRBSDE( ξ, g, L, U ). (2) (proof of (2.1) − (2.3) ) Fix ν ∈ T . We will simply denote τ ∗ ν by b τ and γ ∗ ν by b γ . Since it holds P − a.s. that Y t > L t , ∀ t ∈ (cid:2) ν, b τ (cid:1) and Y t < U t , ∀ t ∈ (cid:2) ν, b γ (cid:1) , the flat-off conditions in DRBSDE( ξ, g, L, U ) implies that P − a.s. 
\[ K_t = K_\nu, \ \forall t \in [\nu, \widehat{\tau}] \quad \text{and} \quad J_t = J_\nu, \ \forall t \in [\nu, \widehat{\gamma}]. \tag{6.80} \]
Let $\tau, \gamma \in \mathcal{T}_{\nu,T}$. We see from (6.80) that $P$-a.s.
\[ Y_t = Y_{\widehat{\tau} \wedge \gamma} + \int_t^{\widehat{\tau} \wedge \gamma} g(s, Y_s, Z_s)\, ds - J_{\widehat{\tau} \wedge \gamma} + J_t - \int_t^{\widehat{\tau} \wedge \gamma} Z_s\, dB_s, \quad \forall t \in [\nu, \widehat{\tau} \wedge \gamma]. \]
As $Y_{\widehat{\tau} \wedge \gamma} \in L^1(\mathcal{F}_{\widehat{\tau} \wedge \gamma})$ by the uniform integrability of $\{Y_{\gamma'}\}_{\gamma' \in \mathcal{T}}$, (A.2) shows that $P$-a.s.
\[ Y^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_t = Y_{\widehat{\tau} \wedge \gamma} + \int_t^{\widehat{\tau} \wedge \gamma} g\big( s, Y^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_s, Z^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_s \big)\, ds - \int_t^{\widehat{\tau} \wedge \gamma} Z^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_s\, dB_s, \quad \forall t \in [\nu, \widehat{\tau} \wedge \gamma]. \tag{6.81} \]
Applying Proposition 3.2 with $(Y^1, Z^1, V^1) = (Y, Z, -J)$ and $(Y^2, Z^2, V^2) = \big( Y^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}, Z^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}, 0 \big)$ yields that $P$-a.s., $Y_t \le Y^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_t$ for any $t \in [\nu, \widehat{\tau} \wedge \gamma]$. It follows that
\[ Y_\nu \le Y^{\widehat{\tau} \wedge \gamma, Y_{\widehat{\tau} \wedge \gamma}}_\nu = \mathcal{E}^g_{\nu, \widehat{\tau} \wedge \gamma}\big[ Y_{\widehat{\tau} \wedge \gamma} \big], \quad P\text{-a.s.} \tag{6.82} \]
Similarly, we can deduce that
\[ Y_\nu \ge \mathcal{E}^g_{\nu, \tau \wedge \widehat{\gamma}}\big[ Y_{\tau \wedge \widehat{\gamma}} \big], \quad P\text{-a.s.,} \tag{6.83} \]
proving (2.1).

\textbf{Lemma A.1.} Given $\tau \in \mathcal{T}$, let $\xi \in L^1(\mathcal{F}_\tau)$ and let $g^\tau(t,\omega,y,z) := \mathbf{1}_{\{t \le \tau(\omega)\}}\, g(t,\omega,y,z)$, $(t,\omega,y,z) \in [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d$, which is a generator. Then one has
\[ P\big\{ Y^{\tau,\xi}_t = Y^{\tau,\xi}_{\tau \wedge t}, \ \forall t \in [0,T] \big\} = 1 \quad \text{and} \quad Z^{\tau,\xi}_t = \mathbf{1}_{\{t \le \tau\}} Z^{\tau,\xi}_t, \ dt \otimes dP\text{-a.s.} \tag{A.1} \]
(see (4.1) for the notation $(Y^{\tau,\xi}, Z^{\tau,\xi})$). In particular, it holds $P$-a.s. that
\[ Y^{\tau,\xi}_t = \xi + \int_t^\tau g\big( s, Y^{\tau,\xi}_s, Z^{\tau,\xi}_s \big)\, ds - \int_t^\tau Z^{\tau,\xi}_s\, dB_s, \quad \forall t \in [0, \tau]. \tag{A.2} \]
Proof: Given $n \in \mathbb{N}$, we define a stopping time $\gamma_n := \inf\big\{ t \in [0,T] : \int_0^t |Z^{\tau,\xi}_s|^2\, ds > n \big\} \wedge T \in \mathcal{T}$.
(A.3)

Since $Y^{\tau,\xi}_{\tau \wedge \gamma_n} = Y^{\tau,\xi}_{\gamma_n} + \int_{\tau \wedge \gamma_n}^{\gamma_n} \mathbf{1}_{\{s \le \tau\}}\, g\big( s, Y^{\tau,\xi}_s, Z^{\tau,\xi}_s \big)\, ds - \int_{\tau \wedge \gamma_n}^{\gamma_n} Z^{\tau,\xi}_s\, dB_s = Y^{\tau,\xi}_{\gamma_n} - \int_{\tau \wedge \gamma_n}^{\gamma_n} Z^{\tau,\xi}_s\, dB_s$, $P$-a.s., taking the conditional expectation $E[\cdot|\mathcal{F}_{\tau \wedge \gamma_n}]$ yields that $P$-a.s.
\[ Y^{\tau,\xi}_{\tau \wedge \gamma_n} = E\big[ Y^{\tau,\xi}_{\gamma_n} \big| \mathcal{F}_{\tau \wedge \gamma_n} \big] = \mathbf{1}_{\{\tau \le \gamma_n\}}\, E\big[ Y^{\tau,\xi}_{\gamma_n} \big| \mathcal{F}_\tau \big] + \mathbf{1}_{\{\tau > \gamma_n\}}\, Y^{\tau,\xi}_{\gamma_n}. \tag{A.4} \]
As $Z^{\tau,\xi} \in \bigcap_{p \in (0,1)} \mathbb{H}^{2,p} \subset \mathbb{H}^{2,0}$, $\{\gamma_n\}_{n \in \mathbb{N}}$ is stationary. Letting $n \to \infty$, we can deduce from the uniform integrability of $\{Y^{\tau,\xi}_\gamma\}_{\gamma \in \mathcal{T}}$ that
\[ Y^{\tau,\xi}_\tau = \mathbf{1}_{\{\tau \le T\}}\, E\big[ Y^{\tau,\xi}_T \big| \mathcal{F}_\tau \big] + \mathbf{1}_{\{\tau > T\}}\, Y^{\tau,\xi}_T = E\big[ Y^{\tau,\xi}_T \big| \mathcal{F}_\tau \big] = E[\xi|\mathcal{F}_\tau] = \xi, \quad P\text{-a.s.} \]
Then it follows that $P$-a.s.
\begin{align}
Y^{\tau,\xi}_{\tau \wedge t} &= Y^{\tau,\xi}_\tau + \int_{\tau \wedge t}^{\tau} \mathbf{1}_{\{s \le \tau\}}\, g\big( s, Y^{\tau,\xi}_s, Z^{\tau,\xi}_s \big)\, ds - \int_{\tau \wedge t}^{\tau} Z^{\tau,\xi}_s\, dB_s \tag{A.5} \\
&= \xi + \int_t^T \mathbf{1}_{\{s \le \tau\}}\, g\big( s, Y^{\tau,\xi}_{\tau \wedge s}, \mathbf{1}_{\{s \le \tau\}} Z^{\tau,\xi}_s \big)\, ds - \int_t^T \mathbf{1}_{\{s \le \tau\}} Z^{\tau,\xi}_s\, dB_s, \quad t \in [0,T], \nonumber
\end{align}
which shows that $\big\{ \big( Y^{\tau,\xi}_{\tau \wedge t}, \mathbf{1}_{\{t \le \tau\}} Z^{\tau,\xi}_t \big) \big\}_{t \in [0,T]}$ also solves BSDE$(\xi, g^\tau)$. Clearly, $\{Y^{\tau,\xi}_{\tau \wedge t}\}_{t \in [0,T]}$ is an $\mathbf{F}$-adapted continuous process such that $E\big[ \sup_{t \in [0,T]} |Y^{\tau,\xi}_{\tau \wedge t}|^p \big] \le E\big[ \sup_{t \in [0,T]} |Y^{\tau,\xi}_t|^p \big] < \infty$ for any $p \in (0,1)$ and that $\{Y^{\tau,\xi}_{\tau \wedge \gamma}\}_{\gamma \in \mathcal{T}}$ is uniformly integrable.
As $\{\mathbf{1}_{\{t \le \tau\}}\}_{t \in [0,T]}$ is an $\mathbf{F}$-adapted c\`agl\`ad process (and thus $\mathbf{F}$-predictable), we see that $\{\mathbf{1}_{\{t \le \tau\}} Z^{\tau,\xi}_t\}_{t \in [0,T]}$ is an $\mathbf{F}$-predictable process satisfying $E\big[ \big( \int_0^T \mathbf{1}_{\{t \le \tau\}} |Z^{\tau,\xi}_t|^2\, dt \big)^{p/2} \big] \le E\big[ \big( \int_0^T |Z^{\tau,\xi}_t|^2\, dt \big)^{p/2} \big] < \infty$ for any $p \in (0,1)$. By the uniqueness of the solution of BSDE$(\xi, g^\tau)$, (A.1) holds. Moreover, (A.5) can alternatively be expressed as: $P$-a.s.
\[ Y^{\tau,\xi}_{\tau \wedge t} = \xi + \int_{\tau \wedge t}^{\tau} g\big( s, Y^{\tau,\xi}_s, Z^{\tau,\xi}_s \big)\, ds - \int_{\tau \wedge t}^{\tau} Z^{\tau,\xi}_s\, dB_s, \quad t \in [0,T], \]
which leads to (A.2). $\square$

\textbf{Lemma A.2.} Let $g : [0,T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a $\mathscr{P} \otimes \mathscr{B}(\mathbb{R}) \otimes \mathscr{B}(\mathbb{R}^d) / \mathscr{B}(\mathbb{R})$-measurable function satisfying (H1) and (H4). Given $\nu, \tau \in \mathcal{T}$ with $\nu \le \tau$, let $(Y, Z, K) \in \mathbb{S} \times \widetilde{\mathbb{H}}^{2,0} \times \mathbb{K}$ satisfy that $P$-a.s.
\[ Y_t = Y_\tau + \int_t^\tau g(s, Y_s, Z_s)\, ds + K_\tau - K_t - \int_t^\tau Z_s\, dB_s, \quad \forall t \in [\nu, \tau]. \tag{A.6} \]
If $E[|Y_\nu|] < \infty$, then for any $p \in (0, \infty)$,
\[ E\Big[ \Big( \int_\nu^\tau |Z_t|^2\, dt \Big)^{p/2} \Big] + E\big[ (K_\tau - K_\nu)^p \big] \le C_p\, E\Big[ \sup_{t \in [\nu,\tau]} |Y_t|^p \Big] + C_p\, E\Big[ \Big( \int_\nu^\tau h_t\, dt \Big)^p \Big]. \]
Proof: Let $E[|Y_\nu|] < \infty$ and fix $p \in (0, \infty)$. By the Burkholder--Davis--Gundy inequality, there exists $c_p > 0$ such that for every continuous local martingale $M$ null at $0$,
\[ E[(M_*)^p] \le c_p\, E\big[ \langle M \rangle_T^{p/2} \big] \quad \text{and} \quad E\big[ (M_*)^{p/2} \big] \le c_p\, E\big[ \langle M \rangle_T^{p/4} \big]. \tag{A.7} \]
Set $\Psi := \sup_{t \in [\nu,\tau]} |Y_t|$ and suppose $E[\Psi^p] < \infty$; otherwise the result trivially holds. We let $n \in \mathbb{N}$ and define a stopping time $\tau_n := \inf\{t \in [\nu,\tau] : \int_\nu^t |Z_s|^2\, ds > n\} \wedge \tau \in \mathcal{T}$. It is clear that $\nu \le \tau_n \le \tau$.
Since (H1), (H4) and H\"older's inequality imply that
\begin{align*}
K_{\tau_n} - K_\nu &= Y_\nu - Y_{\tau_n} - \int_\nu^{\tau_n} g(t, Y_t, Z_t)\, dt + \int_\nu^{\tau_n} Z_t\, dB_t \\
&\le 2\Psi + \int_\nu^{\tau_n} \big( h_t + \kappa |Y_t| + \kappa |Z_t| \big)\, dt + \Big| \int_0^T \mathbf{1}_{\{\nu \le s \le \tau_n\}} Z_s\, dB_s \Big| \\
&\le (2 + \kappa T)\Psi + \int_\nu^\tau h_t\, dt + \kappa \sqrt{T} \Big( \int_\nu^{\tau_n} |Z_t|^2\, dt \Big)^{1/2} + \sup_{t \in [0,T]} \Big| \int_0^t \mathbf{1}_{\{\nu \le s \le \tau_n\}} Z_s\, dB_s \Big|, \quad P\text{-a.s.},
\end{align*}
taking the expectation of the $p$-th power, we can deduce from (1.5) and (A.7) that
\[ E\big[ (K_{\tau_n} - K_\nu)^p \big] \le (1 \vee 3^{p-1}) \Big\{ (2 + \kappa T)^p\, E[\Psi^p] + E\Big[ \Big( \int_\nu^\tau h_t\, dt \Big)^p \Big] + \big( \kappa^p T^{p/2} + c_p \big)\, E\Big[ \Big( \int_\nu^{\tau_n} |Z_t|^2\, dt \Big)^{p/2} \Big] \Big\}. \tag{A.8} \]
As $E[|Y_\nu|] < \infty$, Corollary 3.1 implies that there exists a unique $\widetilde{Z} \in \bigcap_{p' \in (0,1)} \mathbb{H}^{2,p'}$ such that $P\big\{ E[Y_\nu|\mathcal{F}_t] = E[Y_\nu] + \int_0^t \widetilde{Z}_s\, dB_s, \ \forall t \in [0,T] \big\} = 1$. Similarly to (6.3), (A.6) shows that $P$-a.s. $\widetilde{Y}_t := E[Y_\nu|\mathcal{F}_{\nu \wedge t}] + Y_{\nu \vee (\tau \wedge t)} - Y_\nu$, $t \in [0,T]$, is a continuous semimartingale. Applying It\^o's formula to $e^{2at} \widetilde{Y}_t^2$ (with a constant $a$ depending only on $\kappa$) and using (H1), (H4) and Young's inequality, one obtains for any $\delta > 0$ that $P$-a.s.
\[ \int_\nu^{\tau_n} |Z_t|^2\, dt \le \int_\nu^{\tau_n} e^{2at} |Z_t|^2\, dt \le \Big( 4 + \frac{4}{\delta} \Big) e^{2aT} \Psi^2 + 2 \Big( \int_\nu^\tau h_t\, dt \Big)^2 + \delta\, (K_{\tau_n} - K_\nu)^2 + 4 \sup_{t \in [0,T]} \Big| \int_0^t \mathbf{1}_{\{\nu \le s \le \tau_n\}} e^{2as}\, \widetilde{Y}_s Z_s\, dB_s \Big|. \]
Taking the expectation of p/ − th power, we can deduce from (1.5) and (A.8) that E "(cid:18)Z τ n ν | Z t | dt (cid:19) p/ ≤ (1 ∨ p/ − ) ((cid:16) 4+ 4 δ (cid:17) p/ e apT E [Ψ p ]+2 p/ E (cid:20)(cid:18)Z τν h t dt (cid:19) p (cid:21) + δ p/ E [( K τ n − K ν ) p ]+4 p/ c p E "(cid:18)Z τ n ν e at | Y t | | Z t | dt (cid:19) p/ ≤ C p E [Ψ p ] + C p E (cid:20)(cid:18)Z τν h t dt (cid:19) p (cid:21) + 13 E "(cid:18)Z τ n ν | Z t | dt (cid:19) p/ + C p E " (Ψ) p/ (cid:18)Z τ n ν | Z t | dt (cid:19) p/ ≤ C p E [Ψ p ] + C p E (cid:20)(cid:18)Z τν h t dt (cid:19) p (cid:21) + 23 E "(cid:18)Z τ n ν | Z t | dt (cid:19) p/ . So E h(cid:0)R τ n ν | Z t | dt (cid:1) p/ i ≤ C p E [Ψ p ] + C p E (cid:2)(cid:0)R τν h t dt (cid:1) p (cid:3) , which together with (A.8) shows that E "(cid:18)Z τ n ν | Z t | dt (cid:19) p/ + E h(cid:0) K τ n − K ν (cid:1) p i ≤ C p E [Ψ p ] + C p E (cid:20)(cid:18)Z τν h t dt (cid:19) p (cid:21) . (A.11)As Z ∈ e H , , it holds for P − a.s. ω ∈ Ω that τ ( ω ) = τ N ω ( ω ) for some N ω ∈ N . Then letting n → ∞ in (A.11), we canapply the monotone convergence theorem to obtain the conclusion. (cid:3) Lemma A.3. Let X be an F − optional process with P − a.s. right upper semi-continuous paths (cid:0) i.e., for any ω ∈ Ω except a P − null set N X , X t ≥ lim s ց t X s , ∀ t ∈ [0 , T ) (cid:1) . If X ν ≤ X e ν , P − a.s. for any ν, e ν ∈ T with ν ≤ e ν , P − a.s., then X is an increasing process. Proof: Set D k := (cid:8) t ki := i k ∧ T (cid:9) ⌈ k T ⌉ i =0 , ∀ k ∈ N and D := ∪ k ∈ N D k . Given t ∈ [0 , T ), we define X t := lim n →∞ ↑ inf s ∈ Θ nt X s ,where Θ nt := D ∩ ( t, ( t + 2 − n ) ∧ T ]. Clearly,Θ nt = ∪ k>n Θ n,kt , where Θ n,kt := D k ∩ (cid:0) t, ( t + 2 − n ) ∧ T (cid:3) . (A.12)For any m, n ∈ N with m < n , since Θ nt is a countable subset of ( t, ( t + 2 − n ) ∧ T ], the random variable inf s ∈ Θ nt X s is clearly F ( t +2 − n ) ∧ T − measurable. So X t = lim n →∞ n>m ↑ inf s ∈ Θ nt X s ∈ F ( t +2 − m ) ∧ T . 
As m → ∞ , the right-continuity of thefiltration F shows that X t ∈ ∩ m ∈ N F ( t +2 − m ) ∧ T = F t + = F t . (A.13) (1) Additionally setting X T := X T ∈ F T , we first show the process X is F − progressively measurable. For any t ∈ [0 , T ), c ∈ R and n, k ∈ N with k > n , since it holds for i = 0 , · · · , ⌊ k t ⌋ and any s ∈ [ t ki , t ki +1 ) ∩ [0 , t ] thatΘ n,ki := Θ n,kt ki = { t kj : j = i + 1 , · · · , i + 2 k − n } = Θ n,ks ⊂ (cid:0) s, ( s + 2 − n ) ∧ T (cid:3) ⊂ (cid:0) , ( t + 2 − n ) ∧ T (cid:3) , we can deduce that n ( s, ω ) ∈ [0 , t ] × Ω : min r ∈ Θ n,ks X r ( ω ) ≥ c o = ⌊ k t ⌋ ∪ i =0 n ( s, ω ) ∈ (cid:0) [ t ki , t ki +1 ) ∩ [0 , t ] (cid:1) × Ω : min r ∈ Θ n,ks X r ( ω ) ≥ c o = ⌊ k t ⌋ ∪ i =0 n ( s, ω ) ∈ (cid:0) [ t ki , t ki +1 ) ∩ [0 , t ] (cid:1) × Ω : min r ∈ Θ n,ki X r ( ω ) ≥ c o = ⌊ k t ⌋ ∪ i =0 ∩ r ∈ Θ n,ki n ( s, ω ) ∈ (cid:0) [ t ki , t ki +1 ) ∩ [0 , t ] (cid:1) × Ω : X r ( ω ) ≥ c o = ⌊ k t ⌋ ∪ i =0 ∩ r ∈ Θ n,ki (cid:0) [ t ki , t ki +1 ) ∩ [0 , t ] (cid:1) ×{ X r ≥ c } ∈ B ([0 , t ]) ⊗F ( t +2 − n ) ∧ T . (A.14)RBSDEs with Integrable Parameters 30Now, let e t ∈ [0 , T ] and e c ∈ R . 
If e t = 0, then (A.13) shows that (cid:8) ( s, ω ) ∈ (cid:2) , e t (cid:3) × Ω : X s ( ω ) > e c (cid:9) = { }×{ X > e c } ∈ B ( { } ) ⊗F ; if e t > 0, for any m > m := l − ln e t ln 2 m , we can deduce from (A.14) and (A.12) that (cid:8) ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : X s ( ω ) > e c (cid:9) = n ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : lim n →∞ n>m ↑ inf r ∈ Θ ns X r ( ω ) > e c o = ∪ n>m n ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : inf r ∈ Θ ns X r ( ω ) > e c o = ∪ n>m ∪ ℓ ∈ N n ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : inf r ∈ Θ ns X r ( ω ) ≥ e c +1 /ℓ o = ∪ n>m ∪ ℓ ∈ N ∩ k>n n ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : min r ∈ Θ n,ks X r ( ω ) ≥ e c +1 /ℓ o ∈ B (cid:0)(cid:2) , e t − − m (cid:3)(cid:1) ⊗F e t , which together with (A.13) shows that (cid:8) ( s, ω ) ∈ (cid:2) , e t (cid:3) × Ω : X s ( ω ) > e c (cid:9) = n ( s, ω ) ∈ (cid:18) ∪ m>m (cid:2) , e t − − m (cid:3)(cid:19) × Ω : X s ( ω ) > e c o ∪ (cid:8) ( s, ω ) ∈ (cid:8)e t (cid:9) × Ω : X s ( ω ) > e c (cid:9) = (cid:18) ∪ m>m (cid:8) ( s, ω ) ∈ (cid:2) , e t − − m (cid:3) × Ω : X s ( ω ) > e c (cid:9)(cid:19) ∪ (cid:0)(cid:8)e t (cid:9) × (cid:8) X e t > e c (cid:9)(cid:1) ∈ B (cid:0)(cid:2) , e t (cid:3)(cid:1) ⊗F e t . So Λ := n E ⊂ R : (cid:8) ( s, ω ) ∈ (cid:2) , e t (cid:3) × Ω : X s ( ω ) ∈ E (cid:9) ∈ B (cid:0)(cid:2) , e t (cid:3)(cid:1) ⊗ F e t o contains all open sets of form ( e c, ∞ ), whichgenerates B ( R ). Clearly, Λ is a σ − field of R . It follows that B ( R ) ⊂ Λ, i.e. (cid:8) ( s, ω ) ∈ (cid:2) , e t (cid:3) × Ω : X s ( ω ) ∈ E (cid:9) ∈ B (cid:0)(cid:2) , e t (cid:3)(cid:1) ⊗F e t for any E ∈ B ( R ). Hence, X is F − progressively measurable. (2) Fix ℓ ∈ N . Since both X and X are F − progressively measurable, the Debut theorem shows that τ ℓ := inf { t ∈ [0 , T ] : X t ≤ X t − /ℓ } ∧ T. defines a stopping time, i.e. τ ℓ ∈ T . We claim that A ℓ := { τ ℓ < T } ∈ F T is a P − null set: Assume not , so A ℓ \N X isnot empty. 
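The Debut theorem is used here in the following standard form, which is the reason the progressive measurability of X and of the constructed process is established first (the filtration F is assumed to satisfy the usual conditions, as elsewhere in the paper):

```latex
% Debut theorem: if A \subset [0,T] \times \Omega is progressively measurable
% with respect to a filtration satisfying the usual conditions, then
D_A(\omega) := \inf\{\, t \in [0,T] : (t,\omega) \in A \,\}
% (with \inf \emptyset := +\infty) is a stopping time. Taking
A = \big\{ (t,\omega) : X_t(\omega) \le \overline{X}_t(\omega) - 1/\ell \big\}
% shows that \tau_\ell = D_A \wedge T belongs to \mathcal{T}.
```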
Let ω ∈ A ℓ \N X and set s := τ ℓ ( ω ). there exists { s i } i ∈ N ⊂ [ s, T ) with lim i →∞ ↓ s i = s such that X s i ( ω ) ≤ X s i ( ω ) − /ℓ, ∀ i ∈ N . (A.15)Given m ∈ N , we can find some b i = b i ( m ) ∈ N and b n = b n ( m ) ≥ m such that for any i ≥ b i and n ≥ b n ,( s i , ( s i + 2 − n ) ∧ T ] ⊂ ( s, ( s + 2 − m ) ∧ T ] and thusΘ ns i = (cid:18) ∪ k>n D k (cid:19) ∩ (cid:0) s i , ( s i + 2 − n ) ∧ T (cid:3) ⊂ (cid:18) ∪ k>m D k (cid:19) ∩ (cid:0) s, ( s + 2 − m ) ∧ T (cid:3) = Θ ms . It follows that inf r ∈ Θ ms X r ( ω ) ≤ inf r ∈ Θ nsi X r ( ω ). Letting n → ∞ , we see that inf r ∈ Θ ms X r ( ω ) ≤ X s i ( ω ). As i → ∞ , (A.15) andthe right upper semi-continuity of X · ( ω ) imply thatinf r ∈ Θ ms X r ( ω ) ≤ lim i →∞ X s i ( ω ) ≤ lim i →∞ X s i ( ω ) − /ℓ ≤ lim r ց s X r ( ω ) − /ℓ ≤ X s ( ω ) − /ℓ. Now, letting m → ∞ yields that X s ( ω ) ≤ X s ( ω ) − /ℓ , which shows that X τ ℓ ≤ X τ ℓ − /ℓ on A ℓ \N X . (A.16)The F − optional measurability of X implies that of the stopped process (cid:8) X τ ℓ ∧ t (cid:9) t ∈ [0 ,T ] (see e.g. Corollary 3.24of [33]), so X ℓt := { X τℓ ∧ t ≤ X t } , t ∈ [0 , T ] is also an F − optional process. Since X ℓν = { X τℓ ∧ ν ≤ X ν } = 1, P − a.s. for any ν ∈ T , the cross-section theorem (see Theorem IV.86 of [16]) shows that for any ω ∈ Ω except on a P − null set N ℓ , X ℓt ( ω ) = 1 or ( X τ ℓ ∧ t ) ( ω ) ≤ X t ( ω ) , ∀ t ∈ [0 , T ] . (A.17)Let ω ∈ A ℓ \ ( N X ∪ N ℓ ). As X ( τ ℓ ( ω ) , ω ) ≤ X ( t, ω ), ∀ t ∈ [ τ ℓ ( ω ) , T ] by (A.17), we can deduce from (A.16) that X ( τ ℓ ( ω ) , ω ) ≤ X ( τ ℓ ( ω ) , ω ) ≤ X ( τ ℓ ( ω ) , ω ) − /ℓ. . Appendix P ( A ℓ ) = P { X t ≤ X t − /ℓ for some t ∈ [0 , T ) } . 
Letting ℓ → ∞ yields that P { X t < X t , for some t ∈ [0 , T ) } = lim ℓ →∞ ↑ P { X t ≤ X t − /ℓ for some t ∈ [0 , T ) } = 0, which together with the rightupper semi-continuity of X shows that except on a P − null set N X t ≥ X t ≥ lim s ց t X s = lim n →∞ ↓ sup s ∈ ( t, ( t +2 − n ) ∧ T ] X s ≥ lim n →∞ ↓ sup s ∈ Θ nt X s ≥ lim n →∞ ↑ inf s ∈ Θ nt X s = X t , ∀ t ∈ [0 , T ) . To wit, it holds for any ω ∈ N c that X t ( ω ) = lim s ց ts ∈D∩ ( t,T ] X s ( ω ) , ∀ t ∈ [0 , T ) . (A.18)Set e N := N ∪ (cid:18) ∪ s,s ′ ∈D ,s The continuity of Y n ’s implies that for P − a.s. ω ∈ Ωlim s ց t Y s ( ω ) = lim n →∞ ↑ inf s ∈ ( t, ( t +2 − n ) ∧ T ] Y s ( ω ) = lim n →∞ ↑ inf s ∈ ( t, ( t +2 − n ) ∧ T ] lim m →∞ ↑ Y ms ( ω ) ≥ lim m →∞ ↑ lim n →∞ ↑ inf s ∈ ( t, ( t +2 − n ) ∧ T ] Y ms ( ω )= lim m →∞ ↑ lim s ց t Y ms ( ω ) = lim m →∞ ↑ Y mt ( ω ) = Y t ( ω ) , ∀ t ∈ (cid:2) ν ( ω ) , τ ( ω ) (cid:1) , (A.19)which shows that the process (cid:8) Y ν ∨ ( τ ℓ ∧ t ) (cid:9) t ∈ [0 ,T ] has P − a.s. right lower semi-continuous paths. It then follows from(6.32) that e K ℓ has P − a.s. right upper semi-continuous paths. (2) We next show that e K ℓγ is a weak limit of (cid:8) K nτ ℓ ∧ γ (cid:9) n ∈ N in L ( F T ) for any γ ∈ T . Let χ ∈ L ( F T ). In virtue of martingale representation theorem, there exists a unique Z χ ∈ H , such that P − a.s. M χt := E [ χ |F t ] = E [ χ ] + Z t Z χs dB s , ∀ t ∈ [0 , T ] . Set ζ = ζ ℓ := ν ∨ ( τ ℓ ∧ γ ) ∈ T and let n ∈ N . We define Υ ℓ,nt := K nν ∨ ( ζ ∧ t ) + Y ℓ,nν ∨ ( ζ ∧ t ) − Y ℓ,nν − (cid:16) e K ℓν ∨ ( ζ ∧ t ) + Y ν ∨ ( ζ ∧ t ) − Y ν (cid:17) , t ∈ [0 , T ]. As K nν = 0 by (6.22), one can deduce from (6.28) that P − a.s.Υ ℓ,nt = − Z ν ∨ ( ζ ∧ t ) ν (cid:16) g ( s, Y ℓ,ns , Z ns ) − g ( s, Y s , − e h ℓs (cid:17) ds + Z ν ∨ ( ζ ∧ t ) ν (cid:0) Z ns −Z ℓs (cid:1) dB s = − Z t { ν 1+ sup s ∈ [0 ,T ] | Y s | p + sup s ∈ [0 ,T ] | Y s | p + Z T h t dt . 
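The martingale representation theorem invoked above is the classical one for the augmented Brownian filtration:

```latex
% Martingale representation: for \chi \in L^2(\mathcal{F}_T) there is a unique
% predictable, square-integrable process Z^{\chi} such that, P-a.s.,
M^{\chi}_t := \mathbb{E}[\,\chi \mid \mathcal{F}_t\,]
  \;=\; \mathbb{E}[\chi] + \int_0^t Z^{\chi}_s\, dB_s , \qquad t \in [0,T].
% In particular t \mapsto M^{\chi}_t admits a continuous version, which is
% what the weak-limit argument for \widetilde{K}^{\ell} relies on.
```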
(A.32)Let j ∈ N and define a stopping time ζ nj := inf (cid:8) t ∈ [0 , T ] : R t | Z ns | ds > j (cid:9) ∧ T ∈ T . Since (6.59) shows that Y γ ν ∧ ζ nj ≥ Y nγ ν ∧ ζ nj = Y nγ nν ∧ ζ nj + Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( s, Y ns , Z ns ) ds + K nγ nν ∧ ζ nj − K nγ ν ∧ ζ nj − Z γ nν ∧ ζ nj γ ν ∧ ζ nj Z ns dB s ≥ Y nγ nν ∧ ζ nj + Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( s, Y ns , Z ns ) ds − Z γ nν ∧ ζ nj γ ν ∧ ζ nj Z ns dB s , P − a.s. , taking conditional expectation E h · (cid:12)(cid:12)(cid:12) F γ ν ∧ ζ nj i yields that P − a.s. Y γ ν ∧ ζ nj ≥ E " Y nγ nν ∧ ζ nj + Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( t, Y nt , Z nt ) dt (cid:12)(cid:12)(cid:12)(cid:12) F γ ν ∧ ζ nj = { γ ν ≥ ζ nj } E " Y nγ nν ∧ ζ nj + Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( t, Y nt , Z nt ) dt (cid:12)(cid:12)(cid:12)(cid:12) F ζ nj + { γ ν <ζ nj } E " Y nγ nν ∧ ζ nj + Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( t, Y nt , Z nt ) dt (cid:12)(cid:12)(cid:12)(cid:12) F γ ν := I n,j + I n,j . (A.33)As { γ ν ≥ ζ nj } ⊂ { γ nν ≥ ζ nj } , it holds P − a.s. that I n,j = E " { γ ν ≥ ζ nj } Y nγ nν ∧ ζ nj + { γ ν ≥ ζ nj } Z γ nν ∧ ζ nj γ ν ∧ ζ nj g ( t, Y nt , Z nt ) dt (cid:12)(cid:12)(cid:12)(cid:12) F ζ nj = E h { γ ν ≥ ζ nj } Y nζ nj (cid:12)(cid:12) F ζ nj i = { γ ν ≥ ζ nj } Y nζ nj . (A.34)Similar to (6.17), (H4), (H5), (1.5) and (1.6) imply that (cid:12)(cid:12) g ( t, Y nt , Z nt ) (cid:12)(cid:12) ≤ κ +(1+ κ ) h t +2 κ | Y nt | + κ | Z nt | α , dt ⊗ d P − a.s.It then follows from H¨older’s inequality that P − a.s. Z γ nν ∧ ζ nj γ ν ∧ ζ nj (cid:12)(cid:12) g ( s, Y ns , Z ns ) (cid:12)(cid:12) ds ≤ Z γ nν γ ν (cid:12)(cid:12) g ( s, Y ns , Z ns ) (cid:12)(cid:12) ds ≤ C Z γ nν γ ν (1+ h s + | Y ns | ) ds + κ ( γ nν − γ ν ) − α/ Z γ nν γ ν | Z ns | ds ! α/ (A.35) ≤ C Z T (1+ h s + | Y ns | ) ds + C α Z T | Z ns | ds ! α/ . 
(A.36)By Fubini’s Theorem and the uniform integrability of { Y nζ } ζ ∈T , E R T | Y ns | ds = R T E (cid:2) | Y ns | (cid:3) ds ≤ T sup s ∈ [0 ,T ] E (cid:2) | Y ns | (cid:3) < ∞ ,which together with Z n ∈ H ,α shows that the last term in (A.36) is integrable. As Z n ∈ ∩ p ∈ (0 , H ,p ⊂ H , showsthat (cid:8) ζ nj (cid:9) j ∈ N is stationary, it holds P − a.s. that lim j →∞ Y γ ν ∧ ζ nj = Y γ ν though we have not yet shown whether Y is acontinuous process. Letting j → ∞ in (A.33) and (A.34), we can deduce from the uniform integrability of { Y nζ } ζ ∈T and the conditional-expectation version of dominated convergence theorem that Y γ ν ≥ { γ ν = T } Y nT + lim j →∞ I n,j = { γ ν = T } ξ + { γ ν 0, with A nε := n E h R γ nν γ ν (cid:12)(cid:12) g ( s, Y ns , Z ns ) (cid:12)(cid:12) ds (cid:12)(cid:12)(cid:12) F γ ν i > ε o ∈ F γ ν , (A.35), H¨older’sinequality and (A.32) imply that P ( A nε ) ≤ ε E " A nε E (cid:20) Z γ nν γ ν (cid:12)(cid:12) g ( t, Y nt , Z nt ) (cid:12)(cid:12) dt (cid:12)(cid:12)(cid:12)(cid:12) F γ ν (cid:21) = 1 ε E " A nε Z γ nν γ ν (cid:12)(cid:12) g ( t, Y nt , Z nt ) (cid:12)(cid:12) dt ≤ C ε E Z γ nν γ ν (1+ h t + | Y nt | ) dt + κε E ( γ nν − γ ν ) − α/ Z γ nν γ ν | Z nt | dt ! α/ ≤ C ε E Z γ nν γ ν (cid:0) h t + | Y t | + | Y t | (cid:1) dt + κε n E h ( γ nν − γ ν ) (2 − α ) e α e α − α ) i o − α/ e α ( E "(cid:18) Z γ nν γ ν | Z nt | dt (cid:19) e α/ α/ e α ≤ C ε E Z γ nν γ ν (cid:0) h t + | Y t | + | Y t | (cid:1) dt + C α ε n E h ( γ nν − γ ν ) (2 − α ) e α e α − α ) i o − α/ e α ( E " 1+ sup s ∈ [0 ,T ] | Y t | e α + sup s ∈ [0 ,T ] | Y t | e α + Z T h t dt α/ e α . 
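The estimate of P(A^n_ε) above combines Markov's inequality with the tower property of conditional expectations; schematically, for a nonnegative random variable η, a sub-σ-field G, and the event A := {E[η | G] > ε} ∈ G:

```latex
P(A) \;\le\; \frac{1}{\varepsilon}\,
  \mathbb{E}\big[\mathbf{1}_A\,\mathbb{E}[\,\eta \mid \mathcal{G}\,]\big]
 \;=\; \frac{1}{\varepsilon}\,\mathbb{E}\big[\mathbf{1}_A\,\eta\big] ,
% the equality using A \in \mathcal{G}. Here \eta =
% \int_{\gamma_\nu}^{\gamma^n_\nu} |g(t, Y^n_t, Z^n_t)|\, dt and
% \mathcal{G} = \mathcal{F}_{\gamma_\nu}, after which H\"older's
% inequality and (A.32) bound the right-hand side.
```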
Since Fubini's theorem and the uniform integrability of the families {Y_ζ}_{ζ∈T} show that

E ∫_0^T ( h_t + |Y_t| + |Y_t| ) dt ≤ ∫_0^T (1+h_t) dt + ∫_0^T E[ |Y_t| + |Y_t| ] dt ≤ ∫_0^T (1+h_t) dt + T sup_{t∈[0,T]} E[|Y_t|] + T sup_{t∈[0,T]} E[|Y_t|] < ∞,

letting n → ∞, we can deduce from the dominated convergence theorem and the bounded convergence theorem that

lim_{n→∞} P{ E[ ∫_{γ_ν}^{γ^n_ν} |g(s, Y^n_s, Z^n_s)| ds | F_{γ_ν} ] > ε } = 0, P-a.s.

Thus, E[ ∫_{γ_ν}^{γ^n_ν} g(s, Y^n_s, Z^n_s) ds | F_{γ_ν} ] converges to 0 in probability P. Up to a subsequence of {(Y^n, Z^n)}_{n∈N}, one has

lim_{n→∞} E[ ∫_{γ_ν}^{γ^n_ν} g(s, Y^n_s, Z^n_s) ds | F_{γ_ν} ] = 0, P-a.s.,

which together with (A.37)−(A.39) leads to Y_{γ_ν} ≥ 1_{{γ_ν = T}} ξ + 1_{{γ_ν <

[1] K. Bahlali, S. Hamadène, and B. Mezerdi, Backward stochastic differential equations with two reflecting barriers and continuous with quadratic growth coefficient, Stochastic Process. Appl., 115 (2005), pp. 1107–1129.
[2] E. Bayraktar and Y.-J. Huang, On the multidimensional controller-and-stopper games, SIAM J. Control Optim., 51 (2013), pp. 1263–1297.
[3] E. Bayraktar, I. Karatzas, and S. Yao, Optimal stopping for dynamic convex risk measures, Illinois J. Math., 54 (2010), pp. 1025–1067.
[4] E. Bayraktar and M. Sîrbu, Stochastic Perron's method and verification without smoothness using viscosity comparison: obstacle problems and Dynkin games, Proc. Amer. Math. Soc., 142 (2014), pp. 1399–1412.
[5] E. Bayraktar and S. Yao, Optimal stopping for non-linear expectations—Part I, Stochastic Process. Appl., 121 (2011), pp. 185–211.
[6] E. Bayraktar and S. Yao, Optimal stopping for non-linear expectations—Part II, Stochastic Process. Appl., 121 (2011), pp. 212–264.
[7] E. Bayraktar and S. Yao, Quadratic reflected BSDEs with unbounded obstacles, Stochastic Process. Appl., 122 (2012), pp.
1155–1203.
[8] E. Bayraktar and S. Yao, On the robust optimal stopping problem, SIAM J. Control Optim., 52 (2014), pp. 3135–3175.
[9] J.-M. Bismut, Conjugate convex functions in optimal stochastic control, J. Math. Anal. Appl., 44 (1973), pp. 384–404.
[10] P. Briand and R. Carmona, BSDEs with polynomial growth generators, J. Appl. Math. Stochastic Anal., 13 (2000), pp. 207–238.
[11] P. Briand, B. Delyon, Y. Hu, E. Pardoux, and L. Stoica, L^p solutions of backward stochastic differential equations, Stochastic Process. Appl., 108 (2003), pp. 109–129.
[12] R. Buckdahn and J. Li, Probabilistic interpretation for systems of Isaacs equations with two reflecting barriers, NoDEA Nonlinear Differential Equations Appl., 16 (2009), pp. 381–420.
[13] Z. Chen, W. Tian, and G. Zhao, Optimal stopping rule meets ambiguity, in Real Options, Ambiguity, Risk and Insurance, vol. 5 of Studies in Probability, Optimization and Statistics, IOS Press, 2013, pp. 97–125.
[14] J. Cvitanić and I. Karatzas, Backward stochastic differential equations with reflection and Dynkin games, Ann. Probab., 24 (1996), pp. 2024–2056.
[15] J. Cvitanić, I. Karatzas, and H. M. Soner, Backward stochastic differential equations with constraints on the gains-process, Ann. Probab., 26 (1998), pp. 1522–1551.
[16] C. Dellacherie and P.-A. Meyer, Probabilités et potentiel, Hermann, Paris, 1975. Chapitres I à IV, Édition entièrement refondue, Publications de l'Institut de Mathématique de l'Université de Strasbourg, No. XV, Actualités Scientifiques et Industrielles, No. 1372.
[17] R. Dumitrescu, M.-C. Quenez, and A. Sulem, Generalized Dynkin games and doubly reflected BSDEs with jumps, (2013). Available at http://arxiv.org/abs/1310.2764.
[18] I. Ekren, N. Touzi, and J. Zhang, Optimal stopping under nonlinear expectation, Stochastic Process. Appl., 124 (2014), pp. 3277–3311.
[19] B. El Asri, S. Hamadène, and H. Wang, L^p-solutions for doubly reflected backward stochastic differential equations, Stoch. Anal.
Appl., 29 (2011), pp. 907–932.
[20] N. El Karoui, C. Kapoudjian, E. Pardoux, S. Peng, and M. C. Quenez, Reflected solutions of backward SDE's, and related obstacle problems for PDE's, Ann. Probab., 25 (1997), pp. 702–737.
[21] N. El Karoui, S. Peng, and M. C. Quenez, Backward stochastic differential equations in finance, Math. Finance, 7 (1997), pp. 1–71.
[22] E. H. Essaky and M. Hassani, Generalized BSDE with 2-reflecting barriers and stochastic quadratic growth, (2013). Available at http://arxiv.org/abs/0805.2979v3.
[23] E. H. Essaky, M. Hassani, and Y. Ouknine, Stochastic quadratic BSDE with two RCLL obstacles, (2011). Available at http://arxiv.org/abs/1103.5373.
[24] S. Hamadène, Mixed zero-sum stochastic differential game and American game options, SIAM J. Control Optim., 45 (2006), pp. 496–518.
[25] S. Hamadène and M. Hassani, BSDEs with two reflecting barriers: the general result, Probab. Theory Relat. Fields, 132 (2005), pp. 237–264.
[26] S. Hamadène and I. Hdhiri, Backward stochastic differential equations with two distinct reflecting barriers and quadratic growth generator, J. Appl. Math. Stoch. Anal., (2006). Article ID 95818, 28 pp.
[27] S. Hamadène and J.-P. Lepeltier, Reflected BSDEs and mixed game problem, Stochastic Process. Appl., 85 (2000), pp. 177–188.
[28] S. Hamadène, J.-P. Lepeltier, and A. Matoussi, Double barrier backward SDEs with continuous coefficient, in Backward stochastic differential equations (Paris, 1995–1996), vol. 364 of Pitman Res. Notes Math. Ser., Longman, Harlow, 1997, pp. 161–175.
[29] S. Hamadène, J.-P. Lepeltier, and Z. Wu, Infinite horizon reflected backward stochastic differential equations and applications in mixed control and game problems, Probab. Math. Statist., 19 (1999), pp. 211–234.
[30] S. Hamadène and A. Popier, L^p-solutions for reflected backward stochastic differential equations, Stoch. Dyn., 12 (2012), pp. 1150016, 35 pp.
[31] S. Hamadène, E. Rotenstein, and A.
Zălinescu, A generalized mixed zero-sum stochastic differential game and double barrier reflected BSDEs with quadratic growth coefficient, An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.), 55 (2009), pp. 419–444.
[32] S. Hamadène and J. Zhang, The continuous time nonzero-sum Dynkin game problem and application in game options, SIAM J. Control Optim., 48 (2009/10), pp. 3659–3669.
[33] S. W. He, J. G. Wang, and J. A. Yan, Semimartingale theory and stochastic calculus, Kexue Chubanshe (Science Press), Beijing; CRC Press, Boca Raton, FL, 1992.
[34] Y. Hu, J. Ma, S. Peng, and S. Yao, Representation theorems for quadratic F-consistent nonlinear expectations, Stochastic Process. Appl., 118 (2008), pp. 1518–1551.
[35] I. Karatzas and W. D. Sudderth, The controller-and-stopper game for a linear diffusion, Ann. Probab., 29 (2001), pp. 1111–1127.
[36] I. Karatzas and H. Wang, Connections between bounded variation control and Dynkin games, in Optimal Control and Partial Differential Equations (Volume in honor of A. Bensoussan), J. L. Menaldi, E. Rofman, and A. Sulem, eds., IOS Press, Amsterdam, 2001, pp. 363–373.
[37] I. Karatzas and I.-M. Zamfirescu, Martingale approach to stochastic differential games of control and stopping, Ann. Probab., 36 (2008), pp. 1495–1527.
[38] T. Klimsiak, BSDEs with monotone generator and two irregular reflecting barriers, Bull. Sci. Math., 137 (2013), pp. 268–321.
[39] J.-P. Lepeltier and J. San Martín, Backward SDEs with two barriers and continuous coefficient: an existence result, J. Appl. Probab., 41 (2004), pp. 162–175.
[40] J. Ma and S. Yao, On quadratic g-evaluations/expectations and related analysis, Stoch. Anal. Appl., 28 (2010), pp. 711–734.
[41] M. Nutz and J. Zhang, Optimal stopping under adverse nonlinear expectation and related games, to appear in Ann. Appl. Probab., (2014). Available at http://arxiv.org/abs/1212.2140.
[42] É. Pardoux and S. G.
Peng , Adapted solution of a backward stochastic differential equation , Systems ControlLett., 14 (1990), pp. 55–61.[43] S. Peng , Backward SDE and related g -expectation , in Backward stochastic differential equations (Paris, 1995–1996), vol. 364 of Pitman Res. Notes Math. Ser., Longman, Harlow, 1997, pp. 141–159.[44] S. Peng , Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer’s type , Probab.Theory Related Fields, 113 (1999), pp. 473–499.RBSDEs with Integrable Parameters 40[45] , Dynamical evaluations , C. R. Math. Acad. Sci. Paris, 339 (2004), pp. 585–589.[46] S. Peng , Nonlinear expectations, nonlinear evaluations and risk measures , vol. 1856 of Lecture Notes in Math.,Springer, Berlin, 2004.[47] E. Rosazza Gianin , Risk measures via g -expectations , Insurance Math. Econom., 39 (2006), pp. 19–34.[48] N. Touzi and N. Vieille , Continuous-time Dynkin games with mixed strategies , SIAM J. Control Optim., 41(2002), pp. 1073–1088 (electronic).[49] M. Xu , Reflected backward SDEs with two barriers under monotonicity and general increasing conditions , J.Theoret. Probab., 20 (2007), pp. 1005–1039.[50] } { ν } { ν } (cid:16) { s ≤ ν } e Z s + { ν n (cid:9) ∧ τ ∈ T ν,τ , and integrate by parts theprocess (cid:8) e λ + ( τ n ∧ t ) e Y + τ n ∧ t (cid:9) t ∈ [0 ,T ] to obtain that P − a.s. e λ + ( τ n ∧ t ) e Y + τ n ∧ t = e λ + τ n Y + τ n + Z τ n τ n ∧ t {Y s > } { s>ν } e λ + s ∆ g s ds + Z τ n τ n ∧ t {Y s > } { s>ν } e λ + s ( dV s − dV s ) − Z τ n τ n ∧ t e λ + s d L s − λ + Z τ n τ n ∧ t e λ + s e Y + s ds − Z τ n τ n ∧ t { e Y s > } e λ + s (cid:16) { s ≤ ν } e Z s + { s>ν } Z s (cid:17) dB s , t ∈ [0 , T ] . (6.5)Here we used the fact that e Y ν ∨ ( τ ∧ t ) = E [ Y ν |F ν ]+ Y ν ∨ ( τ ∧ t ) −Y ν = Y ν ∨ ( τ ∧ t ) , ∀ t ∈ [0 , T ], i.e. e Y t = Y t , ∀ t ∈ [ ν, τ ] . (6.6)Since g satisfies (H2) and (H5), it holds ds ⊗ d P − a.s. 
on [[ ν, τ ]] that {Y s ( ω ) > } (cid:16) g (cid:0) s, ω, Y s ( ω ) , Z s ( ω ) (cid:1) − g (cid:0) s, ω, Y s ( ω ) , Z s ( ω ) (cid:1)(cid:17) ≤ {Y s ( ω ) > } λ Y + s ( ω ) ≤ λ + Y + s ( ω ) , (6.7)and that (cid:12)(cid:12) g (cid:0) s, ω, Y s ( ω ) , Z s ( ω ) (cid:1) − g (cid:0) s, ω, Y s ( ω ) , Z s ( ω ) (cid:1) (cid:12)(cid:12) ≤ κ (cid:0) h s ( ω )+ | Y s ( ω ) | + | Z s ( ω ) | (cid:1) α + κ (cid:0) h s ( ω )+ | Y s ( ω ) | + | Z s ( ω ) | (cid:1) α . Plugging them back into (6.5) and taking t = ν ∨ t there, we see from (6.2) and (3.3) that P − a.s. e λ + ( ν ∨ ( τ n ∧ t )) Y + ν ∨ ( τ n ∧ t ) ≤ e λ + τ n Y + τ n +2 κe λ + T η − Z τ n ν ∨ ( τ n ∧ t ) {Y s > } e λ + s Z s dB s , t ∈ [0 , T ] , (6.8)where η := R τν (cid:0) h t + | Y t | + | Z t | + | Z t | (cid:1) α dt .Let t ∈ [0 , T ]. Taking conditional expectation E (cid:2) · |F ν ∨ ( τ n ∧ t ) (cid:3) in (6.8) yields that e λ + ( ν ∨ ( τ n ∧ t )) Y + ν ∨ ( τ n ∧ t ) ≤ E h e λ + τ n Y + τ n +2 κe λ + T η (cid:12)(cid:12) F ν ∨ ( τ n ∧ t ) i , P − a.s., and it follows that { ν ≤ t ≤ τ n } e λ + t Y + t ≤ { ν ≤ t ≤ τ n } E h e λ + τ n Y + τ n +2 κe λ + T η (cid:12)(cid:12) F t i , P − a.s. (6.9)By (1.5) and H¨older’s inequality, η ≤ Z τν (cid:0) h t + | Y t | (cid:1) α dt + X i =1 Z τν | Z it | α dt ≤ T − α (cid:18)Z τν (cid:0) h t + | Y t | (cid:1) dt (cid:19) α + T − α/ X i =1 (cid:18)Z τν | Z it | dt (cid:19) α/ , P − a.s.Fubini’s Theorem and the uniform integrability of (cid:8) Y γ (cid:9) γ ∈T ν,τ imply that E Z τν | Y t | dt = E Z τν | Y ν ∨ ( τ ∧ t ) | dt ≤ E Z T (cid:12)(cid:12) Y ν ∨ ( τ ∧ t ) (cid:12)(cid:12) dt = Z T E h(cid:12)(cid:12) Y ν ∨ ( τ ∧ t ) (cid:12)(cid:12)i dt ≤ T sup γ ∈T ν,τ E (cid:2) | Y γ | (cid:3) < ∞ . As q = p/α , applying (1.5) and H¨older’s inequality again yields that E [ η q ] ≤ q − T (1 − α ) q (cid:26) E Z τν (cid:0) h t + | Y t | (cid:1) dt (cid:27) p + 3 q − T (1 − α/ q X i =1 E "(cid:18)Z τν | Z it | dt (cid:19) p/ < ∞ . 
(6.10)We see from E h(cid:0) R τν | Z it | dt (cid:1) p/ i < ∞ , i = 1 , P − a.s. ω ∈ Ω, τ ( ω ) = τ N ω ( ω ) for some N ω ∈ N . For any t ∈ [0 , T ], since the uniform integrability of (cid:8) Y iγ (cid:9) γ ∈T ν,τ , i = 1 , (cid:8) e λ + γ Y + γ (cid:9) γ ∈T ν,τ , letting n → ∞ in(6.9) yields that P − a.s. { ν ≤ t ≤ τ } Y + t ≤ { ν ≤ t ≤ τ } e λ + t Y + t ≤ { ν ≤ t ≤ τ } κe λ + T E h e λ + τ ( Y τ − Y τ ) + + η |F t i = { ν ≤ t ≤ τ } κe λ + T E [ η |F t ] . RBSDEs with Integrable Parameters 12Using the continuity of Y + and that of process (cid:8) E [ η |F t ] (cid:9) t ∈ [0 ,T ] , one gets P (cid:8) Y + t ≤ κe λ + T E [ η |F t ] , ∀ t ∈ [ ν, τ ] (cid:9) = 1.Then Doob’s martingale inequality and (6.10) lead to that E " sup t ∈ [ ν,τ ] (cid:0) Y + t (cid:1) q ≤ (2 κ ) q e qλ + T E " sup t ∈ [0 ,T ] ( E [ η |F t ]) q ≤ (cid:18) qq − (cid:19) q (2 κ ) q e qλ + T E [ η q ] < ∞ . (6.11) (2) Next, we show that E " sup t ∈ [ ν,τ ] (cid:0) Y + t (cid:1) q = 0 indeed; then the conclusion easily follows .According to (6.4), applying Lemma 3.1 yields that P − a.s. (cid:16) e Y + t (cid:17) q = (cid:16)(cid:0) E [ Y ν ] (cid:1) + (cid:17) q − q Z t { e Y s > } { ν } { ν } (cid:16) e Y + s (cid:17) q − (cid:16) { s ≤ ν } e Z s + { ν } (cid:16) e Y + s (cid:17) q − (cid:16) { s ≤ ν } (cid:12)(cid:12) e Z s (cid:12)(cid:12) + { ν
ℓ (cid:27) ∧ τ, ℓ ∈ N (6.24)are stopping times with ν ≤ τ ℓ ≤ τ , i.e. τ ℓ ∈ T ν,τ . As E h L + ∗ + R T h t dt i < ∞ and P n sup t ∈ [ ν,τ ] (cid:0) ( Y t ) − + Y + t (cid:1) < ∞ o = 1, itholds for any ω ∈ Ω except on a P − null set N that τ ( ω ) = τ N ω ( ω ) for some N ω ∈ N . Now, let us fix ℓ ∈ N for this part as well as next two parts. Let N := ∪ n ∈ N { ω ∈ Ω : the path Y n · ( ω ) is not continuous } (which is clearly a P − null set) and set A ℓ := { ν < τ ℓ } ∩ N c ∈ F ν ∧ τ ℓ ⊂ F ν . Given ω ∈ A ℓ , for any n ∈ N we candeduce from (6.24) that | Y nt ( ω ) | ≤ ℓ , ∀ t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:1) , and the continuity of each Y n implies that | Y nt ( ω ) | ≤ ℓ , ∀ t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:3) . Then it follows from the monotonicity of { Y n } n ∈ N thatsup n ∈ N (cid:12)(cid:12) Y nt ( ω ) (cid:12)(cid:12) ≤ (cid:12)(cid:12) Y t ( ω ) (cid:12)(cid:12) ∨ (cid:12)(cid:12) Y t ( ω ) (cid:12)(cid:12) ≤ ℓ, ∀ t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:3) , ∀ ω ∈ A ℓ . (6.25)Let n ∈ N . As E (cid:2) | A ℓ Y nν | (cid:3) ≤ ℓ , Corollary 3.1 shows that there exists a unique e Z ℓ,n ∈ ∩ p ∈ (0 , H ,p such that P (cid:8) E (cid:2) A ℓ Y nν |F t (cid:3) = E (cid:2) A ℓ Y nν (cid:3) + R t e Z ℓ,ns dB s , ∀ t ∈ [0 , T ] (cid:9) = 1. Similar to (6.3), we can deduce from (6.23) that P − a.s. Y ℓ,nt := E (cid:2) A ℓ Y nν |F ν ∧ t (cid:3) + Y nν ∨ ( τ ℓ ∧ t ) − Y nν = E (cid:2) A ℓ Y nν (cid:3) − Z t { ν ν } ∈ F t for any t ∈ [0 , T ], (cid:8) A ℓ ∩{ t ≥ ν } (cid:9) t ∈ [0 ,T ] is an F − adapted c`adl`ag process.Then Y ν ∨ ( τ ℓ ∧ t ) − Y ν = A ℓ ∩{ t ≥ ν } ( Y τ ℓ ∧ t − Y ν ) = A ℓ ∩{ t ≥ ν } ( Y τ ℓ ∧ t − Y ν ∧ t ) , t ∈ [0 , T ] (6.31)is an F − optional process and it follows that e K ℓt := Y ν − Y ν ∨ ( τ ℓ ∧ t ) − Z t { ν ν ( ω ).So ω ∈ A n ω ∩ e N cn ω = (cid:8) ω ′ ∈ Ω : ν ( ω ′ ) < τ n ω ( ω ′ ) (cid:9) ∩ N c ∩ e N cn ω and Y t ( ω ) ≥ L t ( ω ) holds for any t ∈ (cid:2) ν ( ω ) , τ n ω ( ω ) (cid:1) = (cid:2) ν ( ω ) , τ ( ω ) (cid:1) . 
In summary, it holds for P − a.s. ω ∈ { ν < τ } that Y t ( ω ) ≥ L t ( ω ) for any t ∈ (cid:2) ν ( ω ) , τ ( ω ) (cid:1) , which togetherwith P { Y τ ≥ L τ } = 1 shows that for any ω ∈ { ν < τ } except on a P − null set b N Y t ( ω ) ≥ L t ( ω ) , ∀ t ∈ (cid:2) ν ( ω ) , τ ( ω ) (cid:3) . (6.35)Now we freeze the parameter ℓ again and let ω ∈ A ℓ ∩ b N c . As A ℓ ⊂ { ν < τ } ∩ N c , we see from (6.35) that Y t ( ω ) ≥ L t ( ω ) for any t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:3) . Since continuous function ( Y nt − L t ) − ( ω ), t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:3) is decreasing to( Y t − L t ) − ( ω ) = 0, t ∈ (cid:2) ν ( ω ) , τ ℓ ( ω ) (cid:3) when n → ∞ , Dini’s theorem shows thatlim n →∞ ↓ sup t ∈ [ ν ( ω ) ,τ ℓ ( ω )] ( Y nt − L t ) − ( ω ) = 0 . As A ℓ sup t ∈ [ ν,τ ℓ ] ( Y nt − L t ) − ≤ A ℓ sup t ∈ [ ν,τ ℓ ] (cid:0) L + t + | Y nt | (cid:1) ≤ ℓ , ∀ n ∈ N by (6.24), (6.27) and (6.25), an application of thebounded convergence theorem yields thatlim n →∞ ↓ E " A ℓ sup t ∈ [ ν,τ ℓ ] (cid:0) ( Y nt − L t ) − (cid:1) = 0 . (6.36)Similar to the arguments used in [20] (see pages 21-22 therein), we can deduce from (6.36) that n Y ℓ,n o n ∈ N is a Cauchy sequence in S and (cid:8) { ν
X s ′ (cid:9)(cid:19) , which is also a P − null set. Given ω ∈ e N c and t, t ′ ∈ [0 , T ] with t < t ′ ,let { s n } n ∈ N ⊂ D ∩ ( t, t ′ ) with lim n →∞ ↓ s n = t and let { s ′ n } n ∈ N ⊂ D ∩ (( t ′ , T ) ∪ { T } ) with lim n →∞ ↓ s ′ n = t ′ . We can deducefrom (A.18) that X t ( ω ) = lim n →∞ X s n ( ω ) ≤ lim n →∞ X s ′ n ( ω ) = X t ′ ( ω ). Therefore, X is an increasing process. (cid:3) Proof of (6.34) : (1) e K ℓ e γ (cid:9) ∈ F T is strictly largerthan 0, it would follow that E h A e K ℓγ i > E h A e K ℓ e γ i . However, we know from part (2) and (A.23) that E h A e K ℓγ i = lim n →∞ E (cid:2) A K nτ ℓ ∧ γ (cid:3) ≤ lim n →∞ E (cid:2) A K nτ ℓ ∧ e γ (cid:3) = E h A e K ℓ e γ i . An contradiction appears. Therefore, e K ℓγ ≤ e K ℓ e γ , P − a.s. Then Lemma A.3 shows that e K ℓ is an increasing process. (cid:3) . Appendix Proof of (6.37) : Set a := 2( λ + + κ ) and Fix m, n ∈ N with m > n . We define processes Ξ m,nt := Ξ mt − Ξ nt , t ∈ [0 , T ] forΞ = Y, Y ℓ , Z . Similar to (A.10), we can deduce from (6.26) that P − a.s. e a t (cid:12)(cid:12) Y ℓ,m,nt (cid:12)(cid:12) + Z τ ℓ t e a s (cid:0) a | Y ℓ,m,ns | + | Z m,ns | (cid:1) ds = e a τ ℓ (cid:12)(cid:12) Y ℓ,m,nτ ℓ (cid:12)(cid:12) +2 Z τ ℓ t e a s Y ℓ,m,ns (cid:0) g ( s, Y ℓ,ms , Z ms ) − g ( s, Y ℓ,ns , Z ns ) (cid:1) ds +2 Z τ ℓ t e a s Y ℓ,m,ns dK ms − Z τ ℓ t e a s Y ℓ,m,ns dK ns − Z τ ℓ t e a s Y ℓ,m,ns Z m,ns dB s , ∀ t ∈ [ ν, τ ℓ ] . (A.24)By (H1) and (H2), it holds ds ⊗ d P − a.s. that Y ℓ,m,ns (cid:0) g ( s, Y ℓ,ms , Z ms ) − g ( s, Y ℓ,ns , Z ns ) (cid:1) = Y ℓ,m,ns (cid:0) g ( s, Y ℓ,ms , Z ms ) − g ( s, Y ℓ,ns , Z ms ) (cid:1) + Y ℓ,m,ns (cid:0) g ( s, Y ℓ,ns , Z ms ) − g ( s, Y ℓ,ns , Z ns ) (cid:1) ≤ λ | Y ℓ,m,ns | + κ | Y ℓ,m,ns || Z m,ns | ≤ ( λ + + κ ) | Y ℓ,m,ns | + 14 | Z m,ns | . (A.25)Also, one can deduce from the definition of process K m that Z τ ℓ t e a s Y ℓ,m,ns dK ms = A ℓ Z τ ℓ t e a s Y m,ns dK ms = A ℓ Z τ ℓ t { Y ms