Anticipated Backward SDEs with Jumps and quadratic-exponential growth drivers
Masaaki Fujii † and Akihiko Takahashi ‡
This version: July 9, 2018
Abstract
In this paper, we study a class of Anticipated Backward Stochastic Differential Equations (ABSDEs) with jumps. The solution of the ABSDE is a triple $(Y,Z,\psi)$, where $Y$ is a semimartingale and $(Z,\psi)$ are the diffusion and jump coefficients. We allow the driver of the ABSDE to have linear growth in the uniform norm of $Y$'s future paths, as well as quadratic and exponential growth in the spot values of $Z$ and $\psi$, respectively. The existence of a unique solution is proved for Markovian and non-Markovian settings with different structural assumptions on the driver. In the former case, some regularities of $(Z,\psi)$ with respect to the forward process are also obtained.
Keywords: predictive mean-field type, time-advanced, quadratic growth, future-path-dependent driver, ABSDE
As a powerful probabilistic tool for analyzing general control problems, non-linear partial differential equations, as well as many newly emerged financial problems, backward stochastic differential equations (BSDEs) have attracted strong research interest since the pioneering works of Bismut (1973) [6] and Pardoux & Peng (1990) [30]. Recently, Peng & Yang (2009) [32] introduced a new class, so-called anticipated (or time-advanced) BSDEs, whose drivers depend on the conditional expectations of the future paths of the solutions. They originally appeared as adjoint processes in optimal control problems for delayed systems. Since then, various generalizations have been studied by many authors: Oksendal et al. (2011) [28] dealt with a control problem on delayed systems with jumps, Pamen (2015) [29] with a stochastic differential game with delay, and Xu (2011) [37] and Yang & Elliott (2013) [36] studied some generalizations and conditions for the comparison principle to hold. Jeanblanc et al. (2016) [18] studied anticipated BSDEs in a setting of progressive enlargement of filtration. The importance of anticipated BSDEs for financial applications is likely to grow in the coming years because of the set of new regulations (in particular, the margin rule on the independent amount), which require financial firms to adjust the collateral (or capital) amount based on the expected future maximum loss.

∗ Accepted for publication in Stochastics and Dynamics. All the contents expressed in this research are solely those of the authors and do not represent the views or opinions of any institution.
† Quantitative Finance Course, Graduate School of Economics, The University of Tokyo. [email protected]
‡ Quantitative Finance Course, Graduate School of Economics, The University of Tokyo.
[email protected]

In this paper, we investigate the ABSDE

$Y_t = \xi + \int_t^T \mathbb{E}^{\mathcal{F}_r}\Big[f\big(r,(Y_v)_{v\in[r,T]},Y_r,Z_r,\psi_r\big)\Big]dr - \int_t^T Z_r\,dW_r - \int_t^T\!\!\int_E \psi_r(e)\,\widetilde{\mu}(dr,de),$

where the driver $f(t,\cdot)$ is allowed to have linear growth in $\sup_{v\in[t,T]}|Y_v|$, quadratic growth in $Z_t$, and exponential growth in the jump coefficient $\psi_t$. This is a necessary first step toward understanding general problems involving non-Lipschitz generators with anticipated components and their applications to the various problems mentioned above.

For (non-anticipated) BSDEs with quadratic-growth drivers, the first breakthrough was made by Kobylanski (2000) [23], followed by many researchers for generalizations and applications. In the presence of jumps, in particular, they were studied by Becherer (2006) [4], Morlais (2010) [25], Ngoupeyou (2010) [27], Cohen & Elliott (2015) [7], Kazi-Tani et al. (2015) [21], Antonelli & Mancini (2016) [1], El Karoui et al. (2016) [10] and Fujii & Takahashi (2017) [16] with varying generality. An important common tool is the so-called $A_\Gamma$-condition [2, 35], necessary for the comparison principle to hold in the presence of jumps, which is then used to create a monotone sequence of regularized BSDEs. Although the $A_\Gamma$-condition is known to hold in the setting of exponential utility optimization [25], it is rather restrictive and, in fact, stronger than local Lipschitz continuity. Furthermore, in anticipated settings, the comparison principle does not hold in general even when the $A_\Gamma$-condition is satisfied.
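As a quick numerical companion (not part of the paper): the function $j_\gamma(u) = \frac{1}{\gamma}(e^{\gamma u}-1-\gamma u)$, which encodes the exponential growth in the jump component in the structure condition of Assumption 3.1 below, is nonnegative, vanishes only at the origin, and is quadratic near zero. A minimal Python sketch checking these properties, with an arbitrary illustrative choice of $\gamma$:

```python
# The quadratic-exponential growth profile j_gamma(u) = (e^{gamma*u} - 1
# - gamma*u) / gamma from Assumption 3.1.  gamma = 0.7 is an arbitrary
# illustrative choice, not a value from the paper.
import math

def j(gamma: float, u: float) -> float:
    return (math.exp(gamma * u) - 1.0 - gamma * u) / gamma

gamma = 0.7
us = [i / 10 - 2.0 for i in range(41)]            # grid on [-2, 2]

# Nonnegative, and zero exactly at u = 0.
assert all(j(gamma, u) >= 0.0 for u in us)
assert j(gamma, 0.0) == 0.0

# Quadratic near the origin: j_gamma(u) ~ (gamma/2) u^2, exponential in the tail.
for u in (1e-3, -1e-3):
    assert abs(j(gamma, u) - 0.5 * gamma * u * u) < 1e-8

# j_gamma(k*u) >= k * j_gamma(u) for k >= 1: the convexity-type inequality
# used in the proof of Lemma 3.2 to show the process C is non-decreasing.
for k in (1.0, 1.5, 3.0):
    assert all(j(gamma, k * u) - k * j(gamma, u) >= -1e-12 for u in us)
```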
Although the fixed-point approach [7, 21] does not rely on the comparison principle, at least for small terminal values, it requires second-order differentiability of the driver, which is difficult to establish in the presence of general path dependence. In this paper, we first extend the quadratic-exponential structure condition of [3, 10] to allow dependence on $Y$'s future paths, and then derive universal bounds on $(Y,Z,\psi)$ under a general bounded terminal condition. These bounds are then used to prove a stability result in a general non-Markovian setting. In the Markovian setting, this stability result leads to a compactness result for the deterministic map defined by $u(t,x) = Y^{t,x}_t$, which then allows us to prove the existence of the solution in the absence of the $A_\Gamma$-condition. It also provides some regularities of $(Z,\psi)$ with respect to the forward process. As a by-product, it makes the $A_\Gamma$-condition unnecessary for the existence, uniqueness and Malliavin differentiability of quadratic-exponential growth (non-anticipated) BSDEs in the Markovian setting studied in Section 6 of [16]. For the non-Markovian setting, we reintroduce the $A_\Gamma$-condition and make use of our previous result in [16] to prove the existence of a unique solution. We also give a sufficient condition for the comparison principle to hold.

Let us first state the general setting to be used throughout the paper.
$T > 0$ is a fixed time horizon. $(\Omega^W,\mathcal{F}^W,\mathbb{P}^W)$ is the usual canonical space for a $d$-dimensional Brownian motion equipped with the Wiener measure $\mathbb{P}^W$. We also denote by $(\Omega^\mu,\mathcal{F}^\mu,\mathbb{P}^\mu)$ a product of canonical spaces $\Omega^\mu := \Omega^1_\mu\times\cdots\times\Omega^k_\mu$, $\mathcal{F}^\mu := \mathcal{F}^1_\mu\times\cdots\times\mathcal{F}^k_\mu$ and $\mathbb{P}^\mu := \mathbb{P}^1_\mu\times\cdots\times\mathbb{P}^k_\mu$ with some constant $k\in\mathbb{N}$, on which each $\mu^i$ is a Poisson random measure with compensator $\nu^i(de)dt$. Here, $\nu^i(de)$ is a $\sigma$-finite measure on $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$ satisfying $\int_{\mathbb{R}_0}(1\wedge|e|^2)\,\nu^i(de) < \infty$. For notational simplicity, we write $(E,\mathcal{E}) := (\mathbb{R}_0^k,\mathcal{B}(\mathbb{R}_0)^k)$. Throughout the paper, we work on the filtered probability space $(\Omega,\mathcal{F},\mathbb{F}=(\mathcal{F}_t)_{t\in[0,T]},\mathbb{P})$, where $(\Omega,\mathcal{F},\mathbb{P})$ is the product of the canonical spaces $(\Omega^W\times\Omega^\mu,\mathcal{F}^W\times\mathcal{F}^\mu,\mathbb{P}^W\times\mathbb{P}^\mu)$, and the filtration $\mathbb{F}=(\mathcal{F}_t)_{t\in[0,T]}$ is the canonical filtration completed for $\mathbb{P}$ and satisfying the usual conditions. In this construction, $(W,\mu^1,\cdots,\mu^k)$ are independent. We use the vector notation $\mu(\omega,dt,de) := (\mu^1(\omega,dt,de_1),\cdots,\mu^k(\omega,dt,de_k))$ and denote the compensated Poisson measure by $\widetilde{\mu} := \mu - \nu$. The $\mathbb{F}$-predictable $\sigma$-field on $\Omega\times[0,T]$ is denoted by $\mathcal{P}$. It is well known that the weak property of predictable representation holds in this setup (see, for example, [17] Chapter XIII).

We denote a generic constant by $C$, which may change line by line. We write $C = C(a,b,c,\cdots)$ when the constant depends only on the parameters $(a,b,c,\cdots)$. $\mathcal{T}^t_s$ denotes the set of $\mathbb{F}$-stopping times $\tau:\Omega\to[s,t]$. We denote the conditional expectation with respect to $\mathcal{F}_t$ by $\mathbb{E}^{\mathcal{F}_t}[\cdot]$ or $\mathbb{E}[\cdot|\mathcal{F}_t]$. Under a probability measure $\mathbb{Q}$ different from $\mathbb{P}$, we denote it explicitly, for example, by $\mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}[\cdot]$. Sometimes we use the abbreviations $||x||_{[s,t]} := \sup_{v\in[s,t]}|x_v|$ and $\Theta_v := (Y_v,Z_v,\psi_v)$.

We introduce the following spaces, where $p\in\mathbb{N}$ is assumed to satisfy $p \geq 2$.
• $\mathbb{D}[s,t]$ is the set of real-valued càdlàg functions $(q_v)_{v\in[s,t]}$.
• $\mathbb{S}^p[s,t]$ is the set of real (or vector) valued càdlàg $\mathbb{F}$-adapted processes $(X_v)_{v\in[s,t]}$ such that $||X||_{\mathbb{S}^p[s,t]} := \mathbb{E}\big[\sup_{v\in[s,t]}|X_v|^p\big]^{1/p} < \infty$.
• $\mathbb{S}^\infty[s,t]$ is the set of real (or vector) valued càdlàg $\mathbb{F}$-adapted processes $(X_v)_{v\in[s,t]}$ which are essentially bounded, i.e. $||X||_{\mathbb{S}^\infty[s,t]} := \big|\big|\sup_{v\in[s,t]}|X_v|\big|\big|_\infty < \infty$. Here, $||x||_\infty := \inf\big\{c\in\mathbb{R};\ \mathbb{P}(\{|x|\leq c\}) = 1\big\}$.
• $\mathbb{H}^p[s,t]$ is the set of progressively measurable real (or vector) valued processes $(Z_v)_{v\in[s,t]}$ such that $||Z||_{\mathbb{H}^p[s,t]} := \mathbb{E}\Big[\Big(\int_s^t |Z_v|^2 dv\Big)^{p/2}\Big]^{1/p} < \infty$.
• $\mathbb{L}^2(E,\nu)$ (or simply $\mathbb{L}^2(\nu)$) is the set of $k$-dimensional vector-valued functions $\psi = (\psi^i)_{1\leq i\leq k}$ for which each component $\psi^i:\mathbb{R}_0\to\mathbb{R}$ is $\mathcal{B}(\mathbb{R}_0)$-measurable and $||\psi||_{\mathbb{L}^2(E,\nu)} := \Big(\sum_{i=1}^k \int_{\mathbb{R}_0}|\psi^i(e)|^2\nu^i(de)\Big)^{1/2} < \infty$. $\mathbb{L}^\infty(E,\nu)$ (or simply $\mathbb{L}^\infty(\nu)$) is the set of functions $\psi = (\psi^i)_{1\leq i\leq k}$ for which each component $\psi^i:\mathbb{R}_0\to\mathbb{R}$ is $\mathcal{B}(\mathbb{R}_0)$-measurable and bounded $\nu^i(de)$-a.e., with the standard essential supremum norm.
• $\mathbb{J}^p[s,t]$ is the set of functions $\psi = (\psi^i)_{1\leq i\leq k}$ with $\psi^i:\Omega\times[s,t]\times\mathbb{R}_0\to\mathbb{R}$ being $\mathcal{P}\otimes\mathcal{B}(\mathbb{R}_0)$-measurable (or we simply say $\psi$ is $\mathcal{P}\otimes\mathcal{E}$-measurable) which satisfy $||\psi||_{\mathbb{J}^p[s,t]} := \mathbb{E}\Big[\Big(\sum_{i=1}^k\int_s^t\!\int_{\mathbb{R}_0}|\psi^i_v(e)|^2\nu^i(de)dv\Big)^{p/2}\Big]^{1/p} < \infty$.
• We denote $\mathcal{K}^p[s,t] = \mathbb{S}^p[s,t]\times\mathbb{H}^p[s,t]\times\mathbb{J}^p[s,t]$ with the norm $||(Y,Z,\psi)||_{\mathcal{K}^p[s,t]} := ||Y||_{\mathbb{S}^p[s,t]} + ||Z||_{\mathbb{H}^p[s,t]} + ||\psi||_{\mathbb{J}^p[s,t]}$.
For notational simplicity, hereafter we write $\int_s^t\!\int_E \psi_r(e)\widetilde{\mu}(dr,de) := \sum_{i=1}^k\int_s^t\!\int_{\mathbb{R}_0}\psi^i_r(e)\widetilde{\mu}^i(dr,de)$ and use similar abbreviations for the integrations with respect to $(\mu,\nu) = (\mu^i,\nu^i)_{1\leq i\leq k}$.
• $\mathbb{J}^\infty[s,t]$ is the set of $\mathcal{P}\otimes\mathcal{E}$-measurable functions $\psi = (\psi^i)_{1\leq i\leq k}$ essentially bounded with respect to the measure $d\mathbb{P}\otimes\nu(de)\otimes dt$, i.e. $||\psi||_{\mathbb{J}^\infty[s,t]} := \big|\big|\operatorname{ess\,sup}_{v\in[s,t]}||\psi_v||_{\mathbb{L}^\infty(E,\nu)}\big|\big|_\infty < \infty$.
• $\mathbb{H}^2_{BMO}[s,t]$ is the set of real (or vector) valued progressively measurable processes $(Z_v)_{v\in[s,t]}$ such that $||Z||^2_{\mathbb{H}^2_{BMO}[s,t]} := \sup_{\tau\in\mathcal{T}^t_s}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^t|Z_r|^2dr\Big]\Big|\Big|_\infty < \infty$.
• $\mathbb{J}^2_B[s,t]$ is the set of $\mathcal{P}\otimes\mathcal{E}$-measurable functions such that $||\psi||^2_{\mathbb{J}^2_B[s,t]} := \sup_{\tau\in\mathcal{T}^t_s}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^t\!\int_E|\psi_r(e)|^2\nu(de)dr\Big]\Big|\Big|_\infty < \infty$.
• $\mathbb{J}^2_{BMO}[s,t]$ is the set of $\mathcal{P}\otimes\mathcal{E}$-measurable functions such that $||\psi||^2_{\mathbb{J}^2_{BMO}[s,t]} := \sup_{\tau\in\mathcal{T}^t_s}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^t\!\int_E|\psi_r(e)|^2\nu(de)dr\Big] + (\Delta M_\tau)^2\Big|\Big|_\infty < \infty$, where $\Delta M_\tau := \int_E\psi_\tau(e)\mu(\{\tau\},de)$.
See Section 2.3 of [16] and references therein for the details of BMO-martingales with jumps. We frequently omit $[s,t]$ if it is obvious from the context.
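To fix intuition about the $\mathbb{H}^2_{BMO}$ norm just defined, here is a toy discrete-time computation (not from the paper): for a deterministic step process the conditional expectations collapse to plain sums, stopping times reduce to grid points, and the supremum is attained at $\tau = 0$. All numbers below are arbitrary illustrative choices.

```python
# Discrete-time caricature of the H^2_BMO norm: for a DETERMINISTIC step
# process Z on a uniform grid, conditional expectations are plain sums and
# stopping times reduce to grid points, so
#   ||Z||^2_BMO = max over start index m of sum_{r >= m} |Z_r|^2 * dt.
# The step values below are arbitrary toy numbers.
T, n = 1.0, 4
dt = T / n
Z = [2.0, -1.0, 0.5, 3.0]

tails = [sum(z * z * dt for z in Z[m:]) for m in range(n)]
bmo_sq = max(tails)

# With a deterministic integrand the sup is attained at tau = 0, i.e. the
# squared BMO norm equals the full integral of |Z_r|^2 over [0, T].
assert bmo_sq == tails[0] == 14.25 * dt
```

For a genuinely random $Z$, the conditional tail expectations no longer coincide with the full integral, and the supremum over stopping times is what makes the BMO norm strictly stronger than the $\mathbb{H}^2$ norm.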
By a simple adaptation of Corollary 1 in [26], we obtain the next lemma.
Lemma 2.1.
Let $\psi$ be in $\mathbb{J}^2[0,T]$, and define a square-integrable pure-jump martingale $(M_t)_{t\in[0,T]}$ by $M_t := \int_0^t\!\int_E\psi_s(e)\widetilde{\mu}(ds,de)$. The jump $\Delta M$ at time $t$ is given by

$\Delta M_t := M_t - M_{t-} = \int_E\psi_t(e)\mu(\{t\},de). \qquad (2.1)$

Then the following two conditions are equivalent:
(1) $||\psi||_{\mathbb{J}^\infty[0,T]}$ is finite.
(2) $\sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty$ is finite.
Moreover, the above two quantities coincide when they exist, i.e. $||\psi||_{\mathbb{J}^\infty} = \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty$.

Proof. (1) ⇒ (2). Assume $\psi\in\mathbb{J}^2[0,T]\cap\mathbb{J}^\infty[0,T]$. By construction, only the jump times of the Poisson measure $\mu(dt,de)$ contribute to $|\Delta M|$. Since the compensator of $\mu(dt,de)$ is given by $\nu(de)\otimes dt$, it is straightforward to see that $|\Delta M_\tau| \leq ||\psi||_{\mathbb{J}^\infty[0,T]}$ a.s. for every stopping time $\tau\in\mathcal{T}^T_0$, and hence $\sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty \leq ||\psi||_{\mathbb{J}^\infty[0,T]}$.
(2) ⇒ (1). Conversely, assume $C := \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty < \infty$. By (2.1), one sees

$|\psi_\tau(e)| \leq C \quad \text{a.s.} \qquad (2.2)$

for every pair $(\tau,e)$ of a jump time $\tau$ and its associated mark $e$ of the random measure $\mu(dt,de)$. Let us define a new process $\bar\psi\in\mathbb{J}^2[0,T]\cap\mathbb{J}^\infty[0,T]$ by the truncation $\bar\psi_t(e) := \psi_t(e)\mathbb{1}_{\{|\psi_t(e)|\leq C\}}$ for all $(\omega,t,e)\in\Omega\times[0,T]\times E$. Notice that $\psi$ and $\bar\psi$ are equal a.s. at every jump time and its associated mark of $\mu$. As a consequence, one has

$0 = \mathbb{E}\Big[\int_0^T\!\!\int_E|\psi_t(e)-\bar\psi_t(e)|^2\mu(dt,de)\Big] = \mathbb{E}\Big[\int_0^T\!\!\int_E|\psi_t(e)-\bar\psi_t(e)|^2\nu(de)dt\Big].$

This means $\psi = \bar\psi$ in $\mathbb{J}^2[0,T]$ and hence, in particular, $d\mathbb{P}\otimes\nu(de)\otimes dt$-a.e. Therefore, $||\psi||_{\mathbb{J}^\infty[0,T]} = ||\bar\psi||_{\mathbb{J}^\infty[0,T]} \leq C$ and (1) holds. This establishes the equivalence of (1) and (2). Combining the two estimates, one concludes $||\psi||_{\mathbb{J}^\infty[0,T]} = \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty$. $\square$

Remark 2.1.
Note that $\psi$ must be a predictable process. This fact allows the constraint (2.2), which a priori holds only at the jump points, to be translated to the whole domain of $\psi$. Using the above result, one obtains the following relation among the different jump norms.
Lemma 2.2.
The following two conditions are equivalent:
(1) $\psi\in\mathbb{J}^2_B[0,T]\cap\mathbb{J}^\infty[0,T]$.
(2) $\psi\in\mathbb{J}^2_{BMO}[0,T]$.
Moreover, the following inequality holds:

$\big(||\psi||^2_{\mathbb{J}^2_B[0,T]} \vee ||\psi||^2_{\mathbb{J}^\infty[0,T]}\big) \leq ||\psi||^2_{\mathbb{J}^2_{BMO}[0,T]} \leq ||\psi||^2_{\mathbb{J}^2_B[0,T]} + ||\psi||^2_{\mathbb{J}^\infty[0,T]}.$

Proof. (1) ⇒ (2). Let $\psi$ be in $\mathbb{J}^2_B[0,T]\cap\mathbb{J}^\infty[0,T]$. Since $\mathbb{J}^2_B\subset\mathbb{J}^2$, Lemma 2.1 implies that $||\psi||_{\mathbb{J}^\infty[0,T]} = \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||_\infty$, where $\Delta M_\tau := \int_E\psi_\tau(e)\mu(\{\tau\},de)$ as defined before. One then obtains

$||\psi||^2_{\mathbb{J}^2_{BMO}[0,T]} = \sup_{\tau\in\mathcal{T}^T_0}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T\!\!\int_E|\psi_r(e)|^2\nu(de)dr\Big] + (\Delta M_\tau)^2\Big|\Big|_\infty \leq ||\psi||^2_{\mathbb{J}^2_B[0,T]} + \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||^2_\infty = ||\psi||^2_{\mathbb{J}^2_B[0,T]} + ||\psi||^2_{\mathbb{J}^\infty[0,T]}.$

Thus (2) holds.
(2) ⇒ (1). On the other hand, let $\psi\in\mathbb{J}^2_{BMO}[0,T]$. By the definition of the $\mathbb{J}^2_{BMO}$-norm, one has

$\sup_{\tau\in\mathcal{T}^T_0}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T\!\!\int_E|\psi_r(e)|^2\nu(de)dr\Big]\Big|\Big|_\infty \vee \sup_{\tau\in\mathcal{T}^T_0}||\Delta M_\tau||^2_\infty \leq ||\psi||^2_{\mathbb{J}^2_{BMO}}.$

Since $\mathbb{J}^2_{BMO}\subset\mathbb{J}^2_B$, Lemma 2.1 once again implies $||\psi||^2_{\mathbb{J}^2_B[0,T]}\vee||\psi||^2_{\mathbb{J}^\infty[0,T]} \leq ||\psi||^2_{\mathbb{J}^2_{BMO}[0,T]}$. The last claim directly follows from the above two inequalities. $\square$
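The inequality of Lemma 2.2 ultimately rests on an elementary fact about suprema of nonnegative families indexed by the same stopping times. A small numeric sanity check, with toy data standing in for the conditional tail integrals and squared jumps (none of these numbers come from the paper):

```python
# Lemma 2.2 reduces to an elementary fact: for nonnegative families
# (a_tau), (b_tau) indexed by the SAME stopping times,
#   max(sup a, sup b) <= sup(a + b) <= sup a + sup b.
# Toy data standing in for a_tau = E^{F_tau}[tail integral of |psi|^2] and
# b_tau = (Delta M_tau)^2:
a = [0.9, 0.4, 0.1, 0.0]
b = [0.25, 1.0, 1.0, 0.0]

jb_sq = max(a)                                 # ||psi||^2_{J^2_B}
jinf_sq = max(b)                               # ||psi||^2_{J^infty} (via Lemma 2.1)
jbmo_sq = max(x + y for x, y in zip(a, b))     # ||psi||^2_{J^2_BMO}

assert max(jb_sq, jinf_sq) <= jbmo_sq <= jb_sq + jinf_sq
```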
Remark 2.2.
When $\psi$ is given as a part of a BSDE solution $(Y,Z,\psi)$ as in (3.1), $\psi$ can be defined only up to $d\mathbb{P}\otimes\nu(de)\otimes dt$-a.e. equivalence. Thus, if one has $\psi\in\mathbb{J}^\infty$, one can freely work with a version $(\widetilde\psi_t(\omega,e))_{(\omega,t,e)\in\Omega\times[0,T]\times E}$ which is everywhere bounded (as in $\bar\psi$ used in the proof of Lemma 2.1). This fact is used in some of the existing literature.

In this section, we consider various a priori estimates for anticipated quadratic-exponential growth BSDEs with jumps in a general non-Markovian setup. We are interested in the following ABSDE for $t\in[0,T]$:

$Y_t = \xi + \int_t^T\mathbb{E}^{\mathcal{F}_r}\Big[f\big(r,(Y_v)_{v\in[r,T]},Y_r,Z_r,\psi_r\big)\Big]dr - \int_t^T Z_r\,dW_r - \int_t^T\!\!\int_E\psi_r(e)\widetilde{\mu}(dr,de), \qquad (3.1)$

where $f:\Omega\times[0,T]\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)\to\mathbb{R}$, and $\xi$ is an $\mathcal{F}_T$-measurable random variable.

Assumption 3.1. (i) The driver $f$ is a map such that for every $(y,z,\psi)\in\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$ and any càdlàg $\mathbb{F}$-adapted process $(Y_v)_{v\in[0,T]}$, the process $\big(\mathbb{E}^{\mathcal{F}_t}f(t,(Y_v)_{v\in[t,T]},y,z,\psi),\ t\in[0,T]\big)$ is $\mathbb{F}$-progressively measurable, and the map $(y,z,\psi)\mapsto f(\cdot,y,z,\psi)$ is continuous.
(ii) For every $(q,y,z,\psi)\in\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$, there exist constants $\beta,\delta\geq 0$, $\gamma > 0$ and a positive progressively measurable process $(l_v,\ v\in[0,T])$ such that

$-l_t - \delta\Big(\sup_{v\in[t,T]}|q_v|\Big) - \beta|y| - \frac{\gamma}{2}|z|^2 - \int_E j_\gamma(-\psi(e))\nu(de) \leq f\big(t,(q_v)_{v\in[t,T]},y,z,\psi\big) \leq l_t + \delta\Big(\sup_{v\in[t,T]}|q_v|\Big) + \beta|y| + \frac{\gamma}{2}|z|^2 + \int_E j_\gamma(\psi(e))\nu(de)$

$d\mathbb{P}\otimes dt$-a.e. $(\omega,t)\in\Omega\times[0,T]$, where $j_\gamma(u) := \frac{1}{\gamma}\big(e^{\gamma u}-1-\gamma u\big)$.
(iii) $||\xi||_\infty,\ ||l||_{\mathbb{S}^\infty} < \infty$.

Lemma 3.1.
Under Assumption 3.1, if there exists a bounded solution $(Y,Z,\psi)\in\mathbb{S}^\infty\times\mathbb{H}^2\times\mathbb{J}^2$ to the ABSDE (3.1), then $Z\in\mathbb{H}^2_{BMO}$ and $\psi\in\mathbb{J}^2_{BMO}$ (hence $\psi\in\mathbb{J}^\infty$), and they satisfy

$||Z||^2_{\mathbb{H}^2_{BMO}} \leq \frac{e^{4\gamma||Y||_{\mathbb{S}^\infty}}}{\gamma^2}\Big(1 + 2\gamma T\big[||l||_{\mathbb{S}^\infty} + (\beta+\delta)||Y||_{\mathbb{S}^\infty}\big]\Big),$
$||\psi||^2_{\mathbb{J}^2_{BMO}} \leq \frac{e^{4\gamma||Y||_{\mathbb{S}^\infty}}}{\gamma^2}\Big(1 + 2\gamma T\big[||l||_{\mathbb{S}^\infty} + (\beta+\delta)||Y||_{\mathbb{S}^\infty}\big]\Big) + 4||Y||^2_{\mathbb{S}^\infty}.$

Proof.
It follows from Lemma 3.1 of [16] by simply replacing $||l||_{\mathbb{S}^\infty}$ with $||l||_{\mathbb{S}^\infty}+\delta||Y||_{\mathbb{S}^\infty}$. One also needs the facts that $||\psi||^2_{\mathbb{J}^2_{BMO}} \leq ||\psi||^2_{\mathbb{J}^2_B} + ||\psi||^2_{\mathbb{J}^\infty}$ and $||\psi||_{\mathbb{J}^\infty} \leq 2||Y||_{\mathbb{S}^\infty}$ from Lemma 2.1. We give details in Appendix B.1. $\square$

Lemma 3.2.
Under Assumption 3.1, if there exists a bounded solution $(Y,Z,\psi)\in\mathbb{S}^\infty\times\mathbb{H}^2\times\mathbb{J}^2$ to the ABSDE (3.1), then $Y$ satisfies the estimate

$||Y||_{\mathbb{S}^\infty} \leq \exp\Big(T\big(\beta+\delta e^{\beta T}\big)\Big)\big(||\xi||_\infty + T||l||_{\mathbb{S}^\infty}\big).$

Proof.
Applying the Meyer–Itô formula, one obtains

$d\big(e^{\beta s}|Y_s|\big) = e^{\beta s}\big(\beta|Y_s|ds + \operatorname{sign}(Y_{s-})dY_s + dL_s\big).$

Here, $(L_s)_{s\in[0,T]}$ is a non-decreasing process including a local time $L^c$, with

$dL_s = dL^c_s + \int_E\big(|Y_{s-}+\psi_s(e)| - |Y_{s-}| - \operatorname{sign}(Y_{s-})\psi_s(e)\big)\mu(ds,de).$

Note that

$|y+\psi| - |y| - \operatorname{sign}(y)\psi = |y+\psi| - \operatorname{sign}(y)(y+\psi) \geq 0. \qquad (3.2)$

Let us introduce the processes $(B_s)_{s\in[0,T]}$ and $(C_s)_{s\in[0,T]}$ by

$dB_s = -\operatorname{sign}(Y_s)\,\mathbb{E}^{\mathcal{F}_s}f\big(s,(Y_v)_{v\in[s,T]},\Theta_s\big)ds + \Big(l_s + \delta\,\mathbb{E}^{\mathcal{F}_s}\big(\sup_{v\in[s,T]}|Y_v|\big) + \beta|Y_s| + \frac{\gamma}{2}|Z_s|^2 + \int_E j_\gamma\big(\operatorname{sign}(Y_s)\psi_s(e)\big)\nu(de)\Big)ds,$
$dC_s = e^{\beta s}\big(dB_s + dL_s\big) + \frac{\gamma}{2}\big(e^{2\beta s}-e^{\beta s}\big)|Z_s|^2ds + \int_E\Big(j_\gamma\big(e^{\beta s}\operatorname{sign}(Y_s)\psi_s(e)\big) - e^{\beta s}j_\gamma\big(\operatorname{sign}(Y_s)\psi_s(e)\big)\Big)\nu(de)ds.$

Both $B$ and $C$ are non-decreasing processes. For $B$, this follows from Assumption 3.1. For $C$, it follows from the fact that, for $k\geq 1$,

$j_\gamma(ku) - kj_\gamma(u) = \frac{1}{\gamma}\big(e^{k\gamma u} - ke^{\gamma u} + k - 1\big) \geq 0,$

which makes the last term non-negative. One then sees

$d\Big(e^{\beta s}|Y_s| + \int_0^s e^{\beta r}\big(l_r+\delta\,\mathbb{E}^{\mathcal{F}_r}(\sup_{v\in[r,T]}|Y_v|)\big)dr\Big) = e^{\beta s}\operatorname{sign}(Y_{s-})\Big(Z_s\,dW_s + \int_E\psi_s(e)\widetilde{\mu}(ds,de)\Big) - \int_E j_\gamma\big(e^{\beta s}\operatorname{sign}(Y_s)\psi_s(e)\big)\nu(de)ds - \frac{\gamma}{2}\big|e^{\beta s}\operatorname{sign}(Y_s)Z_s\big|^2ds + dC_s. \qquad (3.3)$

We now investigate the process $(P_t,\ t\in[0,T])$ defined by

$P_t := \exp\Big(\gamma e^{\beta t}|Y_t| + \gamma\int_0^t e^{\beta r}\big(l_r+\delta\,\mathbb{E}^{\mathcal{F}_r}(\sup_{v\in[r,T]}|Y_v|)\big)dr\Big),\quad t\in[0,T],$

where $P\in\mathbb{S}^\infty$ is clearly seen. Applying the Itô formula, one obtains

$dP_t = P_{t-}\gamma\, d\Big(e^{\beta t}|Y_t| + \int_0^t e^{\beta r}\big(l_r+\delta\,\mathbb{E}^{\mathcal{F}_r}\sup_{v\in[r,T]}|Y_v|\big)dr\Big) + P_t\frac{\gamma^2}{2}\big|e^{\beta t}\operatorname{sign}(Y_t)Z_t\big|^2dt + P_{t-}\int_E\Big(e^{\gamma e^{\beta t}(|Y_{t-}+\psi_t(e)|-|Y_{t-}|)} - 1 - \gamma e^{\beta t}\operatorname{sign}(Y_{t-})\psi_t(e)\Big)\mu(dt,de)$
$= P_{t-}\Big(\gamma e^{\beta t}\operatorname{sign}(Y_t)Z_t\,dW_t + \int_E\Big(\exp\big(\gamma e^{\beta t}\operatorname{sign}(Y_{t-})\psi_t(e)\big)-1\Big)\widetilde{\mu}(dt,de) + dC'_t\Big), \qquad (3.4)$

where $(C'_s)_{s\in[0,T]}$ is another non-decreasing (see (3.2)) process defined by

$dC'_t = \gamma\, dC_t + \int_E\Big(e^{\gamma e^{\beta t}(|Y_{t-}+\psi_t(e)|-|Y_{t-}|)} - e^{\gamma e^{\beta t}\operatorname{sign}(Y_{t-})\psi_t(e)}\Big)\mu(dt,de). \qquad (3.5)$

The details of the derivation of (3.4) are given in Appendix B.2. Since $(P,Y,Z,\psi)\in\mathbb{S}^\infty\times\mathbb{S}^\infty\times\mathbb{H}^2_{BMO}\times\mathbb{J}^2_{BMO}$, one sees that the process $P$ is a true submartingale. Therefore, it follows that, for any $t\in[0,T]$,

$\exp\big(\gamma e^{\beta t}|Y_t|\big) \leq \mathbb{E}^{\mathcal{F}_t}\Big[\exp\Big(\gamma e^{\beta T}|\xi| + \gamma\int_t^T e^{\beta r}\big(l_r+\delta\,\mathbb{E}^{\mathcal{F}_r}(\sup_{v\in[r,T]}|Y_v|)\big)dr\Big)\Big] \leq \exp\Big(\gamma e^{\beta T}\big(||\xi||_\infty + T||l||_{\mathbb{S}^\infty}\big) + \gamma\delta e^{\beta T}\int_t^T||Y||_{\mathbb{S}^\infty[r,T]}dr\Big) \quad \text{a.s.}$

Thus,

$|Y_t| \leq e^{\beta T}\big(||\xi||_\infty + T||l||_{\mathbb{S}^\infty}\big) + \delta e^{\beta T}\int_t^T||Y||_{\mathbb{S}^\infty[r,T]}dr \quad \text{a.s.}$

Since the right-hand side is non-increasing in $t$, the same inequality holds with the left-hand side replaced by $\sup_{s\in[t,T]}|Y_s|$. Hence, equivalently,

$||Y||_{\mathbb{S}^\infty[t,T]} \leq e^{\beta T}\big(||\xi||_\infty + T||l||_{\mathbb{S}^\infty}\big) + \delta e^{\beta T}\int_t^T||Y||_{\mathbb{S}^\infty[r,T]}dr.$

Now, using the backward Gronwall inequality, one obtains the desired result. $\square$

Definition 3.1.
We define the set of parameters $A := (||\xi||_\infty, ||l||_{\mathbb{S}^\infty}, \delta, \beta, \gamma, T)$, which controls the universal bounds on $(||Y||_{\mathbb{S}^\infty}, ||Z||_{\mathbb{H}^2_{BMO}}, ||\psi||_{\mathbb{J}^2_{BMO}})$.

As a result of Lemmas 3.1 and 3.2, one sees that the norms $||Y||_{\mathbb{S}^\infty}$, $||Z||_{\mathbb{H}^2_{BMO}}$ and $||\psi||_{\mathbb{J}^2_{BMO}}$ are solely controlled by the set of parameters $A$. In the next subsection, we introduce the local Lipschitz continuity.

Assumption 3.2.
For each
$M > 0$, and for every $(q,y,z,\psi),\ (q',y',z',\psi')\in\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$ satisfying $\sup_{v\in[0,T]}|q_v|,\ \sup_{v\in[0,T]}|q'_v|,\ |y|,\ |y'|,\ ||\psi||_{\mathbb{L}^\infty(\nu)},\ ||\psi'||_{\mathbb{L}^\infty(\nu)} \leq M$, there exists some positive constant $K_M$ (depending on $M$) such that

$\big|f(t,(q_v)_{v\in[t,T]},y,z,\psi) - f(t,(q'_v)_{v\in[t,T]},y',z',\psi')\big| \leq K_M\Big(\sup_{v\in[t,T]}|q_v-q'_v| + |y-y'| + ||\psi-\psi'||_{\mathbb{L}^2(\nu)}\Big) + K_M\big(1 + |z| + |z'| + ||\psi||_{\mathbb{L}^2(\nu)} + ||\psi'||_{\mathbb{L}^2(\nu)}\big)|z-z'| \qquad (3.6)$

$d\mathbb{P}\otimes dt$-a.e. $(\omega,t)\in\Omega\times[0,T]$.

For the backward Gronwall inequality, see, for example, Corollary 6.61 of [31].
Instead of directly making the driver $f$ path-dependent, one can include conditional expectations such as $\mathbb{E}^{\mathcal{F}_t}(Y_{t+\delta})$, $\mathbb{E}^{\mathcal{F}_t}\big(\int_t^T Y_s\,ds\big)$, as done in [28]. In this work, we adopt the former approach since it allows general dependence without specifying a concrete form.

Let us introduce the two ABSDEs for $t\in[0,T]$, with $i\in\{1,2\}$:

$Y^i_t = \xi^i + \int_t^T\mathbb{E}^{\mathcal{F}_r}f^i\big(r,(Y^i_v)_{v\in[r,T]},Y^i_r,Z^i_r,\psi^i_r\big)dr - \int_t^T Z^i_r\,dW_r - \int_t^T\!\!\int_E\psi^i_r(e)\widetilde{\mu}(dr,de). \qquad (3.7)$

Let us put $\delta Y := Y^1-Y^2$, $\delta Z := Z^1-Z^2$, $\delta\psi := \psi^1-\psi^2$, and $\delta f(r) := (f^1-f^2)\big(r,(Y^1_v)_{v\in[r,T]},Y^1_r,Z^1_r,\psi^1_r\big)$. Then, we have the following stability result.
Proposition 3.1.
Suppose that the data $(\xi^i,f^i)_{i=1,2}$ satisfy Assumptions 3.1 and 3.2. If the two ABSDEs (3.7) have bounded solutions $(Y^i,Z^i,\psi^i)_{i=1,2}\in\mathbb{S}^\infty\times\mathbb{H}^2\times\mathbb{J}^2$, then for any $p > 2q^*$,

$||\delta Y||_{\mathbb{S}^p[0,T]} \leq C_1\,\mathbb{E}\Big[|\delta\xi|^p + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^p\Big]^{1/p} \qquad (3.8)$

and for any $p\geq 2$, $\bar q\geq q^*$,

$\big|\big|(\delta Y,\delta Z,\delta\psi)\big|\big|_{\mathcal{K}^p[0,T]} \leq C_2\,\mathbb{E}\Big[|\delta\xi|^{p\bar q} + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{p\bar q}\Big]^{1/(p\bar q)} \qquad (3.9)$

where $q^*\in(1,\infty)$ is a constant depending only on $(K_\cdot,A)$, and $C_1 = C_1(p,K_\cdot,A)$ and $C_2 = C_2(p,\bar q,K_\cdot,A)$ are two positive constants.

Proof. Note that one can apply (3.6) globally with a fixed $K_M$ by choosing $M$ larger than the bounds implied by Lemmas 3.1 and 3.2. Let us fix such an $M$ in the remainder. Define the $\mathbb{R}^d$-valued progressively measurable process $(b_r,\ r\in[0,T])$ by

$b_r := \mathbb{E}^{\mathcal{F}_r}\big[f^2\big(r,(Y^1_v)_{v\in[r,T]},Y^1_r,Z^1_r,\psi^1_r\big) - f^2\big(r,(Y^1_v)_{v\in[r,T]},Y^1_r,Z^2_r,\psi^1_r\big)\big]\frac{\mathbb{1}_{\{\delta Z_r\neq 0\}}}{|\delta Z_r|^2}\,\delta Z_r^\top.$

Since $|b_r| \leq K_M\big(1+|Z^1_r|+|Z^2_r|+2||\psi^1_r||_{\mathbb{L}^2(\nu)}\big)$, there exists some constant $C$ such that $||b||_{\mathbb{H}^2_{BMO}}\leq C$ with $C=C(K_\cdot,A)$. Thus one can define an equivalent probability measure $\mathbb{Q}$ by $\frac{d\mathbb{Q}}{d\mathbb{P}} = \mathcal{E}_T\big(\int_0^\cdot b_r^\top dW_r\big)$, where $\mathcal{E}(\cdot)$ is the Doléans-Dade exponential. We have $W^{\mathbb{Q}} = W - \int_0^\cdot b_r\,dr$, and the Poisson measure is unchanged, $\widetilde{\mu}^{\mathbb{Q}} = \widetilde{\mu}$. We also have $\frac{d\mathbb{P}}{d\mathbb{Q}} = \mathcal{E}_T\big(-\int_0^\cdot b_r^\top dW^{\mathbb{Q}}_r\big)$. From Remark A.1, there exists some constant $r^*\in(1,\infty)$ such that the reverse Hölder inequality holds for both $\mathcal{E}_\cdot\big(\int_0^\cdot b_r^\top dW_r\big)$ and $\mathcal{E}_\cdot\big(-\int_0^\cdot b_r^\top dW^{\mathbb{Q}}_r\big)$ with power $\bar r\in(1,r^*]$. Define the conjugate exponent $q^* := r^*/(r^*-1)$; note that $(r^*,q^*)$ are solely controlled by $(K_\cdot,A)$. Under the measure $\mathbb{Q}$, we have

$\delta Y_t = \delta\xi + \int_t^T\mathbb{E}^{\mathcal{F}_r}\Big[\delta f(r) + f^2\big(r,(Y^1_v)_{v\in[r,T]},Y^1_r,Z^2_r,\psi^1_r\big) - f^2\big(r,(Y^2_v)_{v\in[r,T]},Y^2_r,Z^2_r,\psi^2_r\big)\Big]dr - \int_t^T\delta Z_r\,dW^{\mathbb{Q}}_r - \int_t^T\!\!\int_E\delta\psi_r(e)\widetilde{\mu}^{\mathbb{Q}}(dr,de),\quad t\in[0,T].$

[Stability for $Y$] Applying the Itô formula to $|\delta Y|^2$, one obtains

$|\delta Y_t|^2 + \int_t^T|\delta Z_r|^2dr + \int_t^T\!\!\int_E|\delta\psi_r(e)|^2\mu(dr,de) = |\delta\xi|^2 + 2\int_t^T\delta Y_r\,\mathbb{E}^{\mathcal{F}_r}\Big[\delta f(r) + f^2\big(r,(Y^1_v)_{v\in[r,T]},Y^1_r,Z^2_r,\psi^1_r\big) - f^2\big(r,(Y^2_v)_{v\in[r,T]},Y^2_r,Z^2_r,\psi^2_r\big)\Big]dr - 2\int_t^T\delta Y_r\delta Z_r\,dW^{\mathbb{Q}}_r - 2\int_t^T\!\!\int_E\delta Y_{r-}\delta\psi_r(e)\widetilde{\mu}^{\mathbb{Q}}(dr,de). \qquad (3.10)$

The last two terms are true $\mathbb{Q}$-martingales, which can be checked by the reverse Hölder and energy inequalities. Taking the conditional expectation $\mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}[\cdot]$, one obtains, for any $\lambda > 0$,

$|\delta Y_t|^2 + \mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\int_t^T|\delta Z_r|^2dr + \mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\int_t^T||\delta\psi_r||^2_{\mathbb{L}^2(\nu)}dr \leq C\,\mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\int_t^T\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^2_{[r,T]}\big]dr + \mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\Big[|\delta\xi|^2 + \frac{1}{\lambda}\Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^2 + \lambda||\delta Y||^2_{[t,T]}\Big] + \frac{1}{2}\mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\int_t^T||\delta\psi_r||^2_{\mathbb{L}^2(\nu)}dr$

with some positive constant $C = C(K_\cdot,A)$. Here we have used the fact that $|\delta Y_r| \leq \mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||_{[r,T]}\big]$. Therefore, in particular,

$|\delta Y_t|^2 \leq \mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\Big[|\delta\xi|^2 + \frac{1}{\lambda}\Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^2 + \lambda||\delta Y||^2_{[t,T]} + C\int_t^T\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^2_{[r,T]}\big]dr\Big] = \frac{1}{\mathcal{E}_t}\mathbb{E}^{\mathcal{F}_t}\Big[\mathcal{E}_T\Big(|\delta\xi|^2 + \frac{1}{\lambda}\Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^2 + \lambda||\delta Y||^2_{[t,T]} + C\int_t^T\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^2_{[r,T]}\big]dr\Big)\Big],$

where $\mathcal{E}_s := \mathcal{E}_s\big(\int_0^\cdot b_r^\top dW_r\big)$. Choosing $\bar q\in[q^*,\infty)$, the reverse Hölder inequality yields

$|\delta Y_t|^{2\bar q} \leq C\,\mathbb{E}^{\mathcal{F}_t}\Big[|\delta\xi|^{2\bar q} + \frac{1}{\lambda^{\bar q}}\Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{2\bar q} + \Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^2_{[r,T]}\big]dr\Big)^{\bar q} + \lambda^{\bar q}||\delta Y||^{2\bar q}_{[t,T]}\Big] \leq C\,\mathbb{E}^{\mathcal{F}_t}\Big[|\delta\xi|^{2\bar q} + \frac{1}{\lambda^{\bar q}}\Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{2\bar q} + \int_t^T||\delta Y||^{2\bar q}_{[r,T]}dr + \lambda^{\bar q}||\delta Y||^{2\bar q}_{[t,T]}\Big]$

with some $C = C(\bar q,K_\cdot,A)$, where Jensen's inequality was used in the second line. For any $p > 2\bar q$, applying Doob's maximal inequality, one obtains

$\mathbb{E}\big[||\delta Y||^p_{[s,T]}\big] \leq C\,\mathbb{E}\Big[|\delta\xi|^p + \frac{1}{\lambda^{p/2}}\Big(\int_s^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^p\Big] + C\int_s^T\mathbb{E}\big[||\delta Y||^p_{[r,T]}\big]dr + C\lambda^{p/2}\mathbb{E}\big[||\delta Y||^p_{[s,T]}\big]$

with $C = C(p,\bar q,K_\cdot,A)$. Choosing $\lambda > 0$ small enough that $C\lambda^{p/2} < 1$, the backward Gronwall inequality implies

$\mathbb{E}\Big[\sup_{t\in[s,T]}|\delta Y_t|^p\Big] \leq C\,\mathbb{E}\Big[|\delta\xi|^p + \Big(\int_s^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^p\Big],\quad\forall s\in[0,T],$

for any $p > 2q^*$ (taking $\bar q = q^*$). This proves (3.8). Since $1 < q^* \leq \bar q$, it also follows that

$\mathbb{E}\Big[\sup_{t\in[0,T]}|\delta Y_t|^p\Big]^{1/p} \leq \mathbb{E}\Big[\sup_{t\in[0,T]}|\delta Y_t|^{p\bar q}\Big]^{1/(p\bar q)} \leq C\,\mathbb{E}\Big[|\delta\xi|^{p\bar q} + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{p\bar q}\Big]^{1/(p\bar q)} \qquad (3.11)$

with $C = C(p,\bar q,K_\cdot,A)$ for any $p\geq 2$.

[Stability for $Z$ and $\psi$] From (3.10), one has, with $C = C(K_\cdot,A)$,

$|\delta Y_t|^2 + \int_t^T|\delta Z_r|^2dr + \int_t^T\!\!\int_E|\delta\psi_r(e)|^2\mu(dr,de) \leq |\delta\xi|^2 + \Big(\int_t^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^2 + ||\delta Y||^2_{[t,T]} + C\int_t^T\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^2_{[r,T]}\big]dr + C\int_t^T|\delta Y_r|\,||\delta\psi_r||_{\mathbb{L}^2(\nu)}dr - 2\int_t^T\delta Y_r\delta Z_r\,dW^{\mathbb{Q}}_r - 2\int_t^T\!\!\int_E\delta Y_{r-}\delta\psi_r(e)\widetilde{\mu}^{\mathbb{Q}}(dr,de).$

For any $p\geq 2$, applying the Burkholder–Davis–Gundy inequality and Lemma A.3, one can show that there exists some constant $C = C(p,K_\cdot,A)$ such that

$\mathbb{E}^{\mathbb{Q}}\Big[\Big(\int_0^T|\delta Z_r|^2dr\Big)^{p/2}\Big] + \mathbb{E}^{\mathbb{Q}}\Big[\Big(\int_0^T\!\!\int_E|\delta\psi_r(e)|^2\mu(dr,de)\Big)^{p/2}\Big] \leq C\,\mathbb{E}^{\mathbb{Q}}\Big[|\delta\xi|^p + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^p + \sup_{r\in[0,T]}\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^p_{[0,T]}\big] + ||\delta Y||^p_{[0,T]}\Big].$

Taking $\bar q\geq q^*$, the reverse Hölder and Doob's maximal inequalities give

$\mathbb{E}^{\mathbb{Q}}\Big[\Big(\int_0^T|\delta Z_r|^2dr\Big)^{p/2}\Big]^{1/p} + \mathbb{E}^{\mathbb{Q}}\Big[\Big(\int_0^T\!\!\int_E|\delta\psi_r(e)|^2\mu(dr,de)\Big)^{p/2}\Big]^{1/p} \leq C\,\mathbb{E}\Big[|\delta\xi|^{p\bar q} + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{p\bar q} + \sup_{r\in[0,T]}\mathbb{E}^{\mathcal{F}_r}\big[||\delta Y||^p_{[0,T]}\big]^{\bar q} + ||\delta Y||^{p\bar q}_{[0,T]}\Big]^{1/(p\bar q)} \leq C\,\mathbb{E}\Big[|\delta\xi|^{p\bar q} + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{p\bar q} + ||\delta Y||^{p\bar q}_{[0,T]}\Big]^{1/(p\bar q)}.$

The reverse Hölder inequality also implies $||\delta Z||_{\mathbb{H}^p} + ||\delta\psi||_{\mathbb{J}^p} \leq C\big(||\delta Z||_{\mathbb{H}^{p\bar q}(\mathbb{Q})} + ||\delta\psi||_{\mathbb{J}^{p\bar q}(\mathbb{Q})}\big)$. Thus the estimate (3.11) and Lemma A.3 give

$||\delta Y||_{\mathbb{S}^p} + ||\delta Z||_{\mathbb{H}^p} + ||\delta\psi||_{\mathbb{J}^p} \leq C\,\mathbb{E}\Big[|\delta\xi|^{p\bar q} + \Big(\int_0^T\mathbb{E}^{\mathcal{F}_r}|\delta f(r)|dr\Big)^{p\bar q}\Big]^{1/(p\bar q)}$

for any $p\geq 2$ and $\bar q\geq q^*$, with some positive constant $C = C(p,\bar q,K_\cdot,A)$. $\square$

We also have the following relation.

Lemma 3.3.
Under the same conditions as in Proposition 3.1, one has

$||\delta Z||_{\mathbb{H}^2_{BMO}} + ||\delta\psi||_{\mathbb{J}^2_{BMO}} \leq C\Big(||\delta Y||_{\mathbb{S}^\infty} + ||\delta\xi||_\infty + \sup_{\tau\in\mathcal{T}^T_0}\Big|\Big|\mathbb{E}^{\mathcal{F}_\tau}\int_\tau^T|\delta f(r)|dr\Big|\Big|_\infty\Big)$

with some positive constant $C = C(K_\cdot,A)$. (See, for example, Theorem 48 in Chapter IV.4 of [33].)

Proof. It follows from a simple modification of Lemma 3.3 (a) of [16]. $\square$

Combining the results in this section, we obtain the uniqueness.
Corollary 3.1.
Under Assumptions 3.1 and 3.2, if the ABSDE (3.1) has a bounded solution $(Y,Z,\psi)\in\mathbb{S}^\infty\times\mathbb{H}^2\times\mathbb{J}^2$, then it is unique with respect to the norm of $\mathbb{S}^\infty\times\mathbb{H}^2_{BMO}\times\mathbb{J}^2_{BMO}$.

Proof. Proposition 3.1 implies the uniqueness of $Y$ in $\mathbb{S}^p$ for every $p\geq 2$, in particular. This also implies the uniqueness with respect to $\mathbb{S}^\infty$: if not, there exists some $c > 0$ such that $||\delta Y||_{\mathbb{S}^\infty} = c$, which implies that for any $0 < b < c$ there exists a strictly positive constant $a > 0$ such that $\mathbb{P}\big(\sup_{t\in[0,T]}|\delta Y_t| > b\big) = a$. This yields $||\delta Y||^p_{\mathbb{S}^p} \geq b^p a > 0$, which is a contradiction. Thus the assertion follows from Proposition 3.1 and Lemma 3.3. $\square$
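Several proofs in this section (Lemma 3.2, Proposition 3.1) invoke the backward Gronwall inequality: if $g(t) \leq a + b\int_t^T g(r)\,dr$ with $a,b \geq 0$, then $g(t) \leq a\,e^{b(T-t)}$. The following sketch (not from the paper) checks this numerically on the extremal equality case, built by a backward sweep with arbitrary toy parameters:

```python
# Backward Gronwall inequality on a grid: if g(t) <= a + b * int_t^T g(r) dr,
# then g(t) <= a * exp(b * (T - t)).  We build the extremal g (the equality
# case) by a backward sweep and compare it with the exponential bound.
# The parameters a, b, T are arbitrary toy values.
import math

a, b, T, n = 1.3, 0.8, 2.0, 20_000
dt = T / n

g = [0.0] * (n + 1)
g[n] = a                        # at t = T the integral term vanishes
integral = 0.0
for i in range(n - 1, -1, -1):  # sweep backward from T to 0
    integral += g[i + 1] * dt   # right-endpoint rule for int_t^T g(r) dr
    g[i] = a + b * integral

# The discrete solution must sit below the continuous bound a*e^{b(T-t)}.
for i in range(0, n + 1, n // 10):
    t = i * dt
    assert g[i] <= a * math.exp(b * (T - t)) * (1 + 1e-9)
```

The right-endpoint rule underestimates the integral of a function that grows backward in time, so the discrete extremal solution stays below the continuous bound, which is exactly what the assertions verify.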
Remark 3.2.
For quadratic BSDEs, allowing anticipated components of $(Z,\psi)$ in the driver $f$ seems very hard. In fact, we cannot derive a stability result similar to Proposition 3.1. This is because the use of the reverse Hölder inequality makes the powers of $(|Z|,|\psi|)$ differ between the left- and right-hand sides of the relevant inequalities after aligning the conditional expectations to a single probability measure. The anticipated component of $Y$ is an exceptional case, where we can remove one conditional expectation by the simple fact $Y_t = \mathbb{E}^{\mathbb{Q}}[Y_t|\mathcal{F}_t]$. Note that Proposition 3.1 is also necessary for the non-Markovian settings in Section 6. In the absence of a stability result, convergence via a monotone sequence would be the last hope. However, to the best of our knowledge, no comparison principle is known in the presence of anticipated components of the control variables $(Z,\psi)$.

Let us now provide the existence result in a Markovian setting. We introduce the following forward process, for $s\in[0,T]$:

$X^{t,x}_s = x + \int_t^{s\vee t}b(r,X^{t,x}_r)dr + \int_t^{s\vee t}\sigma(r,X^{t,x}_r)dW_r + \int_t^{s\vee t}\!\!\int_E\gamma(r,X^{t,x}_{r-},e)\widetilde{\mu}(dr,de) \qquad (4.1)$

where $x\in\mathbb{R}^n$ and $b:[0,T]\times\mathbb{R}^n\to\mathbb{R}^n$, $\sigma:[0,T]\times\mathbb{R}^n\to\mathbb{R}^{n\times d}$, $\gamma:[0,T]\times\mathbb{R}^n\times E\to\mathbb{R}^{n\times k}$ are non-random measurable functions. Note that $X^{t,x}_s \equiv x$ for $s\leq t$.

Assumption 4.1.
There exists a positive constant $K$ such that
(i) $|b(t,0)| + |\sigma(t,0)| \leq K$ uniformly in $t\in[0,T]$.
(ii) $\sum_{i=1}^k|\gamma^i(t,0,e)| \leq K(1\wedge|e|)$ uniformly in $(t,e)\in[0,T]\times\mathbb{R}_0$.
(iii) Uniformly in $t\in[0,T]$, $x,x'\in\mathbb{R}^n$, $e\in\mathbb{R}_0$,

$|b(t,x)-b(t,x')| + |\sigma(t,x)-\sigma(t,x')| \leq K|x-x'|,$
$\sum_{i=1}^k|\gamma^i(t,x,e)-\gamma^i(t,x',e)| \leq K(1\wedge|e|)|x-x'|.$

Lemma 4.1.
Under Assumption 4.1, there exists a unique solution to (4.1) for each ( t, x )12 hich satisfies for any ( t, x ) , ( t, x ′ ) ∈ [0 , T ] × R n and p ≥ , ( a ) E h sup s ∈ [0 ,T ] | X t,xs | p i ≤ C (1 + | x | p )( b ) E h sup s ≤ u ≤ ( s + h ) ∧ T | X t,xs − X t,xu | p i ≤ C (1 + | x | p ) h, ∀ s ∈ [0 , T ]( c ) E h sup s ∈ [0 ,T ] | X t,xs − X t ′ ,x ′ s | p i ≤ C (cid:16) | x − x ′ | p + (1 + [ | x | ∨ | x ′ | ] p ) | t − t ′ | (cid:17) with some constant C = C ( p, K, T ) .Proof. They are the standard estimates for the Lipschitz SDEs. See, for example, Theorem4.1.1 [9]. For the selfcontainedness, we give a proof in Appendix B.3 for regularities.We are interested in the Markovian anticipated BSDE associated with ( X t,xv ) v ∈ [0 ,T ] : Y t,xs = ξ ( X t,xT ) + Z Ts r ≥ t E F r f (cid:16) r, X t,xr , ( Y t,xv ) v ∈ [ r,T ] , Y t,xr , Z t,xr , ψ t,xr (cid:17) dr − Z Ts Z t,xr dW r − Z Ts Z E ψ t,xr ( e ) e µ ( dr, de ) , (4.2)where f : [0 , T ] × R n × D [0 , T ] × R × R × d × L ( E, ν ) → R and ξ : R n → R are non-randommeasurable functions. Note that ( Y t,xs , Z t,xs , ψ t,xs ) ≡ ( Y t,xt , ,
0) for $s \le t$.

Assumption 4.2. (i) The driver $f$ is a map such that for every $(x,y,z,\psi)\in\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^{1\times d}\times L^2(E,\nu)$ and any càdlàg $\mathbb{F}$-adapted process $(Y_v)_{v\in[0,T]}$, the process $\big(\mathbb{E}^{\mathcal{F}_t}f(t,x,(Y_v)_{v\in[t,T]},y,z,\psi),\ t\in[0,T]\big)$ is $\mathbb{F}$-progressively measurable.
(ii) For every $(x,q,y,z,\psi)\in\mathbb{R}^n\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times L^2(E,\nu)$, there exist constants $\beta,\delta\ge 0$, $\gamma>0$ and a positive non-random function $l:[0,T]\to\mathbb{R}_+$ such that, $dt$-a.e. $t\in[0,T]$,
$$-l_t - \delta\big(\sup_{v\in[t,T]}|q_v|\big) - \beta|y| - \frac{\gamma}{2}|z|^2 - \int_E j_\gamma(-\psi(e))\,\nu(de) \le f\big(t,x,(q_v)_{v\in[t,T]},y,z,\psi\big) \le l_t + \delta\big(\sup_{v\in[t,T]}|q_v|\big) + \beta|y| + \frac{\gamma}{2}|z|^2 + \int_E j_\gamma(\psi(e))\,\nu(de),$$
where $j_\gamma(u) := \frac{1}{\gamma}\big(e^{\gamma u} - 1 - \gamma u\big)$.
(iii) $\|\xi(\cdot)\|_\infty,\ \sup_{t\in[0,T]}(l_t) < \infty$.

Assumption 4.3.
For each $M>0$, and for every $(x,q,y,z,\psi),(x',q',y',z',\psi')\in\mathbb{R}^n\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times L^2(E,\nu)$ satisfying $|y|,|y'|,\|\psi\|_{L^\infty(\nu)},\|\psi'\|_{L^\infty(\nu)},\sup_{v\in[0,T]}|q_v|,\sup_{v\in[0,T]}|q'_v| \le M$, there exist some positive constant $K_M$ (depending on $M$) and constants $K_\xi \ge 0$, $\rho \ge 0$, $\alpha\in(0,1]$ such that, for $dt$-a.e. $t\in[0,T]$,
• $|f(t,x,(q_v)_{v\in[t,T]},y,z,\psi) - f(t,x,(q'_v)_{v\in[t,T]},y',z',\psi')| \le K_M\big(\sup_{v\in[t,T]}|q_v-q'_v| + |y-y'| + \|\psi-\psi'\|_{L^2(\nu)}\big) + K_M\big(|z|+|z'|+\|\psi\|_{L^2(\nu)}+\|\psi'\|_{L^2(\nu)}\big)|z-z'|$,
• $|f(t,x,(q_v)_{v\in[t,T]},y,z,\psi) - f(t,x',(q_v)_{v\in[t,T]},y,z,\psi)| \le K_M\big(1+[|x|\vee|x'|]^\rho + |z| + \|\psi\|_{L^2(\nu)}\big)|x-x'|^\alpha$,
and $|\xi(x)-\xi(x')| \le K_\xi|x-x'|^\alpha$.

Proposition 4.1. Under Assumptions 4.1, 4.2 and 4.3, suppose that there exists a bounded solution $(Y^{t,x},Z^{t,x},\psi^{t,x})\in S^\infty\times H^2\times J^2$ for each $(t,x)\in[0,T]\times\mathbb{R}^n$. Then the solution is unique and $(Y^{t,x},Z^{t,x},\psi^{t,x})\in S^\infty\times H^2_{BMO}\times J^2_{BMO}$ with the norm solely controlled by $A=(\|\xi\|_\infty, \sup_{t\in[0,T]}l_t, \delta,\beta,\gamma,T)$, which is, in particular, independent of $(t,x)\in[0,T]\times\mathbb{R}^n$. Moreover, if $Y^{t,x}_t$ is a deterministic map in $(t,x)$, the map $u:[0,T]\times\mathbb{R}^n\to\mathbb{R}$ defined by $u(t,x):=Y^{t,x}_t$ satisfies, for any pair of $(t,x),(t',x')\in[0,T]\times\mathbb{R}^n$,
$$|u(t,x)-u(t',x')| \le C\big(1+[|x|\vee|x'|]^\rho\big)\Big(|x-x'|^\alpha + \big(1+[|x|\vee|x'|]^\alpha\big)|t-t'|^{\frac{1}{2p\bar q}}\Big)$$
with some constant $C=C(\alpha,\rho,p,\bar q,K_\xi,K,K_\cdot,A)$ for any $p\ge 2$ and $\bar q\in[q^*,\infty)$ such that $\alpha p\bar q\ge 1$, where $q^*>1$ is some constant determined by $(K_\cdot,A)$.

Proof. The first part follows from Lemmas 3.1, 3.2 and Corollary 3.1. Let us assume $t'\le t$ without loss of generality.
Put δY := Y t,x − Y t ′ ,x ′ , δf ( r ) := r ≥ t f ( r, X t,xr , ( Y t,xv ) v ∈ [ r,T ] , Θ t,xr ) − r ≥ t ′ f ( r, X t ′ ,x ′ r , ( Y t,xv ) v ∈ [ r,T ] , Θ t,xr )= r ≥ t (cid:16) f ( r, X t,xr , ( Y t,xv ) v ∈ [ r,T ] , Θ t,xr ) − f ( r, X t ′ ,x ′ r , ( Y t,xv ) v ∈ [ r,T ] , Θ t,xr ) (cid:17) − t ′ ≤ r ≤ t f ( r, X t ′ ,x ′ r , ( Y t,xv ) v ∈ [ r,T ] , Y t,xt , , , and δξ = ξ ( X t,xT ) − ξ ( X t ′ ,x ′ T ). By Proposition 3.1, for any p ≥ , ¯ q ∈ [ q ∗ , ∞ ), | u ( t, x ) − u ( t ′ , x ′ ) | ≤ E h sup s ∈ [0 ,T ] | Y t,xs − Y t ′ ,x ′ s | p i p ≤ C E h | δξ | p ¯ q + (cid:16)Z T E F r | δf ( r ) | dr (cid:17) p ¯ q i p ¯ q with C = C ( p, ¯ q, K · , A ). The universal bounds of Lemmas 3.1 and 3.2 imply that || Y t,x || S ∞ , || Z t,x || H BMO , || ψ t,x || J BMO ≤ C with some C = C ( A ) uniformly in ( t, x ). Thus one can applyfixed K M for the whole range in Assumption 4.3 provided M is chosen large enough. Itfollows that E F r | δf ( r ) | ≤ r ≥ t K M (cid:16) (cid:2) | X t,xr | ∨ | X t ′ ,x ′ r | (cid:3) ρ + | Z t,xr | + || ψ t,xr || L ( ν ) (cid:17) | X t,xr − X t ′ ,x ′ r | α + t ′ ≤ r ≤ t (cid:0) l + δ E F r [ || Y t,x || [ r,T ] ] + β | Y t,xt | (cid:1) . Hence, using the boundedness of Y t,x and Cauchy-Schwartz inequality, one obtains E h(cid:16)Z T E F r | δf ( r ) | dr (cid:17) p ¯ q i p ¯ q ≤ C E (cid:20) (cid:2) || X t,x || [0 ,T ] ∨ || X t ′ ,x ′ || [0 ,T ] (cid:3) ρp ¯ q + (cid:16)Z T | Z t,xr | + || ψ t,xr || L ( ν ) dr (cid:17) p ¯ q (cid:21) p ¯ q × E h || X t,x − X t ′ ,x ′ || αp ¯ q [0 ,T ] i p ¯ q + C | t − t ′ | . Note here that, by the energy inequality , the following relation holds: E h(cid:16)Z T | Z t,xr | + || ψ t,xr || L ( ν ) dr (cid:17) p ¯ q i p ¯ q ≤ C (4.3) See, for example, Lemma 2.2 [16]. As for a simple proof, see Lemma 9.6.5 [8]. C depends only on ( || Z || H BMO , || ψ || J BMO ) and p ¯ q . Using Lemma 4.1(a)and (c), one obtains the desired regularity. The contribution from δξ can be computedsimilarly. Remark 4.1.
Under the conditions of the above proposition, we have, for each $s\in[0,T]$, $Y^{t,x}_s = Y^{s,X^{t,x}_s}_s = u(s,X^{t,x}_s)$ a.s. due to the uniqueness of the solution $Y^{t,x}$. Furthermore, since the function $u$ is jointly continuous, $\big(u(s,X^{t,x}_s)\big)_{s\in[0,T]}$ is càdlàg $\mathbb{F}$-adapted. Thus, Chapter 1, Theorem 2 of [33] implies that $Y^{t,x}_s = u(s,X^{t,x}_s)$ $\forall s\in[0,T]$ a.s.

We now introduce a sequence of regularized anticipated BSDEs with $m\in\mathbb{N}$:
$$Y^{m,t,x}_s = \xi(X^{t,x}_T) + \int_s^T \mathbb{1}_{r\ge t}\,\mathbb{E}^{\mathcal{F}_r}f_m\big(r,X^{t,x}_r,(Y^{m,t,x}_v)_{v\in[r,T]},Y^{m,t,x}_r,Z^{m,t,x}_r,\psi^{m,t,x}_r\big)dr - \int_s^T Z^{m,t,x}_r\,dW_r - \int_s^T\!\!\int_E \psi^{m,t,x}_r(e)\,\widetilde{\mu}(dr,de), \quad (4.4)$$
where $f_m$ is defined by, $\forall (r,x,q,y,z,\psi)\in[0,T]\times\mathbb{R}^n\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times L^2(E,\nu)$,
$$f_m\big(r,x,(q_v)_{v\in[r,T]},y,z,\psi\big) := f\big(r,x,(\varphi_m(q_v))_{v\in[r,T]},\varphi_m(y),\varphi_m(z),\varphi_m(\psi\circ\zeta_m)\big). \quad (4.5)$$
Here, we have used a simple truncation function
$$\varphi_m(x) := \begin{cases} -m & \text{for } x\le -m, \\ x & \text{for } |x|\le m, \\ m & \text{for } x\ge m, \end{cases}$$
and a cutoff function $\psi\circ\zeta_m(e) := \psi(e)\mathbb{1}_{|e|\ge 1/m}$, which are applied component-wise for $z,\psi$.

Lemma 4.2.
Suppose that the driver $f$ satisfies Assumptions 4.2 and 4.3. Then $(f_m)_{m\in\mathbb{N}}$ also satisfy Assumptions 4.2 and 4.3 uniformly in $m\in\mathbb{N}$. Moreover, for each $m\in\mathbb{N}$, the driver $f_m$ is a.e. bounded and globally Lipschitz continuous with respect to $(q,y,z,\psi)$ in the sense of Assumption C.1.

Proof. Since $|\varphi_m(x)|\le|x|$ and $|\varphi_m(x)-\varphi_m(x')|\le|x-x'|$, and by the convexity of the function $j_\gamma(\cdot)$, the first claim is obvious. Denoting $C_m := \max_{1\le i\le k}\int_{|e|\ge 1/m}\nu_i(de) < \infty$, one sees $|f_m| \le \sup_{t\in[0,T]}l_t + (\delta+\beta)m + \frac{\gamma}{2}dm^2 + k\,j_\gamma(m)C_m$ a.e. by the structure condition. By noticing the fact that
$$\|\varphi_m(\psi\circ\zeta_m)\|_{L^2(\nu)}^2 \le \sum_{i=1}^k m^2\int_{|e|\ge 1/m}\nu_i(de) \le km^2C_m,$$
the global Lipschitz continuity can be confirmed easily.

Lemma 4.3.
There exists a unique solution $(Y^{m,t,x},Z^{m,t,x},\psi^{m,t,x})$ to (4.4) satisfying
$$\|Y^{m,t,x}\|_{S^\infty},\ \|Z^{m,t,x}\|_{H^2_{BMO}},\ \|\psi^{m,t,x}\|_{J^2_{BMO}} \le C$$
with some constant $C=C(A)$, depending only on those relevant for the universal bound, uniformly in $(m,t,x)\in\mathbb{N}\times[0,T]\times\mathbb{R}^n$. Moreover, $(Y^{m,t,x}_s,Z^{m,t,x}_s,\psi^{m,t,x}_s;\ s\in[t,T])$ is adapted to the $\sigma$-algebra $\mathcal{F}^t_s$ generated by $(W,\mu)$ after $t$, that is, $\mathcal{F}^t_s = \sigma\big(W_u - W_t,\ \mu((t,u],\cdot);\ t\le u\le s\big)$ for each $s\in[t,T]$. In particular, $Y^{m,t,x}_t$ is deterministic in $(t,x)$.

Proof. Thanks to Lemma 4.2, Proposition C.1 is applicable to (4.4), which implies that there exists a unique solution $(Y^{m,t,x},Z^{m,t,x},\psi^{m,t,x})\in\mathcal{K}^2[0,T]$ of (4.4). Since $|\xi|$ and $|f_m|$ are bounded, we actually have $Y^{m,t,x}\in S^\infty$. Therefore, Lemmas 4.2, 3.1 and 3.2 imply the desired bound $\|Y^{m,t,x}\|_{S^\infty}, \|Z^{m,t,x}\|_{H^2_{BMO}}, \|\psi^{m,t,x}\|_{J^2_{BMO}} \le C$ uniformly in $(m,t,x)\in\mathbb{N}\times[0,T]\times\mathbb{R}^n$. This proves the first part.

We can prove the latter claims by following the same idea given in Proposition 4.2 [11] or Theorem 9.5.6 [8]. Consider the shifted Brownian motion and Poisson random measure $(W',\mu')$ defined by $W'_s := W_{t+s} - W_t$, $\mu'((0,s],\cdot) := \mu((t,t+s],\cdot)$, $0\le s\le T-t$, as well as their associated filtration $\mathcal{F}'_s := \mathcal{F}^t_{t+s}$. Let $(X'^{(0,x)}_s,\ 0\le s\le T-t)$ be the solution to the following SDE:
$$X'^{(0,x)}_s = x + \int_0^s b(r+t, X'^{(0,x)}_r)\,dr + \int_0^s \sigma(r+t, X'^{(0,x)}_r)\,dW'_r + \int_0^s\!\!\int_E \gamma(r+t, X'^{(0,x)}_{r-}, e)\,\widetilde{\mu}'(dr,de),$$
where $\widetilde{\mu}'$ is the compensated measure for $\mu'$. By the strong uniqueness of the SDE, $X'^{(0,x)}_{s-t} = X^{t,x}_s$ for $t\le s\le T$, $P$-a.s.
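The pathwise identification $X'^{(0,x)}_{s-t} = X^{t,x}_s$ can be checked numerically. The following is a minimal sketch under our own illustrative assumptions: time-homogeneous bounded Lipschitz coefficients `b`, `sig` of our choosing, diffusion part of (4.1) only (jumps omitted). The Euler recursion for $X^{t_0,x_0}$ is frozen at $x_0$ before $t_0$, and the shifted equation is driven by $W'_s = W_{t_0+s}-W_{t_0}$, i.e. by the same Brownian increments after $t_0$.

```python
import numpy as np

# Toy Euler sketch of the time-shift argument above (illustrative coefficients,
# diffusion part only). Because b and sig are chosen time-homogeneous, the two
# recursions below perform identical floating-point operations and coincide
# exactly, mirroring strong uniqueness of the shifted SDE.

rng = np.random.default_rng(0)
T, t0, x0, n = 1.0, 0.4, 0.7, 1000
dt = T / n
i0 = int(round(t0 / dt))                      # grid index of t0
dW = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments on [0, T]

b = lambda r, x: -np.tanh(x)                  # bounded, Lipschitz (Assumption 4.1 style)
sig = lambda r, x: 0.5 + 1.0 / (1.0 + x * x)

# X^{t0,x0} on the full grid: X_s = x0 for s <= t0, Euler steps afterwards
X = np.empty(n + 1)
X[0] = x0
for i in range(n):
    r = i * dt
    X[i + 1] = x0 if i < i0 else X[i] + b(r, X[i]) * dt + sig(r, X[i]) * dW[i]

# shifted equation X'^{(0,x0)} on [0, T - t0], driven by the post-t0 increments
Xp = np.empty(n - i0 + 1)
Xp[0] = x0
for j in range(n - i0):
    r = j * dt
    Xp[j + 1] = Xp[j] + b(r + t0, Xp[j]) * dt + sig(r + t0, Xp[j]) * dW[i0 + j]

gap = float(np.max(np.abs(X[i0:] - Xp)))      # pathwise agreement X'_{s-t0} = X_s
```

The vanishing `gap` is only an illustration of the pathwise identity; the measurability claim (adaptedness to $\mathcal{F}^t_s$) is of course a property of the construction, not of any simulation.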
Hence X t,xs is F ts (= F ′ s − t )-measurable.Similarly, let us consider the Lipschitz ABSDE for s ∈ [0 , T − t ]; Y ′ s = ξ ( X ′ (0 ,x ) T − t ) + Z T − ts E F ′ r f m ( r + t, X ′ (0 ,x ) r , ( Y ′ v ) v ∈ [ r,T − t ] , Y ′ r , Z ′ r , ψ ′ r ) dr − Z T − ts Z ′ r dW ′ r − Z T − ts Z E ψ ′ r ( e ) e µ ′ ( dr, de ) , where ( Y ′ , Z ′ , ψ ′ ) is the unique solution with respect to the filtration ( F ′ s ) s ∈ [0 ,T − t ] by Propo-sition C.1. Note here that the conditional expectation E F ′ r [ · ] = E F tr + t [ · ] applied to the drivercan be replaced by E F r + t [ · ] since the arguments of f m are adapted to ( F ′ s ) s ∈ [0 ,T − t ] and henceindependent of F t . Changing the integration variable to r + t → r ∈ [ t, T ], and using the factthat dW ′ r − t = dW r and e µ ′ ( d ( r − t ) , de ) = e µ ( dr, de ), one obtains Y ′ s = ξ ( X ′ (0 ,x ) T − t ) + Z Ts + t E F r f m ( r, X ′ (0 ,x ) r − t , ( Y ′ v − t ) v ∈ [ r,T ] , Y ′ r − t , Z ′ r − t , ψ ′ r − t ) dr − Z Ts + t Z ′ r − t dW r − Z Ts + t Z E ψ ′ r − t ( e ) e µ ( dr, de ) . Since X ′ (0 ,x ) s − t = X t,xs , one sees that ( Y ′ s − t , Z ′ s − t , ψ ′ s − t ; s ∈ [ t, T ]) is a solution to (4.4) on [ t, T ].Since the Lipschitz ABSDE has a unique solution by Proposition C.1 (Alternatively, onecan use the stability result in Proposition 3.1), ( Y ′ s − t , Z ′ s − t , ψ ′ s − t ) = ( Y m,t,xs , Z m,t,xs , ψ m,t,xs )a.s. for every s ∈ [ t, T ]. Thus, Y m,t,xs is F ts (= F ′ s − t )-measurable. In particular, Y m,t,xt is F tt (= F ′ )-measurable and hence deterministic by Blumenthal’s 0-1 law.We now provide our first main result. Theorem 4.1.
Under Assumptions 4.1, 4.2 and 4.3, there exists a unique solution ( Y t,x , Z t,x , ψ t,x ) ∈ S ∞ × H BMO × J BMO to the ABSDE (4.2) for each ( t, x ) ∈ [0 , T ] × R n .Proof. Since the uniqueness follows from the first part of Proposition 4.1, it suffices to provethe existence. Lemmas 4.2, 4.3 and Proposition 4.1 imply that the deterministic map u m :[0 , T ] × R n → R defined by u m ( t, x ) := Y m,t,xt satisfies the local H¨older continuity uniformly16n m with C = C ( α, ρ, p, ¯ q, K ξ , K, K · , A ) such that | u m ( t, x ) − u m ( t ′ , x ′ ) | ≤ C (cid:16) | x | ∨ | x ′ | ] ρ (cid:17)(cid:16) | x − x ′ | α + (cid:0) | x | ∨ | x ′ | ] α (cid:1) | t − t ′ | p ¯ q (cid:17) . (4.6)From Lemma 4.3, it is also clear that sup m ≥ sup ( t,x ) ∈ [0 ,T ] × R n | u m ( t, x ) | ≤ C .Let us now confirm the compactness result for ( u m ) m ∈ N . By defining the compact set K j with j ∈ N by K j := [0 , T ] × B j ( R n ) ⊂ R n +1 , we have S ∞ j =1 K j = [0 , T ] × R n . Here, B j ( R n ) is a closed ball in R n of radius j centered at the origin. Arzel`a-Ascoli theorem (see,Section 10.1 [34]) tells that there exists a subsequence ( m (1) ) ⊂ ( m ) such that, ∃ u (1) ∈ C ( K ),( u m (1) ) converges uniformly to u (1) on K . Since the sequence ( u m (1) ) is also bounded andequicontinuous, there exists a further subsequence ( m (2) ) ⊂ ( m (1) ) such that, ∃ u (2) ∈ C ( K ),( u m (2) ) converges uniformly to u (2) on K . By construction, it is clear that u (2) | K = u (1) .Continue the above procedures and construct a diagonal sequence as( m ( m ) ) m ≥ := { (1) , (2) , · · · , j ( j ) , · · · } . From Lemma 2 in Section 10.1 [34] implies that there exists a subsequence ( m ′ ) ⊂ ( m ( m ) )and some function u : [0 , T ] × R n → R such that ( u m ′ ) converges to u pointwise on the whole[0 , T ] × R n space. Moreover, the function u is actually continuous i.e. 
$u \in C([0,T]\times\mathbb{R}^n)$. In fact, by the above construction of the sequence $(m_{(m)})$, $(u^{m'})$ converges uniformly to this function $u$ on any compact subset $K_R$. In the remainder, we work on the sequence $(m')$ (and possibly its further subsequences).

Define the càdlàg $\mathbb{F}$-adapted process $(Y^{t,x}_s)_{s\in[0,T]}$ by $Y^{t,x}_s := u(s,X^{t,x}_s)$, $\forall(\omega,s)\in\Omega\times[0,T]$. The uniform boundedness of $(u^{m'},u)$, Lemma 4.1(a) and Chebyshev's inequality give
$$\|Y^{m',t,x} - Y^{t,x}\|^p_{S^p} \le \mathbb{E}\Big[\sup_{s\in[0,T]}\big|u^{m'}(s,X^{t,x}_s) - u(s,X^{t,x}_s)\big|^p\,\mathbb{1}_{\{\sup_{s\in[0,T]}|X^{t,x}_s|\le R\}}\Big] + C\,\frac{1+|x|^j}{R^j}$$
for every $R>0$ and $p,j\in\mathbb{N}$ with some $m'$-independent constant $C$. For a given $\epsilon>$
0, the second term becomes smaller than $\epsilon/2$ for $R$ large enough. Since $(u^{m'})$ converges uniformly to $u$ on any compact set, the first term also becomes smaller than $\epsilon/2$ for large $m'$. Hence $\|Y^{m',t,x} - Y^{t,x}\|^p_{S^p} \le \epsilon$ for large $m'$. Thus one concludes $Y^{m',t,x} \to Y^{t,x}$ in $S^p$ for every $p\in\mathbb{N}$. Since this implies $\sup_{s\in[0,T]}|Y^{m',t,x}_s - Y^{t,x}_s| \to 0$ as $m'\to\infty$ in probability, extracting a further subsequence (still denoted by $(m')$), we have $\lim_{m'\to\infty}\sup_{s\in[0,T]}|Y^{m',t,x}_s - Y^{t,x}_s| = 0$ $P$-a.s. In particular, it means $\|Y^{m',t,x} - Y^{t,x}\|_{S^\infty} \to$
0. It also implies that ( Y m ′ ,t,x ) m ′ forms a Cauchysequence in S ∞ .With m , m ∈ ( m ′ ), let us put δY m ,m := Y m ,t,x − Y m ,t,x , δZ m ,m := Z m ,t,x − Z m ,t,x and δψ m ,m := ψ m ,t,x − ψ m ,t,x . Ito formula applied to | δY m ,m t | yields for any τ ∈ T T , E F τ (cid:20) | δY m ,m τ | + Z Tτ | δZ m ,m r | dr + Z Tτ Z E | δψ m ,m r ( e ) | µ ( dr, de ) (cid:21) = E F τ (cid:20)Z Tτ ∨ t δY m ,m r E F r h f m ( r, X t,xr , ( Y m ,t,xv ) v ∈ [ r,T ] , Θ m ,t,xr ) − f m ( r, X t,xr , ( Y m ,t,xv ) v ∈ [ r,T ] , Θ m ,t,xr ) i dr i E F τ Z Tτ | δZ m ,m r | dr + E F τ Z Tτ || δψ m ,m r || L ( ν ) dr ≤ || δY m ,m || S ∞ E F τ Z Tτ X i =1 | f m ( r, X t,xr , ( Y m i ,t,xv ) v ∈ [ r,T ] , Θ m i ,t,xr ) | dr . From Lemma 4.2 and Assumption 4.3, the conditional expectation of the 2nd line is boundedby C P i =1 (cid:0) || Y m i ,t,x || S ∞ + || Z m i ,t,x || H BMO + || ψ m i ,t,x || J BMO (cid:1) ≤ C , with C = C ( K · , A ).Thus the right-hand side converges to zero as m , m → ∞ uniformly in τ ∈ T T . Therefore ∃ ( Z t,x , ψ t,x ) ∈ H BMO × J BMO such that Z m ′ ,t,x → Z t,x in H BMO and ψ m ′ ,t,x → ψ t,x in J BMO .Proving that ( Y t,x , Z t,x , ψ t,x ) provides a solution of (4.2) can be done via the commonstrategy for the BSDEs. The above convergence results imply, a fortiori, that Z m ′ ,t,x → Z t,x in H and ψ m ′ ,t,x → ψ t,x in J . Thus we also have the convergence in measure for Z m ′ ,t,x → Z t,x and ψ m ′ ,t,x → ψ t,x with respect to d P ⊗ dt and d P ⊗ ν ( de ) ⊗ dt , respectively. As we have seenbefore, we also have sup s ∈ [0 ,T ] | Y m ′ ,t,x − Y t,xs | → σ -finite measure), there exists a subsequence(still denoted by ( m ′ )) that yields almost everywhere convergence for the associated measure.Therefore, one has sup s ∈ [0 ,T ] | Y m ′ ,t,xs − Y t,xs | → . s . , Z m ′ ,t,x → Z t,x d P ⊗ ds -a.e. and ψ m,t,x → ψ t,x d P ⊗ ν ( de ) ⊗ ds -a.e.Since f m → f locally uniformly, the above a.e. 
convergences and the Lipschitz continuityof the driver yields f m ′ ( s, X t,xs , ( Y m ′ ,t,xv ) v ∈ [ s,T ] , Θ m ′ ,t,xs ) → f ( s, X t,xs , ( Y t,xv ) v ∈ [ s,T ] , Θ t,xs ) d P ⊗ ds -a.e. In order to use the Lebesgue’s dominated convergence theorem, we first showthat there exists an appropriate subsequence ( m ′ ) such that G := sup m ′ | Z m ′ ,t,x | and H :=sup m ′ || ψ m ′ ,t,x || L ( ν ) are in L (Ω × [0 , T ]). Let us follow the idea of Lemma 2.5 in [23]. Since( Z m ′ ,t,x ) is a Cauchy sequence in H , one can extract a subsequence ( m ′ k ) k ∈ N such that forany k ∈ N , || Z m ′ k +1 ,t,x − Z m ′ k ,t,x || H ≤ − k . On the other hand, for any s ∈ [0 , T ], one easilysees that sup k ∈ N | Z m ′ k ,t,xs | ≤ | Z m ′ ,t,xs | + X k ∈ N | Z m ′ k +1 ,t,xs − Z m ′ k ,t,xs | . Taking the H -norm in the both side and using Minkowski’s inequality, E hZ T sup k ∈ N | Z m ′ k ,t,xs | ds i ≤ || Z m ′ ,t,x || H + X k ∈ N || Z m ′ k +1 ,t,x − Z m ′ k ,t,x || H ≤ || Z m ′ ,t,x || H + 1 < ∞ . Relabeling the subsequence by ( m ′ ), one obtains the desired result for G . Exactly the samemethod proves the integrability also for H . Now, since | f m ′ | ≤ C (1 + G + H ) a.s.with some C = C ( K · , A ), we have Z T | f m ′ ( r, X t,xr , ( Y m ′ ,t,xv ) v ∈ [ r,T ] , Θ m ′ ,t,xr ) − f ( r, X t,xr , ( Y t,xv ) v ∈ [ r,T ] , Θ t,xr ) | dr → . s . by the Lebesgue’s dominated convergence theorem.Finally, the BDG inequality and the same arguments using the convergence in probabilitymeasure also give sup s ∈ [0 ,T ] (cid:12)(cid:12)(cid:12)R Ts ( Z m ′ ,t,xr − Z t,xr ) dW r (cid:12)(cid:12)(cid:12) → . s . and sup s ∈ [0 ,T ] (cid:12)(cid:12)(cid:12)R Ts R E ( ψ m ′ ,t,xr ( e ) − t,xr ( e )) e µ ( dr, de ) (cid:12)(cid:12)(cid:12) → . s . under appropriate subsequences, which guarantees the convergencefor the stochastic integration. This finishes the proof. Remark 4.2.
By Theorem 4.1 as well as the uniqueness of the solution, Y t,xt is in factdeterministic in ( t, x ) . Remark 4.3.
In the above proof of Theorem 4.1, the convergence actually occurs along the entire sequence $(m)$, not only the subsequence $(m')$. If this were not the case, there would be a subsequence $(\hat m)\subset(m)$ such that $\|Y^{m_j,t,x} - Y^{t,x}\|_{S^\infty} > c$ with some $c>0$ for every $m_j\in(\hat m)$. However, by repeating the same procedures done in the proof, we can extract a further subsequence $(\hat m')\subset(\hat m)$ such that there exists $(\widetilde Y^{t,x},\widetilde Z^{t,x},\widetilde\psi^{t,x})$ with $(Y^{m_j,t,x},Z^{m_j,t,x},\psi^{m_j,t,x}) \to (\widetilde Y^{t,x},\widetilde Z^{t,x},\widetilde\psi^{t,x})$ in $S^\infty\times H^2_{BMO}\times J^2_{BMO}$ as $(\hat m')\ni m_j\to\infty$. One can show that it also provides a solution to (4.2). By the uniqueness of the solution, $\widetilde Y^{t,x} = Y^{t,x}$ in $S^\infty$, which contradicts the assumption.

Due to the general path-dependence of $(Y_v)_{v\le T}$ in the driver, it is difficult to establish Malliavin differentiability. Interestingly, we can apply a method similar to Lemma 15 in Fromm & Imkeller (2013) [14] or Lemma 2.5.14 in Fromm (2014) [15] to derive some useful regularity results on the control variables. The method only needs the fundamental Lebesgue differentiation theorem.

Lemma 5.1.
Under Assumptions 4.1, 4.2 and 4.3 with α = 1 , the control variables of thesolution to the ABSDE (4.2) satisfy the estimate for every ( t, x ) | Z t,xs | ≤ C (cid:0) | X t,xs | ρ (cid:1) , || ψ t,xs || L ( ν ) ≤ C (cid:0) | X t,xs − | ρ (cid:1) for d P ⊗ ds -a.e. ( ω, s ) ∈ Ω × [0 , T ] with some constant C = C ( ρ, K ξ , K, K · , A ) .Proof. For notational simplicity, let us fix the initial data ( t, x ) and omit the associatedsuperscripts in the remainder of the proof. We start from the regularized ABSDE (4.4).Choose any s ′ ∈ [0 , T ) and define δW s := W s − W s ′ for s ∈ [ s ′ , T ]. An application of Itoformula to ( Y m δW ⊤ ) yields Y ms δW ⊤ s = Z ss ′ Z mr dr − Z ss ′ r ≥ t δW ⊤ r E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1) dr + Z ss ′ δW ⊤ r Z mr dW r + Z ss ′ Z E δW ⊤ r ψ mr ( e ) e µ ( dr, de ) + Z ss ′ Y mr dW ⊤ r . (5.1)Since ( Y m , Z m , ψ m ) ∈ S ∞ × H BMO × J BMO , one can show easily that the last three termsare true martingales. Notice that E hZ T (cid:12)(cid:12) r ≥ t W ⊤ r E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1)(cid:12)(cid:12) dr i ≤ C E h || W || ,T ] i E h(cid:16)Z T (cid:0) | Z mr | + || ψ mr || L ( ν ) (cid:1) dr (cid:17) i ≤ C See, for example, Section E.4, Theorem 6 [13]. C = C ( K · , A ). Thus Lebesgue’s differentiation theorem implies that,lim s ↓ s ′ s − s ′ Z ss ′ r ≥ t W ⊤ r E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1) dr = s ′ ≥ t W ⊤ s ′ E F s ′ f m (cid:0) s ′ , X s ′ , ( Y mv ) v ∈ [ s ′ ,T ] , Θ ms ′ (cid:1) a . s . for dt -a.e. s ′ ∈ [0 , T ). Similarly one obtains for dt -a.e. s ′ ∈ [0 , T ),lim s ↓ s ′ s − s ′ Z ss ′ Z mr dr = Z ms ′ a . s . lim s ↓ s ′ s − s ′ Z ss ′ r ≥ t E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1) dr = s ′ ≥ t E F s ′ f m (cid:0) s ′ , X s ′ , ( Y mv ) v ∈ [ s ′ ,T ] , Θ ms ′ (cid:1) dr a . s . Since Z m ∈ H , we can also take s ′ such that E [ | Z ms ′ | ] < ∞ a.e. 
in [0 , T ).As in Lemma 2.5.14 of [15], we introduce the stopping time τ : Ω → ( s ′ , T ] such that thefollowing inequalities hold for all s ∈ ( s ′ , T ]: • (cid:12)(cid:12)(cid:12) s − s ′ Z τ ∧ ss ′ Z mr dr (cid:12)(cid:12)(cid:12) ≤ | Z ms ′ | + 1 a . s . • (cid:12)(cid:12)(cid:12) s − s ′ Z τ ∧ ss ′ r ≥ t E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1) dr (cid:12)(cid:12)(cid:12) ≤ s ′ ≥ t (cid:12)(cid:12)(cid:12) E F s ′ f m (cid:0) s ′ , X s ′ , ( Y mv ) v ∈ [ s ′ ,T ] , Θ ms ′ (cid:1)(cid:12)(cid:12)(cid:12) + 1 a . s . • (cid:12)(cid:12)(cid:12) s − s ′ Z τ ∧ ss ′ r ≥ t W ⊤ r E F r f m (cid:0) r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr (cid:1) dr (cid:12)(cid:12)(cid:12) ≤ s ′ ≥ t (cid:12)(cid:12)(cid:12) W ⊤ s ′ E F s ′ f m (cid:0) s ′ , X s ′ , ( Y mv ) v ∈ [ s ′ ,T ] , Θ ms ′ (cid:1)(cid:12)(cid:12)(cid:12) + 1 a . s . Then one can show from (5.1) and the fact that τ ( ω ) ∧ s = s for sufficiently small s ∈ ( s ′ , T ], Z ms ′ = lim s ↓ s ′ E F s ′ h s − s ′ Y mτ ∧ s ( W τ ∧ s − W s ′ ) ⊤ i d P ⊗ dt -a.e. ( ω, s ′ ) ∈ Ω × [0 , T ) by the dominated convergence theorem. One sees (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ Y mτ ∧ s ( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) ≤ (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ Y ms ( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ ( Y ms − Y mτ ∧ s )( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) , where the second term yields (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ ( Y ms − Y mτ ∧ s )( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) = E F s ′ h s − s ′ E F τ ∧ s (cid:2) Y ms − Y mτ ∧ s (cid:3) ( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) ≤ E F s ′ h s − s ′ Z sτ ∧ s E F τ ∧ s (cid:12)(cid:12) f m ( r, X r , ( Y mv ) v ∈ [ r,T ] , Θ mr ) | dr ( W τ ∧ s − W s ′ ) ⊤ i ≤ C m E F s ′ h | W τ ∧ s − W s ′ | i ≤ C m √ s − s ′ → s ↓ s ′ . 
| f m | is essentially bounded for each m (see Lemma 4.2).The first term gives the estimate with some constant C independent of m such that (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ u m ( s, X s )( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12) E F s ′ h s − s ′ (cid:0) u m ( s, X s ) − u m ( s, X s ′ ) (cid:1) ( W τ ∧ s − W s ′ ) ⊤ i(cid:12)(cid:12)(cid:12) ≤ √ s − s ′ E F s ′ h | u m ( s, X s ) − u m ( s, X s ′ ) | i (by Cauchy-Schwartz) ≤ C √ s − s ′ E F s ′ h(cid:0) | X s | ∨ | X s ′ | ] ρ (cid:1) | X s − X s ′ | i (by (4.6) with α = 1) ≤ C √ s − s ′ E F s ′ h(cid:0) | X s ′ | ρ + | X s − X s ′ | ρ (cid:1) | X s − X s ′ | i (by | x | ∨ | y | ≤ | x − y | + | y | ) (5.2) ≤ C √ s − s ′ n (1 + | X s ′ | ρ ) E F s ′ (cid:2) | X s − X s ′ | ] + E F s ′ (cid:2) | X s − X s ′ | ρ ) (cid:3) o ≤ C (1 + | X s ′ | ρ ) a.s.where, in the last inequality, we have used a conditional version of Lemma 4.1(b) with theinitial value X s ′ . Thus we have d P ⊗ dt -a.e. | Z ms ′ | ≤ C (1 + | X s ′ | ρ )with C = C ( ρ, K ξ , K, K · , A ) uniformly in m . It is known from the proof of Theorem 4.1 that Z m → Z d P ⊗ dt -a.e. under an appropriate subsequence, and hence the first claim follows.The joint continuity of u implies Y s − = lim r ↑ s u ( r, X r ) = u ( s, X s − ) and hence Z E | ψ s ( e ) | ν ( de ) = Z E | u ( s, X s − + γ ( s, X s − , e )) − u ( s, X s − ) | ν ( de ) ≤ C Z E (cid:16) | X s − | ρ + | γ ( s, X s − , e ) | ρ (cid:17) | γ ( s, X s − , e ) | ν ( de ) ≤ C (1 + | X s − | ρ ) ) Z E | e | ν ( de ) ≤ C (1 + | X s − | ρ ) ) , which proves the second claim. In order to obtain the existence result in a non-Markovian setting, we need an additionalso-called A Γ -condition on the driver, which is rather restrictive but plays a crucial role inalmost every existing work on quadratic growth BSDEs with jumps. Assumption 6.1.
For each $M>0$, for every $q\in\mathbb{D}[0,T]$, $y\in\mathbb{R}$, $z\in\mathbb{R}^{1\times d}$, $\psi,\psi'\in L^2(E,\nu)$ with $\sup_{v\in[0,T]}|q_v|, |y|, \|\psi\|_{L^\infty(\nu)}, \|\psi'\|_{L^\infty(\nu)} \le M$, there exists a $\mathcal{P}\otimes\mathcal{E}$-measurable process $\Gamma^{q,y,z,\psi,\psi',M}$ such that, $d\mathbb{P}\otimes dt$-a.e.,
$$f\big(t,(q_v)_{v\in[t,T]},y,z,\psi\big) - f\big(t,(q_v)_{v\in[t,T]},y,z,\psi'\big) \le \int_E \Gamma^{q,y,z,\psi,\psi',M}_t(e)\big(\psi(e)-\psi'(e)\big)\,\nu(de)$$
with $C^1_M(1\wedge|e|) \le \Gamma^{q,y,z,\psi,\psi',M}_t(e) \le C^2_M(1\wedge|e|)$, where the two $M$-dependent constants satisfy $C^1_M > -1$ and $C^2_M \ge 0$.
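To see how such a process $\Gamma$ arises in practice, the following sketch works out a toy example of our own (not from the paper): a purely exponential driver $f(\psi)=\int_E j_\gamma(\psi(e))\,\nu(de)$ on a finite mark space, with $j_\gamma(u)=(e^{\gamma u}-1-\gamma u)/\gamma$ as in the structure condition. The mean-value slope of $j_\gamma$ between $\psi$ and $\psi'$ represents the difference $f(\psi)-f(\psi')$ exactly and stays above $-1$, since $j'_\gamma(u)=e^{\gamma u}-1>-1$; the $(1\wedge|e|)$ scaling of the full condition is dropped in this simplified check.

```python
import numpy as np

# Toy A_Gamma check (our own example): discrete mark space with weights nu_i,
# driver f(psi) = sum_i j_gamma(psi_i) nu_i, Gamma built as the mean-value
# slope of j_gamma, so that f(psi) - f(psi') = sum_i Gamma_i (psi_i - psi'_i) nu_i
# holds exactly and -1 < Gamma_i <= exp(gamma*M) - 1 for |psi|, |psi'| <= M.

gamma, M = 2.0, 1.5
j = lambda u: (np.exp(gamma * u) - 1.0 - gamma * u) / gamma

rng = np.random.default_rng(3)
nu = rng.uniform(0.1, 1.0, 8)                 # toy Levy weights nu_i
psi = rng.uniform(-M, M, 8)                   # bounded jump arguments
psi_p = rng.uniform(-M, M, 8)

f = lambda p: float(np.sum(j(p) * nu))
Gamma = (j(psi) - j(psi_p)) / (psi - psi_p)   # mean-value slope of j_gamma

lhs = f(psi) - f(psi_p)
rhs = float(np.sum(Gamma * (psi - psi_p) * nu))
```

The lower bound $\Gamma > -1$ is exactly what makes the measure change in the comparison argument (Lemma D.1) well defined, which is why the condition is hard to dispense with.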
We introduce a regularized ABSDE with some positive constant $m>0$:
$$Y^m_t = \xi + \int_t^T \mathbb{E}^{\mathcal{F}_r}f_m\big(r,(Y^m_v)_{v\in[r,T]},Y^m_r,Z^m_r,\psi^m_r\big)dr - \int_t^T Z^m_r\,dW_r - \int_t^T\!\!\int_E \psi^m_r(e)\,\widetilde{\mu}(dr,de), \quad t\in[0,T], \quad (6.1)$$
with the definition
$$f_m\big(t,(q_v)_{v\in[t,T]},y,z,\psi\big) := f\big(t,(\varphi_m(q_v))_{v\in[t,T]},y,z,\psi\big)$$
for every $(\omega,t,q,y,z,\psi)\in\Omega\times[0,T]\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times L^2(E,\nu)$. Here, $\varphi_m$ is the truncation function used previously.

Lemma 6.1.
If the driver $f$ satisfies Assumptions 3.1, 3.2 and 6.1, then the driver $f_m$ defined above also satisfies the same conditions uniformly in $m$. Moreover, if there exists a bounded solution $(Y^m,Z^m,\psi^m)\in S^\infty\times H^2\times J^2$ to the ABSDE (6.1), then it is unique and belongs to $S^\infty\times H^2_{BMO}\times J^2_{BMO}$ with the norms $\|Y^m\|_{S^\infty}, \|Z^m\|_{H^2_{BMO}}, \|\psi^m\|_{J^2_{BMO}} \le C$ with some constant $C$ depending only on $A=(\|\xi\|_\infty, \|l\|_{S^\infty}, \delta,\beta,\gamma,T)$.

Proof. The first claim is obvious. The second claim follows from Lemmas 3.1, 3.2 and Corollary 3.1.
Theorem 6.1.
Under Assumptions 3.1, 3.2 and 6.1, there exists a unique solution $(Y,Z,\psi)\in S^\infty\times H^2_{BMO}\times J^2_{BMO}$ to the ABSDE (3.1).

Proof.
Uniqueness follows from Corollary 3.1. Notice that it suffices to prove the existence of a solution $(Y^m,Z^m,\psi^m)\in S^\infty\times H^2_{BMO}\times J^2_{BMO}$ of (6.1) for each $m$. In fact, by choosing $m$ bigger than the bound given in Lemma 3.2, one sees that $(Y^m,Z^m,\psi^m)$ actually provides the solution for (3.1). Let us fix such an $m$ in the remainder.

Let us put $Y^{m,0}\equiv 0$ and consider the sequence of BSDEs with $n\in\mathbb{N}$ such that
$$Y^{m,n}_t = \xi + \int_t^T \mathbb{E}^{\mathcal{F}_r}f_m\big(r,(Y^{m,n-1}_v)_{v\in[r,T]},Y^{m,n}_r,Z^{m,n}_r,\psi^{m,n}_r\big)dr - \int_t^T Z^{m,n}_r\,dW_r - \int_t^T\!\!\int_E \psi^{m,n}_r(e)\,\widetilde{\mu}(dr,de), \quad t\in[0,T]. \quad (6.2)$$
The driver for the BSDE (6.2) can be seen as $\widetilde f_m(r,y,z,\psi) := \mathbb{E}^{\mathcal{F}_r}f_m\big(r,(Y^{m,n-1}_v)_{v\in[r,T]},y,z,\psi\big)$. By replacing $l_r$ by $l_r + \delta m$, one sees the data $(\xi,\widetilde f_m)$ satisfy Assumptions 3.1, 3.2 and 4.1 in [16] for non-anticipated quadratic-exponential growth BSDEs. Therefore, Theorem 4.1 [16] implies that there exists a (unique) solution $(Y^{m,n},Z^{m,n},\psi^{m,n})\in S^\infty\times H^2_{BMO}\times J^2_{BMO}$ for each $n\ge$
1. Furthermore, as a special case of the universal bounds, one sees || Y m,n || S ∞ , || Z m,n || H BMO , || ψ m,n || J BMO ≤ C with C = C ( || ξ || ∞ , || l || S ∞ + δm, β, γ, T ).Let denote δY m,n := Y m,n − Y m,n − . Replacing l r by l r + δm , then putting δ = 0,and considering the drivers f ( r, y, z, ψ ) := f m ( r, ( Y m,nv ) v ∈ [ r,T ] , y, z, ψ ), f ( r, y, z, ψ ) := f m ( r, ( Y m,n − v ) v ∈ [ r,T ] , y, z, ψ ), one sees that ( f i ) i =1 satisfy Assumptions 3.1 and 3.2. Thusone can apply the stability results in Proposition 3.1 to the BSDE (6.2). In particular, by(3.8), one has for any p ≥ q ∗ and 0 < h ≤ T , E h sup t ∈ [ T − h,T ] | δY m,n +1 t | p i ≤ C E h(cid:16)Z TT − h E F r (cid:12)(cid:12) f m ( r, ( Y m,nv ) v ∈ [ r,T ] , Θ m,n +1 r ) − f m ( r, ( Y m,n − v ) v ∈ [ r,T ] , Θ m,n +1 r ) (cid:12)(cid:12) dr (cid:17) p i ≤ Ch p E h sup t ∈ [ T − h,T ] | δY m,nt | p i C = C ( p, K · , || ξ || ∞ , || l || S ∞ + δm, β, γ, T ). By choosing h small enoughso that Ch p <
1, it becomes a strict contraction and thus ( Y m,nv , v ∈ [ T − h, T ]) n ≥ forms aCauchy sequence in S p [ T − h, T ].By extracting an appropriate subsequence ( n ′ ) ⊂ ( n ), one has || δY m,n ′ || S ∞ [ T − h,T ] → n ′ → ∞ . Applying Ito formula to ( δY m,n ′ ) and repeating the same procedures used inlast part of the proof in Theorem 4.1, one can show that ∃ ( Y m , Z m , ψ m ) ∈ ( S ∞ × H BMO × J BMO ) [ T − h,T ] , ( Y m,n ′ , Z m,n ′ , ψ m,n ′ ) → ( Y m , Z m , S m ) in the corresponding norm, and that( Y mv , Z mv , ψ mv ) v ∈ [ T − h,T ] solves the ABSDE (6.1) for the period [ T − h, T ]. Now, let us replace ( Y m,n , Z m,n , ψ m,n ) n ∈ N by ( Y m , Z m , ψ m ) for ( ω, s ) ∈ Ω × [ T − h, T ] in(6.2). Then for t ≤ T − h , we have Y m,nt = Y mT − h + Z T − ht E F r f m (cid:0) r, ( Y m,n − v ) v ∈ [ r,T ] , Y m,nr , Z m,nr , ψ m,nr (cid:1) dr − Z T − ht Z m,nr dW r − Z T − ht Z E ψ m,nr ( e ) e µ ( dr, de ) . An application of Proposition 3.1 with the data ( Y mT − h , f ) , ( Y mT − h , f ) yields, E h sup t ∈ [ T − h,T − h ] | δY m,n +1 | p i ≤ C E h(cid:16)Z T − hT − h E F r | δf ( r ) | dr (cid:17) p i ≤ Ch p E h sup t ∈ [ T − h,T ] | δY m,nt | p i = Ch p E h sup t ∈ [ T − h,T − h ] | δY m,nt | p i where the fact Y m,ns = Y ms , s ∈ [ T − h, T ] is used in the 2nd line. Thus one can extend thesolution to the period [ T − h, T − h ] by the same procedures used in the previous step. Sincecoefficient C can be taken independently of the specific period, the whole period [0 , T ] canbe covered by a finite number of partitions. Notice here that, as one can see from the proofof Proposition 3.1, the coefficient C depends on the essential supremum of the terminal value || ξ || ∞ only through the local Lipschitz constant K M and the universal bounds controlling M as well as the coefficients of the reverse H¨older inequality. Hence the appearance of the newterminal value Y mT − h does not change the size of the coefficient C . 
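The contraction mechanism just used can be illustrated on a deterministic toy version of the Picard scheme (6.2), with illustrative $f$ and $\xi$ of our own choosing (not from the paper): $y(t) = \xi + \int_t^T f\big(r, \sup_{v\in[r,T]} y(v)\big)\,dr$, where $f$ is $\tfrac12$-Lipschitz in its anticipated argument and $T=0.5$, so each sweep over the interval contracts the sup-norm distance between iterates by at most a factor $\tfrac12\cdot T = 0.25$.

```python
import numpy as np

# Toy Picard iteration for a deterministic anticipated equation
#   y(t) = xi + int_t^T f(r, sup_{v in [r,T]} y(v)) dr,
# iterating on the previous path as (6.2) iterates on (Y^{m,n-1}).
# With Lip(f) * T = 0.25 < 1 the successive sup-norm differences
# decay geometrically, the Cauchy-sequence mechanism of the proof.

def picard_anticipated(xi, f, T=0.5, n_grid=200, n_iter=30):
    t = np.linspace(0.0, T, n_grid + 1)
    dt = T / n_grid
    y = np.zeros(n_grid + 1)                  # Y^{m,0} = 0
    diffs = []
    for _ in range(n_iter):
        # running sup of the previous iterate over [t_i, T]
        run_sup = np.maximum.accumulate(y[::-1])[::-1]
        integrand = f(t, run_sup)
        # tail[i] = sum_{j >= i} integrand[j] * dt  (left Riemann sum), tail[-1] = 0
        tail = np.concatenate([np.cumsum((integrand[:-1] * dt)[::-1])[::-1], [0.0]])
        y_new = xi + tail
        diffs.append(float(np.max(np.abs(y_new - y))))
        y = y_new
    return y, diffs

# f is 0.5-Lipschitz in its second argument; terminal value xi = 1
y, diffs = picard_anticipated(xi=1.0, f=lambda r, q: 0.5 * np.sin(q) - r)
```

The list `diffs` shrinks by roughly a factor of four per iteration until it hits machine precision; on the full problem the same geometric decay is obtained on each small interval $[T-h,T]$ and then pasted over finitely many subintervals.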
This finishes the proof forthe existence of a bounded solution to (6.1) for each m . For completeness, we give a sufficient condition for the comparison principle to hold for ourABSDE in the rest of this section. In non-anticipated settings, i.e. when there is no futurepath-dependence of ( Y v ) v ∈ [0 ,T ] in the driver f , it is known that the comparison principle holdsfor quadratic-exponential growth BSDEs in the presence of A Γ -condition (See, Lemma D.1.).For the current anticipated setting, we need an additional assumption same as the one usedin Theorem 5.1 of [32]. Consider the two ABSDEs with i ∈ { , } , Y it = ξ i + Z Tt E F r f i (cid:0) r, ( Y iv ) v ∈ [ r,T ] , Y ir , Z ir , ψ ir (cid:1) dr − Z Tt Z ir dW r − Z Tt Z E ψ ir ( e ) e µ ( dr, de )for t ∈ [0 , T ]. Theorem 6.2.
Suppose the data ( ξ i , f i ) ≤ i ≤ satisfy Assumptions 3.1, 3.2 and 6.1. Moreover, f is increasing in ( q v ) v ∈ [0 ,T ] , i.e. f ( r, ( q v ) v ∈ [ r,T ] , y, z, ψ ) ≤ f ( r, ( q ′ v ) v ∈ [ r,T ] , y, z, ψ ) for every Thanks to the uniqueness of the solution of (6.1), the same arguments used in Remark 4.3 guarantee thatthe above convergence actually occurs in the entire sequence ( n ). r, y, z, ψ ) ∈ [0 , T ] × R × R × d × L ( E, ν ) and q, q ′ ∈ D [0 , T ] , if q v ≤ q ′ v ∀ v ∈ [ r, T ] . If ξ ≤ ξ a.s. and f ( r, ( q v ) v ∈ [ r,T ] , y, z, ψ ) ≤ f ( r, ( q v ) v ∈ [ r,T ] , y, z, ψ ) d P ⊗ dr -a.e. for every ( q, y, z, ψ ) ∈ D [0 , T ] × R × R × d × L ( E, ν ) , then Y t ≤ Y t ∀ t ∈ [0 , T ] a.s.Proof. Firstly, let us regularize the driver f by f ′ defined as, for every ( r, q, y, z, ψ ), f ′ ( r, ( q v ) r ∈ [ t,T ] , y, z, ψ ) := f (cid:0) r, ( ϕ m ( q v )) v ∈ [ r,T ] , y, z, ψ (cid:1) (6.3)with some truncation level m satisfying m > ( || Y || S ∞ ∨ || Y || S ∞ ). Consider a sequence ofnon-anticipated BSDEs with n ∈ N by Y ,nt = ξ + Z Tt E F r f ′ (cid:0) r, ( Y ,n − v ) v ∈ [ r,T ] , Y ,nr , Z ,nr , ψ ,nr (cid:1) dr − Z Tt Z ,nr dW r − Z Tt Z E ψ ,nr ( e ) e µ ( dr, de ) , t ∈ [0 , T ] (6.4)under the condition Y , = Y . By the proof of Theorem 6.1, there exists h > Y ,n , Z ,n , ψ ,n ) → ( Y , Z , ψ ) in S ∞ × H BMO × J BMO as n → ∞ for the period [ T − h, T ].Note that the constraint ϕ m ( · ) becomes passive at least for large enough n .Firstly, let us focus on the period [ T − h, T ]. Set e f ( r, y, z, ψ ) = E F r f ( r, ( Y v ) v ∈ [ r,T ] , y, z, ψ )and e f ( r, y, z, ψ ) = E F r f ′ ( r, ( Y v ) v ∈ [ r,T ] , y, z, ψ ) = E F r f ( r, ( Y v ) v ∈ [ r,T ] , y, z, ψ ). ApplyingLemma D.1, one obtains Y t = Y , t ≤ Y , t ∀ t ∈ [ T − h, T ] a.s. 
Then, using the new definitions $\widetilde{f}^1(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f'^2(r,(Y^{2,0}_v)_{v\in[r,T]},y,z,\psi)$, $\widetilde{f}^2(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f'^2(r,(Y^{2,1}_v)_{v\in[r,T]},y,z,\psi)$, and the hypothesis that the driver $f^2$ is increasing in $q\in\mathbb{D}[0,T]$, Lemma D.1 yields $Y^{2,1}_t \le Y^{2,2}_t$ $\forall t\in[T-h,T]$ a.s. By repeating the same arguments, one sees $Y^1_t \le Y^{2,n-1}_t \le Y^{2,n}_t$ $\forall t\in[T-h,T]$ a.s. for every $n\in\mathbb{N}$. Since $Y^{2,n}$ converges to $Y^2$ in $\mathcal{S}^\infty[T-h,T]$, one concludes $Y^1_t \le Y^2_t$ $\forall t\in[T-h,T]$ a.s.

Let us now replace $Y^{2,n}_t$ by $Y^2_t$ for all $t\in[T-h,T]$ in (6.4), and consider a sequence of non-anticipated BSDEs, $n\in\mathbb{N}$,
\[
Y^{2,n}_t = Y^2_{T-h} + \int_t^{T-h} \mathbb{E}^{\mathcal{F}_r}\Big[ f'^2\big(r,(Y^{2,n-1}_v)_{v\in[r,T]},Y^{2,n}_r,Z^{2,n}_r,\psi^{2,n}_r\big)\Big]dr - \int_t^{T-h} Z^{2,n}_r\,dW_r - \int_t^{T-h}\!\!\int_E \psi^{2,n}_r(e)\,\widetilde{\mu}(dr,de)
\]
with the initial input
\[
Y^{2,0}_t = \begin{cases} Y^1_t, & t\in[0,T-h), \\ Y^2_t, & t\in[T-h,T], \end{cases}
\]
for the next short period $t\in[T-2h,T-h]$. By the result of the previous step, one has $Y^1_t \le Y^{2,0}_t$ $\forall t\in[T-h,T]$ a.s. Now let us set $\widetilde{f}^1(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f^1(r,(Y^1_v)_{v\in[r,T]},y,z,\psi)$ and $\widetilde{f}^2(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f'^2(r,(Y^{2,0}_v)_{v\in[r,T]},y,z,\psi)$, where the latter is equal to $\mathbb{E}^{\mathcal{F}_r} f^2(r,(Y^{2,0}_v)_{v\in[r,T]},y,z,\psi)$. By applying Lemma D.1 to the data $(Y^1_{T-h},\widetilde{f}^1)$, $(Y^2_{T-h},\widetilde{f}^2)$, one obtains $Y^1_t \le Y^{2,1}_t$ $\forall t\in[T-2h,T-h]$ a.s. Since $Y^{2,1}_t = Y^2_t$ for $t\in[T-h,T]$, one concludes $Y^1_t \le Y^{2,1}_t$ $\forall t\in[T-2h,T]$ a.s. Similarly, applying Lemma D.1 with $\widetilde{f}^1(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f'^2(r,(Y^{2,n-2}_v)_{v\in[r,T]},y,z,\psi)$ and $\widetilde{f}^2(r,y,z,\psi) = \mathbb{E}^{\mathcal{F}_r} f'^2(r,(Y^{2,n-1}_v)_{v\in[r,T]},y,z,\psi)$ yields $Y^{2,n-1}_t \le Y^{2,n}_t$ $\forall t\in[T-2h,T]$ a.s. for every $n\ge 2$. As in the previous step, the proof of Theorem 6.1 implies $Y^{2,n}\to Y^2$ in $\mathcal{S}^\infty[T-2h,T-h]$. Since $Y^{2,n}_t = Y^2_t$ for $t\in[T-h,T]$ by construction, one actually has $Y^{2,n}\to Y^2$ in $\mathcal{S}^\infty[T-2h,T]$. It follows that $Y^1_t\le Y^2_t$ $\forall t\in[T-2h,T]$ a.s. Repeating the same procedures a finite number of times, one obtains the desired result.

A Some preliminary results
Let us recall some important properties of BMO martingales. For our purpose, it is enough to focus on continuous ones. When $Z\in\mathbb{H}^2_{BMO}$, $M_\cdot := \int_0^\cdot Z_r\,dW_r$ is a continuous BMO martingale with $\|M\|_{BMO} = \|Z\|_{\mathbb{H}^2_{BMO}}$.

Lemma A.1 (reverse Hölder inequality). Let $M$ be a continuous BMO martingale. Then the Doléans-Dade exponential $\big(\mathcal{E}_t(M), t\in[0,T]\big)$ is a uniformly integrable martingale and, for every stopping time $\tau\in\mathcal{T}^T_0$, there exists some $r>1$ such that $\mathbb{E}\big[\mathcal{E}_T(M)^r\,\big|\,\mathcal{F}_\tau\big] \le C\,\mathcal{E}_\tau(M)^r$ with some positive constant $C = C(r,\|M\|_{BMO})$.

Proof. See Kazamaki (1979) [19], and also Remark 3.1 of Kazamaki (1994) [20].
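As a numerical aside (not part of the paper), Lemma A.1 can be sanity-checked in the simplest case $M_t = zW_t$ with a constant $z$, where $\mathcal{E}_t(M) = \exp(zW_t - z^2t/2)$ is lognormal: the martingale property gives $\mathbb{E}[\mathcal{E}_T(M)] = 1$, and the reverse Hölder inequality with power $r=2$ is consistent with the closed form $\mathbb{E}[\mathcal{E}_T(M)^2] = e^{z^2T}$. The Monte Carlo sketch below (parameters ad hoc) checks both moments.

```python
import numpy as np

# Monte Carlo sanity check (illustration only): for M_t = z*W_t with a
# constant z, the Doleans-Dade exponential is E_t(M) = exp(z*W_t - z^2 t/2).
# Lemma A.1 asserts E(M) is a true martingale, so E[E_T(M)] = 1; for this
# lognormal example one also has E[E_T(M)^2] = exp(z^2 T) in closed form.
rng = np.random.default_rng(0)
z, T, n_paths = 0.5, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)   # terminal Brownian values
stoch_exp = np.exp(z * W_T - 0.5 * z**2 * T)      # samples of E_T(M)
mean_exp = stoch_exp.mean()                       # should be close to 1
second_moment = (stoch_exp**2).mean()             # should be close to e^{z^2 T}
print(mean_exp, second_moment)
```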
Lemma A.2. Let $M$ be a square-integrable continuous martingale and $\widehat{M} := \langle M\rangle - M$. Then $M\in BMO(\mathbb{P})$ if and only if $\widehat{M}\in BMO(\mathbb{Q})$ with $d\mathbb{Q}/d\mathbb{P} = \mathcal{E}_T(M)$. Furthermore, $\|\widehat{M}\|_{BMO(\mathbb{Q})}$ is determined by some function of $\|M\|_{BMO(\mathbb{P})}$ and vice versa.

Proof. See Theorem 3.3 and Theorem 2.4 in [20].
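To make the measure change in Lemma A.2 concrete, consider again $M_t = zW_t$ with constant $z$, so that $\widehat{M}_t = \langle M\rangle_t - M_t = -z(W_t - zt)$ and, by Girsanov's theorem, $W_t - zt$ is a $\mathbb{Q}$-Brownian motion under $d\mathbb{Q}/d\mathbb{P} = \mathcal{E}_T(M)$. The following sketch (illustration only, ad hoc parameters) checks the first two $\mathbb{Q}$-moments of $W_T - zT$ by importance sampling under $\mathbb{P}$.

```python
import numpy as np

# Illustration only: the measure change behind Lemma A.2 for M_t = z*W_t.
# Under dQ/dP = E_T(M), Girsanov's theorem says W_t - z*t is a Q-Brownian
# motion; we verify its first two terminal moments under Q by reweighting
# P-samples with the Radon-Nikodym density E_T(M).
rng = np.random.default_rng(1)
z, T, n_paths = 0.5, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
weight = np.exp(z * W_T - 0.5 * z**2 * T)   # density E_T(M), sample by sample
shifted = W_T - z * T                       # candidate Q-Brownian terminal value
q_mean = np.mean(weight * shifted)          # should be close to 0
q_var = np.mean(weight * shifted**2)        # should be close to T
print(q_mean, q_var)
```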
Remark A.1. For continuous martingales, Theorem 3.1 of [20] also tells us that there exists some decreasing function $\Phi(r)$ with $\Phi(1+)=\infty$ and $\Phi(\infty)=0$ such that, if $\|M\|_{BMO(\mathbb{P})} < \Phi(r)$, then $\mathcal{E}(M)$ satisfies the reverse Hölder inequality with power $r$. Together with Lemma A.2, this implies that one can take a common positive constant $\bar r$ with $1<\bar r\le r^*$ such that both $\mathcal{E}(M)$ and $\mathcal{E}(\widehat{M})$ satisfy the reverse Hölder inequality with power $\bar r$ under the respective probability measures $\mathbb{P}$ and $\mathbb{Q}$. Furthermore, the upper bound $r^*$ is determined only by $\|M\|_{BMO(\mathbb{P})}$ (or, equivalently, by $\|\widehat{M}\|_{BMO(\mathbb{Q})}$).

Let us also recall the following results.
Lemma A.3 (Chapter 1, Section 9, Lemma 6 of [24]). For any $\Psi\in\mathbb{J}^p$ with $p\ge 2$, there exists some constant $C=C(p)$ such that
\[
\mathbb{E}\Big[\Big(\int_0^T\!\!\int_E |\Psi_r(e)|\,\nu(de)dr\Big)^p\Big] \le C\,\mathbb{E}\Big[\Big(\int_0^T\!\!\int_E |\Psi_r(e)|\,\mu(dr,de)\Big)^p\Big].
\]

Lemma A.4 (Lemma 5-1 of Bichteler, Gravereaux and Jacod (1987) [5]). Let $\eta$ be defined by $\eta(e) = 1\wedge|e|$. Then, for every $p\ge 2$, there exists a constant $\delta_p$ depending on $p,T,n,k$ such that
\[
\mathbb{E}\Big[\sup_{t\in[0,T]}\Big|\int_0^t\!\!\int_E U(s,e)\,\widetilde{\mu}(ds,de)\Big|^p\Big] \le \delta_p \int_0^T \mathbb{E}\big[|L_s|^p\big]\,ds
\]
if $U$ is an $\mathbb{R}^{n\times k}$-valued $\mathcal{P}\otimes\mathcal{E}$-measurable function on $\Omega\times[0,T]\times E$ and $L$ is a predictable process satisfying $|U^i_\cdot(\omega,s,e)| \le L_s(\omega)\eta(e)$ for each column $1\le i\le k$.

B Technical details omitted in the main text
In the main text, we have omitted some technical details in order not to interrupt the main story. In this section, let us provide the omitted details for completeness.

B.1 Details of the proof of Lemma 3.1
By assumption, we have $Y\in\mathcal{S}^\infty$. Since $\|\psi\|_{\mathbb{J}^\infty} \le 2\|Y\|_{\mathcal{S}^\infty}$, $\psi$ is bounded. Itô's formula applied to $e^{2\gamma Y_t}$ yields, for any $\mathbb{F}$-stopping time $\tau\in\mathcal{T}^T_0$,
\begin{align*}
&\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T e^{2\gamma Y_s}\,2\gamma^2|Z_s|^2\,ds + \int_\tau^T\!\!\int_E e^{2\gamma Y_s}\big(e^{2\gamma\psi_s(e)} - 1 - 2\gamma\psi_s(e)\big)\nu(de)ds\Big] \\
&= \mathbb{E}^{\mathcal{F}_\tau}\Big[ e^{2\gamma Y_T} - e^{2\gamma Y_\tau} + 2\gamma\int_\tau^T e^{2\gamma Y_s}\Big(\mathbb{E}^{\mathcal{F}_s} f\big(s,(Y_v)_{v\in[s,T]},Y_s,Z_s,\psi_s\big) - \int_E j_\gamma(\psi_s(e))\nu(de)\Big)ds\Big] \\
&\le \mathbb{E}^{\mathcal{F}_\tau}\Big[ e^{2\gamma Y_T} - e^{2\gamma Y_\tau} + 2\gamma\int_\tau^T e^{2\gamma Y_s}\Big( l_s + \delta\|Y\|_{[s,T]} + \beta|Y_s| + \frac{\gamma}{2}|Z_s|^2\Big)ds\Big],
\end{align*}
where the structure condition in Assumption 3.1 was used in the third line. It then yields
\[
\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T e^{2\gamma Y_s}\gamma^2|Z_s|^2\,ds + \int_\tau^T\!\!\int_E e^{2\gamma Y_s}\big(e^{2\gamma\psi_s(e)} - 1 - 2\gamma\psi_s(e)\big)\nu(de)ds\Big] \le e^{2\gamma\|Y\|_{\mathcal{S}^\infty}} + 2\gamma e^{2\gamma\|Y\|_{\mathcal{S}^\infty}} T\big(\|l\|_{\mathcal{S}^\infty} + (\beta+\delta)\|Y\|_{\mathcal{S}^\infty}\big).
\]
Since $e^{-2\gamma\|Y\|_{\mathcal{S}^\infty}} \le e^{\pm 2\gamma Y} \le e^{2\gamma\|Y\|_{\mathcal{S}^\infty}}$,
\[
\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T \gamma^2|Z_s|^2\,ds + \int_\tau^T\!\!\int_E \big(e^{2\gamma\psi_s(e)} - 1 - 2\gamma\psi_s(e)\big)\nu(de)ds\Big] \le e^{4\gamma\|Y\|_{\mathcal{S}^\infty}} + 2\gamma e^{4\gamma\|Y\|_{\mathcal{S}^\infty}} T\big(\|l\|_{\mathcal{S}^\infty} + (\beta+\delta)\|Y\|_{\mathcal{S}^\infty}\big).
\]
In particular, this leads to the desired bound on $\|Z\|_{\mathbb{H}^2_{BMO}}$. Repeating the same calculation for $e^{-2\gamma Y_t}$, one obtains the next estimate:
\[
\mathbb{E}^{\mathcal{F}_\tau}\Big[\int_\tau^T \gamma^2|Z_s|^2\,ds + \int_\tau^T\!\!\int_E \big(e^{-2\gamma\psi_s(e)} - 1 + 2\gamma\psi_s(e)\big)\nu(de)ds\Big] \le e^{4\gamma\|Y\|_{\mathcal{S}^\infty}} + 2\gamma e^{4\gamma\|Y\|_{\mathcal{S}^\infty}} T\big(\|l\|_{\mathcal{S}^\infty} + (\beta+\delta)\|Y\|_{\mathcal{S}^\infty}\big).
\]
Noticing the fact that $(e^x - 1 - x) + (e^{-x} - 1 + x) \ge x^2$, $\forall x\in\mathbb{R}$, one obtains
\[
\|\psi\|^2_{\mathbb{J}^2_B} \le \frac{e^{4\gamma\|Y\|_{\mathcal{S}^\infty}}}{2\gamma^2}\Big(1 + 2\gamma T\big(\|l\|_{\mathcal{S}^\infty} + (\beta+\delta)\|Y\|_{\mathcal{S}^\infty}\big)\Big).
\]
Finally, the relation $\|\psi\|^2_{\mathbb{J}^2_{BMO}} \le \|\psi\|^2_{\mathbb{J}^2_B} + \|\psi\|^2_{\mathbb{J}^\infty} \le \|\psi\|^2_{\mathbb{J}^2_B} + 4\|Y\|^2_{\mathcal{S}^\infty}$ proves the desired estimate on $\|\psi\|_{\mathbb{J}^2_{BMO}}$.

B.2 Derivation of (3.4)
Using (3.3), one gets
\begin{align*}
dP_t &= P_{t-}\gamma\, d\Big( e^{\beta t}|Y_t| + \int_0^t e^{\beta r}\big(l_r + \delta\,\mathbb{E}^{\mathcal{F}_r}\sup_{v\in[r,T]}|Y_v|\big)dr\Big) + \frac{1}{2}P_t\gamma^2\big|e^{\beta t}\mathrm{sign}(Y_t)Z_t\big|^2 dt \\
&\quad + P_{t-}\int_E \Big( e^{\gamma e^{\beta t}(|Y_{t-}+\psi_t(e)|-|Y_{t-}|)} - 1 - \gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\Big)\mu(dt,de) \\
&= P_{t-}\int_E \Big( e^{\gamma e^{\beta t}(|Y_{t-}+\psi_t(e)|-|Y_{t-}|)} - 1 - \gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\Big)\mu(dt,de) + \frac{1}{2}P_t\gamma^2\big|e^{\beta t}\mathrm{sign}(Y_t)Z_t\big|^2 dt \\
&\quad + P_{t-}\gamma\Big\{ e^{\beta t}\mathrm{sign}(Y_t)Z_t\,dW_t + \int_E e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\,\widetilde{\mu}(dt,de) - \int_E j_\gamma\big(e^{\beta t}\mathrm{sign}(Y_t)\psi_t(e)\big)\nu(de)dt \\
&\qquad - \frac{\gamma}{2}\big|e^{\beta t}\mathrm{sign}(Y_t)Z_t\big|^2 dt + dC_t\Big\}.
\end{align*}
Separating the terms contained in $dC'_t$ of (3.5) and canceling the $|Z|^2$-terms, one obtains
\begin{align*}
dP_t &= P_{t-}\Big\{\gamma\,dC_t + \int_E\Big( e^{\gamma e^{\beta t}(|Y_{t-}+\psi_t(e)|-|Y_{t-}|)} - e^{\gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)}\Big)\mu(dt,de)\Big\} \\
&\quad + P_{t-}\int_E\Big( e^{\gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)} - 1 - \gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\Big)\mu(dt,de) \\
&\quad + P_{t-}\gamma\Big\{ e^{\beta t}\mathrm{sign}(Y_t)Z_t\,dW_t + \int_E e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\,\widetilde{\mu}(dt,de) - \int_E j_\gamma\big(e^{\beta t}\mathrm{sign}(Y_t)\psi_t(e)\big)\nu(de)dt\Big\}.
\end{align*}
Notice that the terms inside the parentheses in the second line are equal to $\gamma j_\gamma\big(e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\big)$, which then yields
\begin{align*}
dP_t &= P_{t-}dC'_t + P_{t-}\int_E \gamma j_\gamma\big(e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\big)\widetilde{\mu}(dt,de) \\
&\quad + P_{t-}\Big\{ \gamma e^{\beta t}\mathrm{sign}(Y_t)Z_t\,dW_t + \int_E \gamma e^{\beta t}\mathrm{sign}(Y_{t-})\psi_t(e)\,\widetilde{\mu}(dt,de)\Big\}.
\end{align*}
Using the definition of $j_\gamma(\cdot)$, one obtains the desired expression (3.4).

B.3 The proof of Lemma 4.1
The existence of a unique solution $X^{t,x}\in\mathcal{S}^p$, $\forall p\ge 2$, $\forall (t,x)\in[0,T]\times\mathbb{R}^n$, is well known for Lipschitz SDEs with jumps. Hence, we only provide a proof of the relevant continuities below.

(a) For any $s\in[t,T]$ and $p\ge 2$, the BDG inequality yields
\[
\mathbb{E}\big[|X^{t,x}_s|^p\big] \le C\,\mathbb{E}\Big\{ |x|^p + \Big(\int_t^s |b(r,X^{t,x}_r)|dr\Big)^p + \Big(\int_t^s |\sigma(r,X^{t,x}_r)|^2 dr\Big)^{p/2} + \sup_{u\in[t,s]}\Big|\int_t^u\!\!\int_E \gamma(r,X^{t,x}_{r-},e)\,\widetilde{\mu}(dr,de)\Big|^p\Big\}.
\]
For each $1\le i\le k$, we have $|\gamma^i(r,X^{t,x}_{r-},e)| \le K(1+|X^{t,x}_{r-}|)\eta(e)$ by Assumption 4.1 (ii) and (iii). Lemma A.4 and the Lipschitz continuity then yield
\[
\mathbb{E}\big[|X^{t,x}_s|^p\big] \le C(1+|x|^p) + C\int_t^s \mathbb{E}\big[|X^{t,x}_r|^p\big]dr,
\]
and hence $\sup_{s\in[t,T]}\mathbb{E}[|X^{t,x}_s|^p] \le C(1+|x|^p)$ by Gronwall's inequality. Noticing the fact that $X^{t,x}_s\equiv x$ for $s\le t$ and applying the BDG inequality once again, one obtains
\[
\mathbb{E}\Big[\sup_{s\in[0,T]}|X^{t,x}_s|^p\Big] \le C(1+|x|^p).
\]
(b) Let us assume $t\le s\le u\le s+h$. The case with $s<t$ can be handled similarly by using $X^{t,x}_s\equiv x$ for $s\le t$. Since
\[
X^{t,x}_u - X^{t,x}_s = \int_s^u b(r,X^{t,x}_r)dr + \int_s^u \sigma(r,X^{t,x}_r)dW_r + \int_s^u\!\!\int_E \gamma(r,X^{t,x}_{r-},e)\,\widetilde{\mu}(dr,de),
\]
using the BDG inequality, Lemma A.4 and the result (a), one obtains
\[
\mathbb{E}\Big[\sup_{u\in[s,s+h]}|X^{t,x}_u - X^{t,x}_s|^p\Big] \le C\Big(1 + \mathbb{E}\Big[\sup_{r\in[t,T]}|X^{t,x}_r|^p\Big]\Big)h \le C(1+|x|^p)h,
\]
which gives the desired result.

(c) Without loss of generality, we assume $0\le t'\le t\le T$. We separate the problem into three cases with respect to the range of $s$. Firstly, we clearly have
\[
\mathbb{E}\sup_{0\le s\le t'}|X^{t,x}_s - X^{t',x'}_s|^p \le |x-x'|^p.
\]
Secondly, let us consider
\[
\mathbb{E}\sup_{t'\le s\le t}|X^{t,x}_s - X^{t',x'}_s|^p = \mathbb{E}\sup_{t'\le s\le t}|x - X^{t',x'}_s|^p \le C\,\mathbb{E}\sup_{t'\le s\le t}\big(|x' - X^{t',x'}_s|^p + |x-x'|^p\big) \le C\big(|x-x'|^p + (1+|x'|^p)|t-t'|\big),
\]
where, in the last inequality, we have used the result (b). Finally, we consider the case $s\ge t$.
Note that
\[
X^{t',x'}_s = X^{t',x'}_t + \int_t^s b(r,X^{t',x'}_r)dr + \int_t^s \sigma(r,X^{t',x'}_r)dW_r + \int_t^s\!\!\int_E \gamma(r,X^{t',x'}_{r-},e)\,\widetilde{\mu}(dr,de)
\]
and hence
\begin{align*}
X^{t,x}_s - X^{t',x'}_s &= x - x' - (X^{t',x'}_t - x') + \int_t^s \big[b(r,X^{t,x}_r) - b(r,X^{t',x'}_r)\big]dr \\
&\quad + \int_t^s \big[\sigma(r,X^{t,x}_r) - \sigma(r,X^{t',x'}_r)\big]dW_r + \int_t^s\!\!\int_E \big[\gamma(r,X^{t,x}_{r-},e) - \gamma(r,X^{t',x'}_{r-},e)\big]\widetilde{\mu}(dr,de).
\end{align*}
Therefore,
\begin{align*}
\mathbb{E}\Big[\sup_{s\in[t,T]}|X^{t,x}_s - X^{t',x'}_s|^p\Big] &\le C\,\mathbb{E}\Big\{ |x-x'|^p + |X^{t',x'}_t - x'|^p + \Big(\int_t^T |b(r,X^{t,x}_r) - b(r,X^{t',x'}_r)|dr\Big)^p \\
&\quad + \Big(\int_t^T |\sigma(r,X^{t,x}_r) - \sigma(r,X^{t',x'}_r)|^2 dr\Big)^{p/2} + \int_t^T |X^{t,x}_r - X^{t',x'}_r|^p dr\Big\} \\
&\le C\big(|x-x'|^p + (1+|x'|^p)|t-t'|\big) + C\int_t^T \mathbb{E}\Big[\sup_{s\in[r,T]}|X^{t,x}_s - X^{t',x'}_s|^p\Big]dr,
\end{align*}
where, in the last inequality, the result (b) was used. Using the backward Gronwall inequality (namely, if $g(u) \le a + C\int_u^T g(r)dr$ for all $u\in[t,T]$, then $g(u) \le a\,e^{C(T-u)}$) applied to $g(u) := \mathbb{E}[\sup_{s\in[u,T]}|X^{t,x}_s - X^{t',x'}_s|^p]$, one obtains
\[
\mathbb{E}\Big[\sup_{s\in[t,T]}|X^{t,x}_s - X^{t',x'}_s|^p\Big] \le C\big(|x-x'|^p + (1+|x'|^p)|t-t'|\big).
\]
Adding the above three cases and flipping the roles of $t$ and $t'$, one obtains in general
\[
\mathbb{E}\Big[\sup_{s\in[0,T]}|X^{t,x}_s - X^{t',x'}_s|^p\Big] \le C\Big(|x-x'|^p + \big(1 + (|x|\vee|x'|)^p\big)|t-t'|\Big).
\]

C Existence and uniqueness results for the Lipschitz case
Anticipated BSDEs under the global Lipschitz condition have been studied by many authors. Our setup is a bit different from the standard one, in particular in the terminal condition and in the fact that the continuity of the driver is defined with respect to the uniform norm of the path rather than the $\mathbb{L}^2[0,T]$-norm. For the readers' convenience, we provide a proof under our particular setup, restricted to the simplest form relevant for our purpose. One can readily generalize it to multi-dimensional setups with future $(Z,\psi)$-dependence (see [28] among others).

Let us consider the ABSDE, for $t\in[0,T]$,
\[
Y_t = \xi + \int_t^T \mathbb{E}^{\mathcal{F}_r} f\big(r,(Y_v)_{v\in[r,T]},Y_r,Z_r,\psi_r\big)dr - \int_t^T Z_r\,dW_r - \int_t^T\!\!\int_E \psi_r(e)\,\widetilde{\mu}(dr,de), \tag{C.1}
\]
where $f:\Omega\times[0,T]\times\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)\to\mathbb{R}$ and $\xi$ is an $\mathcal{F}_T$-measurable random variable.

Assumption C.1. (i) The driver $f$ is a map such that, for every $(y,z,\psi)\in\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$ and any càdlàg $\mathbb{F}$-adapted process $(Y_v)_{v\in[0,T]}$, the process $\big(\mathbb{E}^{\mathcal{F}_t} f(t,(Y_v)_{v\in[t,T]},y,z,\psi),\ t\in[0,T]\big)$ is progressively measurable.
(ii) For every $(q,y,z,\psi),(q',y',z',\psi')\in\mathbb{D}[0,T]\times\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$, there exists some positive constant $K$ such that
\[
\big| f\big(t,(q_v)_{v\in[t,T]},y,z,\psi\big) - f\big(t,(q'_v)_{v\in[t,T]},y',z',\psi'\big)\big| \le K\Big( \sup_{v\in[t,T]}|q_v - q'_v| + |y-y'| + |z-z'| + \|\psi-\psi'\|_{\mathbb{L}^2(\nu)}\Big)
\]
$d\mathbb{P}\otimes dt$-a.e. $(\omega,t)\in\Omega\times[0,T]$.
(iii) $\mathbb{E}\big[|\xi|^2 + \big(\int_0^T |f(r,0,0,0,0)|dr\big)^2\big] < \infty$.

Proposition C.1. Under Assumption C.1, there exists a unique solution $(Y,Z,\psi)\in\mathcal{S}^2\times\mathbb{H}^2\times\mathbb{J}^2$ to the ABSDE (C.1).

Proof.
We prove the claim by constructing a strictly contracting map $\Phi:\mathcal{K}^2[0,T]\ni(Y^k,Z^k,\psi^k)\mapsto\Phi(Y^k,Z^k,\psi^k) =: (Y^{k+1},Z^{k+1},\psi^{k+1})\in\mathcal{K}^2[0,T]$ defined by
\[
Y^{k+1}_t = \xi + \int_t^T \mathbb{E}^{\mathcal{F}_r} f\big(r,(Y^k_v)_{v\in[r,T]},Y^k_r,Z^k_r,\psi^k_r\big)dr - \int_t^T Z^{k+1}_r\,dW_r - \int_t^T\!\!\int_E \psi^{k+1}_r(e)\,\widetilde{\mu}(dr,de)
\]
with $k\in\mathbb{N}$ and $(Y^0,Z^0,\psi^0)\equiv(0,0,0)$. Put $\delta Y^{k+1} := Y^{k+1}-Y^k$, $\delta Z^{k+1} := Z^{k+1}-Z^k$, $\delta\psi^{k+1} := \psi^{k+1}-\psi^k$ and $\Theta^k := (Y^k,Z^k,\psi^k)$. We consider the norm $\|\cdot\|_{\mathcal{K}^2_\beta}$, equivalent to $\|\cdot\|_{\mathcal{K}^2}$, defined with some $\beta>0$ by
\[
\|(Y,Z,\psi)\|^2_{\mathcal{K}^2_\beta} := \mathbb{E}\Big[\sup_{r\in[0,T]}|e^{\beta r}Y_r|^2\Big] + \mathbb{E}\int_0^T |e^{\beta r}Z_r|^2 dr + \mathbb{E}\int_0^T \|e^{\beta r}\psi_r\|^2_{\mathbb{L}^2(\nu)}dr.
\]
Applying Itô's formula to $e^{2\beta t}|\delta Y^{k+1}_t|^2$, one obtains, for any $t\in[0,T]$,
\begin{align}
&e^{2\beta t}|\delta Y^{k+1}_t|^2 + \int_t^T e^{2\beta r}|\delta Z^{k+1}_r|^2 dr + \int_t^T\!\!\int_E e^{2\beta r}|\delta\psi^{k+1}_r(e)|^2\mu(dr,de) \nonumber\\
&= \int_t^T e^{2\beta r}\Big( 2\delta Y^{k+1}_r\,\mathbb{E}^{\mathcal{F}_r}\big[ f(r,(Y^k_v)_{v\in[r,T]},\Theta^k_r) - f(r,(Y^{k-1}_v)_{v\in[r,T]},\Theta^{k-1}_r)\big] - 2\beta|\delta Y^{k+1}_r|^2\Big)dr \nonumber\\
&\quad - 2\int_t^T e^{2\beta r}\delta Y^{k+1}_r\delta Z^{k+1}_r\,dW_r - 2\int_t^T\!\!\int_E e^{2\beta r}\delta Y^{k+1}_{r-}\delta\psi^{k+1}_r(e)\,\widetilde{\mu}(dr,de). \tag{C.2}
\end{align}
For any $\epsilon>0$, one has
\begin{align*}
&2\delta Y^{k+1}_r\,\mathbb{E}^{\mathcal{F}_r}\big[ f(r,(Y^k_v)_{v\in[r,T]},\Theta^k_r) - f(r,(Y^{k-1}_v)_{v\in[r,T]},\Theta^{k-1}_r)\big] - 2\beta|\delta Y^{k+1}_r|^2 \\
&\le 2K|\delta Y^{k+1}_r|\Big( \mathbb{E}^{\mathcal{F}_r}\big[\|\delta Y^k\|_{[r,T]}\big] + |\delta Z^k_r| + \|\delta\psi^k_r\|_{\mathbb{L}^2(\nu)}\Big) - 2\beta|\delta Y^{k+1}_r|^2 \\
&\le \Big(\frac{3K^2}{\epsilon} - 2\beta\Big)|\delta Y^{k+1}_r|^2 + \epsilon\Big( \mathbb{E}^{\mathcal{F}_r}\big[\|\delta Y^k\|_{[r,T]}\big]^2 + |\delta Z^k_r|^2 + \|\delta\psi^k_r\|^2_{\mathbb{L}^2(\nu)}\Big).
\end{align*}
Thus, choosing $\beta = \beta(\epsilon) = 3K^2/(2\epsilon)$ and taking the expectation with $t=0$ yields
\[
\|e^{\beta\cdot}\delta Z^{k+1}\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^{k+1}\|^2_{\mathbb{J}^2} \le \epsilon\Big( T\|e^{\beta\cdot}\delta Y^k\|^2_{\mathcal{S}^2} + \|e^{\beta\cdot}\delta Z^k\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^k\|^2_{\mathbb{J}^2}\Big). \tag{C.3}
\]
Next, let us apply the BDG inequality (Theorem 48 in IV.4 of [33]) to (C.2). Then there exists some constant $C$ such that
\begin{align*}
\mathbb{E}\Big[\|e^{\beta\cdot}\delta Y^{k+1}\|^2_{[0,T]}\Big] &\le \epsilon\Big( T\|e^{\beta\cdot}\delta Y^k\|^2_{\mathcal{S}^2} + \|e^{\beta\cdot}\delta Z^k\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^k\|^2_{\mathbb{J}^2}\Big) \\
&\quad + C\,\mathbb{E}\Big[\Big(\int_0^T |e^{\beta r}\delta Y^{k+1}_r|^2|e^{\beta r}\delta Z^{k+1}_r|^2 dr\Big)^{1/2}\Big] + C\,\mathbb{E}\Big[\Big(\int_0^T\!\!\int_E |e^{\beta r}\delta Y^{k+1}_{r-}|^2|e^{\beta r}\delta\psi^{k+1}_r(e)|^2\mu(dr,de)\Big)^{1/2}\Big] \\
&\le \epsilon\Big( T\|e^{\beta\cdot}\delta Y^k\|^2_{\mathcal{S}^2} + \|e^{\beta\cdot}\delta Z^k\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^k\|^2_{\mathbb{J}^2}\Big) + \frac{1}{2}\mathbb{E}\Big[\|e^{\beta\cdot}\delta Y^{k+1}\|^2_{[0,T]}\Big] + C\Big(\|e^{\beta\cdot}\delta Z^{k+1}\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^{k+1}\|^2_{\mathbb{J}^2}\Big).
\end{align*}
Hence, with a new constant $C$ (which is independent of $\epsilon$, $\beta$),
\[
\|e^{\beta\cdot}\delta Y^{k+1}\|^2_{\mathcal{S}^2} \le \epsilon\Big( T\|e^{\beta\cdot}\delta Y^k\|^2_{\mathcal{S}^2} + \|e^{\beta\cdot}\delta Z^k\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^k\|^2_{\mathbb{J}^2}\Big) + C\Big(\|e^{\beta\cdot}\delta Z^{k+1}\|^2_{\mathbb{H}^2} + \|e^{\beta\cdot}\delta\psi^{k+1}\|^2_{\mathbb{J}^2}\Big).
\]
Combining with (C.3), one obtains
\[
\big\|(\delta Y^{k+1},\delta Z^{k+1},\delta\psi^{k+1})\big\|^2_{\mathcal{K}^2_{\beta(\epsilon)}} \le \epsilon(C+3)(T\vee 1)\big\|(\delta Y^k,\delta Z^k,\delta\psi^k)\big\|^2_{\mathcal{K}^2_{\beta(\epsilon)}},
\]
and hence choosing $\epsilon$ so that $\epsilon(C+3)(T\vee 1) < 1$ (and $\beta(\epsilon)$ accordingly) makes the map $\Phi$ a strict contraction with respect to the norm $\mathcal{K}^2_{\beta(\epsilon)}$. This proves the existence as well as the uniqueness.

D Comparison principle for non-anticipated settings
Consider the two BSDEs with $i\in\{1,2\}$,
\[
Y^i_t = \xi^i + \int_t^T \widetilde{f}^i(r,Y^i_r,Z^i_r,\psi^i_r)dr - \int_t^T Z^i_r\,dW_r - \int_t^T\!\!\int_E \psi^i_r(e)\,\widetilde{\mu}(dr,de) \tag{D.1}
\]
for $t\in[0,T]$.

Lemma D.1.
Suppose $(\xi^i,\widetilde{f}^i)_{1\le i\le 2}$ satisfy Assumptions 3.1, 3.2 and 4.1 of [16], which correspond to Assumptions 3.1, 3.2 and 6.1 of the current paper without the future path dependence on $Y$, respectively. If $\xi^1\le\xi^2$ a.s. and $\widetilde{f}^1(r,y,z,\psi) \le \widetilde{f}^2(r,y,z,\psi)$ $d\mathbb{P}\otimes dr$-a.e. for every $(y,z,\psi)\in\mathbb{R}\times\mathbb{R}^{1\times d}\times\mathbb{L}^2(E,\nu)$, then $Y^1_t\le Y^2_t$ $\forall t\in[0,T]$ a.s.

Proof. One can prove the claim in the same way as Theorem 2.5 of [35]. By Theorem 4.1 of [16], there exists a unique solution $(Y^i,Z^i,\psi^i)_{1\le i\le 2}\in\mathcal{S}^\infty\times\mathbb{H}^2_{BMO}\times\mathbb{J}^2_{BMO}$ to the BSDEs (D.1) satisfying the universal bounds. Let us put $\delta Y := Y^1-Y^2$, $\delta Z := Z^1-Z^2$, $\delta\psi := \psi^1-\psi^2$ and $\delta\widetilde{f}(r) := (\widetilde{f}^1-\widetilde{f}^2)(r,Y^2_r,Z^2_r,\psi^2_r)$. We also introduce the two progressively measurable processes $(a_r)_{r\in[0,T]}$, $(b_r)_{r\in[0,T]}$ given by
\[
a_r := \frac{\widetilde{f}^1(r,Y^1_r,Z^1_r,\psi^1_r) - \widetilde{f}^1(r,Y^2_r,Z^1_r,\psi^1_r)}{\delta Y_r}\,\mathbb{1}_{\{\delta Y_r\neq 0\}}, \qquad
b_r := \frac{\widetilde{f}^1(r,Y^2_r,Z^1_r,\psi^1_r) - \widetilde{f}^1(r,Y^2_r,Z^2_r,\psi^1_r)}{|\delta Z_r|^2}\,\mathbb{1}_{\{\delta Z_r\neq 0\}}\,\delta Z_r^\top.
\]
Note that $a\in\mathcal{S}^\infty$ and $b\in\mathbb{H}^2_{BMO}$ due to the universal bounds and the local Lipschitz continuity. By Assumption 4.1 of [16], which is the $A_\Gamma$-condition, there exists a $\mathcal{P}\otimes\mathcal{E}$-measurable process $\Gamma$ such that
\[
\delta Y_t \le \delta\xi + \int_t^T\Big( \delta\widetilde{f}(r) + a_r\delta Y_r + b_r\delta Z_r + \int_E \Gamma_r(e)\delta\psi_r(e)\nu(de)\Big)dr - \int_t^T \delta Z_r\,dW_r - \int_t^T\!\!\int_E \delta\psi_r(e)\,\widetilde{\mu}(dr,de) \tag{D.2}
\]
satisfying $C^1(1\wedge|e|) \le \Gamma_r(e) \le C^2(1\wedge|e|)$ with some constants $C^2>0$ and $-1<C^1\le 0$. Here the fact that $Y^i\in\mathcal{S}^\infty$, $\psi^i\in\mathbb{J}^\infty$ was used. Since $M := \int_0^\cdot b_r\,dW_r + \int_0^\cdot\!\!\int_E \Gamma_r(e)\,\widetilde{\mu}(dr,de)$ is a BMO martingale with jump sizes strictly bigger than $-1$, one can define an equivalent probability measure $\mathbb{Q}$ by $d\mathbb{Q}/d\mathbb{P} = \mathcal{E}_T(M)$. Thus one obtains from (D.2)
\[
\delta Y_t \le \mathbb{E}^{\mathbb{Q}}_{\mathcal{F}_t}\Big[ e^{R_{t,T}}\delta\xi + \int_t^T e^{R_{t,r}}\delta\widetilde{f}(r)dr\Big]
\]
with $R_{t,s} := \int_t^s a_r\,dr$. Since $\delta\xi\le 0$ and $\delta\widetilde{f}(r)\le 0$, this proves the claim.

Acknowledgement
This research is partially supported by the Center for Advanced Research in Finance (CARF).
References

[1] Antonelli, F. and Mancini, C., 2016, Solutions of BSDE's with jumps and quadratic/locally Lipschitz generator, Stochastic Processes and their Applications, 126, pp. 3124-3144.
[2] Barles, G., Buckdahn, R. and Pardoux, E., 1997, Backward stochastic differential equations and integral-partial differential equations, Stochastics and Stochastics Reports, Vol. 60, pp. 57-83.
[3] Barrieu, P. and El Karoui, N., 2013, Monotone stability of quadratic semimartingales with applications to unbounded general quadratic BSDEs, The Annals of Probability, Vol. 41, No. 3B, 1831-1863.
[4] Becherer, D., 2006, Bounded solutions to backward SDE's with jumps for utility optimization and indifference pricing, The Annals of Applied Probability, Vol. 16, No. 4, 2027-2054.
[5] Bichteler, K., Gravereaux, J. and Jacod, J., 1987, Malliavin Calculus for Processes with Jumps, Stochastics Monographs, Gordon and Breach Science Publishers.
[6] Bismut, J.M., 1973, Conjugate convex functions in optimal stochastic control, J. Math. Anal. Appl., 44, 384-404.
[7] Cohen, S. and Elliott, R., 2015, Quadratic BSDEs, in Stochastic Calculus and Applications (2nd edition), Appendix A.9, 619-634, Birkhäuser, Springer, NY.
[8] Cvitanić, J. and Zhang, J., 2013, Contract Theory in Continuous-Time Models, Springer, Berlin.
[9] Delong, L., 2013, Backward Stochastic Differential Equations with Jumps and Their Actuarial and Financial Applications, Springer-Verlag.
[10] El Karoui, N., Matoussi, A. and Ngoupeyou, A., 2016, Quadratic exponential semimartingales and application to BSDEs with jumps, arXiv:1603.0691.
[11] El Karoui, N., Peng, S. and Quenez, M.C., 1997, Backward stochastic differential equations in finance, Mathematical Finance, Vol. 7, No. 1, 1-71.
[12] Epstein, L. and Zin, S., 1989, Substitution, risk aversion and temporal behavior of consumption and asset returns: A theoretical framework, Econometrica, 57, 937-969.
[13] Evans, L.C., 2010, Partial Differential Equations (second edition), Graduate Studies in Mathematics, Vol. 19, American Mathematical Society.
[14] Fromm, A. and Imkeller, P., 2013, Existence, uniqueness and regularity of decoupling fields to multidimensional fully coupled FBSDEs, arXiv:1310.0499.
[15] Fromm, A., 2014, Theory and applications of decoupling fields for forward-backward stochastic differential equations, Ph.D. thesis, Humboldt-Universität zu Berlin.
[16] Fujii, M. and Takahashi, A., 2017, Quadratic-exponential growth BSDEs with jumps and their Malliavin's differentiability, Stochastic Processes and their Applications, in press, https://doi.org/10.1016/j.spa.2017.09.002.
[17] He, S., Wang, J. and Yang, J., 1992, Semimartingale Theory and Stochastic Calculus, Science Press and CRC Press, Beijing.
[18] Jeanblanc, M., Lim, T. and Agram, N., 2016, Some existence results for advanced backward stochastic differential equations with a jump time, HAL archives.
[19] Kazamaki, N., 1979, A sufficient condition for the uniform integrability of exponential martingales, Math. Rep. Toyama Univ., 2, 1-11. MR-0542374.
[20] Kazamaki, N., 1994, Continuous Exponential Martingales and BMO, Lecture Notes in Mathematics, Vol. 1579, Springer-Verlag, Berlin.
[21] Kazi-Tani, N., Possamai, D. and Zhou, C., Quadratic BSDEs with jumps: a fixed-point approach, Electronic Journal of Probability, 20, No. 66, 1-28.
[22] Klenke, A., 2014, Probability Theory (2nd edition), Springer.
[23] Kobylanski, M., 2000, Backward stochastic differential equations and partial differential equations with quadratic growth, The Annals of Probability, Vol. 28, No. 2, 558-602.
[24] Liptser, R.Sh. and Shiryayev, A.N., 1989, Theory of Martingales, Kluwer Academic Publishers, Netherlands.
[25] Morlais, M-A., 2010, A new existence result for quadratic BSDEs with jumps with application to the utility maximization problem, Stochastic Processes and their Applications, 120, 1966-1995.
[26] Morlais, M-A., 2009, Utility maximization in a jump market model, Stochastics, Vol. 81, No. 1, 1-27.
[27] Ngoupeyou, A.B., 2010, Optimisation des portefeuilles d'actifs soumis au risque de défaut, Ph.D. thesis, Université d'Evry.
[28] Oksendal, B., Sulem, A. and Zhang, T., 2011, Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations, Adv. Appl. Prob., 43, 572-596.
[29] Pamen, O.M., 2015, Optimal control for stochastic delay systems under model uncertainty: a stochastic differential game approach, J. Optim. Theory Appl., 167, 998-1031.
[30] Pardoux, E. and Peng, S., 1990, Adapted solution of a backward stochastic differential equation, Systems Control Lett., 14, 55-61.
[31] Pardoux, E. and Rascanu, A., 2014, Stochastic Differential Equations, Backward SDEs, Partial Differential Equations, Springer International Publishing, Switzerland.
[32] Peng, S. and Yang, Z., 2009, Anticipated backward stochastic differential equations, The Annals of Probability, Vol. 37, No. 3, 877-902.
[33] Protter, P., 2005, Stochastic Integration and Differential Equations (2nd edition, version 2.1), Springer, NY.
[34] Royden, H.L. and Fitzpatrick, 2010, Real Analysis (fourth edition), Prentice Hall, U.S.
[35] Royer, M., 2006, Backward stochastic differential equations with jumps and related non-linear expectations, Stochastic Processes and their Applications, 116, 1358-1376.
[36] Yang, Z. and Elliott, R.J., 2013, Some properties of generalized anticipated backward stochastic differential equations, Electron. Commun. Probab., 18, No. 63, 1-10.
[37] Xu, X.M., 2011,