arXiv [math.PR], Dec

Davie's type uniqueness for a class of SDEs with jumps
Enrico Priola∗
Dipartimento di Matematica “Giuseppe Peano”, Università di Torino, via Carlo Alberto 10, Torino, Italy
Abstract:
A result of A. M. Davie [Int. Math. Res. Not. 2007] states that a multidimensional stochastic equation dX_t = b(t, X_t) dt + dW_t, X_0 = x, driven by a Wiener process W = (W_t) with a coefficient b which is only bounded and measurable has a unique solution for almost all choices of the driving Wiener path. We consider a similar problem when W is replaced by a Lévy process L = (L_t) and b is β-Hölder continuous in the space variable, β ∈ (0, 1], assuming in addition that L has a finite moment of order θ, for some θ > 0. Using also a new càdlàg regularity result for strong solutions, we prove that strong existence and uniqueness for the SDE, together with L^p-Lipschitz continuity of the strong solution with respect to x, imply a Davie's type uniqueness result for almost all choices of the Lévy path. We apply this result to a class of SDEs driven by non-degenerate α-stable Lévy processes, α ∈ (0, 2) and β > 1 − α/2.

Keywords: stochastic differential equations - Lévy processes - path-by-path uniqueness - Hölder continuous drift.
Mathematics Subject Classification (2010):

1 Introduction
In [8] A. M. Davie proved that an SDE dX_t = b(t, X_t) dt + dW_t, X_0 = x ∈ R^d, driven by a Wiener process W and having a coefficient b which is only bounded and measurable has a unique solution for almost all choices of the driving Wiener path. This type of uniqueness is also called path-by-path uniqueness. In other words, adding a single path of a Wiener process W = (W_t) = (W_t)_{t ≥ 0} regularizes a singular ODE whose right-hand side b is only bounded and measurable.

∗ E-mail: [email protected]

We consider a similar uniqueness problem for SDEs driven by Lévy noises with Hölder continuous drift term b, i.e., we deal with

    X_t(ω) = x + ∫_s^t b(r, X_r(ω)) dr + L_t(ω) − L_s(ω),   t ∈ [s, T],   (1.1)

where T > 0, s ∈ [0, T], x ∈ R^d, d ≥ 1, b : [0, T] × R^d → R^d is measurable, bounded and β-Hölder continuous in the x-variable, uniformly in t, with β ∈ (0, 1], and L = (L_t) is a d-dimensional Lévy process defined on a probability space (Ω, F, P), ω ∈ Ω (see Section 2; recall that L_0 = 0, P-a.s.). Suppose that E[|L_1|^θ] < ∞ for some θ > 0. Assuming that, for any x ∈ R^d and s ∈ [0, T], strong existence and uniqueness hold for (1.1), together with L^p-Lipschitz continuity of the strong solution (X^{s,x}_t) with respect to x, i.e.,

    sup_{s ∈ [0,T]} E[ sup_{s ≤ r ≤ T} |X^{s,x}_r − X^{s,y}_r|^p ] ≤ C |x − y|^p,   x, y ∈ R^d, p ∈ [2, ∞),   (1.2)

(cf. Hypothesis 1 and Section 2), we prove the following result (cf. Theorem 5.1).

Theorem 1.1.
Assume Hypotheses 1 and 2. There exists an event Ω′ ∈ F with P(Ω′) = 1 such that, for any ω ∈ Ω′ and x ∈ R^d, the integral equation

    f(t) = x + ∫_0^t b(r, f(r) + L_r(ω)) dr,   t ∈ [0, T],   (1.3)

has exactly one solution f in C([0, T]; R^d).

The assumptions and the uniqueness property are clear when β = 1 (the Lipschitz case). When β ∈ (0, 1) the result is a special case of assertion (v) in Theorem 5.1, which also considers s ≠ 0. It turns out that f(t) = φ(0, t, x, ω) − L_t(ω), t ∈ [0, T], where (φ(s, t, x, ·)) is a particular strong solution to (1.1). In Section 6 we apply the previous theorem to a class of SDEs driven by non-degenerate α-stable type Lévy processes, α ∈ (0, 2), β ∈ (1 − α/2, 1]. Note that we can also treat locally Hölder drifts b(x) by a localization procedure (see Corollaries 5.4 and 5.5). These uniqueness results seem to be new even in dimension one. For instance, one can consider dX_t = √|X_t| dt + dL^{(α)}_t, X_0 = x ∈ R, with a symmetric α-stable process L^{(α)} = (L^{(α)}_t), α > 1, and prove that for almost all ω ∈ Ω there exists at most one solution of (1.3) with b(r, x) = √|x| and L = L^{(α)}.

As already mentioned, when L = W is a standard Wiener process Theorem 1.1 is a special case of Theorem 1.1 in [8]. Recall that Davie's uniqueness is stronger than the usual pathwise uniqueness considered in the literature on SDEs (cf. Remark 2.2 and see also [10]). Pathwise uniqueness deals with solutions which are adapted stochastic processes and does not consider solutions corresponding to single paths (L_t(ω))_{t ∈ [0,T]}. When L = W several results on strong existence and pathwise uniqueness are known for the SDE (1.1) with very irregular drift b: the seminal paper [35] deals with b as in Davie's result; further recent results consider b which is only locally in some L^p-spaces (see also [13], [18] and [9]).

When L is a stable type Lévy process, the SDE (1.1) with a Hölder continuous and bounded drift b and its associated integro-differential generator L_b (cf. (6.8)) has received a lot of attention (see, for instance, [34], [24], [31], [32], [3], [25], [6] and the references therein). In this respect, in Theorem 3.2 of [34] the authors studied the case d = 1 with L a symmetric α-stable process, α ∈ (0, 1), showing that pathwise uniqueness can fail for some β-Hölder continuous b when α + β < 1, with b possibly unbounded in time and such that b(t, ·) is Hölder continuous. We mention that applications of Davie's uniqueness to Euler approximations for (1.1) are given in Section 4 of [8].

In our proof we use the L^p-estimates (1.2), which are well known when L = W (they can be easily deduced from Section 2 in [12]). They are even true for more general drifts b (i.e., b ∈ L^q(0, T; L^p(R^d; R^d)), d/p + 2/q < 1, p ≥ 2, q >
2, see formula (5.9) and Proposition 5.2 in [9]). Moreover, when L is a symmetric non-degenerate α-stable process and b(t, x) = b(x), with α ≥ 1 and β ∈ (1 − α/2, 1], the L^p-estimates (1.2) hold as well (cf. Section 6). Starting from the L^p-estimates (1.2), passing through different modifications (see Sections 3 and 4), we finally obtain a suitable strong solution φ(s, t, x, ω) (see Theorem 5.1) which solves (1.1) for any ω ∈ Ω′, for some almost sure event Ω′ which is independent of s, t and x. Such a solution φ is used to prove uniqueness for (1.3) (see the proof of (v) of Theorem 5.1). We also establish càdlàg regularity of φ with respect to s, uniformly in t ∈ [0, T] and x, when x varies in compact sets of R^d. This result seems to be new even when d = 1 and b is Lipschitz continuous, if L is not the Wiener process W (when L = W, the continuous dependence on s, uniformly in x, has been proved in Section 2 of [14] for SDEs with Lipschitz coefficients). We also prove the continuous dependence of φ(s, t, x, ω) with respect to x and the flow property, for any ω ∈ Ω′ (see assertions (iii) and (iv) in Theorem 5.1). There are recent papers on the flow property for solutions to SDEs with jumps (see, for instance, [25], [21], [6] and the references therein); however, they do not prove the previous assertions on φ.

Remark that when L = W and b(t, ·) is Hölder continuous as in (1.1), proving the existence of a regular strong solution like φ is easier. Indeed, in such a case one can use the well-known Kolmogorov–Chentsov continuity test to get a continuous dependence on (s, t, x). More precisely, when L = W, we can apply the Zvonkin method of [35] or the related Itô–Tanaka trick of [12] and, using a suitable regular solution u(t, x) of a related Kolmogorov equation (cf. Section 6.2), find that the process (u(t, X^x_t)) solves an auxiliary SDE with Lipschitz continuous coefficients.
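As a concrete illustration of the path-by-path point of view, once a single discretised driving path is fixed, the frozen-path equation (1.3) can be solved numerically by an explicit Euler scheme (Euler approximations in Davie's framework are discussed in Section 4 of [8]). The sketch below is ours and purely illustrative: `sample_levy_path` is a crude stand-in for a sampled α-stable path (a Gaussian part plus a few heavy-tailed jumps), not an exact stable sampler, and all names are hypothetical.

```python
import numpy as np

def euler_path_by_path(b, levy_path, x0, T, n):
    """Explicit Euler scheme for the frozen-path equation
    f(t) = x0 + int_0^t b(r, f(r) + L_r(omega)) dr  (cf. (1.3)),
    for ONE fixed discretised driving path `levy_path` of length n + 1."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    f = np.empty(n + 1)
    f[0] = x0
    for k in range(n):
        f[k + 1] = f[k] + dt * b(t[k], f[k] + levy_path[k])
    return t, f

def sample_levy_path(T, n, rng):
    """Crude illustrative driving path: Brownian part plus three
    heavy-tailed jumps (NOT an exact alpha-stable sampler)."""
    t = np.linspace(0.0, T, n + 1)
    path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))))
    for s, j in zip(rng.random(3) * T, rng.standard_cauchy(3)):
        path[t >= s] += j  # add each jump from its jump time onwards
    return path

rng = np.random.default_rng(0)
n, T = 1000, 1.0
L = sample_levy_path(T, n, rng)
b = lambda t, x: np.sqrt(abs(x))  # the Holder drift of the 1-d example above
t, f = euler_path_by_path(b, L, x0=0.0, T=T, n=n)
print(f.shape, bool(np.isfinite(f).all()))
```

For this bounded-on-bounded-sets drift the scheme stays finite on [0, T]; of course the numerics say nothing about uniqueness, which is exactly what Theorem 1.1 provides for almost every path.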
On this auxiliary equation one can perform the Kolmogorov–Chentsov test as in [19] and finally obtain the required regular modification of the strong solution. To get our regular strong solution φ we do not pass through an auxiliary SDE but work directly on (1.1), using first a result in [14] and then a càdlàg criterion given in [4]. We apply this criterion to a suitable stochastic process with values in a space of continuous functions defined on R^d (see Theorem 4.4). This approach could also be useful to study regularity properties of solutions to SDEs with multiplicative noise.

In Section 6 we apply Theorem 5.1 to a class of SDEs driven by non-degenerate α-stable type Lévy processes, using also results in [24] and [25]. In particular we prove a Davie's type uniqueness result for (1.1) when L is a standard rotationally invariant α-stable process, α ∈ (0, 2) and β ∈ (1 − α/2, 1]; in this case the generator of L is the well-known fractional Laplacian −(−Δ)^{α/2}. To cover the case α ∈ (0, 1) we also need an analytic result proved in [31] (cf. Remark 5.5 in [25]). When α ∈ [1, 2) and β ∈ (1 − α/2, 1] we can treat more general non-degenerate α-stable type processes like relativistic and truncated stable processes and some tempered stable processes (cf. [25] with the references therein and see Examples 6.2). When α ∈ [1, 2) we can also consider the singular α-stable process L = (L_t), L_t = (L^1_t, . . . , L^d_t), t ≥ 0, where L^1, . . . , L^d are independent one-dimensional symmetric α-stable processes; well-posedness of SDEs driven by this process has recently received particular attention (see, for instance, [2], [24], [38], [25], [6]).

We fix basic notations. We refer to [28], [20], [17] and [1] for more details on Lévy processes with values in R^d. By ⟨x, y⟩ (or x · y) we denote the euclidean inner product between x and y ∈ R^d, for d ≥
1; further |x| = (⟨x, x⟩)^{1/2}. If H ⊂ R^d we denote by 1_H its indicator function. The Borel σ-algebra of a Borel set C ⊂ R^k, k ≥ 1, is indicated by B(C). Similarly, if (S, d) is a metric space we denote its Borel σ-algebra by B(S).

We consider a complete probability space (Ω, F, P). The expectation with respect to P is indicated with E. If G ⊂ F is a σ-algebra, a random variable X : Ω → S with values in a metric space (S, d) which is measurable from (Ω, G) into (S, B(S)) is called G-measurable. Similarly, a function l : [0, T] × Ω → S is B([0, T]) × F-measurable if l is measurable with respect to the product σ-algebra B([0, T]) × F.

In the sequel we often need to specify the possible dependence of events of probability one on some parameters. Recall that a set Ω′ ⊂ Ω is an almost sure event if Ω′ ∈ F and P(Ω′) = 1. To stress that Ω′ possibly depends also on a parameter λ we write Ω′_λ (the almost sure event Ω′_λ may change from one proposition to another); for instance, the notation Ω_{s,x} means that the almost sure event Ω_{s,x} possibly depends also on s and x. We say that a property involving random variables holds on an almost sure event Ω′ to indicate that such property holds for any ω ∈ Ω′ (i.e., such property holds P-a.s.).

A d-dimensional stochastic process L = (L_t) = (L_t)_{t ≥ 0}, d ≥ 1, defined on (Ω, F, P) is a Lévy process if it has independent and stationary increments, càdlàg paths (i.e., P-a.s., each mapping t ↦ L_t(ω) is càdlàg from [0, ∞) into R^d; we denote by L_{s−}(ω) the left limit in s > 0) and L_0 = 0, P-a.s..

Similarly to Chapter II in [19] and Chapter V in [17] we define, for 0 ≤ s < t < ∞, the σ-algebra F^L_{s,t} as the completion of the σ-algebra generated by the random variables L_r − L_s, r ∈ [s, t]. We also set F^L_{0,t} = F^L_t. Since L has independent increments, L_v − L_u is independent of F^L_u for 0 ≤ u < v. Note that (Ω, F, (F^L_t)_{t ≥ 0}, P) is an example of a stochastic basis which satisfies the usual assumptions (see [1, page 72]).

Given a Lévy process L there exists a unique function ψ : R^d → C such that

    E[e^{i⟨h, L_t⟩}] = e^{−t ψ(h)},   h ∈ R^d, t ≥ 0;

ψ is called the exponent of L. The Lévy–Khintchine formula for ψ states that

    ψ(h) = (1/2)⟨Qh, h⟩ − i⟨a, h⟩ − ∫_{R^d} ( e^{i⟨h,y⟩} − 1 − i⟨h, y⟩ 1_{{|y| ≤ 1}}(y) ) ν(dy),   (2.1)

h ∈ R^d, where Q is a symmetric non-negative definite d × d matrix, a ∈ R^d and ν is a σ-finite (Borel) measure on R^d such that ∫_{R^d} (1 ∧ |y|^2) ν(dy) < ∞ and ν({0}) = 0 (here 1 ∧ |y|^2 = min(1, |y|^2)); ν is the Lévy measure (or intensity measure) of L. The triplet (
Q, ν, a) uniquely identifies the law of L (see Proposition 9.8 in [28] or Corollary 2.4.21 in [1]). It is called the generating triplet (or characteristics) of the Lévy process L.

Given two stochastic processes X = (X_t)_{t ∈ [0,T]} and Y = (Y_t)_{t ∈ [0,T]} defined on (Ω, F, P) and with values in a metric space (S, d), we say that X is a modification or version of Y if, for any t ∈ [0, T], X_t = Y_t, P-a.s.; if in addition both X and Y have càdlàg paths then

    P(X_t = Y_t, t ∈ [0, T]) = P(X_t = Y_t, for any t ∈ [0, T]) = 1.

Let L = (L_t) be a d-dimensional Lévy process defined on a complete probability space (Ω, F, P), let s ∈ [0, T] and x ∈ R^d and consider the SDE

    dX_t = b(t, X_t) dt + dL_t,   s ≤ t ≤ T,   X_s = x,   (2.2)

with b : [0, T] × R^d → R^d a locally bounded Borel function. According to [19], [20] and [33] we say that an R^d-valued stochastic process U^{s,x} = (U^{s,x}_t) = (U^{s,x}_t)_{t ∈ [s,T]} defined on (Ω, F, P) is a strong solution to (2.2) starting from x at time s if, for any t ∈ [s, T], the random variable U^{s,x}_t : Ω → R^d is F^L_{s,t}-measurable; further, we require that there exists an almost sure event Ω_{s,x} (possibly depending also on s and x but independent of t) such that the following conditions hold for any ω ∈ Ω_{s,x}:

(i) the map t ↦ U^{s,x}_t(ω) is càdlàg on [s, T];

(ii) we have

    U^{s,x}_t(ω) = x + ∫_s^t b(r, U^{s,x}_r(ω)) dr + L_t(ω) − L_s(ω),   t ∈ [s, T];   (2.3)

(iii) the path t ↦ L_t(ω) is càdlàg and L_0(ω) = 0.

Given a strong solution U^{s,x} we set U^{s,x}_t = x on Ω for any 0 ≤ t ≤ s.

Let us recall some function spaces used in the paper. We consider C_b(R^d; R^k), for integers k, d ≥ 1, as the Banach space of all continuous and bounded functions g : R^d → R^k endowed with the supremum norm ‖g‖_0 = ‖g‖_{C_b} = sup_{x ∈ R^d} |g(x)|, g ∈ C_b(R^d; R^k). Moreover, C^{0,β}_b(R^d; R^k), β ∈ (0, 1], is the subspace of β-Hölder continuous functions g, i.e., g verifies

    [g]_{C^{0,β}_b} = [g]_β := sup_{x ≠ x′ ∈ R^d} |g(x) − g(x′)| |x − x′|^{−β} < ∞

(when β = 1, g is Lipschitz continuous). If β = 0 we set C^{0,0}_b(R^d; R^k) = C_b(R^d; R^k). If β ∈ (0, 1) we also write C^β_b(R^d; R^k) = C^{0,β}_b(R^d; R^k); note that C^{0,β}_b(R^d; R^k) is a Banach space with the norm ‖·‖_{C^{0,β}_b} = ‖·‖_β = ‖·‖_0 + [·]_β, β ∈ (0, 1]. If R^k = R, we set C^{0,β}_b(R^d; R) = C^{0,β}_b(R^d) (a similar convention is also used for other function spaces). A function g ∈ C_b(R^d; R^k) belongs to C^1_b(R^d; R^k) if it is differentiable on R^d and its Fréchet derivative Dg ∈ C_b(R^d; R^{dk}). If β ∈ (0, 1), g ∈ C^1_b(R^d; R^k) belongs to C^{1+β}_b(R^d; R^k) if Dg ∈ C^β_b(R^d; R^{dk}). The space C^{1+β}_b(R^d; R^k) is a Banach space endowed with the norm ‖g‖_{1+β} = ‖g‖_{C^{1+β}_b} = ‖g‖_0 + [Dg]_β, g ∈ C^{1+β}_b(R^d; R^k). C^∞_b(R^d; R^k) is the space of all infinitely differentiable functions from R^d into R^k with all bounded derivatives. Finally, g ∈ C^∞_b(R^d) belongs to C^∞_c(R^d) if g has compact support. Given a bounded open set B ⊂ R^d we can define similar Banach spaces C^β(B̄) and C^{1+β}(B̄) with norms ‖·‖_{C^β(B̄)} and ‖·‖_{C^{1+β}(B̄)}, β ∈ (0, 1].

We say that b belongs to L^∞(0, T; C^{0,β}_b(R^d; R^d)), β ∈ [0, 1], if b : [0, T] × R^d → R^d is Borel measurable and bounded, b(t, ·) ∈ C^{0,β}_b(R^d; R^d), t ∈ [0, T], and [b]_{β,T} = sup_{t ∈ [0,T]} [b(t, ·)]_{C^{0,β}_b} < ∞. Set ‖b‖_{β,T} = [b]_{β,T} + ‖b‖_0, where ‖b‖_0 = sup_{t ∈ [0,T], x ∈ R^d} |b(t, x)|, if β ∈ (0, 1], and ‖b‖_{0,T} = ‖b‖_0 if β = 0. Note that (L^∞(0, T; C^{0,β}_b(R^d; R^d)), ‖·‖_{β,T}) is a Banach space. We will also use

    G = C([0, T]; R^d)   (2.4)

to denote the separable Banach space consisting of all continuous functions f : [0, T] → R^d, endowed with the usual supremum norm ‖·‖_G.

Let us formulate our assumptions on (1.1) when b ∈ L^∞(0, T; C^{0,β}_b(R^d; R^d)), β ∈ [0, 1]. Since we may replace b(t, x) with b(t, x) + a, to study the SDE (1.1) we may always assume that in the generating triplet (Q, ν, a) we have

    a = 0.   (2.5)

In (1.1) we deal with a Lévy process L defined on (Ω, F, P) and b ∈ L^∞(0, T; C^{0,β}_b(R^d; R^d)) which satisfy

Hypothesis 1. (i) For any s ∈ [0, T] and x ∈ R^d there exists on (Ω, F, P) a strong solution (U^{s,x}_t)_{t ∈ [0,T]} to (2.2).

(ii) Let s ∈ [0, T]. Given any two strong solutions (U^{s,x}_t)_{t ∈ [0,T]} and (U^{s,y}_t)_{t ∈ [0,T]} defined on (Ω, F, P) which both solve (2.2) with respect to L and b (starting from x and y ∈ R^d, respectively, at time s) we have, for any p ≥ 2,

    sup_{s ∈ [0,T]} E[ sup_{s ≤ t ≤ T} |U^{s,x}_t − U^{s,y}_t|^p ] ≤ C(T) |x − y|^p,   x, y ∈ R^d,   (2.6)

with C(T) = C(ν, Q, ‖b‖_{β,T}, d, β, p, T) > 0 independent of s, x and y.

The previous hypothesis clearly holds for any Lévy process L if β = 1 (the Lipschitz case). Next we consider the Lévy measure ν associated to the large jump parts of L.

Hypothesis 2. There exists θ > 0 such that ∫_{{|x| > 1}} |x|^θ ν(dx) < ∞.

Remark 2.1. By Theorems 25.3 and 25.18 in [28] the following three conditions are equivalent:

(a) ∫_{{|x| > 1}} |x|^θ ν(dx) < ∞ for some θ > 0;
(b) E[|L_t|^θ] < ∞ for some t > 0;
(c) E[sup_{s ∈ [0,t]} |L_s|^θ] < ∞ for any t > 0.

Moreover, if ∫_{{|x| > 1}} |x|^θ ν(dx) < ∞ holds for some θ > 0, then ∫_{{|x| > 1}} |x|^{θ′} ν(dx) < ∞ for any θ′ ∈ (0, θ].

Remark 2.2. We present here, for the sake of completeness, some general concepts about solutions of SDEs (cf. [31] for more details). We will not use these notions in the sequel. Let the initial time be s = 0.
A weak solution to (1.1) with initial condition x ∈ R^d is a tuple (Ω, F, (F_t)_{t ≥ 0}, P, L, X), where (Ω, F, (F_t)_{t ≥ 0}, P) is a stochastic basis on which there are defined a Lévy process L and a càdlàg (F_t)-adapted R^d-valued process X = (X_t) which solves (1.1) P-a.s.. A weak solution X which is (F^L_t)-adapted is called a strong solution. One says that pathwise uniqueness holds for (1.1) if, given two weak solutions X and Y (starting from x ∈ R^d) defined on the same stochastic basis (with respect to the same L), then P-a.s. we have X_t = Y_t for any t ∈ [0, T].

3 Preliminary results on strong solutions
Consider (2.2) with b ∈ L^∞(0, T; C^{0,β}_b(R^d; R^d)), β ∈ [0, 1], and suppose that L defined on (Ω, F, P) and b satisfy Hypothesis 1.

Let s ∈ [0, T], x ∈ R^d. We start with a strong solution (X̃^{s,x}_t)_{t ∈ [0,T]} to (2.2) defined on (Ω, F, P) and introduce the d-dimensional process Ỹ^{s,x} = (Ỹ^{s,x}_t)_{t ∈ [0,T]},

    Ỹ^{s,x}_t = X̃^{s,x}_t − (L_t − L_s),   t ≥ s.   (3.1)

Note that on some almost sure event Ω_{s,x} (independent of t) we have

    Ỹ^{s,x}_t = x + ∫_s^t b(r, Ỹ^{s,x}_r + (L_r − L_s)) dr,   t ≥ s,   (3.2)

and Ỹ^{s,x}_t = x on Ω if t ≤ s. It follows that (Ỹ^{s,x}_t)_{t ∈ [0,T]} has continuous paths.

Let us fix s ∈ [0, T] and x ∈ R^d. We modify the process Ỹ^{s,x} only on Ω \ Ω_{s,x} by setting Ỹ^{s,x}_t(ω) = x, for t ∈ [0, T], if ω ∉ Ω_{s,x} (we still denote by Ỹ^{s,x} such a new process). We find that Ỹ^{s,x}_·(ω) ∈ G = C([0, T]; R^d), for any ω ∈ Ω. Moreover (cf. (2.4)) it is easy to check that

    Ỹ^{s,x} = Ỹ^{s,x}_· is a random variable with values in G.   (3.3)

Now, for each fixed s ∈ [0, T], we will construct a suitable modification of the random field (Ỹ^{s,x})_{x ∈ R^d} with values in G. We need the following special case of Theorem 1.1 of [14]; it is a generalized Garsia–Rodemich–Rumsey type lemma.

Theorem 3.1. ([14]) Let (M, ρ) be a separable metric space and (Ω, F, P) be a probability space. Let ψ : Ω × R^d → M be an F × B(R^d)-measurable map such that ψ(ω, ·) is continuous on R^d, for each ω ∈ Ω, and there exist c > 0 and p > 2d for which

    E[ρ(ψ(·, x), ψ(·, y))^p] ≤ c |x − y|^p,   x, y ∈ R^d.

Then, for any ω ∈ Ω, x, y ∈ R^d,

    ρ(ψ(ω, x), ψ(ω, y)) ≤ Y(ω) |x − y|^{1 − 2d/p} [(|x| ∨ |y|)^{(d+1)/p} ∨ 1],   (3.4)

where Y : Ω → [0, ∞] is the following p-integrable random variable:

    Y(ω) = ( ∫_{R^d} ∫_{R^d} ( ρ(ψ(ω, x), ψ(ω, y)) / |x − y| )^p f(x) f(y) dx dy )^{1/p},   ω ∈ Ω,

with f(x) = c(d, p) ( [ |x|^{2d} [(log(|x|))^2 ∨ 1] ] ∨ 1 )^{−1}, x ≠ 0, for some constant c(d, p) > 0.
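Theorem 3.1 turns an L^p moment bound in x into a pathwise modulus of continuity. For a Lipschitz drift the underlying moment bound (2.6) can even be checked pathwise by Gronwall's lemma: two solutions of the shifted equation (3.2) driven by the same noise path and started at x and y stay at sup-distance at most |x − y| e^{[b]_1 T}. The sketch below is our own toy numerics (hypothetical names, a Brownian stand-in path), not code from the paper.

```python
import numpy as np

def solve(b, path, x0, T, n):
    """Euler scheme for Y_t = x0 + int_0^t b(r, Y_r + L_r) dr, cf. (3.2),
    against one fixed discretised driving path."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = x0
    for k in range(n):
        y[k + 1] = y[k] + dt * b(t[k], y[k] + path[k])
    return y

rng = np.random.default_rng(1)
n, T = 2000, 1.0
# Brownian stand-in for the driving Levy path (illustration only).
path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))))
b = lambda t, x: np.sin(x)  # Lipschitz drift with Lipschitz constant 1
x, y = 0.3, 0.31
dist = np.max(np.abs(solve(b, path, x, T, n) - solve(b, path, y, T, n)))
# Discrete Gronwall: dist <= |x - y| (1 + dt)^n <= |x - y| e^T
print(dist <= abs(x - y) * np.e)
```

The Hölder case treated in the paper is exactly the one where this naive Gronwall argument breaks down, which is why the moment estimate (2.6) has to be assumed and then upgraded pathwise via Theorem 3.1.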
In Theorem 1.1 of [14], f(x) is just defined as ( [ |x|^{2d} [(log(|x|))^2 ∨ 1] ] ∨ 1 )^{−1}; moreover, Y(ω) = c ( ∫_{R^d} ∫_{R^d} ( ρ(ψ(ω, x), ψ(ω, y))^p / |x − y|^p ) f(x) f(y) dx dy )^{1/p}, for a constant c.

Lemma 3.2.
Consider (2.2) with b ∈ L^∞(0, T; C^{0,β}_b(R^d; R^d)), β ∈ [0, 1], and suppose that L defined on (Ω, F, P) and b satisfy Hypothesis 1. Let us fix s ∈ [0, T] and consider the random field Ỹ^s = (Ỹ^{s,x})_{x ∈ R^d} with values in G (see (3.3)). We have:

(i) There exists a continuous version Y^s = (Y^{s,x})_{x ∈ R^d} with values in G (i.e., for any x ∈ R^d, Y^{s,x} = Ỹ^{s,x} in G on some almost sure event).

(ii) For any p > 2d there exists a random variable U_{s,p} with values in [0, ∞] such that, for any ω ∈ Ω, x, y ∈ R^d,

    ‖Y^{s,x}(ω) − Y^{s,y}(ω)‖_G ≤ U_{s,p}(ω) [(|x| ∨ |y|)^{(d+1)/p} ∨ 1] |x − y|^{1 − 2d/p}.   (3.5)

Moreover, with the same constant C(T) appearing in (2.6),

    sup_{s ∈ [0,T]} E[U^p_{s,p}] ≤ C(d) C(T) < ∞,   (3.6)

where C(d) = (∫_{R^d} f(x) dx)^2 (hence U_{s,p} is finite on some almost sure event possibly depending on s and p).

(iii) On some almost sure event Ω′_s (independent of t and x) we have

    Y^{s,x}_t = x + ∫_s^t b(r, Y^{s,x}_r + (L_r − L_s)) dr,   t ≥ s, x ∈ R^d   (3.7)

(where Y^{s,x}_t(ω) = (Y^{s,x}_·(ω))(t), t ∈ [0, T]).

Proof. (i) Using (2.6) we can apply the Kolmogorov–Chentsov continuity test as in [15], page 57, and obtain a continuous version Y^s of Ỹ^s. The classical proof given in [15] uses the Borel–Cantelli lemma; by such proof it is easy to show that an analogue of (2.6) holds for Y^s, i.e., for p ≥ 2, x, y ∈ R^d,

    sup_{s ∈ [0,T]} E[‖Y^{s,x} − Y^{s,y}‖^p_G] = sup_{s ∈ [0,T]} E[‖Ỹ^{s,x} − Ỹ^{s,y}‖^p_G] ≤ C(T) |x − y|^p.   (3.8)

(ii) As in Theorem 3.1 we consider the random variables

    U_{s,p}(ω) = ( ∫_{R^d} ∫_{R^d} ( ‖Y^{s,x}(ω) − Y^{s,y}(ω)‖_G / |x − y| )^p f(x) f(y) dx dy )^{1/p},

ω ∈ Ω, p > 2d, s ∈ [0, T]. By (3.8) and Theorem 3.1 we obtain (3.5) and (3.6).

(iii) We start from equation (3.2) involving the process (Ỹ^{s,x}). Since for some almost sure event Ω′_{s,x} ⊂ Ω_{s,x} we have Y^{s,x}_t(ω) = Ỹ^{s,x}_t(ω), ω ∈ Ω′_{s,x}, t ∈ [0, T], we obtain from (3.2)

    Y^{s,x}_t(ω) = x + ∫_s^t b(r, Y^{s,x}_r(ω) + (L_r(ω) − L_s(ω))) dr,

for any t ∈ [s, T], x ∈ Q^d, ω ∈ Ω′_s = ∩_{x ∈ Q^d} Ω′_{s,x}. Note also that by (i) the function x ↦ Y^{s,x}(ω) is continuous for all ω ∈ Ω. Take now x ∈ R^d and let (x_n) ⊂ Q^d be a sequence converging to x. It follows from the continuity of b(r, ·) and the dominated convergence theorem that, for any t ≥ s, on Ω′_s we have:

    Y^{s,x}_t = lim_{n→∞} Y^{s,x_n}_t = lim_{n→∞} x_n + lim_{n→∞} ∫_s^t b(r, Y^{s,x_n}_r + (L_r − L_s)) dr
              = x + ∫_s^t b(r, Y^{s,x}_r + (L_r − L_s)) dr,

and this shows the assertion.

Let s ∈ [0, T]. According to the previous result, starting from Y^s = (Y^{s,x})_{x ∈ R^d} we can define random variables X^{s,x}_t : Ω → R^d as follows: X^{s,x}_t = x if t ≤ s and

    X^{s,x}_t = Y^{s,x}_t + (L_t − L_s),   s, t ∈ [0, T], x ∈ R^d, s ≤ t.   (3.9)

By the properties of Y^{s,x} we get P(X̃^{s,x}_t = X^{s,x}_t, t ∈ [0, T]) = 1, for any x ∈ R^d (cf. (3.1)). Moreover, using also (3.7), we find that for some almost sure event Ω′_s (independent of x and t) the map t ↦ X^{s,x}_t(ω) is càdlàg on [0, T], for any ω ∈ Ω′_s, x ∈ R^d, and on Ω′_s we have

    X^{s,x}_t = x + ∫_s^t b(r, X^{s,x}_r) dr + L_t − L_s,   s ≤ t ≤ T, x ∈ R^d.   (3.10)

Thus (X^{s,x}_t)_{t ∈ [0,T]} is a particular strong solution to (2.2). By Lemma 3.2 we also have, for any s ∈ [0, T], x ∈ R^d, on Ω,

    lim_{y → x} sup_{t ∈ [0,T]} |X^{s,x}_t − X^{s,y}_t| = 0.   (3.11)

We can prove the following flow property.

Lemma 3.3.
Under the same assumptions of Lemma 3.2 consider the strong solution (X^{s,x}_t)_{t ∈ [0,T]} defined in (3.9). Let 0 ≤ s < u ≤ T. There exists an almost sure event Ω_{s,u} (independent of t ∈ [u, T] and x ∈ R^d) such that for ω ∈ Ω_{s,u}, x ∈ R^d, we have

    X^{s,x}_t(ω) = X^{u, X^{s,x}_u(ω)}_t(ω),   t ∈ [u, T], x ∈ R^d.   (3.12)

Proof.
Let us fix s, u ∈ [0, T], s < u, and x ∈ R^d. We introduce the process (V^x_t)_{0 ≤ t ≤ T} on (Ω, F, P) with values in R^d:

    V^x_t(ω) = X^{s,x}_t(ω) for 0 ≤ t ≤ u,   V^x_t(ω) = X^{u, X^{s,x}_u(ω)}_t(ω) for u < t ≤ T,   ω ∈ Ω.

In order to prove (3.12) we will show that (V^x_t) is a strong solution to (2.2) for t ≥ s; then by uniqueness we will get the assertion.

It is easy to prove that (V^x_t) has càdlàg paths. More precisely, by (3.7), on some almost sure event Ω′_s ∩ Ω′_u (independent of x) we have that t ↦ V^x_t(ω) is càdlàg on [0, T] (note also that, for any ω ∈ Ω′_s ∩ Ω′_u, z ∈ R^d, lim_{t → u+} X^{u,z}_t(ω) = z).

Moreover, for any x ∈ R^d and t ≥ s, the random variable V^x_t is F^L_{s,t}-measurable. The assertion is clear if t ≤ u. Let us consider the case when t > u. First, X^{s,x}_u is F^L_{s,t}-measurable. Define F_{t,u}(z, ω) = X^{u,z}_t(ω), z ∈ R^d, ω ∈ Ω. The mapping F_{t,u} is clearly B(R^d) × F^L_{s,t}-measurable on R^d × Ω and F_{t,u}(·, ω) is continuous on R^d, for any ω ∈ Ω, by (3.11). It follows that also the map ω ↦ F_{t,u}(X^{s,x}_u(ω), ω) is F^L_{s,t}-measurable.

It is clear that (V^x_t) solves (3.10) on Ω′_s when s ≤ t ≤ u (recall (3.7)). Let us consider the case when t ≥ u. According to (3.10) we know that on Ω′_u we have

    X^{u, X^{s,x}_u}_t = X^{s,x}_u + ∫_u^t b(r, X^{u, X^{s,x}_u}_r) dr + L_t − L_u,   t ≥ u.   (3.13)

Hence on Ω′_u ∩ Ω′_s we have, for t ≥ u,

    V^x_t = X^{u, X^{s,x}_u}_t = x + ∫_s^u b(r, X^{s,x}_r) dr + L_u − L_s + ∫_u^t b(r, X^{u, X^{s,x}_u}_r) dr + L_t − L_u
          = x + ∫_s^t b(r, V^x_r) dr + L_t − L_s.

It follows that (V^x_t) solves (3.10) on Ω′_s ∩ Ω′_u when s ≤ t ≤ T. By Hypothesis 1 we infer that, for any x ∈ R^d, on some almost sure event Ω_{s,u,x} we have V^x_t = X^{s,x}_t, t ∈ [s, T]. In particular we get V^x_t = X^{s,x}_t, t ∈ [u, T], and this proves (3.12) at least on an almost sure event Ω_{s,u,x}.

To remove the dependence on x in the almost sure event, we note that the mapping x ↦ V^x_t(ω) is continuous from R^d into R^d, for any ω ∈ Ω, t ∈ [0, T] (see (3.11)). Arguing as in the final part of the proof of Lemma 3.2 we obtain that X^{s,x}_t(ω) = V^x_t(ω), for t ∈ [u, T], x ∈ R^d and ω ∈ Ω_{s,u} = ∩_{x ∈ Q^d} Ω_{s,u,x}. This proves (3.12).

Following [26], page 169 (see also Problem 48 in [26]), we introduce the space C(R^d; G) consisting of all continuous functions from R^d into G = C([0, T]; R^d), endowed with the compact-open topology (or the topology of uniform convergence on compact sets). This is a complete metric space endowed with the following metric:

    d_0(f, g) = Σ_{N ≥ 1} 2^{−N} ( sup_{|x| ≤ N} ‖f(x) − g(x)‖_G ) / ( 1 + sup_{|x| ≤ N} ‖f(x) − g(x)‖_G ),   f, g ∈ C(R^d; G).   (3.14)

It is well known that C(R^d; G) is also separable (see, for instance, [16]; on the other hand, C_b(R^d; G) is not separable). We will also consider the following projections

    π_x : C(R^d; G) → G,   π_x(f) = f(x) ∈ G,   x ∈ R^d, f ∈ C(R^d; G)   (3.15)

(each π_x is a continuous map). According to Lemma 3.2, for any s ∈ [0, T] the random field (Y^{s,x})_{x ∈ R^d} has continuous paths. It is not difficult to prove that, for any s ∈ [0, T], the mapping

    ω ↦ Y_s(ω) = Y^{s,·}(ω)   (3.16)

is measurable from (Ω, F, P) with values in C(R^d; G). Indeed, thanks to the separability of C(R^d; G), to check the measurability it is enough to prove that counter-images of balls B_r(f_0) belong to F; this follows by computing the distance d_0(Y_s(ω), f_0) through countably many suprema over x ∈ Q^d with |x| ≤ N.
Corollary 4.2. Let X = (X_t)_{t ∈ [0,T]} be a stochastically continuous process with values in a complete metric space (S, d). A sufficient condition in order that X have a càdlàg modification is the following one: there exist q > 1/2 and r > 0 such that, for any 0 ≤ s < t < u ≤ T, we have

    E[ d(X_s, X_t)^q · d(X_t, X_u)^q ] ≤ C |u − s|^{1+r}.   (4.3)

Proof.
In order to apply Theorem 4.1 we introduce x(h) = ((2q − 1)/(2q)) h^{−1/(2q)}, h ∈ (0, T]. Let us fix 0 ≤ s < t < u and M > 0, and write Δ(s, t, u) = d(X_s, X_t) ∧ d(X_t, X_u). Noting that for a, b ≥ 0 we have a ∧ b ≤ √a √b, we find by the Hölder inequality

    E[Δ(s, t, u) 1_{{Δ(s,t,u) ≥ M}}] ≤ (E[Δ(s, t, u)^{2q}])^{1/(2q)} (P(Δ(s, t, u) ≥ M))^{(2q−1)/(2q)}
        ≤ (E[ d(X_s, X_t)^q · d(X_t, X_u)^q ])^{1/(2q)} ∫_0^{P(Δ(s,t,u) ≥ M)} x(r) dr
        ≤ C^{1/(2q)} |u − s|^{(1+r)/(2q)} ∫_0^{P(Δ(s,t,u) ≥ M)} x(r) dr.

Setting δ(h) = h^{(1+r)/(2q)}, h ∈ [0, T], we see that ∫_0^T δ(u) u^{−1−1/(2q)} du < ∞ is equivalent to (4.2); we get the assertion.

We now prove the stochastic continuity of Y.

Lemma 4.3. Consider (2.2) with b ∈ L^∞(0, T; C^{0,β}_b(R^d; R^d)), β ∈ (0, 1], and suppose that L and b satisfy Hypotheses 1 and 2. Then the process Y = (Y_s) with values in C(R^d; G) (see (3.16)) is continuous in probability.

Proof. Let us fix s ∈ [0, T]. We have to prove that

    lim_{s′ → s} P( sup_{|x| ≤ N} sup_{t ∈ [0,T]} |Y^{s,x}_t − Y^{s′,x}_t| > r ) = 0,   for any r > 0 and N ≥
1.   (4.4)

Indeed, this is equivalent to lim_{s′ → s} P(d_0(Y_s, Y_{s′}) > r) = 0, r > 0. To this purpose it is enough to check both the left and the right continuity in (4.4). Let us check the right continuity in s (assuming s ∈ [0, T)); the proof of the left continuity in s can be done in a similar way. Since C^{0,β}_b(R^d; R^d) ⊂ C^{0,β′}_b(R^d; R^d) for 0 < β′ ≤ β ≤ 1, we may assume that β is sufficiently small; we will assume (cf. Hypothesis 2)

    β(2d + 1) < 2dθ.   (4.5)

Let (s_n) ⊂ ]s, T] with s_n → s. We have to prove that, for fixed N ≥ 1 and δ > 0,

    lim_{n→∞} P( sup_{|x| ≤ N} sup_{t ∈ [0,T]} |Y^{s,x}_t − Y^{s_n,x}_t| > δ ) = 0.   (4.6)

If we show that

    E[ sup_{0 ≤ t ≤ T} sup_{|x| ≤ N} |Y^{s,x}_t − Y^{s_n,x}_t| ] → 0 as n → ∞,   (4.7)

then (4.6) follows. Let us fix n ≥ 1 and set J_{t,x,n,s} = |Y^{s,x}_t − Y^{s_n,x}_t|. If t ≤ s we find J_{t,x,n,s} = 0. If s ≤ t ≤ s_n then, for any x ∈ R^d, on some almost sure event Ω_{s,s_n} (independent of x and t; see (3.7)),

    J_{t,x,n,s} = | ∫_s^t b(r, Y^{s,x}_r + (L_r − L_s)) dr | ≤ ‖b‖_0 |t − s| ≤ ‖b‖_0 |s − s_n|.

Hence, in order to get (4.7), we need to prove that

    E[ sup_{s_n ≤ t ≤ T} sup_{|x| ≤ N} |Y^{s,x}_t − Y^{s_n,x}_t| ] → 0 as n → ∞.   (4.8)

Let t ≥ s_n. We have on Ω_{s,s_n}

    sup_{|x| ≤ N} |Y^{s,x}_t − Y^{s_n,x}_t| ≤ sup_{|x| ≤ N} | ∫_s^t b(r, X^{s,x}_r) dr − ∫_{s_n}^t b(r, X^{s_n,x}_r) dr |   (4.9)
        ≤ |s − s_n| ‖b‖_0 + sup_{|x| ≤ N} ∫_{s_n}^t |b(r, X^{s,x}_r) − b(r, X^{s_n,x}_r)| dr.

By Lemma 3.3, on some almost sure event Ω′_{s,s_n} ⊂ Ω_{s,s_n} (independent of x and r), we have for r ∈ [s_n, T]

    sup_{|x| ≤ N} |b(r, X^{s,x}_r) − b(r, X^{s_n,x}_r)| = sup_{|x| ≤ N} |b(r, X^{s_n, X^{s,x}_{s_n}}_r) − b(r, X^{s_n,x}_r)|
        ≤ [b]_{β,T} sup_{|x| ≤ N} sup_{r ∈ [0,T]} |X^{s_n, X^{s,x}_{s_n}}_r − X^{s_n,x}_r|^β = [b]_{β,T} sup_{|x| ≤ N} ‖Y^{s_n, X^{s,x}_{s_n}} − Y^{s_n,x}‖^β_G.

By Lemma 3.2 with p = 4d, setting U_{s′} = U_{s′,p}, s′ ∈ [0, T], we get

    sup_{|x| ≤ N} |b(r, X^{s,x}_r) − b(r, X^{s_n,x}_r)|   (4.10)
        ≤ [b]_{β,T} sup_{|x| ≤ N} [(|x| ∨ |X^{s,x}_{s_n}|)^{(d+1)/(4d)} ∨ 1]^β U^β_{s_n} |x − X^{s,x}_{s_n}|^{β/2}.

Noting that, for |x| ≤ N and n ≥ 1, |X^{s,x}_{s_n}| ≤ N + 2T‖b‖_0 + |L_{s_n} − L_s|, we obtain on Ω′_{s,s_n}

    sup_{|x| ≤ N} |b(r, X^{s,x}_r) − b(r, X^{s_n,x}_r)| ≤ [b]_{β,T} V^β_{s,s_n,N} sup_{|x| ≤ N} |x − X^{s,x}_{s_n}|^{β/2},   (4.11)

r ∈ [s_n, T], where we have introduced the random variables

    V_{s,s′,N} = [ N^{(d+1)/(4d)} + (2T‖b‖_0)^{(d+1)/(4d)} + |L_{s′} − L_s|^{(d+1)/(4d)} ] U_{s′},   (4.12)

0 ≤ s < s′ ≤ T. By Remark 2.1 and (4.5) we know that, for any n ≥ 1,

    E[|L_{s_n} − L_s|^{β(2d+1)/(2d)}] = E[|L_{s_n − s}|^{β(2d+1)/(2d)}] ≤ E[ sup_{s ∈ [0,T]} |L_s|^{β(2d+1)/(2d)} ] < ∞,

since E[sup_{r ∈ [0,T]} |L_r|^θ] < ∞. Using also that sup_{r ∈ [0,T]} E[U^{2β}_{r,p}] = k′ < ∞ (see (3.6)), we obtain by the Cauchy–Schwarz inequality

    sup_{0 ≤ s < s′ ≤ T} E[V^β_{s,s′,N}] =: k_1 < ∞,

for each fixed N ≥ 1. Moreover, setting Z_n = sup_{|x| ≤ N} |x − X^{s,x}_{s_n}|, from (3.10) we get Z_n ≤ ‖b‖_0 |s_n − s| + |L_{s_n} − L_s|, and so, by the right-continuity of the paths of L,

    lim_{n→∞} P(Z_n > δ) = 0,   δ > 0.   (4.16)

Using (4.9), on an almost sure event Ω′_{s,s_n}, for any δ > 0 we have

    sup_{s_n ≤ t ≤ T} sup_{|x| ≤ N} |Y^{s,x}_t − Y^{s_n,x}_t|
        ≤ |s − s_n| ‖b‖_0 + (1_{{Z_n ≤ δ}} + 1_{{Z_n > δ}}) · sup_{|x| ≤ N} ∫_{s_n}^t |b(r, X^{s,x}_r) − b(r, X^{s_n,x}_r)| dr
        ≤ T 1_{{Z_n ≤ δ}} [b]_{β,T} V^β_{s,s_n,N} δ^{β/2} + 2T‖b‖_0 1_{{Z_n > δ}} + 2|s − s_n| ‖b‖_0.

Hence

    E[ sup_{s_n ≤ t ≤ T} sup_{|x| ≤ N} |Y^{s,x}_t − Y^{s_n,x}_t| ] ≤ 2|s − s_n| ‖b‖_0 + k_1 T [b]_{β,T} δ^{β/2} + 2T‖b‖_0 P(Z_n > δ).

Now, using (4.16) and letting first n → ∞ and then δ → 0, we easily obtain (4.8), and this completes the proof.

In the next result we need the Lévy–Itô formula. To this purpose we recall the definition of the Poisson random measure N associated to L:

    N((0, t] × H) = Σ_{0 < s ≤ t} 1_H(ΔL_s),   t > 0, H ∈ B(R^d \ {0}),

where ΔL_s = L_s − L_{s−}. The Lévy–Itô decomposition of the given Lévy process L on (Ω, F, P) with generating triplet (ν, Q,
0) (see Section 19 in [28] or Theorem 2.4.16 in [1]) asserts that there exists a Q-Wiener process B = (B_t) on (Ω, F, P), independent of N, with covariance matrix Q (cf. (2.1)) such that on some almost sure event Ω′ we have

L_t = A_t + B_t + C_t, t ≥ 0, where   (4.17)

A_t = ∫_0^t ∫_{|x|≤1} x Ñ(ds, dx), C_t = ∫_0^t ∫_{|x|>1} x N(ds, dx);   (4.18)

Ñ is the compensated Poisson measure (i.e., Ñ(dt, dx) = N(dt, dx) − dt ν(dx)). Theorem 4.4.
Under the same assumptions as in Lemma 4.3, consider the process Y = (Y_s) with values in C(R^d; G) (see (3.16)). There exists a modification Z = (Z_s) of Y with càdlàg paths.

Proof. To prove the assertion we will apply Corollary 4.2. We already know by Lemma 4.3 that Y is continuous in probability. In the proof we will use the fact that ∫_{|x|>1} |x|^θ ν(dx) < ∞ for some θ ∈ (0, 1]. Step I.
We establish simple moment estimates for the Lévy process L, using the Lévy–Itô decomposition (4.17)–(4.18). Using basic properties of the martingales (A_t) and (B_t) we obtain

E|B_t|² = C_Q t,  E|A_t|² = t ∫_{|x|≤1} |x|² ν(dx),  t ≥ 0.

Concerning C = (C_t): on Ω′ we have

|C_t|^θ = | Σ_{0<s≤t} ΔL_s 1_{{|ΔL_s|>1}} |^θ ≤ Σ_{0<s≤t} |ΔL_s|^θ 1_{{|ΔL_s|>1}},

since the random sum is finite for any ω ∈ Ω′ and θ ≤
1. Let f(x) = 1_{{|x|>1}}(x) |x|^θ, x ∈ R^d; using a well-known result (cf. pages 145 and 150 in [17] or Section 2.3.2 in [1]) we get

E[ Σ_{0<s≤t} |ΔL_s|^θ 1_{{|ΔL_s|>1}} ] = E[ ∫_0^t ∫_{|x|>1} |x|^θ N(ds, dx) ] = t ∫_{R^d} f(x) ν(dx) = t ∫_{|x|>1} |x|^θ ν(dx),

and so

E|C_t|^θ ≤ t ∫_{|x|>1} |x|^θ ν(dx) = c_1 t,  t ≥ 0.   (4.20)

Step II.
Let 0 ≤ s < s′ ≤ T. Similarly to the proof of Lemma 4.3, in this step we establish estimates for the random variable J_{t,x,s,s′} = |Y_t^{s,x} − Y_t^{s′,x}|. If t ≤ s we have J_{t,x,s,s′} = 0, x ∈ R^d. If s ≤ t ≤ s′ then, for any x ∈ R^d, on some almost sure event Ω_{s,s′} (independent of t and x) we find

|Y_t^{s,x} − Y_t^{s′,x}| ≤ ‖b‖_0 |t − s| ≤ ‖b‖_0 |s − s′|.

Let t ≥ s′ and N ≥
1. We have (cf. (4.9))sup | x |≤ N | Y s,xt − Y s ′ ,xt | ≤ | s − s ′ | k b k + sup | x |≤ N Z ts ′ | b ( r, X s,xr ) − b ( r, X s ′ ,xr ) | dr. (4.21)Moreover, there exists an almost sure event Ω ′ s,s ′ ⊂ Ω s,s ′ such that on Ω ′ s,s ′ we havefor r ∈ [ s ′ , T ]sup | x |≤ N | b ( r, X s,xr ) − b ( r, X s ′ ,xr ) | = sup | x |≤ N | b ( r, X s ′ ,X s,xs ′ r ) − b ( r, X s ′ ,xr ) |≤ [ b ] β,T sup | x |≤ N k Y s ′ ,X s,xs ′ − Y s ′ ,x k βG . Now we use Lemma 3.2 with p ≥ d to be fixed and get, for any r ∈ [ s ′ , T ] on Ω ′ s,s ′ (cf. (4.10) and (4.11)) sup | x |≤ N | b ( r, X s,xr ) − b ( r, X s ′ ,xr ) |≤ [ b ] β,T [( | x | ∨ | X s,xs ′ | ) d +1 p ∨ β U βs ′ ,p sup | x |≤ N | x − X s,xs ′ | β (1 − dp ) and sosup | x |≤ N | b ( r, X s,xr ) − b ( r, X s ′ ,xr ) | ≤ [ b ] β,T V βs,s ′ ,N,p sup | x |≤ N | x − X s,xs ′ | β (1 − dp ) , (4.22) V s,s ′ ,N,p = h N d +1 p + (2 T k b k ) d +1 p + | L s ′ − L s | d +1 p i U s ′ ,p , Coming back to (4.21) we find for t ≥ s ′ on Ω ′ s,s ′ sup s ′ ≤ t ≤ T sup | x |≤ N | Y s,xt − Y s ′ ,xt | ≤ | s − s ′ | k b k + T (cid:8) sup | x |≤ N | X s,xs ′ − x |≤ c | s − s ′ | / (cid:9) [ b ] β,T V βs,s ′ ,N,p sup | x |≤ N | x − X s,xs ′ | β (1 − dp ) . + 2 T k b k (cid:8) sup | x |≤ N | X s,xs ′ − x | >c | s − s ′ | / (cid:9) , with c > c ρ / − k b k ρ ≥ ρ / , for any ρ ∈ [0 , T ] . We obtain on Ω ′ s,s ′ sup | x |≤ N sup t ∈ [0 ,T ] | Y s,xt − Y s ′ ,xt | = sup | x |≤ N k Y s,x − Y s ′ ,x k G (4.23) ≤ C | s − s ′ | + C V βs,s ′ ,N,p | s − s ′ | β (1 − dp ) + C (cid:8) sup | x |≤ N | X s,xs ′ − x | >c | s − s ′ | / (cid:9) , C = 2( T ∨ k b k β,T c β . Since, for any x ∈ R d , | X s,xs ′ − x | ≤ | s ′ − s |k b k + | L s ′ − L s | and, moreover, c | s − s ′ | / − k b k | s − s ′ | ≥ | s − s ′ | / , we find on Ω ′ s,s ′ sup | x |≤ N k Y s,x − Y s ′ ,x k G (4.24) ≤ C (cid:0) | s − s ′ | + V βs,s ′ ,N,p | s − s ′ | β (1 − dp ) + 1 {| L ′ s − L s | > | s − s ′ | / } (cid:1) . Note that C is independent of s , s ′ and N . Step III.
Using (4.24) we provide an estimate for d ( Y s , Y s ′ ) (cf. (3.14)) when 0 ≤ s < s ′ ≤ T .We have (see (4.22)) V βs,s ′ ,N,p ≤ h N β (2 d +1) p + (2 T k b k ) β (2 d +1) p + | L s ′ − L s | β (2 d +1) p i U βs ′ ,p and so d ( Y s , Y s ′ ) = X N ≥ N sup | x |≤ N k Y s,x − Y s ′ ,x k G | x |≤ N k Y s,x − Y s ′ ,x k G ≤ C | s − s ′ | + C {| L ′ s − L s | > | s − s ′ | / } + C U βs ′ ,p | s − s ′ | β (1 − dp ) X N ≥ N h N β (2 d +1) p + (2 T k b k ) β (2 d +1) p + | L s ′ − L s | β (2 d +1) p i ≤ C (cid:16) | s − s ′ | + 1 {| L ′ s − L s | > | s − s ′ | / } + U βs ′ ,p | s − s ′ | β (1 − dp ) (cid:0) | L s ′ − L s | β (2 d +1) p (cid:1)(cid:17) , where C = C ( β, T, k b k β,T , d, p ) >
0. Recall that p ≥ d has to be fixed. Step IV.
Let now 0 ≤ s < s < s ≤ T and set ρ = s − s . We will apply Corollary 4.2 with q = 8 /β . Let us fix p ≥ d (i.e., 1 − dp ≥ / d +1) p < θ and introduce the random variable Z = 1 + sup s ∈ [0 ,T ] | L s | d +1) p , Clearly we have that | L s ′ − L s | d +1) p ≤ Z , 0 ≤ s < s ′ ≤ T . Moreover by Remark2.1 we know that E [ Z ] < ∞ . Using Step III and the previous estimates we willcheck condition (4.3). In the sequel we denote by C k or c k positive constants whichmay depend on β, T, k b k β,T , θ and d but are independent of s , s and s . We haveΓ = E h(cid:16) d ( Y s , Y s ) · d ( Y s , Y s ) (cid:17) /β i ≤ C E h(cid:16) | s − s | /β + 1 {| L s − L s | > | s − s | / } + Z U s ,p | s − s | − dp (cid:17) · (cid:16) | s − s | /β + 1 {| L s − L s | > | s − s | / } + Z U s ,p | s − s | − dp (cid:17)i .
16e denote by c ≥ t /β ≤ c t − dp , t ∈ [0 , T ]. We obtain( ρ = s − s )Γ ≤ c C E h(cid:16) ρ − dp + 1 {| L s − L s | > | s − s | / } + Z U s ,p ρ − dp (cid:17) · (cid:16) ρ − dp + 1 {| L s − L s | > | s − s | / } + Z U s ,p ρ − dp (cid:17)i ≤ C (Γ + Γ + Γ + Γ ) , where Γ = E [1 {| L s − L s | > | s − s | / } · {| L s − L s | > | s − s | / } ],Γ = ρ − dp [ P ( | L s − L s | > | s − s | / + P ( | L s − L s | > | s − s | / )] , Γ = ρ − dp E [1 {| L s − L s | > | s − s | / } Z U s ,p + 1 {| L s − L s | > | s − s | / } Z U s ,p ] , Γ = ρ − dp ) + ρ − dp ) E [ ZU s ,p + ZU s ,p + Z U s ,p U s ,p ] . It is not difficult to treat Γ . Indeed we can use the Cauchy-Schwarz inequality andsup s ∈ [0 ,T ] E [ U ps,p ] = k ′ < ∞ (4.25)(see (3.6)) in order to control the expectation in Γ . For instance, we have E [ Z U s ,p U s ,p ] ≤ E [ Z ] / ( sup s ∈ [0 ,T ] E [ U s,p ]) / < ∞ , (4.26)since E [ Z ] < ∞ and p ≥ d . We obtainΓ ≤ C ρ − dp ) = C | s − s | − dp ) ≤ C | s − s | / . (4.27)To estimates the other terms we need to control P ( | L s | > | s | / ), s ≥ . To thispurpose we use Step I. We have P ( | L s | > s / ) ≤ P ( | B s | > s / /
3) + P ( | A s | > s / /
3) + P ( | C s | > s / / . By Chebychev inequality we get for s ≥ P ( | L s | > s / ) ≤ s / E [ | B s | + | A s | ] + 3 θ s θ/ E [ | C s | θ ] ≤ c ( s / + s − θ ) . (4.28)Using (4.28) and (4.25) we can estimate Γ and Γ . For instance, since the incrementsof L are independent and stationary, we findΓ ≤ ρ − dp [ P ( | L s − s | > | s − s | / ) + P ( | L s − s | > | s − s | / )] ≤ c ρ − dp ( ρ / + ρ − θ ) . We can proceed similary for Γ (see also (4.26)):Γ ≤ ρ − dp ( E [ Z ]) / ( sup s ∈ [0 ,T ] E [ U s,p ]) / h ( P ( | L s − s | > | s − s | / ) / +( P ( | L s − s | > | s − s | / )) / i ≤ C ρ − dp ( ρ / + ρ (1 − θ ) ) . − dp ) + 3 / > / − dp ) + (1 − θ ) > /
4. We getΓ + Γ ≤ C ρ = C | s − s | / . (4.29)Finally we considerΓ ≤ ( P ( | L s − s | > | s − s | / ) · ( P ( | L s − s | > | s − s | / ) (4.30) ≤ c ( ρ / + ρ − θ ) ) ≤ c | s − s | / . Collecting together estimates (4.27), (4.29) and (4.30) we arrive at E h(cid:16) d ( Y s , Y s ) · d ( Y s , Y s ) (cid:17) /β i ≤ C | s − s | / and this finishes the proof.Taking into account Theorem 4.4 and using the projections π x (see (3.15)), inthe sequel we write, for x ∈ R d , s, t ∈ [0 , T ], Z s = ( Z s,x ) x ∈ R d , with π x ( Z s ) = Z s,x ∈ G . (4.31)Recall that on some almost sure event Ω s , Y s,x = Z s,x , s ∈ [0 , T ], x ∈ R d (cf. (3.16)). Lemma 4.5.
Under the same assumptions of Lemma 4.3 consider the c`adl`ag process Z with values in C ( R d ; G ) of Theorem 4.4. The following statements hold:(i) There exists an almost sure event Ω (independent of s, t and x ) such that forany ω ∈ Ω , we have that t L t ( ω ) is c`adl`ag, L ( ω ) = 0 and s Z s ( ω ) is c`adl`ag;further, for any ω ∈ Ω , Z s,xt ( ω ) = x + Z ts b ( r, Z s,xr ( ω ) + L r ( ω ) − L s ( ω )) dr, s, t ∈ [0 , T ] , s ≤ t, x ∈ R d . Moreover, for s ≤ t , the r.v. Z s,xt is F Ls,t -measurable (if t ≤ s , Z s,xt = x ).(ii) There exists an almost sure event Ω and a B ([0 , T ]) × F -measurable function V n : [0 , T ] × Ω → [0 , ∞ ] , such that R T V n ( s, ω ) ds < ∞ , for any integer n > d , ω ∈ Ω , and, further, the following inequality holds on Ω sup t ∈ [0 ,T ] | Z s,xt − Z s,yt | ≤ | x − y | n − dn [( | x | ∨ | y | ) d +1 n ∨ V n ( s, · ) , x, y ∈ R d , s ∈ [0 , T ] . (4.32) (iii) There exists an almost sure event Ω such that for any ω ∈ Ω we have Z s,xt ( ω ) + L u ( ω ) − L s ( ω ) = Z u, Z s,xu ( ω )+ L u ( ω ) − L s ( ω ) t ( ω ) , (4.33) for any s, u, t ∈ [0 , T ] , ≤ s < u ≤ T , x ∈ R d .Proof. (i) On some almost sure event Ω ′ s (independent of t and x ) we know that( Y s,xt ) verifies the SDE (3.7) for any x ∈ R d and t ∈ [ s, T ]. Moreover Y s,xt = x , t < s .On the other hand on some almost sure event Ω s we have Y s,x = π x ( Y s ) = π x ( Z s ), for any x ∈ R d , see (4.31). Using ( Z s ), we can rewrite (3.7) on the eventΩ = T r ∈ Q ∩ [0 ,T ] (Ω ′ r ∩ Ω r ) as follows:[ π x ( Z s )] t = x + Z ts b ( r, [ π x ( Z s )] r + ( L r − L s )) dr, (4.34)18or any s ∈ Q ∩ [0 , T ], t ∈ [ s, T ], x ∈ R d . Note that by Theorem 4.4 for any ω ∈ Ω and any sequence s n → s + we have d ( Z s ( ω ) , Z s n ( ω )) → n → ∞ . Take now s ∈ [0 , T ) and let ( s n ) ⊂ Q ∩ [0 , T ] bea sequence monotonically decreasing to s . 
By the dominated convergence theoremand the right-continuity of L we have on Ω , for any t > s , x ∈ R d , [ π x ( Z s )] t = lim n →∞ [ π x ( Z s n )] t = x + lim n →∞ Z ts { r>s n } b ( r, [ π x ( Z s n )] r + ( L r − L s n )) dr = x + Z ts b ( r, [ π x ( Z s )] r + ( L r − L s )) dr and we get the assertion. (ii) Since on Ω we have Y s,x = π x ( Y s ) we obtain by (2.6) and (3.9), for any p ≥ s ∈ [0 ,T ] E [ sup s ≤ t ≤ T | X s,xt − X s,yt | p ] = sup s ∈ [0 ,T ] E [ sup ≤ t ≤ T | Y s,xt − Y s,yt | p ] (4.35)= sup s ∈ [0 ,T ] E [ k π x ( Z s ) − π y ( Z s ) k pG ] ≤ C ( T ) | x − y | p , x, y ∈ R d . Let s ∈ [0 , T ] and consider the random field ( π x ( Z s )) x ∈ R d with values in G . ApplyingTheorem 3.1 with ψ ( x, ω ) = π x ( Z s )( ω ) we obtain from (4.35) for p > d similarly to(3.5): there exists a V p ( s, ω ) ∈ [0 , ∞ ] such that, for any ω ∈ Ω, x, y ∈ R d , s ∈ [0 , T ], k π x ( Z s )( ω ) − π y ( Z s )( ω ) k G ≤ [( | x | ∨ | y | ) d +1 p ∨ V p ( s, ω ) | x − y | − d/p , (4.36)with V p ( s, ω ) = (cid:16) R R d R R d (cid:16) k π x ( Z s )( ω ) − π y ( Z s )( ω ) k G | x − y | (cid:17) p f ( x ) f ( y ) dxdy (cid:17) /p , ω ∈ Ω , s ∈ [0 , T ] ( f is defined in Theorem 3.1). Since the map: ( s, x, ω ) π x ( Z s )( ω ) is B ([0 , T ] × R d ) × F -measurable with values in G , it follows that the real map:( s, x, y, ω )
7→ k π x ( Z s )( ω ) − π y ( Z s )( ω ) k G | x − y | − { x = y } is B ([0 , T ] × R d ) × F -measurable. By the Fubini theorem we deduce that also V p : [0 , T ] × Ω → [0 , ∞ ] is B ([0 , T ]) × F -measurable. Hence we can consider therandom variable ω R T V p ( s, ω ) ds (with values in [0 , ∞ ]). Since, with the sameconstant C ( T ) appearing in (2.6),sup s ∈ [0 ,T ] E [ | V p ( s, · ) | p ] ≤ C ( d ) · C ( T ) , (4.37)we find E h(cid:16) R T V p ( s, · ) ds (cid:17) p i ≤ T p − R T E [( V p ( s, · )) p ] ds ≤ T p − c ( d ) C ( T ) < ∞ . Itfollows that, for any p > d, there exists an almost sure event Ω p such that Z T V p ( s, ω ) ds < ∞ , ω ∈ Ω p . (4.38)Let p = n . We find, for any n > d , R T V n ( s, ω ) ds < ∞ , when ω ∈ Ω = T n> d Ω n .Writing (4.36) for ω ∈ Ω and n > d we find the assertion. (iii) First note that the statement of Lemma 3.3 can be rewritten in term of theprocess Y s,x (see (3.9)) as follows: for any 0 ≤ s < u ≤ T there exists an almost sureevent Ω s,u (independent of t and x ) such that, for any ω ∈ Ω s,u , we have Y s,xt ( ω ) + L u ( ω ) − L s ( ω ) = Y u, Y s,xu ( ω )+ L u ( ω ) − L s ( ω ) t ( ω ) , for t ∈ [ u, T ] , x ∈ R d . (4.39)19ince ( Z s ) is a modification of ( Y s ) (see Theorem 4.4) we know that on some almostsure event Ω ′′ s,u ⊂ Ω s,u identity (4.39) holds when ( Y s,x ) is replaced by ( Z s,x ).Let us fix u ∈ (0 , T ]. We know that (4.39) holds for ( Z s,x ) when t ∈ [ u, T ], x ∈ R d and s ∈ [0 , u ) ∩ Q if ω ∈ Ω u = ∩ s ∈ [0 ,u ) ∩ Q (Ω ′′ s,u ∩ Ω ). Using that ( Z s,x ) with values in G is in particular right-continuous in s , uniformly in x , when x varies in compactsets of R d , it easy to check that (4.33) holds, for any 0 ≤ s < u ≤ T , x ∈ R d , t ∈ [ u, T ], when ω ∈ Ω u .Let us define Ω = T u ∈ Q ∩ [0 ,T ) Ω u ; fix any s, u ∈ [0 , T ], x ∈ R d , with 0 ≤ s
1) and find that (4.40) holds when u j is replaced by u . The proof of (4.33)is complete. Assertion (v) of the next theorem gives a Davie’s type uniqueness result for SDE(1.1). The other assertions collect results of Section 4 (see in particular Theorem 4.4and Lemma 4.5). These are used to prove the uniqueness property (v). We refer toCorollaries 5.4 and 5.5 for the case when b ( t, · ) is only locally H¨older continuous. We stress that all the next statements (i)-(v) hold when ω belongs to an almostsure event Ω ′ (independent of s , t ∈ [0 , T ] and x ∈ R d ). Theorem 5.1.
Let us consider the SDE (1.1) with b ∈ L ∞ (0 , T ; C ,βb ( R d ; R d )) , β ∈ (0 , , and suppose that L and b satisfy Hypotheses 1 and 2. Then there exists afunction φ ( s, t, x, ω ) , φ : [0 , T ] × [0 , T ] × R d × Ω → R d , (5.1) which is B ([0 , T ] × [0 , T ] × R d ) × F -measurable and such that (cid:0) φ ( s, t, x, · ) (cid:1) t ∈ [0 ,T ] is astrong solution of (1.1) starting from x at time s. Moreover, there exists an almostsure event Ω ′ such that the following assertions hold for any ω ∈ Ω ′ . (i) For any x ∈ R d , the mapping: s φ ( s, t, x, ω ) is c`adl`ag on [0 , T ] (uniformly in t and x ), i.e., let s ∈ (0 , T ) and consider sequences ( s k ) and ( r n ) such that s k → s − and r n → s + ; we have, for any M > , lim n →∞ sup | x |≤ M sup t ∈ [0 ,T ] | φ ( r n , t, x, ω ) − φ ( s, t, x, ω ) | = 0 , (5.2)lim k →∞ sup | x |≤ M sup t ∈ [0 ,T ] | φ ( s k , t, x, ω ) − φ ( s − , t, x, ω ) | = 0 (similar conditions hold when s = 0 and s = T ).(ii) For any x ∈ R d , s ∈ [0 , T ] , φ ( s, t, x, ω ) = x if ≤ t ≤ s , and φ ( s, t, x, ω ) = x + Z ts b ( r, φ ( s, r, x, ω )) dr + L t ( ω ) − L s ( ω ) , t ∈ [ s, T ] . (5.3)20 iii) For any s ∈ [0 , T ] , the function x φ ( s, t, x, ω ) is continuous in x uniformly in t . Moreover, for any integer n > d , there exists a B ([0 , T ]) × F -measurable function V n : [0 , T ] × Ω → [0 , ∞ ] such that R T V n ( s, ω ) ds < ∞ and sup t ∈ [0 ,T ] | φ ( s, t, x, ω ) − φ ( s, t, y, ω ) | (5.4) ≤ V n ( s, ω ) | x − y | n − dn [( | x | ∨ | y | ) d +1 n ∨ , x, y ∈ R d , n > d, s ∈ [0 , T ] . (iv) For any ≤ s < r ≤ t ≤ T , x ∈ R d , we have φ ( s, t, x, ω ) = φ ( r, t, φ ( s, r, x, ω ) , ω ) . (5.5) (v) Let s ∈ [0 , T ) , τ = τ ( ω ) ∈ ( s , T ] and x ∈ R d . If a measurable function g : [ s , τ ) → R d solves the integral equation g ( t ) = x + Z ts b ( r, g ( r )) dr + L t ( ω ) − L s ( ω ) , t ∈ [ s , τ ) , (5.6) then we have g ( r ) = φ ( s , r, x, ω ) , for r ∈ [ s , τ ) .Proof. 
Let us consider the process Z = ( Z s ) s ∈ [0 ,T ] of Theorem 4.4 with values in C ( R d ; G ). Recall the notation Z s,xt = π x ( Z s )( t ) (see (3.15)). We define for ω ∈ Ω , s, t ∈ [0 , T ] , x ∈ R d : φ ( s, t, x, ω ) = Z s,xt ( ω ) + L t ( ω ) − L s ( ω ) , if s ≤ t, (5.7)and φ ( s, t, x, ω ) = x if s > t . The fact that, for any 0 ≤ s < t ≤ T , x ∈ R d , therandom variable φ ( s, t, x, · ) is F Ls,t -measurable follow from Theorem 4.4 and (i) inLemma 4.5. We also define Ω ′ = Ω ∩ Ω ∩ Ω , where the almost sure events Ω k , k = 1 , ,
3, are considered in Lemma 4.5.Assertions (i), (ii), (iii), (iv) follow directly from Theorem 4.4 and Lemma 4.5.More precisely, (i) and (ii) follow from the first assertion of Lemma 4.5 since ( Z s )takes values in C ( R d ; G ) with c`adl`ag paths. Assertions (iii) and (iv) follow respec-tively from the second and third assertion of Lemma 4.5. (v) Let ω ∈ Ω ′ be fixed and let g : [ s , τ [ → R d be a solution to the integral equation(5.6) corresponding to ω . Let us fix t ∈ ( s , τ ).We introduce an auxiliary function f : [ s , t ] → R d which is similar to the oneused in proof of Theorem 3.1 in [30], f ( s ) = φ ( s, t, g ( s ) , ω ) , s ∈ [ s , t ] . (5.8)We will show that f is constant on [ s , t ]. Once this is proved we can deduce that f ( t ) = f ( s ) and so we find g ( t ) = φ ( s , t, x, ω ) which shows the assertion since t isarbitrary. In the sequel we proceed in three steps. Step I.
We establish some estimates for |g(r) − φ(u, r, g(u), ω)| when s_0 ≤ u ≤ r ≤ t_0. Since

g(r) = x + ∫_{s_0}^{u} b(p, g(p)) dp + (L_u(ω) − L_{s_0}(ω)) + ∫_{u}^{r} b(p, g(p)) dp + (L_r(ω) − L_u(ω)),
we obtain

|g(r) − φ(u, r, g(u), ω)| ≤ | g(u) + ∫_u^r b(p, g(p)) dp + (L_r(ω) − L_u(ω)) − g(u) − ∫_u^r b(p, φ(u, p, g(u), ω)) dp − (L_r(ω) − L_u(ω)) |
≤ ∫_u^r |b(p, g(p)) − b(p, φ(u, p, g(u), ω))| dp ≤ 2‖b‖_0 |r − u|.

Now using the Hölder continuity of b:

|g(r) − φ(u, r, g(u), ω)| ≤ ∫_u^r |b(p, g(p)) − b(p, φ(u, p, g(u), ω))| dp   (5.9)
≤ [b]_{β,T} ∫_u^r |g(p) − φ(u, p, g(u), ω)|^β dp ≤ (2‖b‖_0)^β [b]_{β,T} ∫_u^r |p − u|^β dp ≤ (2‖b‖_0)^β [b]_{β,T} |r − u|^{1+β}.

Step II.
We prove that f defined in (5.8) is continuous on [s_0, t_0]. We first show that it is right-continuous on [s_0, t_0). Let us fix s ∈ [s_0, t_0) and consider a sequence (s_n) such that s_n → s⁺. We prove that f(s_n) → f(s) as n → ∞. Note that |g(r)| ≤ M_0, r ∈ [s_0, τ), where M_0 = |x| + T‖b‖_0 + C_0(ω). We have

|f(s_n) − f(s)| ≤ |φ(s_n, t_0, g(s_n), ω) − φ(s, t_0, g(s_n), ω)| + |φ(s, t_0, g(s_n), ω) − φ(s, t_0, g(s), ω)| ≤ J_n + I_n,

where I_n = |φ(s, t_0, g(s_n), ω) − φ(s, t_0, g(s), ω)| and

J_n = sup_{|x|≤M_0} sup_{t∈[0,T]} |φ(s_n, t, x, ω) − φ(s, t, x, ω)|.

Since g(s_n) → g(s) by the right continuity of g, we obtain lim_{n→∞} I_n = 0 thanks to (5.4). Moreover lim_{n→∞} J_n = 0 thanks to (5.2).

Let us show that f is left-continuous on (s_0, t_0]. We fix s ∈ (s_0, t_0] and consider a sequence (s_k) ⊂ (s_0, s) such that s_k → s⁻. We prove that f(s_k) → f(s) as k → ∞. Using the flow property (iv) we find

|f(s_k) − f(s)| = |φ(s_k, t_0, g(s_k), ω) − φ(s, t_0, g(s), ω)| = |φ(s, t_0, φ(s_k, s, g(s_k), ω), ω) − φ(s, t_0, g(s), ω)|.

By Step I we know that

|φ(s_k, s, g(s_k), ω) − g(s)| ≤ 2‖b‖_0 |s_k − s|,   (5.10)

which tends to 0 as k → ∞. Using (5.10) and the continuity property (iii) we obtain the claim since

lim_{k→∞} |φ(s, t_0, φ(s_k, s, g(s_k), ω), ω) − φ(s, t_0, g(s), ω)| = 0.

Step III.
We prove that f is constant on [s_0, t_0]. We will use the following well-known lemma (see, for instance, pages 239–240 in [36]):

Let S be a real Banach space and consider a continuous mapping F : [a, b] ⊂ R → S, b > a. Suppose that for any h ∈ (a, b] there exists the left derivative

(d⁻F/dh)(h) = lim_{h′→h⁻} (F(h′) − F(h)) / (h′ − h)   (5.11)

and this derivative is identically zero on (a, b]. Then F is constant.

Note that by considering continuous linear functionals on S one may reduce the proof of the lemma to that of a real-analysis result. To apply the previous lemma with [s_0, t_0] = [a, b] we first extend our function f to [s_0, ∞) by setting f(r) = f(t_0) for r ≥ t_0. Then set S = L¹([0, t_0]; R^d) and define F : [s_0, t_0] → S as follows:

F(h) = f(· + h) ∈ S, h ∈ [s_0, t_0], i.e., F(h)(r) = f(r + h), r ∈ [0, t_0].

If we prove that the mapping F is constant then we deduce (taking h = s_0 and h = t_0) that f(s_0 + ·) = f(t_0 + ·) = f(t_0) in S. However, since f is continuous this implies that f is constant and finishes the proof.

The continuity of F, i.e., for any h ∈ [s_0, t_0],

lim_{h′→h} ‖F(h) − F(h′)‖_S = lim_{h′→h} ∫_0^{t_0} |f(r + h) − f(r + h′)| dr = 0,

is clear, using the continuity of f. Let us prove that the left derivative of F is identically zero on (s_0, t_0]. Using the flow property (iv) we find, for h, h′ ∈ [s_0, t_0], h′ < h and 0 ≤ r ≤ t_0 − h,

|f(r + h) − f(r + h′)|   (5.12)
= |φ(r + h, t_0, g(r + h), ω) − φ(r + h, t_0, φ(r + h′, r + h, g(r + h′), ω), ω)|.
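The reduction of the lemma above to the real-valued case, mentioned only in passing, can be made explicit; the following is a sketch of that standard Hahn–Banach argument, in the lemma's notation:

```latex
% Reduction of the Banach-space lemma to the real-valued case (sketch).
% For a fixed \varphi \in S^{*} set F_{\varphi}(h) := \varphi(F(h)), h \in [a,b].
% F_{\varphi} is continuous and, by linearity and continuity of \varphi,
\[
\frac{d^{-}F_{\varphi}}{dh}(h)
   = \lim_{h' \to h^{-}} \varphi\!\Big(\frac{F(h') - F(h)}{h' - h}\Big)
   = \varphi\!\Big(\frac{d^{-}F}{dh}(h)\Big) = 0,
   \qquad h \in (a,b].
\]
% The scalar version of the lemma yields F_{\varphi}(h) = F_{\varphi}(a), i.e.
% \varphi(F(h) - F(a)) = 0 for every \varphi \in S^{*}; since S^{*} separates
% the points of S (Hahn--Banach), F(h) = F(a) for all h \in [a,b].
```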
Using (5.12) and changing variable we obtain (recall that f ( r ) = f ( t ), r ≥ t ) Z t | f ( r + h ) − f ( r + h ′ ) | dr (5.13)= Z t − h | φ ( r + h, t, g ( r + h ) , ω ) − φ ( r + h, t, φ ( r + h ′ , r + h, g ( r + h ′ ) , ω ) , ω ) | dr + Z t − h ′ t − h | f ( t ) − f ( r + h ′ ) | dr = Z th | φ ( p, t, g ( p ) , ω ) − φ ( p, t, φ ( p + h ′ − h, p, g ( p + h ′ − h ) , ω ) , ω ) | dp + Z t − h ′ t − h | f ( t ) − f ( r + h ′ ) | dr. In order to estimate k F ( h ) − F ( h ′ ) k S let us denote by λ f the modulus of continuityof f . Since in the last integral t − h + h ′ ≤ r + h ′ ≤ t we have the estimate Z t − h ′ t − h | f ( t ) − f ( r + h ′ ) | dr ≤ | h − h ′ | λ f ( | h − h ′ | )and lim r → + λ f ( r ) = 0. Taking into account that there exists a constant N = N ( x, T, k b k , ω ) ≥ | g ( r ) | + | φ ( r, u, g ( r ) , ω ) | ≤ N , s ≤ r ≤ u ≤ T,
23e find for p ∈ [ h, t ], n > d (see (5.4) and (5.9)) | φ ( p, t, g ( p ) , ω ) − φ ( p, t, φ ( p + h ′ − h, p, g ( p + h ′ − h ) , ω ) , ω ) |≤ V n ( p, ω ) | g ( p ) − φ ( p + h ′ − h, p, g ( p + h ′ − h ) , ω ) | n − dn N d +1 n ≤ (2 k b k ) β ( n − dn ) [ b ] n − dn β,T V n ( p, ω ) | h ′ − h | (1+ β )( n − dn ) N d +1 n . Recall that V n ( p, ω ) ∈ [0 , ∞ ] but R T V n ( p, ω ) dp < ∞ . Using the previous inequalityand (5.13) we obtain for h, h ′ ∈ [ s , t ] , h ′ < h Z t | f ( r + h ) − f ( r + h ′ ) | dr (5.14) ≤ C | h ′ − h | (1+ β )( n − dn ) Z T V n ( p, ω ) dp + | h − h ′ | λ f ( | h − h ′ | ) , where C = C ( β, k b k β,T , ω, T, x, n, d ) >
0. Now we choose n large enough such that(1 + β )( n − dn ) >
1. Dividing by | h − h ′ | and passing to the limit as h ′ → h − in (5.14)we find lim h ′ → h − | h − h ′ | k F ( h ) − F ( h ′ ) k L ([0 ,t ]; R d ) = 0 . This shows that there exists the left derivative of F in each h ∈ ( s , t ] and thisderivative is identically zero on ( s , t ]. By the lemma mentioned at the beginning ofIII Step we obtain that F is constant. Thus f is constant on [ s , t ] and this finishesthe proof. Remark . Note that if g : [ s , τ ] → R d , τ = τ ( ω ) ∈ ( s , T ], solves (5.6) on [ s , τ ]then we have g ( τ ) = φ ( s , τ, x, ω ), ω ∈ Ω ′ . Indeed applying (v) on [ s , τ ) we can usethat R τs b ( r, g ( r )) dr = R τs b ( r, φ ( s , r, x, ω )) dr. Remark . It is a natural question if one can improve (5.4) in Theorem 5.1. Apossible stronger assertion could be the following one: for each α ∈ (0 ,
1) and N ∈ R one can find C ( α, T, N, ω ) < ∞ such that, for any x, y ∈ R d , | x | , | y | < N ,sup s ∈ [0 ,T ] sup t ∈ [ s,T ] | φ ( s, t, x, ω ) − φ ( s, t, y, ω ) | ≤ C ( α, T, N, ω ) | x − y | α , ω ∈ Ω ′ . (5.15)This condition is stated as property 4 in Proposition 2.3 of [30] for SDEs (1.1) when L is a Wiener process and b ∈ L q ([0 , T ]; L p ( R d )), d/p + 2 /q < b ∈ L ∞ (0 , T ; C ,βb ( R d ; R d )) we do not expect that (5.15) holds in gen-eral when L and b satisfy Hypotheses 1 and 2. Indeed a basic strategy to get (5.15)when L is a Wiener process is to use the Kolmogorov-Chentsov test to obtain aH¨older continuous dependence on ( s, t, x ); one cannot use this approach when L isa discontinuous process. Finally note that the proof of (5.15) given in [30] is notcomplete ((5.15) does not follow directly from estimate (4) in page 5 of [30] applyingthe Kolmogorov-Chentsov test).Now we present two corollaries of Theorem 5.1 which deal with SDEs (1.1) withpossibly unbounded b .When b : [0 , T ] × R d → R d is measurable and satisfies, for any η ∈ C ∞ ( R d ), b · η ∈ L ∞ (0 , T ; C ,βb ( R d ; R d )) we say that b ∈ L ∞ (0 , T ; C ,βloc ( R d ; R d )). By a localizationprocedure we get 24 orollary 5.4. Let b ∈ L ∞ (0 , T ; C ,βloc ( R d ; R d )) , β ∈ (0 , , and suppose that, for any η ∈ C ∞ ( R d ) , the L´evy process L and b · η satisfy Hypotheses 1 and 2.Then there exists an almost sure event Ω ′′ such that, for any ω ′′ ∈ Ω ′′ , x ∈ R d , s ∈ [0 , T ) and τ = τ ( ω ′′ ) ∈ ( s , T ] , if g , g : [ s , τ ) → R d are c`adl`ag solutions of (5.6) when ω = ω ′′ , starting from x , then g ( r ) = g ( r ) , r ∈ [ s , τ ) .Proof. Let ϕ ∈ C ∞ ( R d ) be such that ϕ = 1 on {| x | ≤ } and ϕ ( x ) = 0 if | x | > b n ( t, x ) = b ( t, x ) ϕ ( xn ), t ∈ [0 , T ], x ∈ R d and n ≥
1. Consider for each n an almost sure event Ω′_n related to b_n ∈ L^∞(0, T; C_b^{0,β}(R^d; R^d)) by Theorem 5.1; set Ω′′ = ∩_{n≥1} Ω′_n. Suppose that g_1, g_2 are solutions of (5.6) for a fixed ω′′ ∈ Ω′′. Let

τ_k^{(n)} = τ_k^{(n)}(ω′′) = inf{ t ∈ [s_0, τ) : |g_k(t)| ≥ n }, k = 1, 2

(if |g_k(s)| < n for any s ∈ [s_0, τ) then we set τ_k^{(n)} = τ). Define τ^{(n)} = τ_1^{(n)} ∧ τ_2^{(n)} and note that on Ω′′ we have τ^{(n)} ↑ τ as n → ∞. Since on [s_0, τ^{(n)}(ω′′)) both g_1 and g_2 solve an equation like (5.6) with b replaced by b_n and ω = ω′′, we can apply (v) of Theorem 5.1 and conclude that g_1 = g_2 on [s_0, τ^{(n)}(ω′′)). Since this holds for any n ≥ 1 we get g_1 = g_2 on [s_0, τ(ω′′)).

Next we construct ω by ω strong solutions to (1.1) when b is possibly unbounded. To simplify we deal with the initial time s = 0. Corollary 5.5.
Suppose that L and b verify the assumptions of Corollary 5.4. Moreover assume that

|b(t, x)| ≤ C(1 + |x|), x ∈ R^d, t ∈ [0, T],   (5.16)

for some constant C > 0. Let x ∈ R^d and s = 0. Then there exists a (unique) strong solution to (1.1) starting from x.

Proof. We know that t ↦ L_t(ω) is càdlàg for any ω ∈ Ω′, where Ω′ is an almost sure event. When ω ∈ Ω′ a standard argument based on the Ascoli–Arzelà theorem shows that there exists a continuous solution v = v(·, ω) to v(t) = x + ∫_0^t b(s, v(s) + L_s(ω)) ds on [0, T]. We define v(t, ω) = 0 if ω ∉ Ω′, t ∈ [0, T]. By using the function ϕ as in the proof of Corollary 5.4 we introduce b_n(t, x) = b(t, x) ϕ(x/n), t ∈ [0, T], x ∈ R^d and n ≥
1. According to Theorem 5.1 for each n there exists a function φ n as in (5.1)and an almost sure event Ω ′ n corresponding to b n such that assertions (i)-(v) hold.Set Ω ′′ = ( ∩ n ≥ Ω ′ n ) ∩ Ω ′ .Define g ( t, ω ) = v ( t, ω ) + L t ( ω ), t ∈ [0 , T ], ω ∈ Ω , and set τ ( n ) = τ ( n ) ( ω ) =inf { t ∈ [0 , T ) : | g ( t, ω ) | ≥ n } (if | g ( s, ω ) | < n , for any s ∈ [0 , T ) then we set τ ( n ) ( ω ) = T ). Note that on Ω ′′ we have τ ( n ) ↑ T as n → ∞ .Let ω ∈ Ω ′′ and n ≥
1. Since on [0, τ^{(n)}(ω)) the function g(·, ω) solves an equation like (5.6) with s_0 = 0 and b replaced by b_{n+k}, k ≥ 0, we can apply (v) of Theorem 5.1 and get that g(t, ω) = φ_{n+k}(0, t, x, ω), for any t ∈ [0, τ^{(n)}(ω)), k ≥ 0. Since τ^{(n)} ↑ T we deduce that, uniformly on compact sets of [0, T), for any ω ∈ Ω′′, we have

lim_{n→∞} φ_n(0, t, x, ω) = g(t, ω).

It follows that g(t, ·) is F_t^L-measurable, for any t ∈ [0, T). By setting g(T, ω) = x + ∫_0^T b(r, g(r, ω)) dr + L_T(ω), we get that (g(t, ·)) is a strong solution on [0, T].

Remark. The previous condition (5.16) can be relaxed by requiring that, for fixed x ∈ R^d, s = 0 and ω ∈ Ω′, there exists a continuous solution to the integral equation v(t) = x + ∫_0^t b(s, v(s) + L_s(ω)) ds on [0, T]. The assertion about existence and uniqueness of a strong solution starting from x remains true.

6 Uniqueness for SDEs driven by stable Lévy processes
In this section, using also results from [24] and [25], we show that Theorem 5.1 can be applied to a class of SDEs driven by non-degenerate α-stable type processes L. Let s ≥
0; we consider

X_t(ω) = x + ∫_s^t b(X_u(ω)) du + L_t(ω) − L_s(ω),   (6.1)

x ∈ R^d, d ≥ 1, t ≥ s, where b ∈ C_b^{0,β}(R^d, R^d), β ∈ (0, 1). We deal with a pure-jump Lévy process L (without drift term), i.e., we assume that the generating triplet is (ν, 0,
0) (i.e., Q = 0 and a = 0 as in (2.5)). To state our assumptions on L we use the convolution semigroup (P_t) associated to L (or to its Lévy measure ν) and acting on C_b(R^d), i.e., P_t : C_b(R^d) → C_b(R^d), t ≥ 0,

P_t f(x) = E[f(x + L_t)] = ∫_{R^d} f(x + z) μ_t(dz), t > 0, f ∈ C_b(R^d), x ∈ R^d,

where μ_t is the law of L_t, and P_0 = I (cf. [28] or [1]). The generator L of (P_t) is

L g(x) = ∫_{R^d} ( g(x + y) − g(x) − 1_{{|y|≤1}} ⟨y, Dg(x)⟩ ) ν(dy), x ∈ R^d,   (6.2)

with g ∈ C_b^∞(R^d) (see Section 6.7 in [1] and Section 31 in [28]). We now consider the Blumenthal–Getoor index α = α(ν) (see [5]):

α = inf{ σ > 0 : ∫_{|y|≤1} |y|^σ ν(dy) < ∞ };   (6.3)

we always have α ∈ [0, 2]. We will assume that α ∈ (0, 2) and impose a non-degeneracy condition involving ν.

Hypothesis 3. Let α ∈ (0, 2). The convolution semigroup (P_t) verifies: P_t(C_b(R^d)) ⊂ C_b^1(R^d), t >
0, and, moreover, there exists c_α = c_α(ν) > 0 such that

sup_{x∈R^d} |DP_t f(x)| ≤ c_α t^{−1/α} · sup_{x∈R^d} |f(x)|, t ∈ (0, 1], f ∈ C_b(R^d).   (6.4)

Note that Hypothesis 3 implies both Hypotheses 1 and 2 in [25] (taking there α equal to our α). Indeed, since α ∈ (0,
2) we have ∫_{|y|≤1} |y|^σ ν(dy) < ∞ for σ > α. To check the validity of the gradient estimate (6.4) we only mention a criterion which is given in [25]; it is based on Theorem 1.3 in [29]. Theorem 6.1.
Let L be a pure-jump Lévy process. A sufficient condition in order that (6.4) holds with α replaced by γ ∈ (0, 2) is the following one: the Lévy measure ν of L verifies ν(B) ≥ ν_1(B), B ∈ B(R^d), where ν_1 is a Lévy measure on R^d such that its corresponding symbol

ψ_1(h) = −∫_{R^d} ( e^{i⟨h,y⟩} − 1 − i⟨h, y⟩ 1_{{|y|≤1}}(y) ) ν_1(dy)

satisfies, for some positive constants c_1, c_2 and M,

c_1 |x|^γ ≤ Re ψ_1(x) ≤ c_2 |x|^γ, when |x| > M.   (6.5)

Examples 6.2. The next examples of α-stable type Lévy processes are also considered in [25]. It is easy to check that in each example α(ν) = α ∈ (0, 2). Thanks to Theorem 6.1 also (6.4) holds in each example. Consider the following Lévy measure ν̃:

ν̃(B) = ∫_0^r dt/t^{1+α} ∫_S 1_B(tξ) μ(dξ), B ∈ B(R^d)   (6.6)

(cf. Example 1.5 of [29] with the index β of [29] equal to ∞). Here r > 0, μ is a non-degenerate finite non-negative measure on B(R^d) with support on the unit sphere S (non-degeneracy of μ is equivalent to saying that its support is not contained in a proper linear subspace of R^d), and α ∈ (0, 2). The measure ν̃ verifies Hypothesis 3 since its symbol ψ̃ verifies (6.5) with γ = α. This was already remarked on page 1146 of [29]. We only note that, if h ≠ 0, we have

Re ψ̃(h) = ∫_0^r dt/t^{1+α} ∫_S [ 1 − cos( ⟨ h/|h|, t|h| ξ ⟩ ) ] μ(dξ).

By the change of variable s = t|h|, after some computations one arrives at (6.5). Moreover Hypothesis 2 holds. Note that ∫_{|y|>1} |y|^θ ν̃(dy) < ∞, θ ∈ (0, α). Using also ν̃ we find that the next examples of Lévy processes verify Hypotheses 2 and 3.

(i) L is a non-degenerate symmetric α-stable process (see, for instance, [28] and the references therein). In this case ν(B) = ∫_0^∞ dt/t^{1+α} ∫_S 1_B(tξ) μ(dξ), B ∈ B(R^d), α ∈ (0, 2), where μ is as in (6.6).
A standard rotationally invariant α-stable process L belongs to this class, since its Lévy measure has density c|x|^{−d−α} (with respect to the Lebesgue measure in R^d).

(ii) L is an α-stable tempered process of special form. Here

ν(B) = ∫_0^∞ e^{−t} dt/t^{1+α} ∫_S 1_B(tξ) μ(dξ), B ∈ B(R^d),

where μ is as in (6.6), α ∈ (0, 2). Note that ν(B) ≥ e^{−1} ν̃(B), B ∈ B(R^d), where ν̃ is given in (6.6) with r = 1.

(iii) L is a truncated α-stable process. In this case ν(B) = c ∫_{|x|≤1} 1_B(x) |x|^{−d−α} dx, B ∈ B(R^d), α ∈ (0, 2).

(iv) L is a relativistic α-stable process (cf. [27] and the references therein). Here

ψ(h) = (|h|² + m^{2/α})^{α/2} − m, for some m > 0, α ∈ (0, 2), h ∈ R^d,

and so (6.4) holds. Moreover by Lemma 2 in [27] we know that ν has the density

C_{α,d} |x|^{−d−α} e^{−m^{1/α}|x|} · φ(m^{1/α}|x|), x ≠ 0,

with 0 ≤ φ(s) ≤ c_{α,d,m}(s^{(d+α−1)/2} + 1), s ≥
0. Hence α = α andalso Hypothesis 2 holds for any θ > . We first present results on strong existence and uniqueness for (6.1) when s = 0which are special cases of Lemma 5.2 and Theorem 5.3 in [25]. Then we study L p -dependence from the initial condition x following Theorem 4.3 in [24]. Finally inTheorem 6.6 we will consider the general case when s ∈ [0 , T ].27ll these theorems do not require the gradient estimates (6.4). However theyassume the Blumenthal-Getoor index α ∈ (0 , b ∈ C b ( R d , R d ) and classical solv-ability of the following Kolmogorov type equation: λu ( x ) − L u ( x ) − Du ( x ) b ( x ) = b ( x ) , x ∈ R d , (6.7)where b : R d → R d is given in (6.1), L in (6.2) and λ >
0; the equation is intendedcomponentwise, i.e., u : R d → R d and, setting L b = L + b ( x ) · D , λu k ( x ) − L b u k ( x ) = b k ( x ) , k = 1 , . . . , d, (6.8)with u ( x ) = ( u k ( x )) k =1 ,...,d and b ( x ) = ( b k ( x )) k =1 ,...,d . The approach to get stronguniqueness passing through solutions to (6.7) is similar to the one used in Section 2of [12] (see also [35]).Remark that L g ( x ) in (6.2) is well defined even for g ∈ C γb ( R d ) if α < γ and γ ∈ [0 ,
1) (cf. formula (13) in [25]). Indeed when | y | ≤ | g ( y + x ) − g ( x ) − y · Dg ( x ) | ≤ [ Dg ] γ | y | γ , x ∈ R d . In addition L g ∈ C b ( R d ) when g ∈ C γb ( R d ) and 1 + γ > α . The next re-sult is stated in Theorem 5.3 of [25] in a more general form which also shows thedifferentiability of solutions with respect to x and the homeomorphism property. Theorem 6.3.
Let L be any Lévy process on (Ω, F, P) with generating triplet (ν, 0, 0) such that α₁ = α₁(ν) ∈ (0, 2) (see (6.3)), and let b ∈ C_b(R^d, R^d) in (6.1). Suppose that, for some λ > 0, there exists u = u_λ ∈ C^{1+γ}_b(R^d, R^d), γ ∈ (0, 1) and 2γ > α₁, which solves (6.7). Moreover, assume ‖Du_λ‖₀ < 1/2.

Then on (Ω, F, P), for any x ∈ R^d, there exists a pathwise unique strong solution (X^x_t)_{t≥0} to (6.1) when s = 0.

Next we formulate a special case of Lemma 5.2 in [25]. It uses the stochastic integral against the compensated Poisson random measure Ñ (see, for instance, [20]).

Lemma 6.4.
Under the same hypotheses of Theorem 6.3, let T > 0 and suppose that (X^x_t)_{t∈[0,T]} is a strong solution of (6.1) on [0, T] when s = 0 (starting from x ∈ R^d). Then, using u_λ of Theorem 6.3, we have, P-a.s., for any t ∈ [0, T],

u_λ(X^x_t) − u_λ(x) (6.9)
= x + L_t − X^x_t + λ ∫₀^t u_λ(X^x_s) ds + ∫₀^t ∫_{R^d\{0}} [ u_λ(X^x_{s−} + y) − u_λ(X^x_{s−}) ] Ñ(ds, dy).

Proof.
The assertion is stated in Lemma 5.2 of [25] for weak solutions (X^x_t)_{t≥0} under the condition 1 + γ > α₁, γ ∈ (0, 1). The same proof works for a strong solution (X^x_t)_{t∈[0,T]} which solves (6.1) on [0, T] (the proof is based on Itô's formula for u_λ(X^x_t)); further, the condition 2γ > α₁ of Theorem 6.3 implies 1 + γ > α₁.

To prove Davie's uniqueness for (6.1) we need the following L^p-continuity of the solutions with respect to the initial conditions.

Theorem 6.5.
Under the same hypotheses of Theorem 6.3, let T > 0, s = 0, and consider two strong solutions (X^x_t)_{t∈[0,T]} and (X^y_t)_{t∈[0,T]} of (6.1) on [0, T] which are defined on (Ω, F, P), starting from x and y ∈ R^d respectively. For any t ∈ [0, T], p ≥ 2, we have

E[ sup_{0≤s≤t} |X^x_s − X^y_s|^p ] ≤ C(t) |x − y|^p, (6.10)

with C(t) = C(t, ν, p, λ, d, γ, ‖u_λ‖_{C^{1+γ}_b}) > 0 which is independent of x and y; here u_λ is as in Theorem 6.3 (further, C(t, ν, p, λ, d, γ, ·) is increasing).

Proof. The proof follows that of (i) in Theorem 4.3 of [24]; we only give a sketch. We set X = X^x, Y = X^y and u = u_λ. From Lemma 6.4, using ‖Du‖₀ ≤ 1/2, we have, P-a.s.,

|X_t − Y_t| ≤ 2 ( Γ₁(t) + Γ₂(t) + Γ₃(t) + Γ₄ ), where

Γ₁(t) = | ∫₀^t ∫_{|z|>1} [ u(X_{s−}+z) − u(X_{s−}) − u(Y_{s−}+z) + u(Y_{s−}) ] Ñ(ds, dz) |,

Γ₂(t) = λ ∫₀^t |u(X_s) − u(Y_s)| ds,

Γ₃(t) = | ∫₀^t ∫_{|z|≤1} [ u(X_{s−}+z) − u(X_{s−}) − u(Y_{s−}+z) + u(Y_{s−}) ] Ñ(ds, dz) |,

Γ₄ = |u(x) − u(y)| + |x − y| ≤ 2 |x − y|.

Remark that, P-a.s., sup_{0≤r≤t} |X_r − Y_r|^p ≤ C |x − y|^p + C Σ_{j=1}^{3} sup_{0≤r≤t} Γ_j(r)^p. By the Hölder inequality, sup_{0≤r≤t} Γ₂(r)^p ≤ C₁ t^{p−1} ∫₀^t sup_{0≤s≤r} |X_s − Y_s|^p dr, where C₁ = C₁(p, λ, ‖u_λ‖_{C^{1+γ}_b}). To estimate Γ₁ and Γ₃ we use L^p-estimates for stochastic integrals against Ñ (cf. [20, Theorem 2.11] or the proof of Proposition 6.6.2 in [1]). Since |u(X_{s−}+z) − u(Y_{s−}+z) + u(Y_{s−}) − u(X_{s−})| ≤ |X_{s−} − Y_{s−}|, setting A = {|z| > 1}, we find

E[ sup_{0≤r≤t} Γ₁(r)^p ] ≤ C E[ ( ∫₀^t ds ∫_A |u(X_{s−}+z) − u(Y_{s−}+z) + u(Y_{s−}) − u(X_{s−})|² ν(dz) )^{p/2} ]
+ C E ∫₀^t ds ∫_A |u(X_{s−}+z) − u(Y_{s−}+z) + u(Y_{s−}) − u(X_{s−})|^p ν(dz)
≤ C₂ (1 + t^{p/2−1}) ∫₀^t E[ sup_{0≤r≤s} |X_r − Y_r|^p ] ds,

where C₂ involves ν({|z|>1}) + ( ν({|z|>1}) )^{p/2}. To treat Γ₃ we need the hypothesis 2γ > α₁. By L^p-estimates of stochastic integrals and using Lemma 4.1 in [24] we get

E[ sup_{0≤r≤t} Γ₃(r)^p ] ≤ C ‖u‖^p_{C^{1+γ}_b} E[ ( ∫₀^t dr ∫_{|z|≤1} |X_r − Y_r|² |z|^{2γ} ν(dz) )^{p/2} ]
+ C ‖u‖^p_{C^{1+γ}_b} E ∫₀^t |X_r − Y_r|^p dr ∫_{|z|≤1} |z|^{γp} ν(dz).

Note that ∫_{|z|≤1} |z|^{pγ} ν(dz) < ∞, since p ≥ 2 and 2γ > α₁. Collecting the previous estimates, we arrive at

E[ sup_{0≤r≤t} |X_r − Y_r|^p ] ≤ C |x − y|^p + C₃ (1 + t^{p−1}) ∫₀^t E[ sup_{0≤r≤s} |X_r − Y_r|^p ] ds,

with C₃ = C₃(ν, p, λ, d, γ, ‖u_λ‖_{C^{1+γ}_b}) > 0. By the Gronwall lemma we obtain the assertion with C(t) = C exp( C₃ t (1 + t^{p−1}) ).

As a consequence of the previous results we get

Theorem 6.6.
Under the same hypotheses of Theorem 6.3, let T > 0 and s ∈ [0, T]. Then, for any x ∈ R^d, there exists a pathwise unique strong solution X̃^{s,x} = (X̃^{s,x}_t)_{t∈[0,T]} to (6.1) on (Ω, F, P) (recall that X̃^{s,x}_t = x for t ≤ s). Moreover, if U^{s,x} and U^{s,y} are two strong solutions on [0, T] defined on (Ω, F, P) and starting at x and y, then we have, for p ≥ 2,

sup_{s∈[0,T]} E[ sup_{s≤t≤T} |U^{s,x}_t − U^{s,y}_t|^p ] ≤ C(T) |x − y|^p, x, y ∈ R^d, (6.11)

where C(T) = C(T, ν, p, λ, d, γ, ‖u_λ‖_{C^{1+γ}_b}) > 0 is as in (6.10).

Proof. Existence. Let us fix s ∈ [0, T] and consider the new process L^{(s)} = (L^{(s)}_t) on (Ω, F, P), L^{(s)}_t = L_{s+t} − L_s, t ≥ 0. This is a Lévy process with the same generating triplet as L, and it is independent of F^L_s (see Proposition 10.7 in [28]). According to Theorem 6.3 there exists a unique strong solution to

X_t = x + ∫₀^t b(X_r) dr + L^{(s)}_t, t ≥ 0, (6.12)

which we denote by (X^x_{t,L^{(s)}}) to stress its dependence on L^{(s)}. Note that, for any t ≥ 0, X^x_{t,L^{(s)}} is measurable with respect to F^{L^{(s)}}_t = F^L_{s,t+s}. Let us define a new process with càdlàg paths (X̃^{s,x}_t)_{t∈[0,T]},

X̃^{s,x}_t = X^x_{t−s,L^{(s)}}, for s ≤ t ≤ T; X̃^{s,x}_t = x, 0 ≤ t ≤ s. (6.13)

Writing V_t = X̃^{s,x}_t, t ∈ [0, T], to simplify notation, we note that V_t is F^L_{s,t}-measurable, t ≥ s. Moreover, it solves equation (6.1); indeed, for t ∈ [s, T],

V_t = X^x_{t−s,L^{(s)}} = x + ∫₀^{t−s} b(X^x_{r,L^{(s)}}) dr + L_t − L_s = x + ∫_s^t b(V_r) dr + L_t − L_s.

Uniqueness.
Let (U^{s,x}_t) be another strong solution. We have, P-a.s., for s ≤ t ≤ T,

U^{s,x}_t = x + ∫_s^t b(U^{s,x}_r) dr + L_t − L_s = x + ∫₀^{t−s} b(U^{s,x}_{r+s}) dr + L_t − L_s = x + ∫₀^{t−s} b(U^{s,x}_{r+s}) dr + L^{(s)}_{t−s}.

Hence (U^{s,x}_{r+s})_{r∈[0,T−s]} solves (6.12) on [0, T − s]. By (6.10) we get

P( U^{s,x}_{r+s} = X^x_{r,L^{(s)}}, r ∈ [0, T − s] ) = P( U^{s,x}_{r+s} = X̃^{s,x}_{r+s}, r ∈ [0, T − s] ) = 1.

This shows the assertion.

L^p-estimates. For any fixed s ∈ [0, T], p ≥ 2, we have E[ sup_{s≤t≤T} |U^{s,x}_t − U^{s,y}_t|^p ] = E[ sup_{s≤t≤T} |X^x_{t−s,L^{(s)}} − X^y_{t−s,L^{(s)}}|^p ] by uniqueness. Using (6.10) we get

sup_{s∈[0,T]} E[ sup_{s≤t≤T} |U^{s,x}_t − U^{s,y}_t|^p ] = sup_{s∈[0,T]} E[ sup_{s≤t≤T} |X^x_{t−s,L^{(s)}} − X^y_{t−s,L^{(s)}}|^p ]
≤ sup_{s∈[0,T]} E[ sup_{t∈[0,T]} |X^x_{t,L^{(s)}} − X^y_{t,L^{(s)}}|^p ] ≤ C(T) |x − y|^p.

The case α₁ ∈ [1, 2)

Here we prove a Davie's type uniqueness result for (6.1) (cf. Theorem 5.1). We consider the Blumenthal-Getoor index α₁ ∈ [1, 2) (see (6.3)) and assume, as in [24] and [25], that b ∈ C^{0,β}_b(R^d, R^d) with β ∈ (1 − α₁/2, 1].

To check Hypothesis 1 we will use Theorem 6.6 and the following purely analytic result (see Theorem 4.3 in [25]; its proof follows the one of Theorem 3.4 of [24]). Note that the next result requires α₁ + β ∈ (1, 2) and provides a threshold λ₀ ≥ 1 such that ‖Dw_λ‖₀ < 1/3 for λ ≥ λ₀.

Theorem 6.7.
Assume Hypothesis 3 with α₁ = α₁(ν) ≥ 1. Let 0 < β < 1 with α₁ + β ∈ (1, 2), and consider L in (6.2). Then, for any λ ≥ 1, f ∈ C^β_b(R^d), there exists a unique solution w_λ ∈ C^{α₁+β}_b(R^d) to

λw(x) − L w(x) − b(x) · Dw(x) = f(x), x ∈ R^d. (6.14)

Moreover, there exists C₀ = C₀(α₁(ν), d, β, ‖b‖_{C^β_b}, ν) > 0 such that

λ ‖w_λ‖₀ + [Dw_λ]_{C^{α₁+β−1}_b} ≤ C₀ ‖f‖_{C^β_b}, λ ≥ 1. (6.15)

Finally, we have ‖Dw_λ‖₀ < 1/3, for any λ ≥ λ₀(d, ‖b‖_{C^β_b}, α₁(ν), β, ν) ≥ 1.

Proof. We only make some comments on C₀ and λ₀. Let us first consider C₀. To see that C₀ = C₀(α₁(ν), d, β, ‖b‖_{C^β_b}, ν), we look into the proof of Theorem 4.3 in [25]. In that proof the Schauder estimates (6.15) are first established as a priori estimates by a localization procedure. This method is based on Schauder estimates already proved in the constant-coefficients case, i.e., when b(x) = k, x ∈ R^d (see Theorem 4.2 in [25]). The Schauder constant C₀ depends on the constant c appearing in formula (16) of Theorem 4.2 in [25] when λ ≥ 1. Such constant c depends on α₁(ν), β, d and also on the constant c_α of the gradient estimates (6.4) (see, in particular, estimates (18)-(21) in the proof of Theorem 4.2 in [25]).

Let us consider λ₀. Recall the simple interpolatory estimate

‖Dw_λ‖₀ ≤ N₀ [Dw_λ]^{1/(α₁+β)}_{C^{α₁+β−1}_b} ‖w_λ‖₀^{(α₁+β−1)/(α₁+β)},

where N₀ = N₀(α₁, β, d) (cf. the proof of Theorem 3.4 in [24]). By (6.15) we get ‖Dw_λ‖₀ ≤ N₀ C₀ λ^{−(α₁+β−1)/(α₁+β)} ‖f‖_{C^β_b}, λ ≥ 1, and the assertion follows by choosing λ₀ > 1 ∨ (3 N₀ C₀)^{(α₁+β)/(α₁+β−1)}.

Currently we do not know if the statements in Theorem 6.7 hold also when α₁ ∈ (0, 1) (maintaining all the other assumptions).

Now we apply Theorem 5.1 to get Davie's type uniqueness for the SDE (6.1).

Theorem 6.8.
Let L be a d-dimensional Lévy process on (Ω, F, P) with generating triplet (ν, 0, 0) satisfying Hypothesis 3 with α₁ ∈ [1, 2). Suppose also that ∫_{|y|>1} |y|^θ ν(dy) < ∞, for some θ > 0. Let us consider (6.1) with b ∈ C^{0,β}_b(R^d; R^d) and β ∈ (1 − α₁/2, 1].

Then L satisfies Hypothesis 1 and, for any T > 0, there exists a function φ as in Theorem 5.1 such that assertions (i)-(v) hold on some almost sure event Ω′.

Proof. When β = 1 Hypothesis 1 is clearly satisfied. Let us consider β ∈ (1 − α₁/2, 1). Since C^{β′}_b(R^d, R^d) ⊂ C^β_b(R^d, R^d) when 0 < β ≤ β′ ≤ 1, we may assume that 1 − α₁/2 < β < 2 − α₁. To verify Hypothesis 1 we use Theorems 6.7 and 6.6. By Theorem 6.7 we have a solution u_λ ∈ C^{1+γ}_b(R^d, R^d) to (6.7) with γ = α₁ + β − 1 ∈ (0, 1), for any λ ≥ 1. Note that 2γ = 2α₁ + 2β − 2 > α₁, since β > 1 − α₁/2. Choosing λ = λ₀(d, ‖b‖_{C^β_b}, α₁(ν), β, ν) we obtain that also ‖Du_λ‖₀ < 1/2. The constant C(T) appearing in (6.11) depends on T, ν, p, α₁(ν), λ₀, d, γ and ‖u_λ‖_{C^{1+γ}_b}. However, by Theorem 6.7, γ = α₁ + β − 1, λ₀ = λ₀(d, ‖b‖_{C^β_b}, α₁, β, ν) and ‖u_λ‖_{C^{1+γ}_b} = ‖u_λ‖_{C^{α₁+β}_b} ≤ N(α₁, β, d) C₀ ‖b‖_{C^β_b}, where C₀ appears in the Schauder estimates (6.15). It follows that C(T) in (6.11) has the right dependence on d, p, β, ν, ‖b‖_{C^β_b} and T, as required in (2.6). To finish the proof we apply Theorem 5.1, since Hypotheses 1 and 2 hold.

Remark 6.9. Theorem 6.8 shows that, under suitable assumptions on L and b, Davie's uniqueness (or path-by-path uniqueness) holds for the SDE (1.1). Moreover, the unique strong solution is given by a function φ which satisfies all the assertions of Theorem 5.1, including (5.2) and (5.4), for any ω ∈ Ω′, where Ω′ is an almost sure event independent of s, t and x. There are no similar results in the literature on stochastic flows for SDEs (1.1) driven by stable type processes (cf. [24], [25] and the recent paper [6], which contains the most general available results about existence and regularity of the stochastic flow).

The case α₁ = α ∈ (0, 1)

Here we only consider the SDE (6.1) when L = L_α is a symmetric rotationally invariant α-stable process with α ∈ (0, 1) (the case α ∈ [1, 2) is already treated in Theorem 6.8). For each α ∈ (0, 1) its Lévy measure ν = ν_α has density c_{α,d}/|y|^{d+α}, y ≠ 0, and its generator L = L^{(α)} (see (6.2)) coincides with the fractional Laplacian −(−Δ)^{α/2} (see Example 32.7 in [28]). Note that, for any g ∈ C¹_b(R^d), the mapping

x ↦ L g(x) = c_{α,d} ∫_{R^d} ( g(x + y) − g(x) ) / |y|^{d+α} dy belongs to C_b(R^d). (6.16)

Clearly α₁ = α (see (3.1)). Using Theorem 6.6 of the previous section together with Theorem 6.11, we can apply Theorem 5.1 and obtain

Theorem 6.10.
Let L be a d-dimensional symmetric rotationally invariant α-stable process with α ∈ (0, 1), defined on (Ω, F, P). Let us consider the SDE (6.1) with b ∈ C^{0,β}_b(R^d; R^d) and β ∈ (1 − α/2, 1].

Then L satisfies Hypotheses 1 and 2 and, for any T > 0, there exists a function φ as in Theorem 5.1 such that assertions (i)-(v) hold on some almost sure event Ω′.
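Before turning to the proof, note that Hypothesis 2 is immediate here: writing the Lévy measure density c_{α,d}|y|^{−d−α} in polar coordinates, with σ(S^{d−1}) the surface measure of the unit sphere, one gets, for any θ ∈ (0, α),

```latex
\int_{\{|y|>1\}} |y|^{\theta}\,\nu_\alpha(dy)
 = c_{\alpha,d}\,\sigma(S^{d-1})\int_1^{\infty} r^{\theta-\alpha-1}\,dr
 = \frac{c_{\alpha,d}\,\sigma(S^{d-1})}{\alpha-\theta} \,<\, \infty .
```

This routine computation is used again in the proof of Theorem 6.10 below.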
We first state a result which is related to Theorem 6.7. It shows sharp C^{α+β}_b-regularity of solutions to (6.14). The proof is based on Theorem 1.1 in [31].

Theorem 6.11.
Let us consider the fractional Laplacian L given in (6.16) with α ∈ (0, 1). Let β ∈ (0, 1) be such that α + β > 1. Then, for any λ ≥ 1, f ∈ C^β_b(R^d), there exists a unique solution w = w_λ ∈ C^{α+β}_b(R^d) to (6.14). Moreover, there exists C = C(α, d, β, ‖b‖_{C^β_b}) > 0 such that

λ ‖w_λ‖₀ + [Dw_λ]_{C^{α+β−1}_b} ≤ C ‖f‖_{C^β_b}, λ ≥ 1. (6.17)

Finally, we have ‖Dw_λ‖₀ < 1/3, for any λ ≥ λ₀, with λ₀(d, ‖b‖_{C^β_b}, α, β) ≥ 1.

Proof. The uniqueness follows from the maximum principle (see Proposition 3.2 in [24] or Proposition 4.1 in [25]), which states that λ ‖w_λ‖₀ ≤ ‖f‖₀. Let L^b be the fractional Laplacian L plus the drift b (i.e., L^b = L + b · D). The proof proceeds in several steps.

I step.
Let λ ≥ 1. We provide a priori estimates for classical C¹_b-solutions u to λu − L^b u = f on R^d (with f ∈ C^β_b(R^d), b ∈ C^β_b(R^d; R^d) and α + β > 1). Let u = u_λ ∈ C¹_b(R^d) be a solution to λu − L^b u = f on R^d; in the sequel we will consider open balls B_r(x₀) of center x₀ ∈ R^d and radius r > 0. Let x₀ ∈ R^d. One can define v(x) = u(x + x₀), x ∈ R^d. Since L v(x) = L u(x + x₀), x ∈ R^d, we get that v ∈ C¹_b(R^d) solves λv − L^{b₀} v = f₀ on R^d, where L^{b₀} has the drift b₀(·) = b(· + x₀) and f₀(·) = f(· + x₀).

Setting ṽ(t, x) = e^{λt} v(x), f̃(t, x) = e^{λt} f₀(x), t ∈ [−1, 0], x ∈ R^d, we see that ṽ is a bounded solution of

∂_t ṽ − L^{b₀} ṽ = f̃ on [−1, 0] × B₁(0)

according to the definition of viscosity solution given at the beginning of Section 3.1 in [31]. Hence we can apply Theorem 1.1 in [31] to ṽ. Recall that in Silvestre's notation his s ∈ (0, 1) is our α/2, and his Hölder exponent corresponds to our α + β − 1. We obtain ṽ(t, ·) ∈ C^{α+β}(B_{1/2}(0)) and, moreover,

‖v‖_{C^{α+β}(B_{1/2}(0))} ≤ ‖ṽ‖_{L^∞([−1/2,0]; C^{α+β}(B_{1/2}(0)))} ≤ C ( ‖ṽ‖_{L^∞([−1,0]×R^d)} + ‖f̃‖_{L^∞([−1,0]; C^β(B_1(0)))} ) ≤ C ( ‖v‖₀ + ‖f‖_{C^β_b(R^d)} ),

where C depends only on ‖b₀‖_{C^β_b(R^d;R^d)} = ‖b‖_{C^β_b(R^d;R^d)}, α and d, and is independent of λ. Thus we get that u_λ ∈ C^{α+β}(B_{1/2}(x₀)), with a bound for the C^{α+β}-norm of u_λ on B_{1/2}(x₀) by the quantity C(‖u_λ‖₀ + ‖f‖_{C^β_b(R^d)}). Since C is independent of x₀, it is clear that u_λ ∈ C^{α+β}_b(R^d) (cf., for instance, page 434 in [24]) and the following estimate holds with C = C(‖b‖_{C^β_b}, α, d, β) > 0:

‖u_λ‖_{C^{α+β}_b(R^d)} ≤ C ( ‖u_λ‖₀ + ‖f‖_{C^β_b(R^d)} ).

By Proposition 3.2 in [24] we know that λ ‖u_λ‖₀ ≤ ‖f‖₀. Hence we arrive at

‖u_λ‖_{C^{α+β}_b(R^d)} ≤ C ‖f‖_{C^β_b(R^d)}, λ ≥ 1. (6.18)

II step.
Let λ ≥ 1. We show the existence of a C¹_b-solution to λw − L^b w = f̃ when b ∈ C^∞_b(R^d; R^d) and f̃ ∈ C^∞_b(R^d).

To construct the solution we use a probabilistic method (for an alternative vanishing viscosity method see Section 3.2 in [31]). Let (X^x_t) be the solution of dX_t = b(X_t) dt + dL_t, X₀ = x ∈ R^d, and consider the associated Markov semigroup (R_t), i.e., R_t l(x) = E[l(X^x_t)], t ≥ 0, x ∈ R^d, l ∈ UC_b(R^d) (UC_b(R^d) ⊂ C_b(R^d) denotes the Banach space of all uniformly continuous and bounded functions endowed with the sup-norm). Differentiating with respect to x under the expectation (using the derivative of X^x_t with respect to x, cf. [37]), it is straightforward to prove that R_t g ∈ C¹_b(R^d), for any t ≥ 0, g ∈ C¹_b(R^d). For the given f̃ ∈ C^∞_b(R^d) we define

w̃(x) = w̃_λ(x) = ∫₀^∞ e^{−λt} R_t f̃(x) dt, x ∈ R^d. (6.19)

It is clear that w̃ ∈ C_b(R^d). We now show that w̃ ∈ C¹_b(R^d) and solves our equation. To this purpose we first prove that, for t > 0, x ∈ R^d,

|DR_t f̃(x)| ≤ c(α, β, ‖Db‖₀) (t ∧ 1)^{(β−1)/α} ‖f̃‖_{C^β_b(R^d)}. (6.20)

Once this estimate is proved, differentiating under the integral sign in (6.19) we obtain that w̃ ∈ C¹_b(R^d), since α + β > 1 (so that (β−1)/α > −1 and the singularity at t = 0 is integrable). Let us fix t ∈ (0, 1]. By Theorem 1.1 in [37] we know in particular that

‖DR_t g‖₀ = sup_{x∈R^d} |DR_t g(x)| ≤ c(α) e^{‖Db‖₀ t} t^{−1/α} ‖g‖₀, g ∈ C¹_b(R^d).

Using the total variation norm as in Lemma 7.1.5 of [7], we deduce that R_t l is Lipschitz continuous for any l ∈ UC_b(R^d), and moreover |R_t l(x) − R_t l(y)| ≤ c(α) e^{‖Db‖₀ t} t^{−1/α} |x − y| ‖l‖₀, x, y ∈ R^d. By Theorem 1.1 in [37], for any g ∈ C¹_b(R^d), we can write the directional derivative of R_t g along h ∈ R^d as follows:

D_h R_t g(x) = E[ g(X^x_t) J(t, x, h) ], x ∈ R^d, (6.21)

where J(t, x, h) is a suitable random variable such that (E|J(t, x, h)|²)^{1/2} ≤ c(α) e^{‖Db‖₀ t} t^{−1/α} |h|, for any x ∈ R^d. Let again l ∈ UC_b(R^d). Using mollifiers we can consider an approximating sequence (g_n) ⊂ C^∞_b(R^d) such that ‖g_n − l‖₀ → 0 as n → ∞. Using (6.21) with g replaced by g_n and passing to the limit, it is not difficult to prove that R_t l ∈ C¹_b(R^d) and moreover (6.21) holds when g is replaced by l (cf. page 480 in [23]).

We have found that R_t : UC_b(R^d) → C¹_b(R^d) is a linear and bounded operator and

|DR_t l(x)| ≤ c(α) e^{‖Db‖₀ t} t^{−1/α} ‖l‖₀, for x ∈ R^d, l ∈ UC_b(R^d).

Moreover, R_t : C¹_b(R^d) → C¹_b(R^d) is linear and bounded and |DR_t g(x)| ≤ e^{‖Db‖₀ t} ‖Dg‖₀, for x ∈ R^d, g ∈ C¹_b(R^d). To prove this estimate we fix h ∈ R^d and differentiate R_t g(x) with respect to x along the direction h. One can show that

D_h E[g(X^x_t)] = E[ Dg(X^x_t) η_t ], (6.22)

where η_t = D_h X^x_t solves η_t = h + ∫₀^t Db(X^x_s) η_s ds, t ≥ 0, P-a.s. Note that |D_h X^x_t| ≤ |h| e^{‖Db‖₀ t} by the Gronwall lemma (cf. page 1211 in [37]).

By interpolation techniques we know that ( UC_b(R^d), C¹_b(R^d) )_{β,∞} = C^β_b(R^d), for β ∈ (0, 1) (cf. [22, Chapter 1] and the proof of Theorem 3.3 in [24]); it follows that for any t ∈ (0, 1] the operator R_t : C^β_b(R^d) → C¹_b(R^d) is linear and bounded, and

|DR_t f(x)| ≤ c(α, β) e^{‖Db‖₀ t} t^{(β−1)/α} ‖f‖_{C^β_b}, for any x ∈ R^d, f ∈ C^β_b(R^d).

We have verified (6.20) when t ∈ (0, 1]. For t > 1, x ∈ R^d,

|DR_t f̃(x)| = |DR_1(R_{t−1} f̃)(x)| ≤ c(α) e^{‖Db‖₀} ‖R_{t−1} f̃‖₀ ≤ c(α) e^{‖Db‖₀} ‖f̃‖₀.

Thus (6.20) holds and we know that w̃ ∈ C¹_b(R^d). To prove that w̃ is a solution we first establish the identity

∂_s (R_s f̃)(x) = R_s(L^b f̃)(x) = L^b(R_s f̃)(x), s ≥ 0, x ∈ R^d. (6.23)

By using Itô's formula (see [20, Section 2.3]) and taking the expectation we find E[f̃(X^x_{s+h})] − E[f̃(X^x_s)] = ∫_s^{s+h} E[(L^b f̃)(X^x_r)] dr, for h ∈ R such that s + h > 0. It follows that, for x ∈ R^d,

∂_s (R_s f̃)(x) = lim_{h→0} h^{−1} ( R_{s+h} f̃(x) − R_s f̃(x) ) = R_s(L^b f̃)(x), s > 0, (6.24)

and

lim_{h→0⁺} h^{−1} ( R_h f̃(x) − f̃(x) ) = L^b f̃(x). (6.25)

If s > 0 we have lim_{h→0⁺} h^{−1} ( R_h(R_s f̃)(x) − R_s f̃(x) ) = L^b(R_s f̃)(x), applying (6.25) with f̃ replaced by R_s f̃. By the semigroup law, the last limit and (6.24) coincide, and so (6.23) holds. To check that w̃ verifies λw̃ − L^b w̃ = f̃, we use (6.20) and (6.23). First, by the Fubini theorem we have

L^b w̃(x) = ∫₀^∞ e^{−λt} L^b(R_t f̃)(x) dt = ∫₀^∞ e^{−λt} R_t(L^b f̃)(x) dt.

By (6.23) it follows that, for any x ∈ R^d, L^b w̃(x) = ∫₀^∞ e^{−λt} (d/dt)(R_t f̃(x)) dt. Integrating by parts, we get the assertion.
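For completeness, the final integration by parts can be spelled out as follows (a routine computation; the boundary term at t = ∞ vanishes since ‖R_t f̃‖₀ ≤ ‖f̃‖₀ and λ ≥ 1):

```latex
\mathcal{L}^{b}\tilde w(x)
 = \int_0^\infty e^{-\lambda t}\,\frac{d}{dt}\big(R_t\tilde f(x)\big)\,dt
 = \Big[\,e^{-\lambda t}R_t\tilde f(x)\,\Big]_{t=0}^{t=\infty}
   + \lambda\int_0^\infty e^{-\lambda t}R_t\tilde f(x)\,dt
 = -\tilde f(x) + \lambda\,\tilde w_\lambda(x),
```

which is exactly λw̃ − L^b w̃ = f̃.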
III step.
Let λ ≥ 1. We prove the existence of a C^{α+β}_b-solution to λw − L^b w = f on R^d when b ∈ C^β_b(R^d; R^d) and f ∈ C^β_b(R^d), α + β > 1, and show (6.17).

Using convolution with mollifiers and possibly passing to subsequences (see, for instance, page 431 in [24]), one can consider operators L^{b_n} with drifts b_n ∈ C^∞_b(R^d; R^d) such that ‖b_n‖_{C^β_b} ≤ ‖b‖_{C^β_b}, n ≥ 1, and b_n → b in C^{β′}(K; R^d) for any compact set K ⊂ R^d and β′ ∈ (0, β). Similarly, one can construct (f_n) ⊂ C^∞_b(R^d) such that ‖f_n‖_{C^β_b} ≤ ‖f‖_{C^β_b}, n ≥ 1, and f_n → f in C^{β′}(K) for any compact set K ⊂ R^d and β′ ∈ (0, β). By the II step there exist C¹_b-solutions w_n to λw_n − L^{b_n} w_n = f_n, n ≥ 1. By the I step we know that w_n ∈ C^{α+β}_b(R^d), n ≥ 1, with the estimate

‖w_n‖_{C^{α+β}_b(R^d)} ≤ C ‖f‖_{C^β_b(R^d)}, (6.26)

(C = C(‖b‖_{C^β_b}, α, β, d) is independent of λ and n). Possibly passing to a subsequence, still denoted by (w_n), we have that w_n → w in C^{α+β′}(K), for any compact set K ⊂ R^d, with β′ > 0 and 1 < α + β′ < α + β. Moreover, (6.26) holds with w_n replaced by w. We can easily pass to the limit in each term of λw_n(x) − L w_n(x) − b_n(x) · Dw_n(x) = f_n(x) as n → ∞ and obtain that w solves our equation.

IV step.
We prove the final assertion.

We already know that there exists a unique solution w_λ ∈ C^{α+β}_b(R^d) and that (6.17) holds. To complete the proof we argue as in the final part of the proof of Theorem 6.7. By the interpolatory estimate ‖Dw_λ‖₀ ≤ N(α, β, d) [Dw_λ]^{1/(α+β)}_{C^{α+β−1}_b} ‖w_λ‖₀^{(α+β−1)/(α+β)}, we easily obtain that ‖Dw_λ‖₀ < 1/3 for λ ≥ λ₀(d, ‖b‖_{C^β_b}, α, β).

Proof of Theorem 6.10.
As in the proof of Theorem 6.8 we verify the assumptions of Theorem 5.1. Note that Hypothesis 2 holds since ∫_{|y|>1} |y|^θ |y|^{−d−α} dy < ∞, for any θ ∈ (0, α). In order to check Hypothesis 1 we argue as in the proof of Theorem 6.8 (using Theorems 6.11 and 6.6; recall that α₁ = α). The proof is complete.

Acknowledgement.
The author would like to thank the anonymous referees fortheir useful comments and suggestions.
References

[1] D. Applebaum. Lévy processes and stochastic calculus, Cambridge Studies in Advanced Mathematics 93, Cambridge University Press, II edition (2009)
[2] R. F. Bass and Z. Q. Chen. Systems of equations driven by stable processes, Probab. Theory Related Fields 134, 175-214 (2006)
[3] J. Berestycki, L. Döring, L. Mytnik and L. Zambotti. Hitting properties and non-uniqueness for SDEs driven by stable processes, Stoch. Proc. Appl. 125, 918-940 (2015)
[4] P. H. Bezandry and X. Fernique. Analyse de fonctions aléatoires peu régulières sur [0, ... http://arxiv.org/abs/1501.04758
[7] G. Da Prato and J. Zabczyk. Ergodicity for Infinite Dimensional Systems, London Mathematical Society Lecture Note Series 229, Cambridge University Press (1996)
[8] A. M. Davie. Uniqueness of solutions of stochastic differential equations, Int. Math. Res. Notices, no. 24, Art. ID rnm124, 26 pp. (2007)
[9] E. Fedrizzi and F. Flandoli. Hölder flow and differentiability for SDEs with nonregular drift, Stoch. Anal. Appl. 31, 708-736 (2013)
[10] F. Flandoli. Regularizing properties of Brownian paths and a result of Davie, Stoch. Dyn. 11, 323-331 (2011)
[11] I. I. Gikhman and A. V. Skorokhod. The Theory of Stochastic Processes, vol. I, Springer, Berlin (1974)
[12] F. Flandoli, M. Gubinelli and E. Priola. Well-posedness of the transport equation by stochastic perturbation, Invent. Math. 180, 1-53 (2010)
[13] I. Gyöngy and T. Martinez. On stochastic differential equations with locally unbounded drift, Czechoslovak Math. J. (4) 51 (126), 763-783 (2001)
[14] P. Imkeller and M. Scheutzow. On the spatial asymptotic behavior of stochastic flows in Euclidean space, Ann. Probab. 27, 109-129 (1999)
[15] O. Kallenberg. Foundations of modern probability. Probability and its Applications, Springer-Verlag, New York, II edition (2002)
[16] L. A. Khan. Separability in Function Spaces, J. Math. Anal. Appl. 113, 88-92 (1986)
[17] N. V. Krylov. Introduction to the theory of random processes. Graduate Studies in Mathematics 43, AMS, Providence (2002)
[18] N. V. Krylov and M. Röckner. Strong solutions to stochastic equations with singular time dependent drift, Probab. Theory Relat. Fields 131, 154-196 (2005)
[19] H. Kunita. Stochastic differential equations and stochastic flows of diffeomorphisms. École d'été de probabilités de Saint-Flour, XII-1982, 143-303, Lecture Notes in Math. 1097, Springer, Berlin (1984)
[20] H. Kunita. Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms, Real and stochastic analysis, Trends Math., pp. 305-373, Birkhäuser Boston (2004)
[21] J. M. Leahy and R. Mikulevicius. On Some Properties of Space Inverses of Stochastic Flows, Stoch. Partial Differ. Equ. Anal. Comput. 3, 445-478 (2015)
[22] A. Lunardi. Interpolation theory. Second edition, Lecture Notes, Scuola Normale Superiore di Pisa, Edizioni della Normale, Pisa (2009)
[23] E. Priola and J. Zabczyk. Liouville theorems for non-local operators, J. Funct. Anal. 216, 455-490 (2004)
[24] E. Priola. Pathwise uniqueness for singular SDEs driven by stable processes, Osaka Journal of Mathematics 49, 421-447 (2012)
[25] E. Priola. Stochastic flow for SDEs with jumps and irregular drift term, Banach Center Publications vol. 105, 193-210 (2015)
[26] H. L. Royden. Real Analysis, III edition, Macmillan Publishing Company, New York (1988)
[27] M. Ryznar. Estimate of Green function for relativistic α-stable processes, Potential Analysis 17, 1-23 (2002)
[28] K. I. Sato. Lévy processes and infinitely divisible distributions, Cambridge University Press, Cambridge (1999)
[29] R. L. Schilling, P. Sztonyk and J. Wang. Coupling property and gradient estimates of Lévy processes via the symbol, Bernoulli 18, 1128-1149 (2012)
[30] A. V. Shaposhnikov. Some remarks on Davie's uniqueness theorem (2014, preprint). http://arxiv.org/abs/1401.5455
[31] L. Silvestre. On the differentiability of the solution to an equation with drift and fractional diffusion, Indiana Univ. Math. J. 61, 557-584 (2012)
[32] L. Silvestre, V. Vicol and A. Zlatos. On the loss of continuity for super-critical drift-diffusion equations, Arch. Ration. Mech. Anal. 207, 845-877 (2013)
[33] R. Situ. Theory of stochastic differential equations with jumps and applications. Mathematical and analytical techniques with applications to engineering, Springer (2005)
[34] H. Tanaka, M. Tsuchiya and S. Watanabe. Perturbation of drift-type for Lévy processes, J. Math. Kyoto Univ. 14, 73-92 (1974)
[35] A. J. Veretennikov. Strong solutions and explicit formulas for solutions of stochastic integral equations, Mat. Sb. (N.S.) 111 (153), 434-452 (1980)
[36] K. Yosida. Functional Analysis, VI edition, Springer-Verlag, Berlin (1980)
[37] X. Zhang. Derivative formulas and gradient estimates for SDEs driven by α-stable processes, Stoch. Proc. Appl. 123, 1213-1228 (2013)
[38] X. Zhang. Stochastic differential equations with Sobolev drifts and driven by α-stable processes, Ann. Inst. H. Poincaré Probab. Statist. 49, 1057-1079 (2013)