Stochastic flows related to Walsh Brownian motion
Hatem Hajri
Département de Mathématiques, Université Paris-Sud 11, 91405 Orsay, France
[email protected]
Abstract
We define an equation on a simple graph which is an extension of Tanaka's equation and the skew Brownian motion equation. We then apply the theory of transition kernels developed by Le Jan and Raimond and show that all the solutions can be classified by probability measures.
Key words: Stochastic flows of kernels, skew Brownian motion, Walsh Brownian motion.
AMS 2000 Subject Classification: Primary 60H25; Secondary 60J60.
Submitted to EJP on January 17, 2011, final version accepted July 18, 2011.
1 Introduction

In [10], [11] Le Jan and Raimond have extended the classical theory of stochastic flows to include flows of probability kernels. Using the Wiener chaos decomposition, it was shown that non-Lipschitzian stochastic differential equations have a unique Wiener measurable solution given by random kernels. Later, the theory was applied in [12] to the study of Tanaka's equation:
$$\varphi_{s,t}(x) = x + \int_s^t \operatorname{sgn}(\varphi_{s,u}(x))\, W(du), \quad s \le t,\ x \in \mathbb{R}, \qquad (1)$$
where $\operatorname{sgn}(x) = 1_{\{x>0\}} - 1_{\{x\le 0\}}$, $W_t = W_{0,t}\,1_{\{t>0\}} - W_{t,0}\,1_{\{t\le 0\}}$ and $(W_{s,t},\ s\le t)$ is a real white noise (see Definition 1.10 in [11]) on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$. If $K$ is a stochastic flow of kernels (see Definition 3 below) and $W$ is a real white noise, then by definition $(K, W)$ is a solution of Tanaka's SDE if for all $s\le t$, $x\in\mathbb{R}$, $f \in C^2_b(\mathbb{R})$ ($f$ is $C^2$ on $\mathbb{R}$ and $f, f', f''$ are bounded),
$$K_{s,t}f(x) = f(x) + \int_s^t K_{s,u}(f'\operatorname{sgn})(x)\, W(du) + \frac{1}{2}\int_s^t K_{s,u}f''(x)\, du \quad \text{a.s.} \qquad (2)$$
It has been proved in [12] that each solution flow of (2) can be characterized by a probability measure on $[0,1]$ which entirely determines its law. Define $\tau_s(x) = \inf\{r \ge s : W_{s,r} = -|x|\}$, $s, x \in \mathbb{R}$. Then the unique $\mathcal{F}^W$-adapted solution (Wiener flow) of (2) is given by
$$K^W_{s,t}(x) = \delta_{x + \operatorname{sgn}(x) W_{s,t}}\, 1_{\{t \le \tau_s(x)\}} + \frac{1}{2}\big(\delta_{W^+_{s,t}} + \delta_{-W^+_{s,t}}\big)\, 1_{\{t > \tau_s(x)\}}, \quad W^+_{s,t} := W_{s,t} - \inf_{u \in [s,t]} W_{s,u}.$$
Among solutions of (2), there is only one flow of mappings (see Definition 4 below), which has already been studied in [18].

We now fix $\alpha \in\,]0,1[$ and consider the following SDE, having a less obvious extension to kernels:
$$X^{s,x}_t = x + W_{s,t} + (2\alpha - 1)\tilde{L}^x_{s,t}, \quad t \ge s,\ x \in \mathbb{R}, \qquad (3)$$
where
$$\tilde{L}^x_{s,t} = \lim_{\varepsilon \to 0+} \frac{1}{2\varepsilon} \int_s^t 1_{\{|X^{s,x}_u| \le \varepsilon\}}\, du \quad \text{(the symmetric local time)}.$$
Equation (3) was introduced in [8]. For a fixed initial condition, it has a pathwise unique solution which is distributed as the skew Brownian motion ($SBM$) with parameter $\alpha$ ($SBM(\alpha)$). It was shown in [1] that when $\alpha \neq \frac{1}{2}$, flows associated to (3) are coalescing, and a deeper study of (3) was provided later in [3] and [4]. Now, consider the following generalization of (1):
$$X_{s,t}(x) = x + \int_s^t \operatorname{sgn}(X_{s,u}(x))\, W(du) + (2\alpha - 1)\tilde{L}^x_{s,t}(X), \quad s \le t,\ x \in \mathbb{R}, \qquad (4)$$
where $\tilde{L}^x_{s,t}(X) = \lim_{\varepsilon\to 0+} \frac{1}{2\varepsilon}\int_s^t 1_{\{|X_{s,u}(x)| \le \varepsilon\}}\, du$. Each solution of (4) is distributed as the $SBM(\alpha)$. By Tanaka's formula for symmetric local time ([15] page 234),
$$|X_{s,t}(x)| = |x| + \int_s^t \widetilde{\operatorname{sgn}}(X_{s,u}(x))\, dX_{s,u}(x) + \tilde{L}^x_{s,t}(X), \quad \widetilde{\operatorname{sgn}}(x) = 1_{\{x>0\}} - 1_{\{x<0\}}.$$
By combining the last identity with (4), we have
$$|X_{s,t}(x)| = |x| + W_{s,t} + \tilde{L}^x_{s,t}(X). \qquad (5)$$
The uniqueness of solutions of the Skorokhod equation ([15] page 239) entails that
$$|X_{s,t}(x)| = |x| + W_{s,t} - \min_{s \le u \le t}\big[(|x| + W_{s,u}) \wedge 0\big]. \qquad (6)$$
Clearly (5) and (6) imply that $\sigma(|X_{s,u}(x)|;\ s \le u \le t) = \sigma(W_{s,u};\ s \le u \le t)$, which is strictly smaller than $\sigma(X_{s,u}(x);\ s \le u \le t)$, and so $X_{s,\cdot}(x)$ cannot be a strong solution of (4). For these reasons, we call (4) Tanaka's SDE related to $SBM(\alpha)$.

From now on, for any metric space $E$, $C(E)$ (respectively $C_b(E)$) will denote the space of all continuous (respectively bounded continuous) $\mathbb{R}$-valued functions on $E$. Let
• $C^2_b(\mathbb{R}^*) = \{f \in C(\mathbb{R}) : f$ is twice differentiable on $\mathbb{R}^*$, $f', f'' \in C_b(\mathbb{R}^*)$, and $f'|_{]0,+\infty[}, f''|_{]0,+\infty[}$ (resp. $f'|_{]-\infty,0[}, f''|_{]-\infty,0[}$) have a right (resp. left) limit at $0\}$;
• $D_\alpha = \{f \in C^2_b(\mathbb{R}^*) : \alpha f'(0+) = (1-\alpha) f'(0-)\}$.
For $f \in D_\alpha$, we set by convention $f'(0) = f'(0-)$, $f''(0) = f''(0-)$. By the Itô-Tanaka formula ([13] page 432) or the Freidlin-Sheu formula (see Lemma 2.3 [5] or Theorem 3 in Section 2) and Proposition 3 below, both extensions to kernels of (3) and (4) may be defined by
$$K_{s,t}f(x) = f(x) + \int_s^t K_{s,u}(\varepsilon f')(x)\, W(du) + \frac{1}{2}\int_s^t K_{s,u}f''(x)\, du, \quad f \in D_\alpha, \qquad (7)$$
where $\varepsilon(x) = 1$ (respectively $\varepsilon(x) = \operatorname{sgn}(x)$) in the first (respectively second) case. Due to the pathwise uniqueness of (3), the unique solution of (7) when $\varepsilon(x) = 1$ is $K_{s,t}(x) = \delta_{X^{s,x}_t}$ (this can be justified by the weak domination relation, see (24)). Our aim now is to define an extension of (7) related to Walsh Brownian motion in general. The latter process was introduced in [17] and will be recalled in the coming section.
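Although the paper is purely theoretical, the two-regime formula for the Wiener flow $K^W$ of (2) is easy to illustrate numerically. The sketch below is our own illustration, not part of the paper: the white noise is replaced by a discretized path and $\tau_s(x)$ by its first grid crossing. It computes the atoms of $K^W_{0,t}(x)$: a single Dirac mass at $x+\operatorname{sgn}(x)W_{0,t}$ before the hitting time, and two atoms $\pm W^+_{0,t}$ of weight $1/2$ afterwards.

```python
import numpy as np

def wiener_flow_atoms(x, increments):
    """Atoms [(position, weight), ...] of K^W_{0,t}(x) along one discrete path."""
    W = np.concatenate([[0.0], np.cumsum(increments)])  # W_{0,t} on the grid
    sgn = 1.0 if x > 0 else -1.0                        # the paper's sgn: sgn(0) = -1
    hit = np.nonzero(W <= -abs(x))[0]                   # grid proxy for tau_0(x)
    tau = hit[0] if hit.size else len(W)
    kernels = []
    for t in range(len(W)):
        if t < tau:
            kernels.append([(x + sgn * W[t], 1.0)])     # Dirac at x + sgn(x) W_{0,t}
        else:
            Wplus = W[t] - W[: t + 1].min()             # W^+_{0,t} >= 0
            kernels.append([(Wplus, 0.5), (-Wplus, 0.5)])
    return kernels

rng = np.random.default_rng(0)
ker = wiener_flow_atoms(1.0, rng.normal(scale=0.1, size=200))
assert all(abs(sum(w for _, w in k) - 1.0) < 1e-12 for k in ker)
```

Note that after the hitting time the kernel forgets the starting point $x$ entirely, which is the discrete shadow of the loss of strong solvability discussed above.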
We begin by defining our graph.

Definition 1. (Graph $G$.) Fix $N \ge 2$ and $\alpha_1, \cdots, \alpha_N > 0$ such that $\sum_{i=1}^N \alpha_i = 1$. In the sequel, $G$ will denote the graph below (Figure 1) consisting of $N$ half-lines $(D_i)_{1\le i\le N}$ emanating from $0$. Let $\vec{e}_i$ be a vector of modulus $1$ such that $D_i = \{h\vec{e}_i,\ h>0\}$, and define, for every function $f : G \longrightarrow \mathbb{R}$ and $i \in [1,N]$, the mapping
$$f_i : \mathbb{R}_+ \longrightarrow \mathbb{R}, \quad h \longmapsto f(h\vec{e}_i).$$
Define the following distance on $G$:
$$d(h\vec{e}_i, h'\vec{e}_j) = \begin{cases} h + h' & \text{if } i \neq j,\ (h,h')\in\mathbb{R}_+^2,\\ |h - h'| & \text{if } i = j,\ (h,h')\in\mathbb{R}_+^2. \end{cases}$$
For $x \in G$, we will use the simplified notation $|x| := d(x,0)$. We equip $G$ with its Borel $\sigma$-field $\mathcal{B}(G)$ and use the notation $G^* = G\setminus\{0\}$. Now define
• $C^2_b(G^*) = \{f \in C(G) : \forall i\in[1,N],\ f_i$ is twice differentiable on $\mathbb{R}^*_+$, $f'_i, f''_i \in C_b(\mathbb{R}^*_+)$ and both have finite limits at $0\}$;
• $D(\alpha_1,\cdots,\alpha_N) = \{f \in C^2_b(G^*) : \sum_{i=1}^N \alpha_i f'_i(0+) = 0\}$.
For all $x \in G$, we define $\vec{e}(x) = \vec{e}_i$ if $x \in D_i$, $x \neq 0$ (convention $\vec{e}(0) = \vec{e}_N$). For $f \in C^2_b(G^*)$, $x \neq 0$, let $f'(x)$ be the derivative of $f$ at $x$ relative to $\vec{e}(x)$ ($= f'_i(|x|)$ if $x \in D_i$) and $f''(x) = (f')'(x)$ ($= f''_i(|x|)$ if $x \in D_i$). We use the conventions $f'(0) = f'_N(0+)$, $f''(0) = f''_N(0+)$. Now associate to each ray $D_i$ a sign $\varepsilon_i \in \{-1,1\}$ and then define
$$\varepsilon(x) = \varepsilon_i \ \text{ if } x \in D_i,\ x\neq 0; \qquad \varepsilon(0) = \varepsilon_N.$$
To simplify, we suppose that $\varepsilon_1 = \cdots = \varepsilon_p = 1$, $\varepsilon_{p+1} = \cdots = \varepsilon_N = -1$ for some $p \le N$. Set
$$G_+ = \bigcup_{1\le i\le p} D_i, \qquad G_- = \bigcup_{p+1\le i\le N} D_i.$$
Then $G = G_+ \cup G_-$. We also put $\alpha_+ = 1 - \alpha_- := \sum_{i=1}^p \alpha_i$.

Remark 1.
Our graph can be simply defined as $N$ copies of $\mathbb{R}_+$ in which the $N$ origins are identified. The values of the $\vec{e}_i$ will not have any effect in the sequel.

Figure 1: Graph $G$ (the rays $(D_i,\alpha_i)$ with directions $\vec{e}_i$; $\varepsilon(x)=1$ on $G_+$ and $\varepsilon(x)=-1$ on $G_-$).

Definition 2. (Equation $(E)$.) On a probability space $(\Omega,\mathcal{A},\mathbb{P})$, let $W$ be a real white noise and let $K$ be a stochastic flow of kernels on $G$ (a precise definition will be given in Section 2). We say that $(K,W)$ solves $(E)$ if for all $s\le t$, $f\in D(\alpha_1,\cdots,\alpha_N)$, $x\in G$,
$$K_{s,t}f(x) = f(x) + \int_s^t K_{s,u}(\varepsilon f')(x)\, W(du) + \frac{1}{2}\int_s^t K_{s,u}f''(x)\, du \quad \text{a.s.}$$
If $K = \delta_\varphi$ is a solution of $(E)$, we simply say that $(\varphi,W)$ solves $(E)$.

Remarks 1. (1) If $(K,W)$ solves $(E)$, then $\sigma(W)\subset\sigma(K)$ (see Corollary 2 below). So one can simply say that $K$ solves $(E)$.
(2) The case $N=2$, $p=2$, $\varepsilon_1=\varepsilon_2=1$ (Figure 2) corresponds to Tanaka's SDE related to SBM, and includes in particular the usual Tanaka's SDE [12]. In fact, let $(K^{\mathbb{R}},W)$ be a solution of (7) with $\alpha=\alpha_1$, $\varepsilon(y)=\operatorname{sgn}(y)$, and define $\psi(y)=|y|(\vec{e}_1 1_{\{y\ge 0\}}+\vec{e}_2 1_{\{y<0\}})$, $y\in\mathbb{R}$. For all $x\in G$, define $K^G_{s,t}(x)=\psi(K^{\mathbb{R}}_{s,t}(y))$ with $y=\psi^{-1}(x)$. Let $f\in D(\alpha_1,\alpha_2)$, $x\in G$, and let $g$ be defined on $\mathbb{R}$ by $g(z)=f(\psi(z))$ ($g\in D_{\alpha_1}$). Since $K^{\mathbb{R}}$ satisfies (7) in $(g,\psi^{-1}(x))$ ($g$ is the test function and $\psi^{-1}(x)$ is the starting point), it easily comes that $K^G$ satisfies $(E)$ in $(f,x)$. Similarly, if $K^G$ solves $(E)$, then $K^{\mathbb{R}}$ solves (7).

Figure 2: Tanaka's SDE.
(3) As in (2), the case $N=2$, $p=1$, $\varepsilon_1=1$, $\varepsilon_2=-1$ (Figure 3) corresponds to (3).

Figure 3: SBM equation.

In this paper, we classify all solutions of $(E)$ by means of probability measures. We now state the first theorem.

Theorem 1.
Let $W$ be a real white noise and let $X^{s,x}_t$ be the flow associated to (3) with $\alpha=\alpha_+$. Define $Z_{s,t}(x)=X^{s,\varepsilon(x)|x|}_t$, $s\le t$, $x\in G$, and
$$K^W_{s,t}(x)=\delta_{x+\vec{e}(x)\varepsilon(x)W_{s,t}}\,1_{\{t\le\tau_{s,x}\}}+\Big(\sum_{i=1}^p\frac{\alpha_i}{\alpha_+}\,\delta_{\vec{e}_i|Z_{s,t}(x)|}\,1_{\{Z_{s,t}(x)>0\}}+\sum_{i=p+1}^N\frac{\alpha_i}{\alpha_-}\,\delta_{\vec{e}_i|Z_{s,t}(x)|}\,1_{\{Z_{s,t}(x)\le 0\}}\Big)1_{\{t>\tau_{s,x}\}},$$
where $\tau_{s,x}=\inf\{r\ge s : x+\vec{e}(x)\varepsilon(x)W_{s,r}=0\}$. Then $K^W$ is the unique Wiener solution of $(E)$. This means that $K^W$ solves $(E)$ and that, if $K$ is another Wiener solution of $(E)$, then for all $s\le t$, $x\in G$, $K^W_{s,t}(x)=K_{s,t}(x)$ a.s.

The proof of this theorem follows [10] (see also [14] for more details), with some modifications adapted to our case. We will use the Freidlin-Sheu formula for Walsh Brownian motion to check that $K^W$ solves $(E)$. Uniqueness will be justified by means of the Wiener chaos decomposition (Proposition 8). Besides the Wiener flow, there are also other weak solutions associated to $(E)$, which are fully described by the following

Theorem 2. (1) Define
$$\Delta_k=\Big\{u=(u_1,\cdots,u_k)\in[0,1]^k : \sum_{i=1}^k u_i=1\Big\},\quad k\ge 1.$$
Suppose $\alpha_+\neq\frac{1}{2}$.
(a) Let $m^+$ and $m^-$ be two probability measures respectively on $\Delta_p$ and $\Delta_{N-p}$ satisfying:
(+) $\displaystyle\int_{\Delta_p}u_i\,m^+(du)=\frac{\alpha_i}{\alpha_+}$, $\forall\,1\le i\le p$;
(-) $\displaystyle\int_{\Delta_{N-p}}u_j\,m^-(du)=\frac{\alpha_{j+p}}{\alpha_-}$, $\forall\,1\le j\le N-p$.
Then to $(m^+,m^-)$ is associated a stochastic flow of kernels $K^{m^+,m^-}$ solution of $(E)$.
• To $\big(\delta_{(\frac{\alpha_1}{\alpha_+},\cdots,\frac{\alpha_p}{\alpha_+})},\ \delta_{(\frac{\alpha_{p+1}}{\alpha_-},\cdots,\frac{\alpha_N}{\alpha_-})}\big)$ is associated a Wiener solution $K^W$.
• To $\big(\sum_{i=1}^p\frac{\alpha_i}{\alpha_+}\,\delta_{(0,..,0,1,0,..,0)},\ \sum_{i=p+1}^N\frac{\alpha_i}{\alpha_-}\,\delta_{(0,..,0,1,0,..,0)}\big)$ (the $1$ being at position $i$, respectively $i-p$) is associated a coalescing stochastic flow of mappings $\varphi$.
(b) For every stochastic flow of kernels $K$ solution of $(E)$, there exists a unique pair of measures $(m^+,m^-)$ satisfying conditions (+) and (-) such that $K\overset{law}{=}K^{m^+,m^-}$.
(2) If $\alpha_+=\frac{1}{2}$, $N>2$, there is just one solution of $(E)$, which is a Wiener solution.

Remarks 2.
(1) If $\alpha_+=1$, solutions of $(E)$ are characterized by a unique measure $m^+$ satisfying condition (+), instead of a pair $(m^+,m^-)$, and a similar remark applies if $\alpha_-=1$.
(2) The case $\alpha_+=\frac{1}{2}$, $N=2$ does not appear in the last theorem since it corresponds to $dX_t=W(dt)$.

This paper follows the ideas of [12] in a more general context and is organized as follows. In Section 2, we recall basic definitions of stochastic flows and of Walsh Brownian motion. In Section 3, we use a "specific" $SBM(\alpha_+)$ flow introduced by Burdzy and Kaspi, together with excursion theory, to construct all solutions of $(E)$. Uniqueness of solutions is proved in Section 4. Section 5 is an appendix devoted to the Freidlin-Sheu formula, stated in [5] for a general class of diffusion processes defined by means of their generators. Here we first establish this formula using simple arguments and then deduce the characterization of Walsh Brownian motion by means of its generator (Proposition 3).

2 Stochastic flows and Walsh Brownian motion.
2.1 Stochastic flows.

Let $\mathcal{P}(G)$ be the space of probability measures on $G$ and let $(f_n)_{n\in\mathbb{N}}$ be a sequence of functions dense in $\{f\in C_0(G),\ \|f\|_\infty\le 1\}$, where $C_0(G)$ is the space of continuous functions on $G$ which vanish at infinity. We equip $\mathcal{P}(G)$ with the distance $d(\mu,\nu)=\big(\sum_{n}2^{-n}\big(\int f_n\,d\mu-\int f_n\,d\nu\big)^2\big)^{1/2}$ for all $\mu$ and $\nu$ in $\mathcal{P}(G)$. Thus $\mathcal{P}(G)$ is a locally compact separable metric space. Recall that a kernel $K$ on $G$ is a measurable mapping from $G$ into $\mathcal{P}(G)$. We denote by $E$ the space of all kernels on $G$ and we equip $E$ with the $\sigma$-field $\mathcal{E}$ generated by the mappings $K\longmapsto\mu K$, $\mu\in\mathcal{P}(G)$, with $\mu K$ the probability measure defined as $\mu K(A)=\int_G K(x,A)\,\mu(dx)$ for every $\mu\in\mathcal{P}(G)$. Let us recall some fundamental definitions from [11]. Let $(\Omega,\mathcal{A},\mathbb{P})$ be a probability space.

Definition 3. (Stochastic flow of kernels.) A family of $(E,\mathcal{E})$-valued random variables $(K_{s,t})_{s\le t}$ is called a stochastic flow of kernels on $G$ if, for all $s\le t$, the mapping
$$K_{s,t}:(G\times\Omega,\mathcal{B}(G)\otimes\mathcal{A})\longrightarrow(\mathcal{P}(G),\mathcal{B}(\mathcal{P}(G))),\quad (x,\omega)\longmapsto K_{s,t}(x,\omega),$$
is measurable and if it satisfies the following properties:
1. $\forall s<t<u$, $x\in G$, a.s. $\forall f\in C_0(G)$, $K_{s,u}f(x)=K_{s,t}(K_{t,u}f)(x)$ (flow property).
2. $\forall s\le t$, the law of $K_{s,t}$ only depends on $t-s$.
3. For all $t_1<t_2<\cdots<t_n$, the family $\{K_{t_i,t_{i+1}},\ 1\le i\le n-1\}$ is independent.
4. $\forall t\ge 0$, $x\in G$, $f\in C_0(G)$, $\lim_{y\to x}\mathbb{E}[(K_{0,t}f(x)-K_{0,t}f(y))^2]=0$.
5. $\forall t\ge 0$, $f\in C_0(G)$, $\lim_{x\to+\infty}\mathbb{E}[(K_{0,t}f(x))^2]=0$.
6. $\forall x\in G$, $f\in C_0(G)$, $\lim_{t\to 0+}\mathbb{E}[(K_{0,t}f(x)-f(x))^2]=0$.

Definition 4. (Stochastic flow of mappings.) A family $(\varphi_{s,t})_{s\le t}$ of measurable mappings from $G$ into $G$ is called a stochastic flow of mappings on $G$ if $K_{s,t}(x):=\delta_{\varphi_{s,t}(x)}$ is a stochastic flow of kernels on $G$.

Remark 2.
Let $K$ be a stochastic flow of kernels on $G$ and set $P^n_t=\mathbb{E}[K^{\otimes n}_{0,t}]$, $n\ge 1$. Then $(P^n)_{n\ge 1}$ is a compatible family of Feller semigroups acting respectively on $C_0(G^n)$ (see Proposition 2.2 [11]).

2.2 Walsh Brownian motion.

Recall that for all $f\in C(G)$, $f_i$ is defined on $\mathbb{R}_+$. From now on, we extend this definition to $\mathbb{R}$ by setting $f_i=0$ on $]-\infty,0[$. We now introduce the Walsh Brownian motion $W(\alpha_1,\cdots,\alpha_N)$ by giving its transition semigroup, as defined in [2]. On $C_0(G)$, consider
$$P_tf(h\vec{e}_j)=2\sum_{i=1}^N\alpha_i\,p_tf_i(-h)+p_tf_j(h)-p_tf_j(-h),\quad h>0,\qquad P_tf(0)=2\sum_{i=1}^N\alpha_i\,p_tf_i(0),$$
where $(p_t)_{t>0}$ is the heat kernel of the standard one-dimensional Brownian motion. Then $(P_t)_{t\ge 0}$ is a Feller semigroup on $C_0(G)$. A strong Markov process $Z$ with state space $G$ and semigroup $P_t$, and such that $Z$ is càdlàg, is by definition the Walsh Brownian motion $W(\alpha_1,\cdots,\alpha_N)$ on $G$.

2.2.1 A construction of $W(\alpha_1,\cdots,\alpha_N)$.

For all $n\ge 0$, let $D_n=\{\frac{k}{2^n},\ k\in\mathbb{N}\}$ and $D=\cup_{n\in\mathbb{N}}D_n$. For $0\le u<v$, define $n(u,v)=\inf\{n\in\mathbb{N}:D_n\,\cap\,]u,v[\ \neq\emptyset\}$ and $f(u,v)=\inf D_{n(u,v)}\,\cap\,]u,v[$. Let $B$ be a standard Brownian motion defined on $(\Omega,\mathcal{A},\mathbb{P})$ and let $(\vec{\gamma}_r,\ r\in D)$ be a sequence of independent random variables with the same law $\sum_{i=1}^N\alpha_i\delta_{\vec{e}_i}$, which is also independent of $B$. We define
$$B^+_t=B_t-\min_{u\in[0,t]}B_u,\quad g_t=\sup\{r\le t:B^+_r=0\},\quad d_t=\inf\{r\ge t:B^+_r=0\},$$
and finally $Z_t=\vec{\gamma}_r B^+_t$ with $r=f(g_t,d_t)$ if $B^+_t>0$, and $Z_t=0$ if $B^+_t=0$. Then we have the following

Proposition 1. $(Z_t,\ t\ge 0)$ is a $W(\alpha_1,\cdots,\alpha_N)$ on $G$ started at $0$.

Proof. We use the notations
$$\min{}_{s,t}=\min_{u\in[s,t]}B_u,\quad \vec{e}_{0,t}=\vec{e}(Z_t),\quad \mathcal{F}_s=\sigma(\vec{e}_{0,u},B_u;\ 0\le u\le s).$$
Fix $0\le s<t$ and denote $E_{s,t}=\{\min_{0,s}=\min_{0,t}\}$ ($=\{g_t\le s\}$ a.s.). Let $h:G\longrightarrow\mathbb{R}$ be a bounded measurable function. Then
$$\mathbb{E}[h(Z_t)|\mathcal{F}_s]=\mathbb{E}[h(Z_t)1_{E_{s,t}}|\mathcal{F}_s]+\mathbb{E}[h(Z_t)1_{E^c_{s,t}}|\mathcal{F}_s],$$
with
$$\mathbb{E}[h(Z_t)1_{E^c_{s,t}}|\mathcal{F}_s]=\sum_{i=1}^N\mathbb{E}[h_i(B^+_t)1_{\{g_t>s,\,\vec{e}_{0,t}=\vec{e}_i\}}|\mathcal{F}_s]=\sum_{i=1}^N\alpha_i\,\mathbb{E}[h_i(B^+_t)1_{\{g_t>s\}}|\mathcal{F}_s].$$
If $B_{s,r}=B_r-B_s$, then the density of $(\min_{r\in[s,t]}B_{s,r},\,B_{s,t})$ with respect to the Lebesgue measure is given by
$$g(x,y)=\frac{2(y-2x)}{\sqrt{2\pi(t-s)^3}}\exp\Big(-\frac{(y-2x)^2}{2(t-s)}\Big)1_{\{y>x,\,x<0\}}\quad\text{([7] page 28)}.$$
Since $(B_{s,r},\ r\ge s)$ is independent of $\mathcal{F}_s$, we get
$$\mathbb{E}[h_i(B^+_t)1_{\{g_t>s\}}|\mathcal{F}_s]=\mathbb{E}\big[h_i\big(B_{s,t}-\min_{r\in[s,t]}B_{s,r}\big)1_{\{-B^+_s>\min_{r\in[s,t]}B_{s,r}\}}\,\big|\,\mathcal{F}_s\big]=\int_{\mathbb{R}}1_{\{-B^+_s>x\}}\Big(\int_{\mathbb{R}}h_i(y-x)g(x,y)\,dy\Big)dx=2\int_{\mathbb{R}_+}h_i(u)\,p_{t-s}(B^+_s,-u)\,du\quad(u=y-x),$$
and so $\mathbb{E}[h(Z_t)1_{E^c_{s,t}}|\mathcal{F}_s]=2\sum_{i=1}^N\alpha_i\,p_{t-s}h_i(-B^+_s)$. On the other hand,
$$\mathbb{E}[h(Z_t)1_{E_{s,t}}|\mathcal{F}_s]=\mathbb{E}[h(\vec{e}_{0,s}(B_t-\min{}_{0,s}))1_{E_{s,t}\cap\{B_t>\min_{0,s}\}}|\mathcal{F}_s]=\mathbb{E}[h(\vec{e}_{0,s}(B_t-\min{}_{0,s}))1_{\{B_t>\min_{0,s}\}}|\mathcal{F}_s]-\mathbb{E}[h(\vec{e}_{0,s}(B_t-\min{}_{0,s}))1_{E^c_{s,t}\cap\{B_t>\min_{0,s}\}}|\mathcal{F}_s].$$
Obviously, on $\{\vec{e}_{0,s}=\vec{e}_k\}$, we have
$$\mathbb{E}[h(\vec{e}_{0,s}(B_t-\min{}_{0,s}))1_{\{B_t>\min_{0,s}\}}|\mathcal{F}_s]=\mathbb{E}[h_k(B_{s,t}+B^+_s)1_{\{B_{s,t}+B^+_s>0\}}|\mathcal{F}_s]=p_{t-s}h_k(B^+_s)$$
and
$$\mathbb{E}[h(\vec{e}_{0,s}(B_t-\min{}_{0,s}))1_{E^c_{s,t}\cap\{B_t>\min_{0,s}\}}|\mathcal{F}_s]=\mathbb{E}[h_k(B_{s,t}+B^+_s)1_{\{-B^+_s>\min_{r\in[s,t]}B_{s,r},\ B_{s,t}+B^+_s>0\}}|\mathcal{F}_s]=\int_{\mathbb{R}}h_k(y+B^+_s)1_{\{y+B^+_s>0\}}\Big(\int_{\mathbb{R}}1_{\{-B^+_s>x\}}\,g(x,y)\,dx\Big)dy=p_{t-s}h_k(-B^+_s).$$
As a result, $\mathbb{E}[h(Z_t)|\mathcal{F}_s]=P_{t-s}h(Z_s)$, where $P$ is the semigroup of $W(\alpha_1,\cdots,\alpha_N)$. $\square$

Proposition 2.
Let $M=(M_n)_{n\ge 0}$ be a Markov chain on $G$ started at $0$ with stochastic matrix $Q$ given by
$$Q(0,\vec{e}_i)=\alpha_i,\quad Q(n\vec{e}_i,(n+1)\vec{e}_i)=Q(n\vec{e}_i,(n-1)\vec{e}_i)=\frac{1}{2},\quad\forall i\in[1,N],\ n\ge 1.\qquad(8)$$
Then, for all $0\le t_1<\cdots<t_p$, we have
$$\Big(\frac{1}{2^n}M_{\lfloor 2^{2n}t_1\rfloor},\cdots,\frac{1}{2^n}M_{\lfloor 2^{2n}t_p\rfloor}\Big)\ \xrightarrow[n\to+\infty]{law}\ (Z_{t_1},\cdots,Z_{t_p}),$$
where $Z$ is a $W(\alpha_1,\cdots,\alpha_N)$ started at $0$.

Proof. Let $B$ be a standard Brownian motion and define, for all $n\ge 0$, $T^n_0(B)=T^n_0(|B|)=0$ and, for $k\ge 0$,
$$T^n_{k+1}(B)=\inf\{r\ge T^n_k(B):\ |B_r-B_{T^n_k(B)}|=\tfrac{1}{2^n}\},\qquad T^n_{k+1}(|B|)=\inf\{r\ge T^n_k(|B|):\ \big||B_r|-|B_{T^n_k(|B|)}|\big|=\tfrac{1}{2^n}\}.$$
Then, clearly, $T^n_k(B)=T^n_k(|B|)$ and so $(T^n_k(|B|))_{k\ge 0}\overset{law}{=}(T^n_k(B))_{k\ge 0}$. It is known ([6] page 31) that $\lim_{n\to+\infty}T^n_{\lfloor 2^{2n}t\rfloor}(B)=t$ a.s., uniformly on compact sets. Then the result holds also for $T^n_{\lfloor 2^{2n}t\rfloor}(|B|)$. Now let $Z$ be the $W(\alpha_1,\cdots,\alpha_N)$ started at $0$ constructed at the beginning of this section from the reflected Brownian motion $B^+$. Let $T^n_k=T^n_k(B^+)$ (defined analogously to $T^n_k(|B|)$) and $Z^n_k=2^nZ_{T^n_k}$. Then obviously $(Z^n_k,\ k\ge 0)\overset{law}{=}M$ for all $n\ge 0$. Since a.s. $t\longmapsto Z_t$ is continuous, it follows that a.s. for all $t\ge 0$, $\lim_{n\to+\infty}\frac{1}{2^n}Z^n_{\lfloor 2^{2n}t\rfloor}=Z_t$. $\square$

2.2.2 Freidlin-Sheu formula.

Theorem 3. [5] Let $(Z_t)_{t\ge 0}$ be a $W(\alpha_1,\cdots,\alpha_N)$ on $G$ started at $z$ and let $X_t=|Z_t|$. Then:
(i) $(X_t)_{t\ge 0}$ is a reflecting Brownian motion started at $|z|$;
(ii) $B_t=X_t-\tilde{L}_t(X)-|z|$ is a standard Brownian motion, where $\tilde{L}_t(X)=\lim_{\varepsilon\to 0+}\frac{1}{2\varepsilon}\int_0^t 1_{\{|X_u|\le\varepsilon\}}\,du$;
(iii) $\forall f\in C^2_b(G^*)$,
$$f(Z_t)=f(z)+\int_0^t f'(Z_s)\,dB_s+\frac{1}{2}\int_0^t f''(Z_s)\,ds+\Big(\sum_{i=1}^N\alpha_if'_i(0+)\Big)\tilde{L}_t(X).\qquad(9)$$

Remarks 3. (1) By taking $f(z)=|z|$ and applying the Skorokhod lemma, we find the following analogue of (6):
$$|Z_t|=|z|+B_t-\min_{0\le u\le t}\big[(|z|+B_u)\wedge 0\big].$$
From this observation, when $\varepsilon_i=1$ for all $i\in[1,N]$, we call $(E)$ Tanaka's SDE related to $W(\alpha_1,\cdots,\alpha_N)$.
(2) For $N\ge 3$, the filtration $(\mathcal{F}^Z_t)$ has the martingale representation property with respect to $B$ [2], but there is no Brownian motion $W$ such that $\mathcal{F}^Z_t=\mathcal{F}^W_t$ [16].

Using this theorem, we obtain the following characterization of $W(\alpha_1,\cdots,\alpha_N)$ by means of its semigroup.
Let
• $D(\alpha_1,\cdots,\alpha_N)=\{f\in C^2_b(G^*):\sum_{i=1}^N\alpha_if'_i(0+)=0\}$;
• $Q=(Q_t)_{t\ge 0}$ be a Feller semigroup satisfying
$$Q_tf(x)=f(x)+\frac{1}{2}\int_0^tQ_uf''(x)\,du\quad\forall f\in D(\alpha_1,\cdots,\alpha_N).$$
Then $Q$ is the semigroup of $W(\alpha_1,\cdots,\alpha_N)$.

Proof. Denote by $P$ the semigroup of $W(\alpha_1,\cdots,\alpha_N)$, with $A'$ and $D(A')$ being respectively its generator and its domain on $C_0(G)$. If
$$D'(\alpha_1,\cdots,\alpha_N)=\{f\in C_0(G)\cap D(\alpha_1,\cdots,\alpha_N):\ f''\in C_0(G)\},\qquad(10)$$
then it is enough to prove the following statements:
(i) $\forall t>0$, $P_t(C_0(G))\subset D'(\alpha_1,\cdots,\alpha_N)$;
(ii) $D'(\alpha_1,\cdots,\alpha_N)\subset D(A')$ and $A'f(x)=\frac{1}{2}f''(x)$ on $D'(\alpha_1,\cdots,\alpha_N)$;
(iii) $D'(\alpha_1,\cdots,\alpha_N)$ is dense in $C_0(G)$ for $\|\cdot\|_\infty$;
(iv) if $R$ and $R'$ are respectively the resolvents of $Q$ and $P$, then $R_\lambda=R'_\lambda$ on $D'(\alpha_1,\cdots,\alpha_N)$ for all $\lambda>0$.
The proof of (i) is based on elementary calculations using dominated convergence, (ii) comes from (9), and (iii) is a consequence of (i) and the Feller property of $P$ (approximate $f$ by $P_{1/n}f$). To prove (iv), let $A$ be the generator of $Q$ and fix $f\in D'(\alpha_1,\cdots,\alpha_N)$. Then $R_\lambda f$ is the unique element of $D(A)$ such that $(\lambda I-A)(R_\lambda f)=f$. We have $R'_\lambda f\in D'(\alpha_1,\cdots,\alpha_N)$ by (i), and $D'(\alpha_1,\cdots,\alpha_N)\subset D(A)$ by hypothesis. Hence $R'_\lambda f\in D(A)$ and, since $A=A'$ on $D'(\alpha_1,\cdots,\alpha_N)$, we deduce that $R_\lambda f=R'_\lambda f$. $\square$

3 Construction of solutions of $(E)$.

In this section, we prove (a) of Theorem 2 and show that $K^W$ given in Theorem 1 solves $(E)$. We are looking for flows associated to the SDE (3). The flow associated to
$SBM(1)$ which solves (3) is the reflected Brownian motion above $0$, given by
$$Y_{s,t}(x)=(x+W_{s,t})1_{\{t\le\tau_{s,x}\}}+\big(W_{s,t}-\inf_{u\in[\tau_{s,x},t]}W_{s,u}\big)1_{\{t>\tau_{s,x}\}},\quad\text{where }\tau_{s,x}=\inf\{r\ge s:x+W_{s,r}=0\},\qquad(11)$$
and a similar expression holds for the $SBM(0)$, which is the reflected Brownian motion below $0$. These flows satisfy all the properties of the $SBM(\alpha)$, $\alpha\in\,]0,1[$, that we will mention below, such as the "strong" flow property (Proposition 4) and the strong comparison principle (12). When $\alpha_+\in\,]0,1[$, we are looking for flows of the $SBM(\alpha_+)$, and so we suppose in this paragraph that $\alpha_+\notin\{0,1\}$.

With probability $1$, for all rationals $s$ and $x$ simultaneously, equation (3) has a unique strong solution with $\alpha=\alpha_+$. Define
$$Y_{s,t}(x)=\inf\big\{X^{u,y}_t:\ (u,y)\in\mathbb{Q}^2,\ u\le s,\ X^{u,y}_s\ge x\big\}.$$
It is known, since pathwise uniqueness holds for the SDE (3), that for fixed $s\le t\le u$ and $x\in\mathbb{R}$, we have $Y_{s,u}(x)=Y_{t,u}(Y_{s,t}(x))$ a.s. ([9] page 161). Now, using the regularity of the flow, the result extends clearly as desired. To conclude that $Y$ is a stochastic flow of mappings, it remains to show the following

Lemma 1. $\forall t\ge s$, $x\in\mathbb{R}$, $f\in C_0(\mathbb{R})$, $\lim_{y\to x}\mathbb{E}[(f(Y_{s,t}(x))-f(Y_{s,t}(y)))^2]=0$.

Proof.
We take $s=0$. For $g\in C_0(\mathbb{R}^2)$, set $P^{(2)}_tg(x)=\mathbb{E}[g(Y_{0,t}(x_1),Y_{0,t}(x_2))]$, $x=(x_1,x_2)$. For $\varepsilon>0$, let $f_\varepsilon(x,y)=1_{\{|x-y|\ge\varepsilon\}}$; then, by Theorem 10 in [13], $P^{(2)}_tf_\varepsilon(x,y)\longrightarrow 0$ as $y\to x$. For $f\in C_0(\mathbb{R})$, we have
$$\mathbb{E}[(f(Y_{0,t}(x))-f(Y_{0,t}(y)))^2]=P^{(2)}_tf^{\otimes 2}(x,x)+P^{(2)}_tf^{\otimes 2}(y,y)-2P^{(2)}_tf^{\otimes 2}(x,y).$$
To conclude the lemma, we need only check that
$$\lim_{y\to x}P^{(2)}_tf(y)=P^{(2)}_tf(x),\quad\forall x\in\mathbb{R}^2,\ f\in C_0(\mathbb{R}^2).$$
Let $f=f_1\otimes f_2$ with $f_i\in C_0(\mathbb{R})$, $x=(x_1,x_2)$, $y=(y_1,y_2)\in\mathbb{R}^2$. Then
$$|P^{(2)}_tf(y)-P^{(2)}_tf(x)|\le M\sum_{k=1}^2 P^{(2)}_t\big(|1\otimes f_k-f_k\otimes 1|\big)(y_k,x_k),$$
where $M>\sup_k\|f_k\|_\infty$. For $\alpha>0$, there exists $\varepsilon>0$ such that $|u-v|<\varepsilon$ implies $|f_k(u)-f_k(v)|<\alpha$ for $k=1,2$. As a result,
$$|P^{(2)}_tf(y)-P^{(2)}_tf(x)|\le 2M\alpha+2M\sum_{k=1}^2\|f_k\|_\infty\,P^{(2)}_tf_\varepsilon(x_k,y_k),$$
and we arrive at $\limsup_{y\to x}|P^{(2)}_tf(y)-P^{(2)}_tf(x)|\le 2M\alpha$ for all $\alpha>0$, so that $\lim_{y\to x}P^{(2)}_tf(y)=P^{(2)}_tf(x)$. Now this easily extends, by a density argument, to all $f\in C_0(\mathbb{R}^2)$. $\square$

In the coming section, we present some properties related to the coalescence of $Y$ that we will require in Section 3.2 to construct solutions of $(E)$.

In this section, we suppose $\frac{1}{2}<\alpha_+<1$. The analysis of the case $0<\alpha_+<\frac{1}{2}$ requires an application of symmetry. Define
$$T_{x,y}=\inf\{r\ge 0:\ Y_{0,r}(x)=Y_{0,r}(y)\},\quad x,y\in\mathbb{R}.$$
By the fundamental result of [1], $T_{x,y}<\infty$ a.s. for all $x,y\in\mathbb{R}$. Due to the local time, coalescence always occurs at $0$: $Y_{0,r}(x)=Y_{0,r}(y)=0$ if $r=T_{x,y}$. Recall the definition of $\tau_{s,x}$ from (11). Then $T_{x,y}>\sup(\tau_{0,x},\tau_{0,y})$ a.s. ([1] page 203). Set
$$L^x_t=x+(2\alpha_+-1)\tilde{L}_{0,t}(x),\quad U(x,y)=\inf\{z\ge y:\ L^x_t=L^y_t=z\ \text{for some}\ t\ge 0\},\quad y\ge x.$$
There exists $\lambda>0$ such that $\forall u\ge y>0$, $\mathbb{P}(U(0,y)\le u)=(1-\frac{y}{u})^\lambda$. Thus, for a fixed $0<\gamma<1$, we get $\lim_{y\to 0}\mathbb{P}(U(0,y)\le y^\gamma)=\lim_{y\to 0}(1-y^{1-\gamma})^\lambda=1$. From Theorem 1.1 [3], we have $U(x,y)-x\overset{law}{=}U(0,y-x)$ for all $0<x<y$, and so
$$\lim_{y\to x+}\mathbb{P}\big(U(x,y)-x\le(y-x)^\gamma\big)=1,\quad\forall x\ge 0.\qquad(13)$$

Lemma 2.
For all $x\in\mathbb{R}$, we have $\lim_{y\to x}T_{x,y}=\tau_{0,x}$ in probability.

Proof. In this proof, we denote $Y_{0,t}(0)$ simply by $Y_t$. We first establish the result for $x=0$. For all $t>0$, we have
$$\mathbb{P}(t\le T_{0,y})\le\mathbb{P}\big(\tilde{L}_{0,t}(0)\le\tilde{L}_{0,T_{0,y}}(0)\big)=\mathbb{P}\big(L^0_t\le U(0,y)\big),$$
since $(2\alpha_+-1)\tilde{L}_{0,T_{0,y}}(0)=U(0,y)$. The right-hand side converges to $0$ as $y\to 0+$ by (13). On the other hand, by the strong Markov property at time $\tau_{0,y}$, for $y<0$,
$$G_t(y):=\mathbb{P}(t\le T_{0,y})=\mathbb{P}(t\le\tau_{0,y})+\mathbb{E}[1_{\{t>\tau_{0,y}\}}G_{t-\tau_{0,y}}(Y_{\tau_{0,y}})].$$
For all $\epsilon\in\,]0,t[$,
$$\mathbb{E}[1_{\{t>\tau_{0,y}\}}G_{t-\tau_{0,y}}(Y_{\tau_{0,y}})]=\mathbb{E}[1_{\{t-\tau_{0,y}>\epsilon\}}G_{t-\tau_{0,y}}(Y_{\tau_{0,y}})]+\mathbb{E}[1_{\{0<t-\tau_{0,y}\le\epsilon\}}G_{t-\tau_{0,y}}(Y_{\tau_{0,y}})].$$
Since $G_t(y)$ is nonincreasing in $t$, the first term is at most $\mathbb{E}[G_\epsilon(Y_{\tau_{0,y}})]$, which converges to $0$ as $y\to 0-$ by the case already treated (note that $Y_{\tau_{0,y}}\to 0$ in probability), while the second term is at most $\mathbb{P}(\tau_{0,y}\ge t-\epsilon)$, which also converges to $0$. Letting $y\to 0-$, we get $\limsup_{y\to 0-}G_t(y)=0$, as desired for $x=0$. Now the lemma easily holds after remarking that $T_{x,y}-\tau_{0,x}\overset{law}{=}T_{0,y-x}$ if $0\le x<y$ and $T_{x,y}-\tau_{0,x}\overset{law}{=}T_{0,x-y}$ if $x<y\le 0$. $\square$

For $s\le t$, $x\in\mathbb{R}$, define
$$g_{s,t}(x)=\sup\{u\in[s,t]:\ Y_{s,u}(x)=0\}\quad(\sup(\emptyset)=-\infty).\qquad(14)$$
We use Lemma 2 to prove
Lemma 3. Fix $s\in\mathbb{R}$ and $x\in\mathbb{R}$. Then, a.s., for all $t>\tau_{s,x}$ there exists $(v,y)\in\mathbb{Q}^2$ such that $v<g_{s,t}(x)$ and $Y_{s,r}(x)=Y_{v,r}(y)$ for all $r\ge g_{s,t}(x)$.

Proof. We prove the result for $s=0$, and first for $x=0$. Let $t>0$; then, for all $\epsilon>0$,
$$\mathbb{P}(\exists\eta>0:\ Y_{0,t}(\eta)=Y_{0,t}(-\eta))\ge\mathbb{P}(T_{-\epsilon,\epsilon}\le t).$$
From $\mathbb{P}(t<T_{-\epsilon,\epsilon})\le\mathbb{P}(t<T_{0,\epsilon})+\mathbb{P}(t<T_{0,-\epsilon})$ and the previous lemma, we have $\lim_{\epsilon\to 0}\mathbb{P}(t<T_{-\epsilon,\epsilon})=0$. So, a.s., there exists $\epsilon>0$ such that $Y_{0,t}(\epsilon)=Y_{0,t}(-\epsilon)$; let $v\in\,]0,T_{-\epsilon,\epsilon}[\,\cap\,\mathbb{Q}$. Then $Y_{0,v}(\epsilon)>Y_{0,v}(-\epsilon)$ and, for any rational $y\in\,]Y_{0,v}(-\epsilon),Y_{0,v}(\epsilon)[$, we have by (12)
$$Y_{v,u}(Y_{0,v}(-\epsilon))\le Y_{v,u}(y)\le Y_{v,u}(Y_{0,v}(\epsilon)),\quad\forall u\ge v.$$
The flow property (Proposition 4) yields $Y_{0,u}(-\epsilon)\le Y_{v,u}(y)\le Y_{0,u}(\epsilon)$ for all $u\ge v$. So, necessarily, $Y_{0,r}(0)=Y_{v,r}(y)$ for all $r\ge g_{0,t}(0)$. For $x>0$ and $\epsilon$ small enough, we have
$$\mathbb{P}(Y_{0,t}(x+\epsilon)>Y_{0,t}(x),\ t>\tau_{0,x})\le\mathbb{P}(\tau_{0,x}<t<T_{x,x+\epsilon}).$$
This shows that $\lim_{\epsilon\to 0}\mathbb{P}(Y_{0,t}(x+\epsilon)>Y_{0,t}(x)\,|\,t>\tau_{0,x})=0$ by Lemma 2. Similarly, for $\epsilon$ small,
$$\mathbb{P}(Y_{0,t}(x-\epsilon)<Y_{0,t}(x),\ t>\tau_{0,x})\le\mathbb{P}(\tau_{0,x}<t<T_{x-\epsilon,x}).$$
The right-hand side converges to $0$ as $\epsilon\to 0$, hence $\lim_{\epsilon\to 0}\mathbb{P}(Y_{0,t}(x)>Y_{0,t}(x-\epsilon)\,|\,t>\tau_{0,x})=0$. Since
$$\{Y_{0,t}(x+\epsilon)>Y_{0,t}(x-\epsilon)\}\subset\{Y_{0,t}(x+\epsilon)>Y_{0,t}(x)\}\cup\{Y_{0,t}(x)>Y_{0,t}(x-\epsilon)\},$$
we get $\mathbb{P}(\exists\epsilon>0:\ Y_{0,t}(x-\epsilon)=Y_{0,t}(x+\epsilon)\,|\,t>\tau_{0,x})=1$. Following the same steps as in the case $x=0$, we obtain the lemma for a fixed $t$, a.s. Finally, the result easily extends almost surely to all $t$. $\square$

We close this section with the following
Lemma 4. With probability $1$, for all $(s_1,x_1)\neq(s_2,x_2)\in\mathbb{Q}^2$ simultaneously:
(i) $T^{x_1,x_2}_{s_1,s_2}:=\inf\{r\ge\sup(s_1,s_2):\ Y_{s_1,r}(x_1)=Y_{s_2,r}(x_2)\}<\infty$;
(ii) $T^{x_1,x_2}_{s_1,s_2}>\sup(\tau_{s_1,x_1},\tau_{s_2,x_2})$;
(iii) $Y_{s_1,T^{x_1,x_2}_{s_1,s_2}}(x_1)=Y_{s_2,T^{x_1,x_2}_{s_1,s_2}}(x_2)=0$;
(iv) $Y_{s_1,r}(x_1)=Y_{s_2,r}(x_2)$ for all $r\ge T^{x_1,x_2}_{s_1,s_2}$.

Proof. (i) is a consequence of Proposition 4, the independence of increments and the coalescence of $Y$.
(ii) Fix $(s_1,x_1)\neq(s_2,x_2)\in\mathbb{Q}^2$ with $s_1\le s_2$. By the comparison principle (12) and Proposition 4, either $Y_{s_1,t}(x_1)\ge Y_{s_2,t}(x_2)$ for all $t\ge s_2$, or $Y_{s_1,t}(x_1)\le Y_{s_2,t}(x_2)$ for all $t\ge s_2$. Suppose, for example, that $0<z:=Y_{s_1,s_2}(x_1)<x_2$ and take a rational $r\in\,]z,x_2[$. Then $T^{x_1,x_2}_{s_1,s_2}>\tau_{s_2,z}\ge\tau_{s_1,x_1}$ and $T^{x_1,x_2}_{s_1,s_2}\ge T^{r,x_2}_{s_2,s_2}>\tau_{s_2,x_2}$.
(iii) is clear since coalescence occurs at $0$. (iv) is an immediate consequence of the pathwise uniqueness of (3). $\square$

3.2 Construction of solutions of $(E)$.

We now extend the notations given in Section 2.2.1. For all $n\ge 0$, let $D_n=\{\frac{k}{2^n},\ k\in\mathbb{Z}\}$ and let $D$ be the set of all dyadic numbers: $D=\cup_{n\in\mathbb{N}}D_n$. For $u<v$, define $n(u,v)=\inf\{n\in\mathbb{N}:D_n\,\cap\,]u,v[\ \neq\emptyset\}$ and $f(u,v)=\inf D_{n(u,v)}\,\cap\,]u,v[$. Denote $G_{\mathbb{Q}}=\{x\in G:|x|\in\mathbb{Q}_+\}$. We also fix a bijection $\psi:\mathbb{N}\longrightarrow\mathbb{Q}\times G_{\mathbb{Q}}$ and set $(s_i,x_i)=\psi(i)$ for all $i\ge 0$.

3.2.1 Construction of a flow of mappings $\varphi$ solution of $(E)$.

Let $W$ be a real white noise and let $Y$ be the flow of the $SBM(\alpha_+)$ constructed from $W$ in the previous section. We first construct $\varphi_{s,\cdot}(x)$ for all $(s,x)\in\mathbb{Q}\times G_{\mathbb{Q}}$ and then extend this definition to all $(s,x)\in\mathbb{R}\times G$. We begin by $\varphi_{s_0,\cdot}(x_0)$, then $\varphi_{s_1,\cdot}(x_1)$, and so on. To define $\varphi_{s_0,\cdot}(x_0)$, we flip the excursions of $Y_{s_0,\cdot}(\varepsilon(x_0)|x_0|)$ suitably. We then let $\varphi_{s_1,t}(x_1)$ be equal to $\varphi_{s_0,t}(x_0)$ if $Y_{s_1,t}(\varepsilon(x_1)|x_1|)=Y_{s_0,t}(\varepsilon(x_0)|x_0|)$. Before coalescence of $Y_{s_1,\cdot}(\varepsilon(x_1)|x_1|)$ and $Y_{s_0,\cdot}(\varepsilon(x_0)|x_0|)$, we define $\varphi_{s_1,\cdot}(x_1)$ by flipping the excursions of $Y_{s_1,\cdot}(\varepsilon(x_1)|x_1|)$ independently of what happens to $\varphi_{s_0,\cdot}(x_0)$, and so on. In what follows, we translate this idea rigorously. Let $\vec{\gamma}^+,\vec{\gamma}^-$ be two independent random variables on any probability space such that
$$\vec{\gamma}^+\overset{law}{=}\sum_{i=1}^p\frac{\alpha_i}{\alpha_+}\delta_{\vec{e}_i},\qquad\vec{\gamma}^-\overset{law}{=}\sum_{j=p+1}^N\frac{\alpha_j}{\alpha_-}\delta_{\vec{e}_j}.\qquad(15)$$
Let $(\Omega,\mathcal{A},\mathbb{P})$ be a probability space rich enough and let $W=(W_{s,t},\ s\le t)$ be a real white noise defined on it. For all $s\le t$, $x\in G$, let $Z_{s,t}(x):=Y_{s,t}(\varepsilon(x)|x|)$, where $Y$ is the flow of Burdzy-Kaspi constructed from $W$ as in Section 3.1.1 if $\alpha_+\notin\{0,1\}$ ($=$ the reflecting Brownian motion associated to (3) if $\alpha_+\in\{0,1\}$). We retain the notations $\tau_{s,x}$ and $g_{s,t}(x)$ of the previous section (see (11) and (14)). For $s\in\mathbb{R}$, $x\in G$, define, by abuse of notation, $\tau_{s,x}=\tau_{s,\varepsilon(x)|x|}$, $g_{s,\cdot}(x)=g_{s,\cdot}(\varepsilon(x)|x|)$ and $d_{s,t}(x)=\inf\{r\ge t:Z_{s,r}(x)=0\}$. It will be convenient to set $Z_{s,r}(x)=\infty$ if $r<s$.
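The pair $(n(u,v), f(u,v))$ just recalled admits a direct brute-force computation, which may help to see why each excursion interval $]g,d[$ receives a well-defined dyadic label. This is a sketch of ours, not from the paper; it is exact for inputs representable in binary floating point.

```python
import math

def dyadic_label(u, v):
    """Return (n(u, v), f(u, v)): the first dyadic level whose points meet
    ]u, v[, and the smallest point of that level inside ]u, v[."""
    n = 0
    while True:
        k = math.floor(u * 2**n) + 1      # smallest integer k with k / 2^n > u
        if k / 2**n < v:                  # a level-n dyadic lies in ]u, v[
            return n, k / 2**n
        n += 1                            # otherwise refine the dyadic grid

assert dyadic_label(0.3, 0.8) == (1, 0.5)
assert dyadic_label(0.26, 0.49) == (3, 0.375)
```

Since every non-empty open interval contains a dyadic number, the loop always terminates; this is exactly what makes $f(g_{s,t}(x), d_{s,t}(x))$ a legitimate index into the countable family of flipping variables.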
For all $q\ge 1$, $u_0,\cdots,u_q\in\mathbb{R}$, $y_0,\cdots,y_q\in G$, define
$$T^{y_0,\cdots,y_q}_{u_0,\cdots,u_q}=\inf\big\{r\ge\tau_{u_q,y_q}:\ Z_{u_q,r}(y_q)\in\{Z_{u_i,r}(y_i),\ i\in[0,q-1]\}\big\}.$$
Let $\{(\vec{\gamma}^+_{s_0,x_0}(r),\vec{\gamma}^-_{s_0,x_0}(r)),\ r\in D\cap[s_0,+\infty[\}$ be a family of independent copies of $(\vec{\gamma}^+,\vec{\gamma}^-)$ which is independent of $W$. We define $\varphi_{s_0,\cdot}(x_0)$ by
$$\varphi_{s_0,t}(x_0)=\begin{cases}x_0+\vec{e}(x_0)\varepsilon(x_0)W_{s_0,t}&\text{if }s_0\le t\le\tau_{s_0,x_0},\\ 0&\text{if }t>\tau_{s_0,x_0},\ Z_{s_0,t}(x_0)=0,\\ \vec{\gamma}^+_{s_0,x_0}(f)\,|Z_{s_0,t}(x_0)|,\ f=f(g_{s_0,t}(x_0),d_{s_0,t}(x_0))&\text{if }t>\tau_{s_0,x_0},\ Z_{s_0,t}(x_0)>0,\\ \vec{\gamma}^-_{s_0,x_0}(f)\,|Z_{s_0,t}(x_0)|,\ f=f(g_{s_0,t}(x_0),d_{s_0,t}(x_0))&\text{if }t>\tau_{s_0,x_0},\ Z_{s_0,t}(x_0)<0.\end{cases}$$
Suppose that $\varphi_{s_0,\cdot}(x_0),\cdots,\varphi_{s_{q-1},\cdot}(x_{q-1})$ are defined, and let $\{(\vec{\gamma}^+_{s_q,x_q}(r),\vec{\gamma}^-_{s_q,x_q}(r)),\ r\in D\cap[s_q,+\infty[\}$ be a family of independent copies of $(\vec{\gamma}^+,\vec{\gamma}^-)$ which is also independent of
$$\sigma\big(\vec{\gamma}^+_{s_i,x_i}(r),\vec{\gamma}^-_{s_i,x_i}(r),\ r\in D\cap[s_i,+\infty[,\ 0\le i\le q-1,\ W\big).$$
Since $T^{x_0,\cdots,x_q}_{s_0,\cdots,s_q}<\infty$, let $i\in[0,q-1]$ and $(s_i,x_i)$ be such that $Z_{s_q,t_0}(x_q)=Z_{s_i,t_0}(x_i)$ with $t_0=T^{x_0,\cdots,x_q}_{s_0,\cdots,s_q}$. We define $\varphi_{s_q,\cdot}(x_q)$ by
$$\varphi_{s_q,t}(x_q)=\begin{cases}x_q+\vec{e}(x_q)\varepsilon(x_q)W_{s_q,t}&\text{if }s_q\le t\le\tau_{s_q,x_q},\\ 0&\text{if }t_0\ge t>\tau_{s_q,x_q},\ Z_{s_q,t}(x_q)=0,\\ \vec{\gamma}^+_{s_q,x_q}(f_q)\,|Z_{s_q,t}(x_q)|,\ f_q=f(g_{s_q,t}(x_q),d_{s_q,t}(x_q))&\text{if }t\in\,]\tau_{s_q,x_q},t_0],\ Z_{s_q,t}(x_q)>0,\\ \vec{\gamma}^-_{s_q,x_q}(f_q)\,|Z_{s_q,t}(x_q)|,\ f_q=f(g_{s_q,t}(x_q),d_{s_q,t}(x_q))&\text{if }t\in\,]\tau_{s_q,x_q},t_0],\ Z_{s_q,t}(x_q)<0,\\ \varphi_{s_i,t}(x_i)&\text{if }t\ge t_0.\end{cases}$$
In this way, we construct $(\varphi_{s,\cdot}(x),\ s\in\mathbb{Q},\ x\in G_{\mathbb{Q}})$. Now, for all $s\in\mathbb{R}$, $x\in G$, let $\varphi_{s,t}(x)=x+\vec{e}(x)\varepsilon(x)W_{s,t}$ if $s\le t\le\tau_{s,x}$. If $t>\tau_{s,x}$, then by Lemma 3 there exist $v\in\mathbb{Q}$, $y\in G_{\mathbb{Q}}$ such that $v<g_{s,t}(x)$ and $Z_{s,r}(x)=Z_{v,r}(y)$ for all $r\ge g_{s,t}(x)$. In this case, we define $\varphi_{s,t}(x)=\varphi_{v,t}(y)$. Later, we will show that $\varphi$ is a coalescing solution of $(E)$.
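The case-by-case definition above can be mimicked on a single discrete trajectory. In the sketch below (our own illustration: a lazy random walk stands in for the $SBM(\alpha_+)$ path $Z_{s,\cdot}(x)$, and excursions are labelled by order of appearance rather than by dyadic times), a ray is drawn once per excursion, from $G_+$ with weights $\alpha_i/\alpha_+$ when $Z>0$ and from $G_-$ with weights $\alpha_j/\alpha_-$ when $Z<0$, and the flipped path is read off as the pair (ray, $|Z|$).

```python
import numpy as np

def flip(z, p, alpha, rng):
    """Map a signed scalar path z to G-valued states (ray, radius)."""
    a_plus = sum(alpha[:p])
    w_plus = [a / a_plus for a in alpha[:p]]          # alpha_i / alpha_+
    w_minus = [a / (1.0 - a_plus) for a in alpha[p:]]  # alpha_j / alpha_-
    phi, ray = [], 0
    for v in z:
        if v == 0:
            ray = 0                                    # at the origin of G
        elif ray == 0:                                 # new excursion: draw its ray
            if v > 0:
                ray = 1 + int(rng.choice(p, p=w_plus))
            else:
                ray = p + 1 + int(rng.choice(len(alpha) - p, p=w_minus))
        phi.append((ray, abs(v)))
    return phi

rng = np.random.default_rng(2)
z = np.concatenate([[0], np.cumsum(rng.choice([-1, 0, 1], size=500))])
phi = flip(z, p=1, alpha=[0.5, 0.25, 0.25], rng=rng)
assert all(r == 1 for (r, _), v in zip(phi, z) if v > 0)   # G_+ ray when z > 0
assert all(r >= 2 for (r, _), v in zip(phi, z) if v < 0)   # G_- rays when z < 0
```

The key structural point mirrored here is that the ray is constant on each excursion and is resampled only after a return to the origin.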
K m + ,m − solution of ( E ) . Let m + and m − be two probability measures respectively on ∆ p and ∆ N − p . Let U + , U − be twoindependent random variables on any probability space such that U + law = m + , U − law = m − . (16)Let (Ω , A , P ) be a probability space rich enough and W = ( W s,t , s ≤ t ) be a real white noisedefined on it. We retain the notation introduced in the previous paragraph for all functionsof W . We consider a family { ( U + s ,x ( r ) , U − s ,x ( r )) , r ∈ D ∩ [ s , + ∞ [ } of independent copies of( U + , U − ) which is independent of W .If t > τ s ,x and Z s ,t ( x ) > Z s ,t ( x ) < U + s ,t ( x ) = U + s ,x ( f ) (resp. U − s ,t ( x ) = U − s ,x ( f )) , f = f ( g s ,t ( x ) , d s ,t ( x )) . U + s ,t ( x ) = ( U + ,is ,t ( x )) ≤ i ≤ p (resp. U − s ,t ( x ) = ( U − ,is ,t ( x )) p +1 ≤ i ≤ N ) if Z s ,t ( x ) > , t > τ s ,x (resp. Z s ,t ( x ) < , t > τ s ,x ) and now define K m + ,m − s ,t ( x ) = δ x + ~e ( x ) ε ( x ) W s ,t if s ≤ t ≤ τ s ,x P pi =1 U + ,is ,t ( x ) δ ~e i | Z s ,t ( x ) | if t > τ s ,x , Z s ,t ( x ) > P Ni = p +1 U − ,is ,t ( x ) δ ~e i | Z s ,t ( x ) | if t > τ s ,x , Z s ,t ( x ) < δ if t > τ s ,x , Z s ,t ( x ) = 0Suppose that K m + ,m − s , · ( x ) , · · · , K m + ,m − s q − , · ( x q − ) are defined and let { ( U + s q ,x q ( r ) , U − s q ,x q ( r )) , r ∈ D ∩ [ s q , + ∞ [ } be a family of independent copies of ( U + , U − ) which is also independent of σ (cid:0) U + s i ,x i ( r ) , U − s i ,x i ( r ) , r ∈ D ∩ [ s i , + ∞ [ , ≤ i ≤ q − , W (cid:1) . If t > τ s q ,x q and Z s q ,t ( x q ) > Z s q ,t ( x q ) < U + s q ,t ( x q ) = ( U + ,is q ,t ( x q )) ≤ i ≤ p (resp. U − s q ,t ( x q ) = ( U − ,is q ,t ( x q )) p +1 ≤ i ≤ N ) byanalogy to q = 0. Let i ∈ [1 , q −
1] and (s_i, x_i) be such that Z_{s_q,t_0}(x_q) = Z_{s_i,t_0}(x_i) with t_0 = T^{x_0,…,x_q}_{s_0,…,s_q}. Then define

K^{m_+,m_−}_{s_q,t}(x_q) = δ_{x_q + ~e(x_q)ε(x_q)W_{s_q,t}} if s_q ≤ t ≤ τ_{s_q,x_q},
 = Σ_{i=1}^p U^{+,i}_{s_q,t}(x_q) δ_{~e_i|Z_{s_q,t}(x_q)|} if τ_{s_q,x_q} < t < t_0, Z_{s_q,t}(x_q) > 0,
 = Σ_{i=p+1}^N U^{−,i}_{s_q,t}(x_q) δ_{~e_i|Z_{s_q,t}(x_q)|} if τ_{s_q,x_q} < t < t_0, Z_{s_q,t}(x_q) < 0,
 = δ_0 if τ_{s_q,x_q} < t < t_0, Z_{s_q,t}(x_q) = 0,
 = K^{m_+,m_−}_{s_i,t}(x_i) if t ≥ t_0.

In this way, we construct (K^{m_+,m_−}_{s,·}(x), s ∈ Q, x ∈ G_Q). Now, for s ∈ R, x ∈ G, let K^{m_+,m_−}_{s,t}(x) = δ_{x + ~e(x)ε(x)W_{s,t}} if s ≤ t ≤ τ_{s,x}. If t > τ_{s,x}, let v ∈ Q, y ∈ G_Q be such that v < g_{s,t}(x) and Z_{s,r}(x) = Z_{v,r}(y) for all r ≥ g_{s,t}(x), and define K^{m_+,m_−}_{s,t}(x) = K^{m_+,m_−}_{v,t}(y). In the next section we will show that K^{m_+,m_−} is a stochastic flow of kernels on G which solves (E).

3.2.3 Construction of (K^{m_+,m_−}, φ) by filtering.

Let m_+ and m_− be two probability measures as in Theorem 2 and let (~γ^+, U^+), (~γ^−, U^−) be two independent random variables satisfying

U^+ = (U^{+,i})_{1≤i≤p} ∼ m_+, U^− = (U^{−,j})_{p+1≤j≤N} ∼ m_−,

P(~γ^+ = ~e_i | U^+) = U^{+,i} for all i ∈ [1, p], (17)

and

P(~γ^− = ~e_j | U^−) = U^{−,j} for all j ∈ [p+1, N]. (18)

Then, in particular, (~γ^+, ~γ^−) and (U^+, U^−) satisfy (15) and (16) respectively. On a probability space (Ω, A, P), consider the following independent processes:
• W = (W_{s,t}, s ≤ t), a real white noise;
• {(~γ^+_{s,x}(r), U^+_{s,x}(r)), r ∈ D ∩ [s, +∞[, (s, x) ∈ Q × G_Q}, a family of independent copies of (~γ^+, U^+);
• {(~γ^−_{s,x}(r), U^−_{s,x}(r)), r ∈ D ∩ [s, +∞[, (s, x) ∈ Q × G_Q}, a family of independent copies of (~γ^−, U^−).
Now let φ and K^{m_+,m_−} be the processes constructed in Sections 3.2.1 and 3.2.2 respectively from (~γ^+, ~γ^−, W) and (U^+, U^−, W).
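In the kernel construction above, the single random ray ~γ^± of the flow of mappings is replaced by a random weight vector U^± on the simplex, so that each excursion spreads the mass |Z| over the rays. A minimal numerical sketch, with names and setting ours rather than the paper's: for the Dirac choice m_+ = δ_{(α_i/α_+)_i} the weights are deterministic and give the Wiener flow K^W of (19).

```python
def kernel_weights(sampler):
    """One excursion of K^{m+,m-} above 0 spreads the mass |Z| over the
    positive rays with a random weight vector U+ drawn from m+;
    `sampler` returns such a vector (a point of the simplex Delta_p)."""
    u = sampler()
    # A point of the simplex: nonnegative coordinates summing to 1.
    assert all(w >= 0 for w in u) and abs(sum(u) - 1.0) < 1e-9
    return u

# Dirac choice m+ = delta_{(alpha_i/alpha_+)}: deterministic weights
# alpha_i / alpha_+, the Wiener-flow case.
alphas, p = [0.5, 0.3, 0.2], 2
a_plus = sum(alphas[:p])
u = kernel_weights(lambda: [a / a_plus for a in alphas[:p]])
```

Replacing the lambda by a genuinely random sampler (e.g. a Dirichlet draw) models a non-Dirac m_+, with one independent draw per excursion as in the construction.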
Let σ(U^+, U^−, W) be the σ-field generated by {U^+_{s,x}(r), U^−_{s,x}(r), r ∈ D ∩ [s, +∞[, (s, x) ∈ Q × G_Q} and W. We then have the

Proposition 5. (i) For every bounded measurable function f on G, s ≤ t ∈ R, x ∈ G, with probability 1,

K^{m_+,m_−}_{s,t} f(x) = E[f(φ_{s,t}(x)) | σ(U^+, U^−, W)].

(ii) For all s, x, with probability 1, for all t ≥ s, |φ_{s,t}(x)| = |Z_{s,t}(x)|, φ_{s,t}(x) ∈ G_+ ⇔ Z_{s,t}(x) ≥ 0 and φ_{s,t}(x) ∈ G_− ⇔ Z_{s,t}(x) ≤ 0.
(iii) For all s and x ≠ y, with probability 1,

t_0 := inf{r ≥ s : φ_{s,r}(x) = φ_{s,r}(y)} = inf{r ≥ s : Z_{s,r}(x) = Z_{s,r}(y) = 0}

and φ_{s,r}(x) = φ_{s,r}(y) for all r ≥ t_0.

Proof. (i) follows from (17), (18) and the definition of our flows; (ii) is clear by construction. By (ii), coalescence of φ_{s,·}(x) and φ_{s,·}(y) can only occur at 0, so (iii) is clear.

Next we prove that φ is a stochastic flow of mappings on G. It remains to check that properties (1) and (4) of the definition are satisfied. As in Lemma 1, property (4) can be derived from the following

Lemma 5. For all t ≥ s, ε > 0 and x ∈ G, we have lim_{y→x} P(d(φ_{s,t}(x), φ_{s,t}(y)) ≥ ε) = 0.

Proof.
We take s = 0. Notice that for all z ∈ R we have Y_{0,t}(z) = z + W_t if 0 ≤ t ≤ τ_{0,z}. Fix ε > 0, x ∈ G_+ \ {0} and y on the same ray as x with |y| > |x| and d(y, x) ≤ ε. Then d(φ_{0,t}(x), φ_{0,t}(y)) = d(x, y) ≤ ε for 0 ≤ t ≤ τ_{0,|x|} ∧ τ_{0,|y|} (= τ_{0,|x|} in our case). By Proposition 5 (iii), we have φ_{0,t}(x) = φ_{0,t}(y) if t ≥ T^{|x|,|y|}. This shows that {d(φ_{0,t}(x), φ_{0,t}(y)) ≥ ε} ⊂ {τ_{0,|x|} < t < T^{|x|,|y|}} a.s. By Lemma 2,

P(d(φ_{0,t}(x), φ_{0,t}(y)) ≥ ε) ≤ P(τ_{0,|x|} < t < T^{|x|,|y|}) → 0 as y → x, |y| > |x|.

In the same way, P(d(φ_{0,t}(x), φ_{0,t}(y)) ≥ ε) ≤ P(τ_{0,|y|} < t < T^{|x|,|y|}) → 0 as y → x, |y| < |x|. The case x ∈ G_− is handled similarly.

Proposition 6. For all s < t < u and x ∈ G: φ_{s,u}(x) = φ_{t,u}(φ_{s,t}(x)) a.s.

Proof. Set y = φ_{s,t}(x). Then, with probability 1, for all r ≥ t, Y_{s,r}(ε(x)|x|) = Y_{t,r}(Y_{s,t}(ε(x)|x|)), and so, a.s., Z_{s,r}(x) = Z_{t,r}(y) for all r ≥ t. All the equalities below hold a.s.
• u ≤ τ_{s,x}. We have τ_{t,y} = inf{r ≥ t : Z_{t,r}(y) = 0} = inf{r ≥ t : Z_{s,r}(x) = 0} = τ_{s,x}. Consequently u ≤ τ_{t,y} and φ_{s,u}(x) = ~e(x)|Z_{s,u}(x)| = ~e(y)|Z_{t,u}(y)| = φ_{t,u}(y) = φ_{t,u}(φ_{s,t}(x)).
• t ≤ τ_{s,x} < u. We still have τ_{t,y} = τ_{s,x}, and so g_{t,u}(y) = g_{s,u}(x). It is clear by construction that φ_{s,u}(x) = φ_{t,u}(y) = φ_{t,u}(φ_{s,t}(x)).
• τ_{s,x} < t, τ_{t,y} ≤ u. Since τ_{t,y} is a common zero of (Z_{s,r}(x))_{r≥s} and (Z_{t,r}(y))_{r≥t} before u, it follows that g_{t,u}(y) = g_{s,u}(x), and therefore φ_{s,u}(x) = φ_{t,u}(y) = φ_{t,u}(φ_{s,t}(x)).
• τ_{s,x} < t, u < τ_{t,y}. In this case, we have φ_{t,u}(y) = ~e(y)|Z_{t,u}(y)| = ~e(y)|Z_{s,u}(x)|. Since r ↦ Z_{s,r}(x) does not touch 0 on the interval [t, u] and φ_{s,t}(x) = y, we easily see that φ_{s,u}(x) = ~e(y)|Z_{s,u}(x)| = φ_{t,u}(y).

Proposition 7. φ is a coalescing solution of (E).

Proof.
We use the notations Y_u := Y_{0,u}(0), φ_u := φ_{0,u}(0). We first show that φ is a W(α_1, …, α_N) on G. Define, for all n ≥ 0, T^n_0(Y) = 0 and, for k ≥ 0,

T^n_{k+1}(Y) = inf{r ≥ T^n_k(Y) : d(φ_r, φ_{T^n_k}) = 1/2^n} = inf{r ≥ T^n_k(Y) : |Y_r − Y_{T^n_k}| = 1/2^n} = inf{r ≥ T^n_k(Y) : ||Y_r| − |Y_{T^n_k}|| = 1/2^n}.

Remark that |Y| is a reflected Brownian motion, and denote T^n_k(Y) simply by T^n_k. From the proof of Proposition 2, lim_{n→+∞} sup_{t≤K} |T^n_{⌊2^{2n}t⌋} − t| = 0 a.s. for all K > 0. Set φ^n_k = 2^n φ_{T^n_k}. Then, since almost surely t ↦ φ_t is continuous, a.s. for all t ≥ 0, lim_{n→+∞} 2^{−n} φ^n_{⌊2^{2n}t⌋} = φ_t. By Proposition 2, it remains to show that for all n ≥ 0, (φ^n_k, k ≥ 0) is a Markov chain (started at 0) whose transition mechanism is described by (8). If Y^n_k = 2^n Y_{T^n_k}, then, by the proof of Proposition 2 (since the SBM is a special case of W(α_1, …, α_N)), for all n ≥ 0, (Y^n_k)_{k≥0} is a Markov chain on Z started at 0 whose law is described by

Q(0, 1) = 1 − Q(0, −1) = α_+, Q(m, m+1) = Q(m, m−1) = 1/2 for all m ≠ 0.

Let k ≥ 1 and x_0, …, x_k ∈ G be such that x_0 = x_k = 0 and |x_{h+1} − x_h| = 1 for h ∈ [0, k−1]. Write {h ∈ [0, k] : x_h = 0} = {i_0, …, i_q} with i_0 = 0 < i_1 < ⋯ < i_q = k, so that

{x_h : x_h ≠ 0, h ∈ [1, k]} = {x_h}_{h∈[i_0+1, i_1−1]} ∪ ⋯ ∪ {x_h}_{h∈[i_{q−1}+1, i_q−1]}.

Assume that {x_h}_{h∈[i_0+1, i_1−1]} ⊂ D_{j_0}, …, {x_h}_{h∈[i_{q−1}+1, i_q−1]} ⊂ D_{j_{q−1}}, and define

A^n_h = (Y^n_h = ε(x_h)|x_h|), E = (~e(φ^n_{i_0+1}) = ~e_{j_0}, …, ~e(φ^n_{i_{q−1}+1}) = ~e_{j_{q−1}}).

If i ∈ [1, p], we have

(φ^n_{k+1} = ~e_i, φ^n_k = x_k, …, φ^n_0 = x_0) = ⋂_{h=0}^k A^n_h ∩ (Y^n_{k+1} − Y^n_k = 1) ∩ E ∩ (~e(φ^n_{k+1}) = ~e_i)

and (φ^n_k = x_k, …, φ^n_0 = x_0) = ⋂_{h=0}^k A^n_h ∩ E. Now

P(φ^n_{k+1} = ~e_i | φ^n_0 = x_0, …, φ^n_k = 0) = (α_i/α_+) P(Y^n_{k+1} − Y^n_k = 1 | Y^n_k = 0) = α_i.

Obviously, the previous argument can be applied to show that the transition probabilities of (φ^n_k, k ≥ 0) are given by (8), and so φ is a W(α_1, …, α_N) on G started at 0. Using (9) for φ, it follows that for all f ∈ D(α_1, ⋯, α_N),

f(φ_t) = f(0) + ∫_0^t f′(φ_s) dB_s + (1/2) ∫_0^t f″(φ_s) ds,

where B_t = |φ_t| − L̃_t(|φ|) = |Y_t| − L̃_t(|Y|) = ∫_0^t ~sgn(Y_s) dY_s, by Tanaka's formula for the symmetric local time. But Y solves (3), and therefore ∫_0^t ~sgn(Y_s) dY_s = ∫_0^t ~sgn(Y_s) W(ds). Since a.s. ~sgn(Y_s) = ε(φ_s) for all s ≥ 0, it follows that for all f ∈ D(α_1, ⋯, α_N),

f(φ_{0,t}(x)) = f(x) + ∫_0^t f′(φ_{0,s}(x)) ε(φ_{0,s}(x)) W(ds) + (1/2) ∫_0^t f″(φ_{0,s}(x)) ds

when x = 0. Finally, by distinguishing the cases t ≤ τ_{0,x} and t > τ_{0,x}, we see that the previous equation is also satisfied for x ≠ 0.

Corollary 1. K^{m_+,m_−} is a stochastic flow of kernels solution of (E).

Proof. By Proposition 5 (i) and Jensen's inequality, K^{m_+,m_−} is a stochastic flow of kernels. The fact that K^{m_+,m_−} solves (E) is a consequence of the previous proposition and is similar to Lemma 4.6 [12].

Remarks 4. (i) Define K̂_{s,t}(x, y) = K^{m_+,m_−}_{s,t}(x) ⊗ δ_{φ_{s,t}(y)}. Then K̂ is a stochastic flow of kernels on G².
(ii) If (m_+, m_−) = (δ_{(α_1/α_+, …, α_p/α_+)}, δ_{(α_{p+1}/α_−, …, α_N/α_−)}), then

K^W_{s,t}(x) = δ_{x + ~e(x)ε(x)W_{s,t}} 1_{{t ≤ τ_{s,x}}} (19)
+ (Σ_{i=1}^p (α_i/α_+) δ_{~e_i|Z_{s,t}(x)|} 1_{{Z_{s,t}(x) > 0}} + Σ_{i=p+1}^N (α_i/α_−) δ_{~e_i|Z_{s,t}(x)|} 1_{{Z_{s,t}(x) ≤ 0}}) 1_{{t>τ_{s,x}}}

is a Wiener solution of (E).
(iii) If (m_+, m_−) = (Σ_{i=1}^p (α_i/α_+) δ_{(0,…,0,1,0,…,0)}, Σ_{i=p+1}^N (α_i/α_−) δ_{(0,…,0,1,0,…,0)}), then K^{m_+,m_−} = δ_φ.

Let K be a solution of (E) and fix s ∈ R, x ∈ G. Then (K_{s,t}(x))_{t≥s} can be modified in such a way that, a.s., the mapping t ↦ K_{s,t}(x) is continuous from [s, +∞[ into P(G). We will always consider this modification of (K_{s,t}(x))_{t≥s}.

Lemma 6.
Let (K, W) be a solution of (E). Then for all x ∈ G, s ∈ R, a.s.,

K_{s,t}(x) = δ_{x + ~e(x)ε(x)W_{s,t}} if s ≤ t ≤ τ_{s,x}, where τ_{s,x} = inf{r ≥ s : ε(x)|x| + W_{s,r} = 0}.

Proof.
We follow [12] (Lemma 3.1). Assume x ≠ 0, x ∈ D_i, 1 ≤ i ≤ p, and take s = 0. Let β_i = 1 and choose numbers (β_j)_{1≤j≤N, j≠i} such that Σ_{j=1}^N β_j α_j = 0. If f(h~e_j) = β_j h for all 1 ≤ j ≤ N, then f ∈ D(α_1, ⋯, α_N). Set τ̃_x = inf{r : K_{0,r}(x)(∪_{j≠i} D_j) > 0} and apply f in (E) to get

∫_{D_i\{0}} |y| K_{0,t}(x, dy) = |x| + W_t for all t ≤ τ̃_x. (20)

By applying f_k(y) = |y|² e^{−|y|/k}, k ≥ 1, in (E), we have for all t ≥ 0

K_{0,t∧τ̃_x} f_k(x) = f_k(x) + ∫_0^t 1_{[0,τ̃_x]}(u) K_{0,u}(ε f′_k)(x) W(du) + (1/2) ∫_0^{t∧τ̃_x} K_{0,u} f″_k(x) du.

As k → ∞, K_{0,t∧τ̃_x} f_k(x) tends to ∫ |y|² K_{0,t∧τ̃_x}(x, dy) by monotone convergence. Let A > 0 be such that xe^{−x} ≤ A for all x ≥ 0. Since |f′_k(y) − 2|y|| ≤ (4 + A)|y|,

∫_0^t 1_{[0,τ̃_x]}(u) K_{0,u}(ε f′_k)(x) W(du) → 2 ∫_0^t 1_{[0,τ̃_x]}(u) ∫_G |y| K_{0,u}(x, dy) W(du)

as k → ∞, using (20) and dominated convergence for stochastic integrals ([15], page 142). Since f″_k(y) → 2 pointwise and the f″_k are uniformly bounded (again by xe^{−x} ≤ A), we get ∫_0^{t∧τ̃_x} K_{0,u} f″_k(x) du → 2(t ∧ τ̃_x) as k → ∞. By identifying the limits and using (20) together with Itô's formula for (|x| + W_t)², we have

∫_{D_i\{0}} (|y| − |x| − W_t)² K_{0,t}(x, dy) = 0 for all t ≤ τ̃_x.

This proves that for t ≤ τ̃_x, K_{0,t}(x) = δ_{x + ~e(x)W_t}. The fact that τ_{0,x} = τ̃_x easily follows. The previous lemma entails the following

Corollary 2. If (K, W) is a solution of (E), then σ(W) ⊂ σ(K).

Proof. For all x ∈ D_1, we have K_{0,t}(x) = δ_{~e_1(|x| + W_t)} if t ≤ τ_{0,x}. If f is a positive function on G such that f(h~e_1) = h, then W_t = K_{0,t} f(x) − |x| for all t ≤ τ_{0,x}, x ∈ D_1. By considering a sequence (x_k)_{k≥0} converging to ∞, this shows that σ(W_t) ⊂ σ(K_{0,t}(y), y ∈ D_1).

In order to complete the proof of Theorem 1, we will prove the following
Proposition 8.
Equation ( E ) has at most one Wiener solution: If K and K ′ are two Wienersolutions, then for all s ≤ t, x ∈ G, K s,t ( x ) = K ′ s,t ( x ) a.s.Proof. Denote by P the semigroup of W ( α , · · · , α N ), A and D ( A ) being respectively its gen-erator and its domain on C ( G ). Recall the definition of D ′ ( α , · · · , α N ) from (10) and that ∀ t > P t ( C ( G )) ⊂ D ′ ( α , · · · , α N ) ⊂ D ( A )27see Proposition 3). Define S = { f : G −→ R : f, f ′ , f ′′ ∈ C b ( G ∗ ) and are extendable by continuity at 0 on each ray,lim x →∞ f ( x ) = 0 } .For t > , h a measurable bounded function on G ∗ , let λ t h ( x ) = 2 p t h j ( | x | ), if x ∈ D j , where h j is the extension of h j that equals 0 on ] − ∞ , P :( P t f ) ′ = − P t f ′ + λ t f ′ on G ∗ for all f ∈ S . (21)Fix f ∈ S . We will verify that ( P t f ) ′ ∈ S . For x = h~e j ∈ G ∗ , we have( P t f ) ′ ( x ) = − N X i =1 α i Z R f ′ i ( y − h ) p t (0 , y ) dy + Z R f ′ j ( y + h ) p t (0 , y ) dy + Z R f ′ j ( y − h ) p t (0 , y ) dy Clearly ( P t f ) ′ ∈ C b ( G ∗ ) and is extendable by continuity at 0 on each ray. Furthermore, a simpleintegration by parts yields Z R f ′ j ( y + h ) p t (0 , y ) dy = C Z R f j ( y + h ) yp t (0 , y ) dy for some C ∈ R and since lim x →∞ f ( x ) = 0, we get lim x →∞ ( P t f ) ′ ( x ) = 0. It is also easy to check that( P t f ) ′′ , ( P t f ) ′′′ ∈ C b ( G ∗ ) and are extendable by continuity at 0 on each ray which shows that( P t f ) ′ ∈ S .Let ( K, W ) be a stochastic flow that solves ( E ) (not necessarily a Wiener flow) and fix x = h~e j ∈ G ∗ . Our aim now is to establish the following identity K ,t f ( x ) = P t f ( x ) + Z t K ,u ( D ( P t − u f ))( x ) W ( du ) (22)where Dg ( x ) = ε ( x ) .g ′ ( x ). Note that R t K ,u ( D ( P t − u f ))( x ) W ( du ) is well defined. In fact Z t E [ K ,u ( D ( P t − u f ))( x )] du ≤ Z t P u (( D ( P t − u f )) )( x ) du ≤ Z t || ( P t − u f ) ′ || ∞ du and the right-hand side is bounded since (21) is satisfied and f ′ is bounded. 
Set g = P ǫ f = P ǫ P ǫ f . Then, since P ǫ f ∈ C ( G ) (lim x →∞ P ǫ f ( x ) = 0 comes from lim x →∞ f ( x ) = 0), we have g ∈ D ′ ( α , · · · , α N ). Now K ,t g ( x ) − P t g ( x ) − Z t K ,u ( D ( P t − u g ))( x ) W ( du ) = n − X p =0 ( K , ( p +1) tn P t − ( p +1) tn g − K , ptn P t − ptn g )( x )28 n − X p =0 Z ( p +1) tnptn K ,u D (( P t − u − P t − ( p +1) tn ) g )( x ) W ( du ) − n − X p =0 Z ( p +1) tnptn K ,u D ( P t − ( p +1) tn g )( x ) W ( du ) . For all p ∈ { , .., n − } , g p,n = P t − ( p +1) tn g ∈ D ′ ( α , · · · , α N ) and so by replacing in ( E ), we get Z ( p +1) tnptn K ,u Dg p,n ( x ) W ( du ) = K , ( p +1) tn g p,n ( x ) − K , ptn g p,n ( x ) − Z ( p +1) tnptn K ,u Ag p,n ( x ) du = K , ( p +1) tn g p,n ( x ) − K , ptn g p,n ( x ) − tn K , ptn Ag p,n ( x ) − Z ( p +1) tnptn ( K ,u − K , ptn ) Ag p,n ( x ) du Then we can write K ,t g ( x ) − P t g ( x ) − Z t K ,u ( D ( P t − u g ))( x ) W ( du ) = A ( n ) + A ( n ) + A ( n ) , where A ( n ) = − n − X p =0 K , ptn [ P t − ptn g − P t − ( p +1) tn g − tn .AP t − ( p +1) tn g ]( x ) ,A ( n ) = − n − X p =0 Z ( p +1) tnptn K ,u D (( P t − u − P t − ( p +1) tn ) g )( x ) W ( du ) ,A ( n ) = n − X p =0 Z ( p +1) tnptn ( K ,u − K , ptn ) AP t − ( p +1) tn g ( x ) du. Using || K ,u f || ∞ ≤ || f || ∞ if f is a bounded measurable function, we obtain | A ( n ) | ≤ n − X p =0 || P t − ( p +1) tn [ P tn g − g − tn .Ag ] || ∞ ≤ n || P tn g − g − tn .Ag || ∞ , with n || P tn g − g − tn .Ag || ∞ = t. || P t n g − gt n − Ag || ∞ ( t n := tn ) . Since g ∈ D ( A ), this shows that A ( n ) converges to 0 as n → ∞ . Note that A ( n ) is the sumof orthogonal terms in L (Ω). Consequently || A ( n ) || L (Ω) = n − X p =0 || Z ( p +1) tnptn K ,u D (( P t − u − P t − ( p +1) tn ) g )( x ) W ( du ) || L (Ω) . By applying Jensen inequality, we arrive at || A ( n ) || L (Ω) ≤ n − X p =0 Z ( p +1) tnptn P u V u ( x ) du V u = ( P t − u g ) ′ − ( P t − ( p +1) tn g ) ′ . 
By (21), one can decompose V u as follows: V u = X u + Y u ; X u = − P t − u g ′ + P t − ( p +1) tn g ′ , Y u = λ t − u g ′ − λ t − ( p +1) tn g ′ Using the trivial inequality ( a + b ) ≤ a + 2 b , we obtain: P u V u ( x ) ≤ P u X u ( x ) + 2 P u Y u ( x )and so || A ( n ) || L (Ω) ≤ B ( n ) + 2 B ( n )where B ( n ) = n − X p =0 Z ( p +1) tnptn P u X u ( x ) du, B ( n ) = n − X p =0 Z ( p +1) tnptn P u Y u ( x ) du .If p ∈ [0 , n −
1] and u ∈ [ ptn , ( p +1) tn ], then P u X u ( x ) ≤ P u + t − p +1 n t ( g ′ − P p +1 n t − u g ′ ) ( x ). The changeof variable v = ( p + 1) t − nu yields B ( n ) ≤ Z t P t − vn ( P vn g ′ − g ′ ) ( x ) dv ≤ Z t ( P t g ′ ( x ) − P t − vn ( g ′ P vn g ′ )( x ) + P t − vn g ′ ( x )) dv. By writing P t − vn ( g ′ P vn g ′ )( x ) as a function of p , we prove that lim n →∞ P t − vn ( g ′ P vn g ′ )( x ) = P t g ′ ( x ). Since g ′ is bounded, by dominated convergence this shows that B ( n ) tends to 0as n → + ∞ . For B ( n ), we write P u Y u ( x ) = 2 N X i =1 α i p u (( Y u ) i )( −| x | ) + p u (( Y u ) j )( | x | ) − p u (( Y u ) j )( −| x | )where ( Y u ) i = 2 p t − u g ′ i − p t − ( p +1) tn g ′ i , defined on R ∗ + . It was shown before that this quantitytends to 0 as n → + ∞ when ( p, g ′ i ) is replaced by ( P, g ′ ) in general and consequently B ( n )tends to 0 as n → + ∞ . Now || A ( n ) || L (Ω) ≤ n − X p =0 || Z ( p +1) tnptn ( K ,u − K , ptn ) AP t − ( p +1) tn g ( x ) du || L (Ω) . Set h p,n = AP t − ( p +1) tn g . Then h p,n ∈ D ′ ( α , · · · , α N ) for all p ∈ [0 , n −
1] (if p = n − h p,n = P ǫ AP ǫ f ). By the Cauchy-Schwarz inequality || A ( n ) || L (Ω) ≤ √ t ( n − X p =0 Z ( p +1) tnptn E [(( K ,u − K , ptn ) h p,n ( x )) ] du ) . u ∈ [ ptn , ( p +1) tn ]: E [(( K ,u − K , ptn ) h p,n ( x )) ] ≤ E [ K , ptn ( K ptn ,u h p,n − h p,n ) ( x )] ≤ E [ K , ptn ( K ptn ,u h p,n − h p,n K ptn ,u h p,n + h p,n )( x )] ≤ || P u − ptn h p,n − h p,n P u − ptn h p,n + h p,n || ∞ ≤ || h p,n || ∞ || P u − ptn h p,n − h p,n || ∞ + || P u − ptn h p,n − h p,n || ∞ . Therefore || A ( n ) || L (Ω) ≤ √ t (2 C ( n ) + C ( n )) , where C ( n ) = n − X p =0 || h p,n || ∞ Z ( p +1) tnptn || P u − ptn h p,n − h p,n || ∞ du, C ( n ) = n − X p =0 Z ( p +1) tnptn || P u − ptn h p,n − h p,n || ∞ du. From || h p,n || ∞ ≤ || Ag || ∞ and || P u − ptn h p,n − h p,n || ∞ ≤ || P u − ptn Ag − Ag || ∞ , we get C ( n ) ≤ || Ag || ∞ n − X p =0 Z ( p +1) tnptn || P u − ptn Ag − Ag || ∞ du ≤ || Ag || ∞ Z t || P zn Ag − Ag || ∞ dz. As Ag ∈ C ( G ), C ( n ) tends to 0 obviously. On the other hand, h p,n ∈ D ( α , · · · , α N ) (thiscan be easily verified since h p,n is continuous and N X i =0 α i ( h p,n ) ′ i (0+) = 0). We may apply (9) toget C ( n ) = n n − X p =0 Z t || P zn h p,n − h p,n || ∞ dz ≤ n n − X p =0 Z t Z zn || ( h p,n ) ′′ || ∞ dudz .Now we verify that h ′ p,n , h ′′ p,n are uniformly bounded with respect to n and 0 ≤ p ≤ n −
1. Infact || h ′′ p,n || ∞ = || Ah p,n || ∞ ≤ || AP ǫ f || ∞ . Write h p,n = P t − p +1 n t + ǫ P ǫ AP ǫ f where P ǫ AP ǫ f ∈ D ′ ( α , · · · , α N ). Then, by (21), || h ′ p,n || ∞ is uniformly bounded with respect to n, p ∈ [0 , n − || ( h p,n ) ′′ || ∞ . As a result C ( n ) tends to 0 as n → ∞ . Finally K ,t g ( x ) = P t g ( x ) + Z t K ,u ( D ( P t − u g ))( x ) W ( du ) . Now, let ǫ go to 0, then K ,t g ( x ) tends to K ,t f ( x ) in L (Ω). Furthermore || Z t K ,u ( D ( P t − u g ))( x ) W ( du ) − Z t K ,u ( D ( P t − u f ))( x ) W ( du ) || L (Ω) ≤ Z t P u (( P t − u g ) ′ − ( P t − u f ) ′ ) ( x ) du. Using the derivation formula (21), the right side may be decomposed as I ǫ + J ǫ , where I ǫ = Z t P u ( P t − u g ′ − P t − u f ′ ) ( x ) du, J ǫ = Z t P u ( λ t − u g ′ − λ t − u f ′ ) ( x ) du.
By Jensen's inequality, I_ε ≤ t P_t((g′ − f′)²)(x). Since g′(y) = −P_{2ε} f′(y) + 2λ_{2ε} f′(y) → f′(y) as ε → 0, P_t(x, dy) a.s., we get I_ε → 0 as ε → 0. Similarly, J_ε tends to
0. This establishes (22). Now assume that (
K, W ) is a Wiener solution of ( E )and let f ∈ S . Since K ,t f ( x ) ∈ L ( F W , · ∞ ) , let K ,t f ( x ) = P t f ( x ) + P ∞ n =1 J nt f ( x ) be thedecomposition in Wiener chaos of K ,t f ( x ) in L sense ([15] page 202). By iterating (22) (recallthat ( P t f ) ′ ∈ S ), we see that for all n ≥ J nt f ( x ) = Z
We already know that K^W given by (19) is a Wiener solution of (E). Since σ(W) ⊂ σ(K), we can define K*, the stochastic flow obtained by filtering K with respect to σ(W) (Lemma 3-2 (ii) in [11]). Then, for all s ≤ t, x ∈ G, K*_{s,t}(x) = E[K_{s,t}(x) | σ(W)] a.s. As a result, (K*, W) also solves (E), and by the last proposition we have

E[K_{s,t}(x) | σ(W)] = K^W_{s,t}(x) a.s. for all s ≤ t, x ∈ G. (23)

From now on, (K, W) is a solution of (E) defined on (Ω, A, P). Let P^n_t = E[K^{⊗n}_{0,t}] be the compatible family of Feller semigroups associated with K. We retain the notations introduced in Section 3 for all functions of W (Y_{s,t}(x), Z_{s,t}(x), g_{s,t}(x), …). In the next section, starting from K, we construct a flow of mappings φ^c which is a solution of (E). This flow will play an important role in characterizing the law of K.

Let x ∈ G, t >
0. By (23), on {t > τ_{0,x}}, K_{0,t}(x) is supported on {|Z_{0,t}(x)| ~e_i, 1 ≤ i ≤ p} if Z_{0,t}(x) > 0, and on {|Z_{0,t}(x)| ~e_i, p+1 ≤ i ≤ N} if Z_{0,t}(x) ≤ 0. In [11] (Section 2.6), the n-point motion X^n started at (x_1, …, x_n) ∈ G^n and associated with P^n has been constructed on an extension Ω × Ω′ of Ω such that the law of ω′ ↦ X^n_t(ω, ω′) is given by K_{0,t}(x_1, dy_1) ⋯ K_{0,t}(x_n, dy_n). For each (x, y) ∈ G², let (X^x_t, Y^y_t)_{t≥0} be the two-point motion started at (x, y) associated with P² as above. Then |X^x_t| = |Z_{0,t}(x)| and |Y^y_t| = |Z_{0,t}(y)| for all t ≥ 0, and T^{x,y} := inf{r ≥ 0 : X^x_r = Y^y_r} < +∞ a.s.

To (P^n)_{n≥1}, we associate a compatible family of Markovian coalescent semigroups (P^{n,c})_{n≥1} as described in [11] (Theorem 4.1). Let X^n be the n-point motion started at (x_1, …, x_n) ∈ G^n, and denote the i-th coordinate of X^n_t by X^n_t(i). Let

T_1 = inf{u ≥ 0 : ∃ i < j, X^n_u(i) = X^n_u(j)}, X^{n,c}_t := X^n_t, t ∈ [0, T_1].

Suppose that X^n_{T_1}(i) = X^n_{T_1}(j) with i < j. Then define the process X^{n,1} by X^{n,1}_t(h) = X^n_t(h) for h ≠ j and X^{n,1}_t(j) = X^{n,1}_t(i), t ≥ T_1. Note that the i-th and j-th coordinates of X^{n,1} are equal. Now set

T_2 = inf{u ≥ T_1 : ∃ h < k, h ≠ j, k ≠ j, X^{n,1}_u(h) = X^{n,1}_u(k)}.

For t ∈ [T_1, T_2], we define X^{n,c}_t = X^{n,1}_t, and so on. In this way, we construct a Markov process X^{n,c} such that for all i, j ∈ [1, n], X^{n,c}(i) and X^{n,c}(j) meet after a finite time and then stick together. Let P^{n,c}_t(x_1, …, x_n, dy) be the law of X^{n,c}_t. Then we have:

Lemma 7. (P^{n,c})_{n≥1} is a compatible family of Feller semigroups associated with a coalescing flow of mappings φ^c.

Proof. By Theorem 4.1 [11], it suffices to check that for all t > 0, ε > 0, x ∈ G,

lim_{y→x} P({T^{x,y} > t} ∩ {d(X^x_t, Y^y_t) > ε}) = 0. (C)

As |X^x_u| = |Z_{0,u}(x)|, |Y^y_u| = |Z_{0,u}(y)| for all u ≥
0, we have {t < T^{x,y}} ⊂ {t < T^{ε(x)|x|, ε(y)|y|}}. For y close to x, {d(X^x_t, Y^y_t) > ε} ⊂ {inf(τ_{0,x}, τ_{0,y}) < t}. Now (C) holds from Lemma 2.

Consequence: Let ν (respectively ν^c) be the Feller convolution semigroup associated with (P^n)_{n≥1} (respectively (P^{n,c})_{n≥1}). By the proof of Theorem 4.2 [12], there exists a joint realization (K¹, K²), where K¹ and K² are two stochastic flows of kernels such that K¹ is distributed as δ_{φ^c}, K² is distributed as K, and:
(i) K̂_{s,t}(x, y) = K¹_{s,t}(x) ⊗ K²_{s,t}(y) is a stochastic flow of kernels on G²;
(ii) for all s ≤ t, x ∈ G, K²_{s,t}(x) = E[K¹_{s,t}(x) | K²] a.s.
For s ≤ t, let

F̂_{s,t} = σ(K̂_{u,v}, s ≤ u ≤ v ≤ t), F^i_{s,t} = σ(K^i_{u,v}, s ≤ u ≤ v ≤ t), i = 1, 2.

Then F̂_{s,t} = F¹_{s,t} ∨ F²_{s,t}. To simplify notations, we shall assume that φ^c is defined on the original space (Ω, A, P) and that (i) and (ii) are satisfied with (K¹, K²) replaced by (δ_{φ^c}, K). Recall that (i) and (ii) are also satisfied by the pair (δ_φ, K^{m_+,m_−}) constructed in Section 3. Now

K_{s,t}(x) = E[δ_{φ^c_{s,t}(x)} | K] a.s. for all s ≤ t, x ∈ G, (24)

and using (23), we obtain

K^W_{s,t}(x) = E[δ_{φ^c_{s,t}(x)} | σ(W)] a.s. for all s ≤ t, x ∈ G, (25)

with K^W being the Wiener flow given by (19).

Proposition 9.
The stochastic flow φ^c solves (E).

Proof. Fix t > 0, x ∈ G. By (25), δ_{φ^c_{0,t}(x)} is supported on {|Z_{0,t}(x)| ~e_j, 1 ≤ j ≤ N} a.s., and so |φ^c_{0,t}(x)| = |Z_{0,t}(x)|. Similarly, using (25), we have

φ^c_{0,t}(x) ∈ G_+ ⇔ Z_{0,t}(x) ≥ 0, φ^c_{0,t}(x) ∈ G_− ⇔ Z_{0,t}(x) ≤ 0. (26)

Consequently ε(φ^c_{0,t}(x)) = ~sgn(Z_{0,t}(x)) a.s. Since φ^c_{0,·}(x) is a W(α_1, …, α_N) started at x, it satisfies Theorem 3: for all f ∈ D(α_1, …, α_N),

f(φ^c_{0,t}(x)) = f(x) + ∫_0^t f′(φ^c_{0,u}(x)) dB_u + (1/2) ∫_0^t f″(φ^c_{0,u}(x)) du a.s.,

where B_t = |φ^c_{0,t}(x)| − L̃_t(|φ^c_{0,·}(x)|) − |x| = |Z_{0,t}(x)| − L̃_t(|Z_{0,·}(x)|) − |x|. Tanaka's formula and (26) yield

B_t = ∫_0^t ~sgn(Z_{0,u}(x)) dZ_{0,u}(x) = ∫_0^t ~sgn(Z_{0,u}(x)) W(du) = ∫_0^t ε(φ^c_{0,u}(x)) W(du).

Likewise, for all s ≤ t, x ∈ G, f ∈ D(α_1, …, α_N),

f(φ^c_{s,t}(x)) = f(x) + ∫_s^t f′(φ^c_{s,u}(x)) ε(φ^c_{s,u}(x)) W(du) + (1/2) ∫_s^t f″(φ^c_{s,u}(x)) du a.s.

We will see later (Remark 3) that φ^c has the same law as φ, where φ is the stochastic flow of mappings constructed in Section 3.

For all t ≥ τ_{s,x}, set V^{+,i}_{s,t}(x) = K_{s,t}(x)(D_i \ {0}) for 1 ≤ i ≤ p, and

V^{−,N}_{s,t}(x) = K_{s,t}(x)(D_N), V^{−,i}_{s,t}(x) = K_{s,t}(x)(D_i \ {0}) for p+1 ≤ i ≤ N−1,

V^+_{s,t}(x) = (V^{+,i}_{s,t}(x))_{1≤i≤p}, V^−_{s,t}(x) = (V^{−,i}_{s,t}(x))_{p+1≤i≤N}, V_{s,t}(x) = (V^+_{s,t}(x), V^−_{s,t}(x)).

For s = 0, we use the abbreviated notations Z_t(x) = Z_{0,t}(x), V^+_t(x) = V^+_{0,t}(x), V^−_t(x) = V^−_{0,t}(x), V_t(x) = (V^+_t(x), V^−_t(x)), and if x = 0, Z_t = Z_{0,t}(0), V^+_t = V^+_{0,t}(0), V^−_t = V^−_{0,t}(0), V_t = (V^+_t, V^−_t). By (23), for all x ∈ G, s ≤ t, with probability 1,

K_{s,t}(x) = δ_{x + ~e(x)ε(x)W_{s,t}} 1_{{t ≤ τ_{s,x}}} + (Σ_{i=1}^p V^{+,i}_{s,t}(x) δ_{~e_i|Z_{s,t}(x)|} 1_{{Z_{s,t}(x) > 0}} + Σ_{i=p+1}^N V^{−,i}_{s,t}(x) δ_{~e_i|Z_{s,t}(x)|} 1_{{Z_{s,t}(x) ≤ 0}}) 1_{{t>τ_{s,x}}}.
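After τ_{s,x}, any solution K is thus a finite mixture of Dirac masses at the ray points ~e_i|Z_{s,t}(x)| with weights V^{±,i}_{s,t}(x). The following sketch, ours and not from the paper, integrates a test function against such a mixture; applied to a radial function it returns the radius unchanged, mirroring the fact that the radial part |Z_{s,t}(x)| is deterministic given W.

```python
def apply_kernel(z_abs, weights, f):
    """Integrate f(ray_index, radius) against the mixture
    sum_i weights[i] * delta_{e_i * z_abs}; `weights` plays the
    role of the ray masses (V^{+,i}) or (V^{-,i})."""
    # The weights form a probability vector: the kernel has total mass 1.
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f(i, z_abs) for i, w in enumerate(weights))

# A radial test function sees only |Z|: the kernel leaves it unchanged.
radial = apply_kernel(1.7, [0.25, 0.25, 0.5], lambda ray, r: r)
```

Non-radial test functions, by contrast, do feel the weights, which is exactly why the law of V characterizes the solution.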
Define

F^K_{s,t} = σ(K_{v,u}, s ≤ v ≤ u ≤ t), F^W_{s,t} = σ(W_{v,u}, s ≤ v ≤ u ≤ t),

and assume that all these σ-fields are right-continuous and contain all the P-negligible sets. When s = 0, we denote F^K_{0,t}, F^W_{0,t} simply by F^K_t, F^W_t. Recall that for all s ∈ R, x ∈ G, the mapping t ↦ K_{s,t}(x) from [s, +∞[ into P(G) is continuous. Then the following Markov property holds.

Lemma 8.
Let x, y ∈ G and let T be an (F^K_t)_{t≥0} stopping time such that K_{0,T}(x) = δ_y a.s. Then K_{0,·+T}(x) is independent of F^K_T and has the same law as K_{0,·}(y).

As a consequence of the preceding lemma, for each x ∈ G, K_{0,·+τ_{0,x}}(x) is independent of F^K_{τ_{0,x}} and is equal in law to K_{0,·}(0). Consider the random times T = inf{r ≥ 0 : Z_r = 1} and L = sup{r ∈ [0, T] : Z_r = 0}, and the σ-fields

F_{L−} = σ(X_L, X a bounded (F^W_t)_{t≥0}-previsible process),
F_{L+} = σ(X_L, X a bounded (F^W_t)_{t≥0}-progressive process).

Then F_{L+} = F_{L−} (Lemma 4.11 in [12]). Let f : R^N → R be a bounded continuous function and set X_t = E[f(V_t) | σ(W)]. Thanks to (24), the process r ↦ V_r is constant on the excursions of r ↦ Z_r. By following the same steps as in Section 4.2 of [12], we show that there is an F^W-progressive version of X that is constant on the excursions of Z out of 0 (Lemma 4.12 [12]). We take for X this version. Then X_T is F_{L+} measurable and E[X_T | F_{L−}] = E[f(V_T)] (Lemma 4.13 [12]). This implies that V_T is independent of σ(W) (Lemma 4.14 [12]), and the same holds if we replace T by inf{t ≥ 0 : Z_t = a}, where a > 0. Set T^+_{0,n} = 0 and, for k ≥ 1,

S^+_{k,n} = inf{t ≥ T^+_{k−1,n} : Z_t = 2^{−n}}, T^+_{k,n} = inf{t ≥ S^+_{k,n} : Z_t = 0}.

Set V^+_{k,n} = V^+_{S^+_{k,n}}. Then, we have the following

Lemma 9.
For all n ≥ 1, (V^+_{k,n})_{k≥1} is a sequence of i.i.d. random variables. Moreover, this sequence is independent of W.

Proof. For all k ≥ 1, V^+_{k,n} is σ(K_{0,T^+_{k−1,n}+t}(0), t ≥
0) measurable and V + k − ,n is F KT + k − ,n measurablewhich proves the first claim by Lemma 8. Now, we show by induction on q that ( V +1 ,n , · · · , V + q,n )is independent of σ ( W ). For q = 1, this has been justified. Suppose ( V +1 ,n , · · · , V + q − ,n ) isindependent of σ ( W ) and write σ ( W ,u , u ≥
0) = σ ( Z u ∧ T + q − ,n , u ≥ ∨ σ ( Z u + T + q − ,n , u > . Since ( V +1 ,n , · · · , V + q − ,n ) is F KT + q − ,n measurable and σ ( Z u + T + q − ,n , u > ∨ σ ( V + q,n ) ⊂ σ ( K ,T + q − ,n + t (0) , t ≥ , we conclude that ( V +1 ,n , · · · , V + q,n ) and σ ( W ) are independent.Let m + n be the common law of ( V + k,n ) k ≥ for each n ≥ m + as the law of V +1 under P ( . | Z > Lemma 10.
The sequence ( m + n ) n ≥ converges weakly towards m + . For all t > , under P ( ·| Z t > , V + t and W are independent and the law of V + t is given by m + .Proof. For each bounded continuous function f : R p −→ R ,E [ f ( V + t ) | W ]1 { Z t > } = lim n → ∞ X k E (cid:2) { t ∈ [ S + k,n ,T + k,n [ } f ( V + k,n ) | W (cid:3) = lim n → ∞ X k { t ∈ [ S + k,n ,T + k,n [ } (cid:18)Z f dm + n (cid:19) = [1 { Z t > } lim n → ∞ Z f dm + n + ε n ( t )]with lim n → ∞ ε n ( t ) = 0 a.s. Consequentlylim n → ∞ Z f dm + n = 1 P ( Z t > E [ f ( V + t )1 { Z t > } ] . The left-hand side does not depend on t , which completes the proof.We define analogously the measure m − by considering the following stopping times: T − ,n = 0and for k ≥ S − k,n = inf { t ≥ T − k − ,n : Z t = − − n } , T − k,n = inf { t ≥ S − k,n : Z t = 0 } . V − k,n = V − S − k,n and let m − n be the common law of ( V − k,n ) k ≥ . Denote by m − the law of V − under P ( . | Z < m − n ) n ≥ converges weakly towards m − . Moreover, forall t >
0, the law of V − t under P ( . | Z t <
0) is given by m − . As a result, we have E [ f ( V − t ) | W ]1 { Z t < } = 1 { Z t < } Z f dm − for each measurable bounded f : R N − p −→ R . If we follow the same steps as before but consider( Z u + τ ,x ( x ) , u ≥
0) for all x , we show that the law of V +0 ,t ( x ) under P ( . | Z ,t ( x ) > , t > τ ,x )does not depend on t >
0. Denote by m^+_x such a law. Then, thanks to Lemma 8, m^+_x does not depend on x ∈ G. Thus m^+_x = m^+ for all x, and

E[f(V^+_t(x)) | W] 1_{{Z_t(x) > 0, t > τ_{0,x}}} = 1_{{Z_t(x) > 0, t > τ_{0,x}}} ∫ f dm^+ (27)

for each measurable bounded f : R^p → R. Similarly,

E[h(V^−_t(x)) | W] 1_{{Z_t(x) < 0, t > τ_{0,x}}} = 1_{{Z_t(x) < 0, t > τ_{0,x}}} ∫ h dm^− (28)

for each measurable bounded h : R^{N−p} → R.

Define p(x) = |x| ~e_1 1_{{x ∈ G_+}} + |x| ~e_{p+1} 1_{{x ∈ G_−, x ≠ 0}}, x ∈ G. Fix x ∈ G, 0 < s < t, and let x_s = p(φ^c_{0,s}(x)). Then:
(i) φ^c_{s,r}(x) = x + ~e(x)ε(x)W_{s,r} for all r ≤ τ_{s,x} (from Lemma 6).
(ii) τ_{s,x} = τ_{s,p(x)} and φ^c_{s,r}(x) = φ^c_{s,r}(p(x)) for all r ≥ τ_{s,x}, since φ^c is a coalescing flow.
(iii) τ_{s,φ^c_{0,s}(x)} = τ_{s,x_s} and φ^c_{s,r}(φ^c_{0,s}(x)) = φ^c_{s,r}(x_s) for all r ≥ τ_{s,x_s}, by (ii) and the independence of the increments of φ^c.
(iv) On {t > τ_{s,x_s}}, φ^c_{0,t}(x) = φ^c_{s,t}(φ^c_{0,s}(x)) = φ^c_{s,t}(x_s), by the flow property of φ^c and (iii).
(v) Clearly τ_{s,x_s} = inf{r ≥ s : Z_{0,r}(x) = 0} a.s. Since {τ_{0,x} < s < g_{0,t}(x)} ⊂ {t > τ_{s,x_s}} a.s., we deduce that

P(φ^c_{0,t}(x) = φ^c_{s,t}(x_s) | τ_{0,x} < s < g_{0,t}(x)) = 1.

F̂_{0,s} and F̂_{s,t} are independent (K̂ is a flow) and F̂_{0,t} = F̂_{0,s} ∨ F̂_{s,t}. By (24), we have K_{s,t}(x_s) = E[δ_{φ^c_{s,t}(x_s)} | F^K_{0,t}], and as a result of (v),

P(K_{s,t}(x_s) = K_{0,t}(x) | τ_{0,x} < s < g_{0,t}(x)) = 1. (29)

Lemma 11.
Let P_{t,x_1,…,x_n} be the law of (K_{0,t}(x_1), …, K_{0,t}(x_n), W), where t ≥ 0 and x_1, …, x_n ∈ G. Then P_{t,x_1,…,x_n} is uniquely determined by {P_{u,x}, u ≥ 0, x ∈ G}.

Proof. We prove the lemma by induction on n. For n = 1, this is clear. Notice that if t < τ_{0,z}, then K_{0,t}(z) is σ(W) measurable, and if t > T^{z_1,z_2}_{0,0}, then K_{0,t}(z_1) = K_{0,t}(z_2). Suppose the result holds for n ≥ 1 and let x_{n+1} ∈ G. Then, by the previous remark, we only need to check that the law of (K_{0,t}(x_1), …, K_{0,t}(x_{n+1}), W) conditionally on A = {sup_{1≤i≤n+1} τ_{0,x_i} < t

Let (K^{m_+,m_−}, W′) be the solution constructed in Section 3 associated with (m_+, m_−). Then K has the same law as K^{m_+,m_−}.

Proof. From (27) and (28), (K_{0,t}(x), W) has the same law as (K^{m_+,m_−}_{0,t}(x), W′) for all t > 0, x ∈ G. Notice that all the properties (i)–(v) mentioned just above are satisfied by the flow φ constructed in Section 3, and consequently K^{m_+,m_−} also satisfies (29), by the same arguments. Following the same steps as in the proof of Lemma 11, we show by induction on n that

(K_{0,t}(x_1), …, K_{0,t}(x_n), W) has the same law as (K^{m_+,m_−}_{0,t}(x_1), …, K^{m_+,m_−}_{0,t}(x_n), W′)

for all t > 0, x_1, …, x_n ∈ G. This proves the proposition.

Remark 3. When K is a stochastic flow of mappings, then by definition

(m_+, m_−) = (Σ_{i=1}^p (α_i/α_+) δ_{(0,…,0,1,0,…,0)}, Σ_{i=p+1}^N (α_i/α_−) δ_{(0,…,0,1,0,…,0)}).

This shows that there is only one flow of mappings solving (E).

Suppose now α_+ = 1/2 and N > 2. Let K^W be the flow given by (19), where Z_{s,t}(x) = ε(x)|x| + W_t − W_s. It is easy to verify that K^W is a Wiener flow. Fix s ∈ R, x ∈ G. Then, following the ideas of Section 3.2, one can construct a real white noise W and a process (X^x_{s,t}, t ≥ s) which is a W(α_1, …, α_N) started at x such that
• (i) for all t ≥ s and f ∈ D(α_1, …, α_N),

f(X^x_{s,t}) = f(x) + ∫_s^t (εf′)(X^x_{s,u}) W(du) + (1/2) ∫_s^t f″(X^x_{s,u}) du a.s.;
• (ii) for all $t \geq s$, $K^W_{s,t}(x) = E[\delta_{X^x_{s,t}}\,|\,\sigma(W)]$ a.s.

By conditioning with respect to $\sigma(W)$ in (i), this shows that $K^W$ solves $(E)$. Now, let $(K, W)$ be any other solution of $(E)$ and set $P^n_t = E[K^{\otimes n}_{0,t}]$. From the hypothesis $\alpha_+ = \frac{1}{2}$, we see that $h(x) = \varepsilon(x)|x|$ belongs to $D(\alpha_1,\cdots,\alpha_N)$ and, by applying $h$ in $(E)$, we get $K_{0,t}h(x) = h(x) + W_t$. Denote by $(X^{x_1}, X^{x_2})$ the two-point motion started at $(x_1, x_2) \in G^2$ associated to $P^2$. Since $|X^{x_i}|$ is a reflected Brownian motion started at $|x_i|$ (Theorem 3), we have $E[|X^{x_i}_t|^2] = t + |x_i|^2$. From the preceding observation,
$$E[h(X^{x_1}_t)h(X^{x_2}_t)] = E[K_{0,t}h(x_1)\,K_{0,t}h(x_2)] = h(x_1)h(x_2) + t$$
and therefore
$$E\big[\big(h(X^{x_1}_t) - h(X^{x_2}_t) - h(x_1) + h(x_2)\big)^2\big] = 0.$$
This shows that $h(X^{x_1}_t) - h(X^{x_2}_t) = h(x_1) - h(x_2)$ a.s.

Now we will check by induction on $n$ that $P^n$ does not depend on $K$. For $n = 1$, this follows from Proposition 3. Suppose the result holds for $n$ and let $(x_1,\cdots,x_{n+1}) \in G^{n+1}$ be such that $h(x_i) \neq h(x_j)$, $i \neq j$. Let $\tau_{x_i} = \inf\{r \geq 0,\ X^{x_i}_r = 0\} = \inf\{r \geq 0,\ h(X^{x_i}_r) = 0\}$ and let $(x_i, x_j) \in G^+ \times G^-$ be such that $h(x_i) < h(x_k)$, $h(x_h) < h(x_j)$ for all $(x_k, x_h) \in G^+ \times G^-$ (when $(x_i, x_j)$ does not exist the proof is simpler). Clearly $\tau_{x_k}$ is a function of $X^{x_h}$ for all $h, k \in [1, n+1]$, and so, for all measurable bounded $f : G^{n+1} \longrightarrow \mathbb{R}$,
$$f(X^{x_1}_t,\cdots,X^{x_{n+1}}_t)\,1_{\{t < \tau_{x_i},\ \inf_{1 \leq k \leq n+1}\tau_{x_k} = \tau_{x_i}\}}$$
is a function of $X^{x_i}$ and
$$f(X^{x_1}_t,\cdots,X^{x_{n+1}}_t)\,1_{\{t < \tau_{x_j},\ \inf_{1 \leq k \leq n+1}\tau_{x_k} = \tau_{x_j}\}}$$
is a function of $X^{x_j}$. Consequently, for $t > 0$, $E[f(X^{x_1}_t,\cdots,X^{x_{n+1}}_t)\,1_{\{t < \inf_{1 \leq k \leq n+1}\tau_{x_k}\}}]$ only depends on $P^1$. Consider the following stopping times: $S_1 = \inf_{1 \leq i \leq n+1} \tau_{x_i}$ and
$$S_{k+1} = \inf\{r \geq S_k : \exists j \in [1, n+1],\ X^{x_j}_r = 0,\ X^{x_j}_{S_k} \neq 0\},\quad k \geq 1.$$
Remark that $(S_k)_{k \geq 1}$ is a function of $X^{x_h}$ for all $h \in [1, n+1]$.
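As an aside, the Walsh Brownian motion $W(\alpha_1,\cdots,\alpha_N)$ driving these multi-point motions can be sketched numerically: take a reflected random walk as the radial part and send each excursion away from the junction point along ray $i$ with probability $\alpha_i$. A minimal discrete sketch; the function name `walsh_walk` and the random-walk discretization are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def walsh_walk(n_steps, alphas, rng):
    """Discrete sketch of a Walsh Brownian motion W(alpha_1, ..., alpha_N):
    the radial part is the reflected simple random walk |S_n|, and each
    excursion away from 0 is assigned ray i with probability alphas[i]."""
    steps = rng.choice([-1, 1], size=n_steps)
    radial = np.abs(np.concatenate(([0], np.cumsum(steps))))
    ray = np.zeros(len(radial), dtype=int)   # ray label; irrelevant at 0
    current = 0
    for k in range(1, len(radial)):
        if radial[k - 1] == 0:               # a new excursion starts here
            current = rng.choice(len(alphas), p=alphas)
        ray[k] = current
    return radial, ray

rng = np.random.default_rng(0)
radial, ray = walsh_walk(5000, alphas=[0.5, 0.3, 0.2], rng=rng)
```

By construction the ray label is constant on each excursion, mirroring the fact that the angular part of the process only changes at visits to the junction point.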
By summing over all possible cases, we need only check the unicity in law of $(X^{x_1}_t,\cdots,X^{x_{n+1}}_t)$ conditionally on $A = \{S_k \leq t < S_{k+1}\}$ [...]

Preliminary remarks. We recall that if $Y$ is a semimartingale satisfying $\langle Y \rangle = \langle |Y| \rangle$, then $\tilde{L}_t(Y) = \tilde{L}_t(|Y|)$. Let $L_t(Y)$ be the (non symmetric) local time at $0$ of $Y$ and $\alpha \in [0,1]$. If $Y$ is a SBM($\alpha$), then $L_t(Y) = 2\alpha \tilde{L}_t(Y)$, by identifying Tanaka's formulas for symmetric and non symmetric local time for $Y$.

Let $Q$ be the semigroup of the reflecting Brownian motion on $\mathbb{R}_+$ and define $\Phi(x) = |x|$. Then $X_t = \Phi(Z_t)$ and it can be easily checked that $P_t(f \circ \Phi) = Q_t f \circ \Phi$ for all bounded measurable functions $f : \mathbb{R}_+ \longrightarrow \mathbb{R}$, which proves (i). (ii) is an easy consequence of Tanaka's formula for local time.

(iii) Set $\tau_z = \inf\{r \geq 0,\ Z_r = 0\}$. For $t \leq \tau_z$, (9) holds from Itô's formula applied to the semimartingale $X$. By discussing the cases $t \leq \tau_z$ and $t > \tau_z$, one can assume that $z = 0$, and so in the sequel we take $z = 0$.

For all $i \in [1, N]$, define $Z^i_t = |Z_t|\,1_{\{Z_t \in D_i\}} - |Z_t|\,1_{\{Z_t \notin D_i\}}$. Then $Z^i_t = \Phi_i(Z_t)$ where $\Phi_i(x) = |x|\,1_{\{x \in D_i\}} - |x|\,1_{\{x \notin D_i\}}$. Let $Q^i$ be the semigroup of the SBM($\alpha_i$). Then the following relation is easy to check: $P_t(f \circ \Phi_i) = Q^i_t f \circ \Phi_i$ for all bounded measurable functions $f : \mathbb{R} \longrightarrow \mathbb{R}$, which shows that $Z^i$ is a SBM($\alpha_i$) started at $0$. We use the notation (P) to denote convergence in probability.

Let $\delta > 0$. Define $\tau^\delta_0 = \theta^\delta_0 = 0$ and, for $n \geq 1$,
$$\theta^\delta_n = \inf\{r \geq \tau^\delta_{n-1},\ |Z_r| = \delta\},\qquad \tau^\delta_n = \inf\{r \geq \theta^\delta_n,\ Z_r = 0\}.$$
Let $f \in C^2_b(G^*)$ and $t > 0$. Then
$$f(Z_t) - f(0) = \sum_{n=0}^{\infty}\big(f(Z_{\theta^\delta_{n+1} \wedge t}) - f(Z_{\theta^\delta_n \wedge t})\big) = Q^\delta_1 + Q^\delta_2 + Q^\delta_3,$$
where
$$Q^\delta_1 = \sum_{n=0}^{\infty}\big(f(Z_{\theta^\delta_{n+1} \wedge t}) - f(Z_{\tau^\delta_n \wedge t})\big) - \sum_{i=1}^{N}\sum_{n=0}^{\infty} \delta f'_i(0+)\,1_{\{\theta^\delta_{n+1} \leq t,\ Z_{\theta^\delta_{n+1}} \in D_i\}},$$
$$Q^\delta_2 = \sum_{i=1}^{N}\sum_{n=0}^{\infty} \delta f'_i(0+)\,1_{\{\theta^\delta_{n+1} \leq t,\ Z_{\theta^\delta_{n+1}} \in D_i\}},\qquad Q^\delta_3 = \sum_{n=0}^{\infty}\big(f(Z_{\tau^\delta_n \wedge t}) - f(Z_{\theta^\delta_n \wedge t})\big).$$
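The stopping times $\theta^\delta_n$ (first time the radial part reaches level $\delta$ after the previous return to $0$) and $\tau^\delta_n$ (first return to $0$ afterwards) can be computed on a discretized radial path; $\delta$ times the number of $\theta^\delta_n$ before $t$ is the downcrossing approximation of the local time invoked in the proof via [15]. A minimal sketch; the function name and the grid discretization are assumptions for illustration.

```python
import numpy as np

def crossing_times(radial, delta):
    """Indices of theta^delta_n (reach level delta after the previous
    return to 0) and tau^delta_n (next return to 0) along a discretized
    nonnegative path `radial`. delta * (number of thetas before t)
    approximates the local time of the radial part at 0."""
    thetas, taus = [], []
    k, n = 0, len(radial)
    while True:
        while k < n and radial[k] < delta:   # wait to reach level delta
            k += 1
        if k == n:
            break
        thetas.append(k)
        while k < n and radial[k] > 0:       # wait for the return to 0
            k += 1
        if k == n:
            break
        taus.append(k)
    return thetas, taus

# deterministic check on a hand-made path
path = np.array([0, 1, 2, 1, 0, 1, 0, 2, 0])
```

Note that the two inner scans alternate exactly as in the definition: a new $\theta^\delta_n$ is only sought after the path has returned to $0$.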
We first show that $Q^\delta_1 \xrightarrow[\delta \to 0]{} 0$ (P), and for this write $Q^\delta_1 = Q^\delta_{1,1} + Q^\delta_{1,2}$ with
$$Q^\delta_{1,1} = \sum_{n=0}^{\infty}\sum_{i=1}^{N}\big(f(Z_{\theta^\delta_{n+1}}) - f(Z_{\tau^\delta_n}) - \delta f'_i(0+)\big)\,1_{\{\theta^\delta_{n+1} \leq t,\ Z_{\theta^\delta_{n+1}} \in D_i\}},$$
$$Q^\delta_{1,2} = \sum_{n=0}^{\infty}\sum_{i=1}^{N}\big(f(Z_t) - f(Z_{\tau^\delta_n \wedge t})\big)\,1_{\{\theta^\delta_{n+1} > t,\ Z_{\theta^\delta_{n+1}} \in D_i\}}.$$
Since $f \in C^2_b(G^*)$, we have
(i) $\forall i \in [1, N]$: $\int_0^\delta (f'_i(u) - f'_i(0+))\,du = f_i(\delta) - f_i(0) - \delta f'_i(0+)$;
(ii) there exists $M > 0$ such that $\forall i \in [1, N],\ u \geq 0$: $|f'_i(u) - f'_i(0+)| \leq M u$.
Consequently
$$|Q^\delta_{1,1}| = \Big|\sum_{n=0}^{\infty}\sum_{i=1}^{N}\big(f_i(\delta) - f_i(0) - \delta f'_i(0+)\big)\,1_{\{\theta^\delta_{n+1} \leq t,\ Z_{\theta^\delta_{n+1}} \in D_i\}}\Big| \leq \frac{N M \delta^2}{2}\sum_{n=0}^{\infty} 1_{\{\theta^\delta_{n+1} \leq t\}}.$$
It is known that $\delta \sum_{n=0}^{\infty} 1_{\{\theta^\delta_{n+1} \leq t\}} \xrightarrow[\delta \to 0]{} L_t(X)$ (P) ([15]) and therefore $Q^\delta_{1,1} \xrightarrow[\delta \to 0]{} 0$ (P). Let $C > 0$ be such that $\forall i \in [1, N],\ u \geq 0$: $|f_i(u) - f_i(0)| \leq C u$. Then
$$|Q^\delta_{1,2}| = \Big|\sum_{n=0}^{\infty}\sum_{i=1}^{N}\big(f(Z_t) - f(Z_{\tau^\delta_n \wedge t})\big)\,1_{\{\theta^\delta_{n+1} > t,\ Z_{\theta^\delta_{n+1}} \in D_i\}}\Big| \leq \sum_{n=0}^{\infty}\sum_{i=1}^{N} |f_i(X_t) - f_i(0)|\,1_{\{\tau^\delta_n \leq t\}}$$
[...] by Itô's formula and using the fact that $d\tilde{L}_s(X)$ is carried by $\{s : Z_s = 0\}$. Now the proof of Theorem 3 is complete.

Acknowledgements. I sincerely thank my supervisor Yves Le Jan for his guidance throughout my Ph.D. thesis and for his careful reading of this paper. I am also grateful to Olivier Raimond for very useful discussions, for his careful reading of this paper and for his assistance in proving Lemma 3.

References

[1] Martin Barlow, Krzysztof Burdzy, Haya Kaspi, and Avi Mandelbaum. Coalescence of skew Brownian motions. In Séminaire de Probabilités, XXXV, volume 1755 of Lecture Notes in Math., pages 202-205. Springer, Berlin, 2001.
[2] Martin Barlow, Jim Pitman, and Marc Yor. On Walsh's Brownian motions. In Séminaire de Probabilités, XXIII, volume 1372 of Lecture Notes in Math., pages 275-293. Springer, Berlin, 1989.
[3] Krzysztof Burdzy and Zhen-Qing Chen. Local time flow related to skew Brownian motion. Ann. Probab., 29(4):1693-1715, 2001.
[4] Krzysztof Burdzy and Haya Kaspi.
Lenses in skew Brownian flow. Ann. Probab., 32(4):3085-3115, 2004.
[5] Mark Freidlin and Shuenn-Jyi Sheu. Diffusion processes on graphs: stochastic differential equations, large deviation principle. Probab. Theory Related Fields, 116(2):181-220, 2000.
[6] J.-F. Le Gall. Mouvement brownien, processus de branchement et superprocessus. 1994. Notes de cours de DEA, Paris 6.
[7] J.-F. Le Gall. Calcul stochastique et processus de Markov. 2010. Notes de cours de Master 2, Paris-Sud.
[8] J. M. Harrison and L. A. Shepp. On skew Brownian motion. Ann. Probab., 9(2):309-313, 1981.
[9] Hiroshi Kunita. Stochastic flows and stochastic differential equations, volume 24 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1997. Reprint of the 1990 original.
[10] Yves Le Jan and Olivier Raimond. Integration of Brownian vector fields. Ann. Probab., 30(2):826-873, 2002.
[11] Yves Le Jan and Olivier Raimond. Flows, coalescence and noise. Ann. Probab., 32(2):1247-1315, 2004.
[12] Yves Le Jan and Olivier Raimond. Flows associated to Tanaka's SDE. ALEA Lat. Am. J. Probab. Math. Stat., 1:21-34, 2006.
[13] Antoine Lejay. On the constructions of the skew Brownian motion. Probab. Surv., 3:413-466 (electronic), 2006.
[14] Bertrand Micaux. Flots stochastiques d'opérateurs dirigés par des bruits gaussiens et poissonniens. 2007. Thèse de doctorat en mathématiques, Université Paris XI.
[15] Daniel Revuz and Marc Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, third edition, 1999.
[16] B. Tsirelson. Triple points: from non-Brownian filtrations to harmonic measures. Geom. Funct. Anal., 7(6):1096-1142, 1997.
[17] J. B. Walsh. A diffusion with discontinuous local time. In Temps locaux, volume 52-53 of Astérisque. Société Mathématique de France, Paris, 1978.
[18] S. Watanabe.
The stochastic flow and the noise associated to Tanaka's stochastic differential equation.