INVERTING RAY-KNIGHT IDENTITY
CHRISTOPHE SABOT AND PIERRE TARRÈS
Abstract.
We provide a short proof of the generalized second Ray-Knight theorem, using a martingale which can be seen (on the positive quadrant) as the Radon-Nikodym derivative of the reversed vertex-reinforced jump process measure with respect to the Markov jump process with the same conductances. Next we show that a variant of this process provides an inversion of that Ray-Knight identity. We give a similar result for the generalized first Ray-Knight theorem.

1. Introduction
Let $G=(V,E,\sim)$ be a nonoriented connected finite graph without loops, with conductances $(W_e)_{e\in E}$; define, for all $x,y\in V$, $W_{x,y}=W_{\{x,y\}}\mathbb{1}_{x\sim y}$. Let $L$ and $\mathcal{E}$ be respectively the associated Markov generator and Dirichlet form, defined by, for all $f\in\mathbb{R}^V$,
$$Lf(x)=\sum_{y\in V}W_{x,y}(f(y)-f(x)),\qquad \mathcal{E}(f,f)=\frac12\sum_{x,y\in V}W_{x,y}(f(x)-f(y))^2.$$
Let $x_0\in V$ be a special point that will be fixed throughout the text, let $U=V\setminus\{x_0\}$, and let $P^{G,U}$ be the unique probability on $\mathbb{R}^V$ under which $(\varphi_x)_{x\in V}$ is the centered Gaussian field with covariance $E^{G,U}[\varphi_x\varphi_y]=g_U(x,y)$, where $g_U$ is the Green function killed outside $U$; in other words,
$$P^{G,U}=\frac{1}{(2\pi)^{|U|/2}\sqrt{\det G_U}}\exp\{-\mathcal{E}(\varphi,\varphi)\}\,\delta_0(\varphi_{x_0})\prod_{x\in U}d\varphi_x,$$
where $G_U:=g_U(\cdot,\cdot)$ and $\delta_0$ is the Dirac mass at $0$, so that the integration is on $(\varphi_x)_{x\in U}$ with $\varphi_{x_0}=0$.

Let $P_z$ be the law under which $(X_t)_{t\ge0}$ is a Markov jump process with conductances $(W_e)_{e\in E}$ (i.e. jump rates $W_{ij}$ from $i$ to $j\in V$), starting at $z$ at time $0$, with right-continuous paths and local times, for $x\in V$ and $t\ge0$,
$$\ell_x(t)=\int_0^t \mathbb{1}_{\{X_u=x\}}\,du. \tag{1.1}$$

Mathematics Subject Classification: primary 60J27, 60J55; secondary 60K35, 81T25, 81T60. This work was partly supported by the ANR project MEMEMO2 and the LABEX MILYON. The first author is grateful to DMA, ENS, for its hospitality and financial support while part of this work was done.
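On a small graph the objects just defined are easy to compute explicitly. The sketch below is our own illustration (not part of the paper; the example graphs and conductances are arbitrary): it builds the generator $L$ from a symmetric conductance matrix, computes the killed Green function $G_U=(-L_U)^{-1}$, and samples the Gaussian field $P^{G,U}$.

```python
import numpy as np

def generator(W):
    """Markov generator of the jump process with conductances W:
    L f(x) = sum_y W[x, y] (f(y) - f(x)), for W symmetric with zero diagonal."""
    L = W.astype(float).copy()
    np.fill_diagonal(L, -W.sum(axis=1))
    return L

def green_killed(W, x0):
    """Green function g_U of the process killed outside U = V \\ {x0},
    as the matrix inverse (-L_U)^{-1} of the restricted generator."""
    L = generator(W)
    keep = [i for i in range(len(W)) if i != x0]
    return np.linalg.inv(-L[np.ix_(keep, keep)])

def sample_gff(W, x0, n, rng):
    """Draw n samples of the centered Gaussian field with covariance g_U,
    with the coordinate at x0 pinned to 0, as under P^{G,U}."""
    gU = green_killed(W, x0)
    U = [i for i in range(len(W)) if i != x0]
    phi = np.zeros((n, len(W)))
    phi[:, U] = rng.multivariate_normal(np.zeros(len(U)), gU, size=n)
    return phi
```

On the single-edge graph with conductance $W$, $g_U(y,y)=1/W$ is the mean holding time before absorption at $x_0$, which gives a quick sanity check of the construction.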
Let $\tau_\cdot$ be the right-continuous inverse of $t\mapsto\ell_{x_0}(t)$:
$$\tau_u=\inf\{t\ge0:\ \ell_{x_0}(t)>u\},\qquad u\ge0.$$
Our first aim is to provide a short proof of the generalized second Ray-Knight theorem.
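The inverse local time $\tau_u$ can be simulated directly from the jump-rates description of the process; the following minimal sketch (our own illustration, with an arbitrary test graph) runs the Markov jump process started at $x_0$ until the local time at $x_0$ reaches $u$, and returns the vector of local times $(\ell_x(\tau_u))_{x\in V}$.

```python
import numpy as np

def simulate_tau_u(W, x0, u, rng):
    """Run the Markov jump process with jump rates W[i, j], started at x0,
    up to the inverse local time tau_u = inf{t : ell_{x0}(t) > u}.
    Returns the local-time vector (ell_x(tau_u))_x."""
    n = len(W)
    rates = W.sum(axis=1)          # total jump rate out of each site
    ell = np.zeros(n)
    x = x0
    while True:
        hold = rng.exponential(1.0 / rates[x])   # exponential holding time at x
        if x == x0 and ell[x0] + hold >= u:
            ell[x0] = u            # stop mid-hold, exactly when ell_{x0} hits u
            return ell
        ell[x] += hold
        x = rng.choice(n, p=W[x] / rates[x])     # jump proportionally to conductances
```

Stopping in the middle of a holding period is legitimate here by the memoryless property: at time $\tau_u$ the process sits at $x_0$ with $\ell_{x_0}(\tau_u)=u$ and the other local times frozen.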
Theorem 1 (Generalized second Ray-Knight theorem, [10]). For any $u>0$,
$$\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)_{x\in V}\quad\text{under } P_{x_0}\otimes P^{G,U}$$
has the same law as
$$\Big(\frac12\big(\varphi_x+\sqrt{2u}\big)^2\Big)_{x\in V}\quad\text{under } P^{G,U}.$$

This theorem is due to Eisenbaum, Kaspi, Marcus, Rosen and Shi [10] and is closely related to Dynkin's isomorphism; see [8] for a first relation between the Ray-Knight theorems and Dynkin's isomorphism, and [17] for an overview of the subject; see [15, 16] for related work on the link between Markov loops and the Gaussian free field. Note that this result plays a crucial rôle in a recent work of Ding, Lee and Peres [6] on cover times of discrete Markov processes, and in the study of random interlacements, see for instance [20, 21, 22]. In Section 2, we give our short proof of Theorem 1, independent of any reference to the VRJP. Similar results would hold for the Dynkin and Eisenbaum isomorphism theorems (see Section 5, and [17], [22]). We note that there is also a non-symmetric version of Dynkin's isomorphism [13] (see also [9]), which our technique cannot provide as it is.

In Section 3, we explain how the martingale appearing in this proof is related to the vertex-reinforced jump process (VRJP). However, note that the proof does not need any reference to the VRJP.

This short proof in fact yields an identity that corresponds to an inversion of the Ray-Knight identity, proved in Section 4. Indeed, Theorem 1 gives an identity in law, but fails to give any information on the law of $(\ell_x(\tau_u),\varphi_x)_{x\in V}$ conditioned on $\big(\ell_x(\tau_u)+\frac12\varphi_x^2\big)_{x\in V}$. We provide below a process that describes this conditional law.

Finally, Section 5 yields the equivalent inversion for the generalized first Ray-Knight theorem.

Let $(\Phi_x)_{x\in V}$ be positive reals. As before, we fix the special point $x_0\in V$. We consider the continuous-time process $(\check Y_s)_{s\ge0}$ with state space $V$ defined as follows. We set
$$\check L_i(s)=\Phi_i-\int_0^s \mathbb{1}_{\{\check Y_u=i\}}\,du.$$
At time $s$, we consider the Ising model $(\sigma_x)_{x\in V}$ on $G$ with interaction
$$J_{i,j}(s)=W_{i,j}\check L_i(s)\check L_j(s),$$
and with boundary condition $\sigma_{x_0}=+1$.
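For small graphs, the partition function and the one-spin expectations $\langle\sigma_i\rangle$ of this Ising model, and hence the jump rates of the magnetized process $\check Y$ defined just below, can be computed by brute-force enumeration over the $2^{|V|-1}$ spin configurations. The sketch below is our own illustration (function names and example graphs are ours), feasible only for small $V$.

```python
import numpy as np
from itertools import product

def ising_stats(J, x0):
    """Partition function F and one-spin expectations <sigma_i> of the Ising
    model with symmetric coupling matrix J (zero diagonal) and boundary
    condition sigma_{x0} = +1, by enumerating all spin configurations."""
    n = len(J)
    free = [i for i in range(n) if i != x0]
    F, mag = 0.0, np.zeros(n)
    for signs in product((-1.0, 1.0), repeat=len(free)):
        sigma = np.ones(n)
        sigma[free] = signs
        # 0.5 corrects the double count of each edge in sigma @ J @ sigma
        w = np.exp(0.5 * sigma @ J @ sigma)
        F += w
        mag += w * sigma
    return F, mag / F

def rates_Y(W, Lcheck, x0, i):
    """Jump rates of the magnetized process Y-check out of site i, given the
    current remaining local times Lcheck: W_ij * Lcheck_j * <s_j>/<s_i>."""
    J = W * np.outer(Lcheck, Lcheck)
    np.fill_diagonal(J, 0.0)
    _, mag = ising_stats(J, x0)
    return np.array([W[i, j] * Lcheck[j] * mag[j] / mag[i] if j != i else 0.0
                     for j in range(len(W))])
```

On two vertices coupled by a single interaction $J$, the enumeration reduces to $\langle\sigma_1\rangle=\tanh J$ and $F=2\cosh J$, a standard check.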
We denote by
$$F(s)=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}e^{\sum_{\{i,j\}\in E}J_{i,j}(s)\sigma_i\sigma_j}$$
its partition function and by $\langle\cdot\rangle_s$ its associated expectation, so that for example
$$\langle\sigma_x\rangle_s=\frac{1}{F(s)}\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\sigma_x\,e^{\sum_{\{i,j\}\in E}J_{i,j}(s)\sigma_i\sigma_j}>0.$$
The process $\check Y$ is then defined as the jump process which, conditionally on the past at time $s$, if $\check Y_s=i$, jumps from $i$ to $j$ at rate
$$W_{i,j}\check L_j(s)\,\frac{\langle\sigma_j\rangle_s}{\langle\sigma_i\rangle_s},$$
and is stopped at the time
$$S=\sup\{s\ge0:\ \check L_i(s)>0\ \text{for all }i\}. \tag{1.2}$$
Note, using the positivity of correlations in the Ising model (see for instance [23], Proposition 7.1), that $\langle\sigma_x\rangle_s>0$ for all $s<S$. Hence the process $\check Y$ is well defined up to time $S$. We denote by $P^{\check Y}_{\Phi,z}$ the law of $\check Y$ starting from $z$ with initial condition $\Phi$, stopped at time $S$. (Note that this law also depends on the choice of the "special point" $x_0$.)

Lemma 1. Starting from any point $z\in V$, the process $\check Y$ ends at $x_0$, i.e. $S<\infty$ and $\check Y_S=x_0$, $P^{\check Y}_{\Phi,z}$-a.s.

This process provides an inversion of the second Ray-Knight identity, as stated in the following theorem.
Theorem 2. Let $\ell$, $\varphi$, $\tau_u$ be as in Theorem 1 and set $\Phi_x=\sqrt{\varphi_x^2+2\ell_x(\tau_u)}$. Under $P_{x_0}\otimes P^{G,U}$, we have
$$\mathcal{L}(\varphi\,|\,\Phi)\overset{\text{law}}{=}\big(\sigma\check L(S)\big),$$
where $\check L(S)$ is distributed under $P^{\check Y}_{\Phi,x_0}$ and, conditionally on $\check L(S)$, $\sigma$ is distributed according to the distribution of the Ising model with interaction $J_{i,j}(S)=W_{i,j}\check L_i(S)\check L_j(S)$ and boundary condition $\sigma_{x_0}=+1$.

Remark 1. Once $\varphi$ is known, then obviously $\ell(\tau_u)=(\Phi^2-\varphi^2)/2$ is also known: in other words, Theorem 2 is equivalent to the more precise identity
$$\mathcal{L}\big((\ell(\tau_u),\varphi)\,|\,\Phi\big)\overset{\text{law}}{=}\Big(\frac12\big(\Phi^2-\check L(S)^2\big),\ \sigma\check L(S)\Big),$$
where $\check L(S)$ and $\sigma$ are distributed as in the statement of Theorem 2.

The proof of Theorem 2 is given in Section 4. Theorem 2 is a consequence of a more precise statement, cf. Theorem 5, which gives the law of $(X_s)_{s\le\tau_u}$ conditionally on $\Phi$.

2. A new proof of Theorem 1
Let $G$ be a positive measurable test function. Letting $d\varphi:=\delta_0(\varphi_{x_0})\prod_{x\in U}d\varphi_x$ and $C=(2\pi)^{-|U|/2}(\det G_U)^{-1/2}$ be the normalizing constant of the Gaussian free field, we get
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)_{x\in V}\Big]
= C\,E_{x_0}\Big[\int_{\mathbb{R}^U}G\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)\exp\big(-\mathcal{E}(\varphi,\varphi)\big)\,d\varphi\Big] \tag{2.1}$$
$$= C\,E_{x_0}\Big[\sum_{\sigma\in\{+1,-1\}^V,\ \sigma_{x_0}=+1}\int_{\mathbb{R}_+^U}G\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)\exp\big(-\mathcal{E}(\sigma\varphi,\sigma\varphi)\big)\,d\varphi\Big],$$
where in the last equality we decompose the integral according to the possible signs of $\varphi$, so that the sum is over $\sigma\in\{+1,-1\}^U$ with $\sigma_{x_0}=+1$. In the following we simply write $\sum_\sigma$ for this sum.

The strategy is now to make the change of variables $\Phi=\sqrt{\varphi^2+2\ell(\tau_u)}$. Given $\ell=(\ell_i(t))_{i\in V,\,t\in\mathbb{R}_+}$, let
$$D_u:=\big\{\Phi\in\mathbb{R}_+^V:\ \Phi_{x_0}=\sqrt{2u},\ \Phi_x^2/2>\ell_x(\tau_u)\ \text{for all }x\in V\setminus\{x_0\}\big\}.$$
We first make the change of variables
$$T_u:\ \mathbb{R}_+^V\cap\{\varphi_{x_0}=0\}\to D_u,\qquad \varphi\mapsto\Phi=T_u(\varphi)=\Big(\sqrt{2\ell_i(\tau_u)+\varphi_i^2}\Big)_{i\in V},$$
which can be inverted by $\varphi_i=\sqrt{\Phi_i^2-2\ell_i(\tau_u)}$. This yields, letting $d\Phi:=\delta_{\sqrt{2u}}(\Phi_{x_0})\prod_{x\in U}d\Phi_x$,
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)_{x\in V}\Big]
= C\,E_{x_0}\Big[\int_{\mathbb{R}_+^U}\sum_\sigma G\Big(\frac12\Phi_x^2\Big)\exp\big(-\mathcal{E}(\sigma\varphi,\sigma\varphi)\big)\,\big|\mathrm{Jac}(T_u^{-1})(\Phi)\big|\,\mathbb{1}_{\Phi\in D_u}\,d\Phi\Big]$$
$$= C\,E_{x_0}\Big[\int_{\mathbb{R}_+^U}\sum_\sigma G\Big(\frac12\Phi_x^2\Big)\exp\big(-\mathcal{E}(\sigma\varphi,\sigma\varphi)\big)\Big(\prod_{x\in U}\frac{\Phi_x}{\varphi_x}\Big)\mathbb{1}_{\Phi\in D_u}\,d\Phi\Big]. \tag{2.2}$$
Note that the Jacobian is taken over all coordinates but $x_0$ ($\Phi_{x_0}=\sqrt{2u}$ being fixed); if $\Phi\in D_u$, then
$$\big|\mathrm{Jac}(T_u^{-1})(\Phi)\big|=\big|\mathrm{Jac}(\Phi\mapsto\varphi)(\Phi)\big|=\prod_{x\in U}\frac{\Phi_x}{\varphi_x}.$$
Given $\Phi\in\mathbb{R}_+^V$ such that $\Phi_{x_0}=\sqrt{2u}$, we define
$$T=\inf\Big\{t\ge0:\ \ell_i(t)=\frac12\Phi_i^2\ \text{for some }i\in V\Big\},\qquad \Phi_i(t)=\sqrt{\Phi_i^2-2\ell_i(t)},\quad t\le T, \tag{2.3}$$
so that in (2.2) we have $\varphi=\Phi(\tau_u)$. An important remark is that
$$\Phi\in D_u\iff X_T=x_0\iff T=\tau_u. \tag{2.4}$$
Finally, we define, for a configuration of signs $\sigma\in\{+1,-1\}^V$ with $\sigma_{x_0}=+1$,
$$M^{\sigma\Phi}_t=\exp\big\{-\mathcal{E}(\sigma\Phi(t),\sigma\Phi(t))\big\}\,\frac{\prod_{j\neq x_0}\sigma_j\Phi_j(0)}{\prod_{j\neq X_t}\sigma_j\Phi_j(t)}. \tag{2.5}$$
From (2.2)–(2.4), we deduce
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\ell(\tau_u)+\frac12\varphi^2\Big)\Big]=C\int_{\mathbb{R}_+^U}G\Big(\frac12\Phi^2\Big)\,E_{x_0}\Big[\sum_\sigma M^{\sigma\Phi}_T\,\mathbb{1}_{\{X_T=x_0\}}\Big]\,d\Phi. \tag{2.6}$$

Lemma 2.
For any $\Phi\in\mathbb{R}_+^V$ and $\sigma\in\{-1,1\}^V$, the process $(M^{\sigma\Phi}_{t\wedge T})_{t\ge0}$ is a uniformly integrable martingale.

Proof. Consider the Markov process $(\ell(t),X(t))$, which obviously has generator $\tilde L(g)(\ell,x)=\big(\frac{\partial}{\partial\ell_x}+L\big)g(\ell,x)$. Let $f$ be the function defined by $f(\ell,x)=\big(\prod_{y\neq x}\sigma_y\sqrt{\Phi_y^2-2\ell_y}\big)^{-1}$. Note that, if $t<T$, since only the coordinate $X_t$ of $\sigma\Phi(t)$ varies,
$$\frac{d}{dt}\mathcal{E}\big(\sigma\Phi(t),\sigma\Phi(t)\big)=\frac{L(\sigma\Phi(t))(X_t)}{(\sigma\Phi)_{X_t}(t)}=\frac{Lf}{f}(\ell(t),X_t)=\frac{\tilde Lf}{f}(\ell(t),X_t),$$
since $f(\ell,x)$ does not depend on $\ell_x$. Therefore, for $t<T$,
$$\frac{M^{\sigma\Phi}_t}{M^{\sigma\Phi}_0}=\frac{f(\ell(t),X(t))}{f(0,x_0)}\,e^{-\int_0^t\frac{\tilde Lf}{f}(\ell(s),X(s))\,ds},$$
which implies that $M^{\sigma\Phi}_{t\wedge T}$ is a martingale, using for instance Lemma 3.2 in [11], pp. 174–175. The condition that $f$ is bounded in that result is not satisfied, but the proof remains true, noting that the following integrability conditions hold (see Problem 22 of Chapter 2, p. 92 in [11]): first, using that $(\tilde Lf/f)(\ell(t),X(t))\ge-\mathrm{Cst}(W)$ and $(|Lf|/f)(\ell(t),X(t))\le\mathrm{Cst}(W)/\Phi_{X_t}(t)$, we deduce
$$\int_0^t\Big|\frac{\tilde Lf}{f}(\ell(s),X_s)\Big|\exp\Big(-\int_0^s\frac{\tilde Lf}{f}(\ell(u),X_u)\,du\Big)\,ds\le \mathrm{Cst}(W,\Phi)\sum_{i\in V}\int_{\Phi_i(t)^2}^{\Phi_i^2}\frac{ds}{\sqrt{s}}\le \mathrm{Cst}(\Phi,W,|V|).$$
Second, $f(\ell(t),X(t))$ can be upper bounded by an integrable random variable, uniformly in $t$. Indeed, let us consider the extension of the process $(X_t)_{t\in\mathbb{R}_+}$ to $t\in\mathbb{R}$, with the convention that the local times at all sites are $0$ at time $0$. For all $j\in V$, let $s_j$ be the (possibly negative) local time at $j$ at the last jump from that site before reaching a local time $\Phi_j^2/2$ at that site. Let, for all $j\in V$, $m_j=\Phi_j^2/2-s_j$. Then the random variables $m_j$, $j\in V$, are independent exponential random variables with parameters $W_j=\sum_{k\sim j}W_{jk}$, since the sequence of local times of jumps from $j$ is a Poisson point process with intensity $W_j$. Now, for all $t\in\mathbb{R}_+$ and $j\neq X_t$, $\Phi_j(t\wedge T)\ge\sqrt{2m_j}$, so that $f(\ell(t),X(t))\le\mathrm{Cst}(\Phi)\prod_{j\in V}m_j^{-1/2}$. This enables us to conclude, since $\prod_{j\in V}m_j^{-1/2}$ is integrable, which also implies uniform integrability of $M_{t\wedge T}$, using that $|M_{t\wedge T}|\le\mathrm{Cst}(W,\Phi)\,f(\ell(t),X(t))$. $\square$

Let us now consider the process
$$N^\Phi_t=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}M^{\sigma\Phi}_t=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\big\{-\mathcal{E}(\sigma\Phi(t),\sigma\Phi(t))\big\}\,\frac{\prod_{j\neq x_0}\sigma_j\Phi_j(0)}{\prod_{j\neq X_t}\sigma_j\Phi_j(t)}. \tag{2.7}$$

Lemma 3.
For all $x\neq x_0$, we have
$$N^\Phi_T\,\mathbb{1}_{\{X_T=x\}}=0. \tag{2.8}$$

Proof. Let $\sigma^x$ be the spin-flip of $\sigma$ at $x$: $\sigma^x=\epsilon^x\sigma$ with $\epsilon^x_y=-1$ if $y=x$ and $1$ if $y\neq x$. If $x\neq x_0$ and $X_T=x$, then $M^{\sigma^x\Phi}_T=-M^{\sigma\Phi}_T$. Indeed, since $\Phi_x(T)=0$, then $\sigma^x\Phi(T)=\sigma\Phi(T)$, and the minus sign comes from the numerator of the product term in (2.5). By symmetry, the left-hand side of (2.8) is equal to
$$\frac12\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\big(M^{\sigma\Phi}_T+M^{\sigma^x\Phi}_T\big)\mathbb{1}_{\{X_T=x\}}=0. \qquad\square$$

It follows from Lemmas 2 and 3, by the optional stopping theorem (using the uniform integrability of $M_{t\wedge T}$), that
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)_{x\in V}\Big]
= C\int_{\mathbb{R}_+^U}G\Big(\Big(\frac12\Phi_x^2\Big)_{x\in V}\Big)\,E_{x_0}\big[N^\Phi_T\,\mathbb{1}_{\{X_T=x_0\}}\big]\,d\Phi
= C\int_{\mathbb{R}_+^U}G\Big(\frac12\Phi^2\Big)\,E_{x_0}\big[N^\Phi_T\big]\,d\Phi$$
$$= C\int_{\mathbb{R}_+^U}G\Big(\frac12\Phi^2\Big)\,N^\Phi_0\,d\Phi
= C\int_{\mathbb{R}_+^U}G\Big(\frac12\Phi^2\Big)\Big(\sum_\sigma\exp\big\{-\mathcal{E}(\sigma\Phi,\sigma\Phi)\big\}\Big)\,d\Phi
= C\int_{\mathbb{R}^U}G\Big(\frac12\Phi^2\Big)\exp\big\{-\mathcal{E}(\Phi,\Phi)\big\}\,d\Phi,$$
where the last equality unfolds the signs. After the translation $\varphi=\Phi-\sqrt{2u}$, under which $\mathcal{E}(\Phi,\Phi)=\mathcal{E}(\varphi,\varphi)$, the last expression is $E^{G,U}\big[G\big(\frac12(\varphi+\sqrt{2u})^2\big)\big]$, which concludes the proof of Theorem 1. $\square$

3. Link with vertex-reinforced jump process
The aim of this section is to point out a link between the Ray-Knight identity and a reversed version of the vertex-reinforced jump process (VRJP).

It is organized as follows. In Subsection 3.1 we compute the Radon-Nikodym derivative of the VRJP, which is similar to the martingale $M$ in Section 2: the computation can be used in particular to provide a direct proof of exchangeability of the VRJP. In Subsection 3.2 we introduce a time-reversed version of the VRJP, i.e. where the process subtracts rather than adds local time at the site where it stays (see Definition 2). Then we show in Theorem 4 that its Radon-Nikodym derivative is the martingale $M^\sigma$ of Section 2 with positive spins $\sigma\equiv+1$. Note that the "magnetized" reversed VRJP, defined in Section 1 and related to the inversion of Ray-Knight in Theorem 2, involves instead the sum $N^\Phi$, cf. (2.7), of all the martingales $M^{\sigma\Phi}$.

3.1. The vertex-reinforced jump process and its Radon-Nikodym derivative.

Definition 1.
Given positive conductances $(W_e)_{e\in E}$ on the edges of the graph and initial positive local times $(\varphi_i)_{i\in V}$, the vertex-reinforced jump process (VRJP) is a continuous-time process $(Y_t)_{t\ge0}$ on $V$, starting at time $0$ at some vertex $z\in V$ and such that, if $Y$ is at a vertex $i\in V$ at time $t$, then, conditionally on $(Y_s,\,s\le t)$, the process jumps to a neighbour $j$ of $i$ at rate $W_{i,j}L_j(t)$, where
$$L_j(t):=\varphi_j+\int_0^t \mathbb{1}_{\{Y_s=j\}}\,ds.$$

The vertex-reinforced jump process was initially proposed by Werner in 2000, and first studied by Davis and Volkov [4, 5], then Collevecchio [2, 3], Basdevant and Singh [1], and Sabot and Tarrès [19].

Let $D$ be the increasing functional
$$D(s)=\frac12\sum_{i\in V}\big(L_i(s)^2-\varphi_i^2\big),$$
define the time-changed VRJP $Z_t=Y_{D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell^Z_i(t)$ be the local time of $Z$ at time $t$.

Lemma 4.
The inverse functional $D^{-1}$ is given by
$$D^{-1}(t)=\sum_{i\in V}\Big(\sqrt{\varphi_i^2+2\ell^Z_i(t)}-\varphi_i\Big).$$
Conditionally on the past at time $t$, the process $Z$ jumps from $Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\varphi_j^2+2\ell^Z_j(t)}{\varphi_i^2+2\ell^Z_i(t)}}.$$

Proof. The proof is elementary and already in [19] [Section 4.3, proof of Theorem 2 ii)] in a slightly modified version, but we include it here for completeness. First note that, for all $i\in V$,
$$\ell^Z_i(D(s))=\big(L_i(s)^2-\varphi_i^2\big)/2, \tag{3.1}$$
since $\big(\ell^Z_i(D(s))\big)'=D'(s)\,\mathbb{1}_{\{Z_{D(s)}=i\}}=L_{Y_s}(s)\,\mathbb{1}_{\{Y_s=i\}}$. Hence
$$(D^{-1})'(t)=\frac{1}{D'(D^{-1}(t))}=\frac{1}{L_{Z_t}(D^{-1}(t))}=\frac{1}{\sqrt{\varphi_{Z_t}^2+2\ell^Z_{Z_t}(t)}},$$
which yields the expression for $D^{-1}$. It remains to prove the last assertion:
$$P(Z_{t+dt}=j\,|\,\mathcal{F}_t)=P(Y_{D^{-1}(t+dt)}=j\,|\,\mathcal{F}_t)=W_{Z_t,j}\,(D^{-1})'(t)\,L_j(D^{-1}(t))\,dt=W_{Z_t,j}\sqrt{\frac{\varphi_j^2+2\ell^Z_j(t)}{\varphi_{Z_t}^2+2\ell^Z_{Z_t}(t)}}\,dt. \qquad\square$$

Let $P_{x_0,t}$ (resp. $P^Z_{\varphi,x_0,t}$) be the distribution, starting from $x_0$ and on the time interval $[0,t]$, of the Markov jump process with conductances $(W_e)_{e\in E}$ (resp. of the time-changed VRJP $(Z_s)_{s\in[0,t]}$ with conductances $(W_e)_{e\in E}$ and initial positive local times $(\varphi_i)_{i\in V}$).

Theorem 3.
The law of the time-changed VRJP $Z$ on the interval $[0,t]$ is absolutely continuous with respect to the law of the MJP $X$ with rates $W_{i,j}$, with Radon-Nikodym derivative given by
$$\frac{dP^Z_{\varphi,x_0,t}}{dP_{x_0,t}}=e^{\mathcal{E}(\sqrt{\varphi^2+2\ell(t)},\sqrt{\varphi^2+2\ell(t)})-\mathcal{E}(\varphi,\varphi)}\,\frac{\prod_{j\neq x_0}\varphi_j}{\prod_{j\neq X_t}\sqrt{\varphi_j^2+2\ell_j(t)}},$$
where $\ell_j(t)$ is the local time of $X$ at time $t$ and site $j$ defined in (1.1).

Proof. In the proof, we write $\ell$ for the local time of both $Z$ and $X$, since we consider $Z$ and $X$ on the canonical space with different probabilities. Let, for all $\psi\in\mathbb{R}^V$, $i\in V$, $t\ge0$,
$$F(\psi)=\sum_{\{i,j\}\in E}W_{ij}\psi_i\psi_j,\qquad G_i(t)=\prod_{j\neq i}\big(\varphi_j^2+2\ell_j(t)\big)^{-1/2}.$$
First note that the probability, for the time-changed VRJP $Z$, of holding at a site $v\in V$ on a time interval $[t_0,t_1]$ is
$$\exp\Big(-\int_{t_0}^{t_1}\sum_{j\sim v}W_{v,j}\frac{\sqrt{\varphi_j^2+2\ell_j(t)}}{\sqrt{\varphi_v^2+2\ell_v(t)}}\,dt\Big)=\exp\Big(-\int_{t_0}^{t_1}d\Big(F\big(\sqrt{\varphi^2+2\ell(t)}\big)\Big)\Big).$$
Second, conditionally on $(Z_u,\,u\le t)$, the probability that $Z$ jumps from $Z_t=i$ to $j$ in the time interval $[t,t+dt]$ is
$$W_{ij}\sqrt{\frac{\varphi_j^2+2\ell_j(t)}{\varphi_i^2+2\ell_i(t)}}\,dt=W_{ij}\,\frac{G_j(t)}{G_i(t)}\,dt.$$
Therefore the probability that, at time $t$, $Z$ has followed a path $Z_0=x_0,x_1,\ldots,Z_t=x_n$ with jump times respectively in $[t_i,t_i+dt_i]$, $i=1,\ldots,n$, where $t_0=0<t_1<\ldots<t_n<t=t_{n+1}$, is
$$\exp\Big(F(\varphi)-F\big(\sqrt{\varphi^2+2\ell(t)}\big)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\frac{G_{x_i}(t_i)}{G_{x_{i-1}}(t_i)}\,dt_i=\exp\Big(F(\varphi)-F\big(\sqrt{\varphi^2+2\ell(t)}\big)\Big)\frac{G_{X_t}(t)}{G_{x_0}(0)}\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i,$$
where we use that $G_{x_i}(t_i)=G_{x_i}(t_{i+1})$, since $Z$ stays at site $x_i$ on the time interval $[t_i,t_{i+1}]$. On the other hand, the probability that, at time $t$, $X$ has followed the same path with jump times in the same intervals is
$$\exp\Big(-\sum_{i\in V}W_i\,\ell_i(t)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i,$$
with $W_i=\sum_{j\sim i}W_{ij}$, which concludes the proof. $\square$
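As a sanity check of this formula (our own verification, not taken from the paper), one can compare both sides on the simplest possible event: the two-point graph $V=\{0,1\}$ with $x_0=0$, and the event that no jump occurs on $[0,t]$, for which both path probabilities are explicit. We use the convention $\mathcal{E}(f,f)=\frac12\sum_i W_i f_i^2-\sum_{\{i,j\}}W_{ij}f_if_j$ implicit in the proof above.

```python
import numpy as np

# Two-point graph, conductance W on the single edge; no jump on [0, t],
# so the local times are ell = (t, 0).  Arbitrary test values:
W, t = 1.3, 0.8
phi = np.array([0.9, 1.7])

def energy(f):
    # Dirichlet form E(f,f) for the single-edge graph with conductance W
    return 0.5 * W * (f[0] ** 2 + f[1] ** 2) - W * f[0] * f[1]

ell = np.array([t, 0.0])
psi = np.sqrt(phi ** 2 + 2 * ell)

# holding probability at site 0 up to time t, for the time-changed VRJP
# (its jump rate at time s is W * phi_1 / sqrt(phi_0^2 + 2 s)) ...
p_vrjp = np.exp(-W * phi[1] * (np.sqrt(phi[0] ** 2 + 2 * t) - phi[0]))
# ... and for the Markov jump process with constant rate W
p_mjp = np.exp(-W * t)

# Radon-Nikodym derivative as in Theorem 3; the product over j != X_t
# equals 1 here since ell_1 = 0 and X_t = 0
rn = np.exp(energy(psi) - energy(phi)) * (phi[1] / psi[1])

assert np.isclose(p_vrjp / p_mjp, rn)
```

The agreement is exact by the algebra of the proof; the script merely confirms it numerically for one choice of parameters.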
$\square$

Note that Theorem 3 can be used to show exchangeability of the VRJP, and provides a martingale for the Markov jump process, similar to $M^\Phi_t$ in (2.5).

Recall that it is shown in [19] that the time-changed VRJP $(Z_t)_{t\ge0}$ is a mixture of Markov jump processes, i.e. that there exist random variables $(U_i)_{i\in V}\in\mathcal{H}:=\{u\in\mathbb{R}^V:\ \sum_{i\in V}u_i=0\}$ with a supersymmetric hyperbolic sigma model distribution with parameters $(W_{ij}\varphi_i\varphi_j)_{\{i,j\}\in E}$ (see Section 6 of [19] and [7]) such that, conditionally on $(U_i)_{i\in V}$, $Z_t$ is a Markov jump process starting from $z$, with jump rate from $i$ to $j$
$$W_{i,j}\,e^{U_j-U_i}.$$
In particular, the discrete-time process corresponding to the VRJP observed at jump times is exchangeable, and is a mixture of reversible Markov chains with conductances $W_{i,j}e^{U_i+U_j}$.

3.2. The reversed VRJP and its Radon-Nikodym derivative.

Definition 2.
Given positive conductances $(W_e)_{e\in E}$ on the edges of the graph and initial positive local times $(\Phi_i)_{i\in V}$, the reversed vertex-reinforced jump process (RVRJP) is a continuous-time process $(\tilde Y_t)_{t\le\tilde S}$, starting at time $0$ at some vertex $i\in V$, such that, if $\tilde Y$ is at a vertex $i\in V$ at time $t$, then, conditionally on $(\tilde Y_s,\,s\le t)$, the process jumps to a neighbour $j$ of $i$ at rate $W_{i,j}\tilde L_j(t)$, where
$$\tilde L_j(t):=\Phi_j-\int_0^t \mathbb{1}_{\{\tilde Y_s=j\}}\,ds,$$
defined up until the stopping time $\tilde S$ where one of the local times hits $0$, i.e.
$$\tilde S=\inf\{t\in\mathbb{R}_+:\ \tilde L_j(t)=0\ \text{for some }j\}.$$

Similarly as for $Y$, let us define the increasing functional
$$\tilde D(s)=\frac12\sum_{i\in V}\big(\Phi_i^2-\tilde L_i(s)^2\big),$$
define the time-changed RVRJP $\tilde Z_t=\tilde Y_{\tilde D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell^{\tilde Z}_i(t)$ be the local time of $\tilde Z$ at time $t$. Then, similarly as in Lemma 4, conditionally on the past at time $t$, $\tilde Z$ jumps from $\tilde Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\Phi_j^2-2\ell^{\tilde Z}_j(t)}{\Phi_i^2-2\ell^{\tilde Z}_i(t)}},$$
and $\tilde Z$ stops at the time
$$\tilde T=\tilde D(\tilde S)=\inf\Big\{t\ge0:\ \ell^{\tilde Z}_i(t)=\frac12\Phi_i^2\ \text{for some }i\in V\Big\}.$$
Let $P^{\tilde Z}_{\Phi,x_0,t}$ be the distribution of $(\tilde Z_s)_{s\ge0}$ on the time interval $[0,t\wedge\tilde T]$, starting from $x_0$ with initial condition $\Phi$. An easy adaptation of the proof of Theorem 3 shows

Theorem 4.
The law of the time-reversed VRJP $\tilde Z$ on the interval $[0,t\wedge\tilde T]$ is absolutely continuous with respect to the law of the MJP $X$ with rates $W_{i,j}$, with Radon-Nikodym derivative given by
$$\frac{dP^{\tilde Z}_{\Phi,x_0,t}}{dP_{x_0,t}}=e^{-\big(\mathcal{E}(\sqrt{\Phi^2-2\ell(t\wedge T)},\sqrt{\Phi^2-2\ell(t\wedge T)})-\mathcal{E}(\Phi,\Phi)\big)}\,\frac{\prod_{j\neq x_0}\Phi_j}{\prod_{j\neq X_{t\wedge T}}\sqrt{\Phi_j^2-2\ell_j(t\wedge T)}},$$
where $\ell_j(t)$ (resp. $T$) is the local time of $X$ at time $t$ and site $j$ (resp. the stopping time) defined in (1.1) (resp. in (2.3)).

Hence, the Radon-Nikodym derivative of the time-reversed VRJP with respect to the MJP is the martingale that appears in the proof of Theorem 1; more precisely,
$$\frac{dP^{\tilde Z}_{\Phi,x_0,t}}{dP_{x_0,t}}=\frac{M^\Phi_{t\wedge T}}{M^\Phi_0},$$
with the notations of Section 2. Note that this Radon-Nikodym derivative involves the martingale $M$ with positive spins $\sigma\equiv+1$. The "magnetized" reversed VRJP, defined in Section 1 and related to the inversion of Ray-Knight in Theorem 2, involves the sum of all the martingales $M^{\sigma\Phi}$: this is the purpose of the next section.

4. Proof of Lemma 1 and Theorem 2
The proofs of Lemma 1 and Theorem 2 rely on a time change of the process $\check Y$ which is in fact the same time change as the one appearing in Section 3 for $\tilde Y$: let us define
$$\check D(s)=\frac12\sum_{i\in V}\big(\Phi_i^2-\check L_i(s)^2\big),$$
define the time-changed process $\check Z_t=\check Y_{\check D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell^{\check Z}_i(t)$ be the local time of $\check Z$ at time $t$. Then, similarly to Lemma 4, conditionally on the past at time $t$, $\check Z$ jumps from $\check Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\Phi_j^2-2\ell^{\check Z}_j(t)}{\Phi_i^2-2\ell^{\check Z}_i(t)}}\,\frac{\langle\sigma_j\rangle(t)}{\langle\sigma_i\rangle(t)},$$
where we write $\langle\cdot\rangle(t)$ for $\langle\cdot\rangle_{\check D^{-1}(t)}$ according to the notation of Section 1: more precisely, $\langle\cdot\rangle(t)$ is the expectation for the Ising model with interaction
$$J_{i,j}(\check D^{-1}(t))=W_{i,j}\sqrt{\Phi_i^2-2\ell^{\check Z}_i(t)}\,\sqrt{\Phi_j^2-2\ell^{\check Z}_j(t)},$$
since the vectors of local times $\ell^{\check Z}$ and $\check L$ are related by the formula
$$\ell^{\check Z}(t)=\frac12\big(\Phi^2-\check L(\check D^{-1}(t))^2\big). \tag{4.1}$$
Clearly, this process is well defined up to the time
$$\check T=\check D(S)=\inf\Big\{t\ge0:\ \ell^{\check Z}_i(t)=\frac12\Phi_i^2\ \text{for some }i\in V\Big\}.$$
Lemma 1 tells that $\check Z_{\check T}=x_0$. We denote by $P^{\check Z}_{\Phi,z}$ the law of the process $\check Z$ starting from the initial condition $\Phi$ and initial state $z$, up to the time $\check T$ (as for $\check Y$, this law depends on the choice of $x_0$). We now prove a more precise version of Theorem 2, giving a description of the conditional law of the full process.

Theorem 5.
With the notations of Theorem 2, under $P_{x_0}(\cdot\,|\,\Phi)$, $\big((X_t)_{t\in[0,\tau_u]},\varphi\big)$ has the law of $\big((\check Z(t))_{t\in[0,T]},\sigma\Phi(T)\big)$, where $\check Z$ is distributed under $P^{\check Z}_{\Phi,x_0}$ and $\sigma$ is distributed according to the Ising model with interaction $W_{i,j}\Phi_i(T)\Phi_j(T)$ and boundary condition $\sigma_{x_0}=+1$.

We will adopt the following notation:
$$\Phi_i(t)=\sqrt{\Phi_i^2-2\ell^{\check Z}_i(t)}=\check L_i(\check D^{-1}(t)). \tag{4.2}$$
Recall that $M^{\sigma\Phi}_t$, $N^\Phi_t$ and $T$ are the processes (starting with the initial conditions $\varphi$ and $\Phi$) and stopping times defined respectively in (2.5), (2.7) and (2.3), as functions of the path of the Markov process $X$ up to time $t$. The proof of Theorem 5 is based on the following lemma.

Lemma 5.
We have:

(i) For all $t\le T$,
$$N^\Phi_t=e^{\sum_{i\in V}W_i(\ell_i(t)-\Phi_i^2/2)}\,F(\check D^{-1}(t))\,\langle\sigma_{X_t}\rangle(t)\,\frac{\prod_{j\neq x_0}\Phi_j(0)}{\prod_{j\neq X_t}\Phi_j(t)},$$
where $F(\check D^{-1}(t))$ (resp. $\langle\cdot\rangle(t)$) corresponds to the partition function (resp. distribution) of the Ising model with interaction $J_{i,j}(\check D^{-1}(t))=W_{i,j}\Phi_i(t)\Phi_j(t)$, and $W_i=\sum_{j\sim i}W_{i,j}$.

(ii) $N^\Phi_T=0$ if $X_T\neq x_0$.

(iii) Under $P_z$ (the law of the MJP $(X_t)$), $N^\Phi_{t\wedge T}$ is a positive martingale; more precisely, $N^\Phi_{t\wedge T}/N^\Phi_0$ is the Radon-Nikodym derivative of the measure $P^{\check Z}_{\Phi,z}$ with respect to the law of the MJP $X$ starting from $z$ and stopped at time $T$.

Proof of Lemma 5. (i) We expand the squares in the energy term, which yields
$$\mathcal{E}\big(\sigma\Phi(t),\sigma\Phi(t)\big)=-\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(t)\Phi_j(t)\sigma_i\sigma_j+\frac12\sum_{i\in V}W_i\big(\Phi_i^2-2\ell_i(t)\big),$$
and the statement follows easily.

(ii) Same argument as in Lemma 3. This can also be seen from the expression in (i), since in this case all the interactions between $X_T$ and its neighbours vanish; indeed, $J_{X_T,y}(T)=0$, using $\Phi_{X_T}(T)=0$. This implies that the pinning $\sigma_{x_0}=+1$ has no effect on the spin $\sigma_{X_T}$, and therefore by symmetry that $\langle\sigma_{X_T}\rangle(T)=0$ if $X_T=x\neq x_0$.

(iii) The fact that $N^\Phi_t$ is a martingale follows directly from the martingale property of the $M^{\sigma\Phi}_t$, cf. Lemma 2. It is also a consequence of the Radon-Nikodym property proved below. The fact that $N^\Phi_t$ is positive follows from the positive correlations in the Ising model: $\langle\sigma_x\rangle(t)=\langle\sigma_x\sigma_{x_0}\rangle(t)>0$, see for instance [23].

The beginning of the proof follows the same line of ideas as the proof of Theorem 3. Similarly, we set
$$\check G_i(t)=\prod_{j\neq i}\Phi_j(t)=\prod_{j\neq i}\sqrt{\Phi_j^2-2\ell^{\check Z}_j(t)},\qquad\text{so that}\qquad \frac{\Phi_j(t)}{\Phi_i(t)}=\frac{\check G_i(t)}{\check G_j(t)}.$$
First note that the probability, for the time-changed process $\check Z$, of holding at a site $v\in V$ on a time interval $[t_0,t_1]$ is
$$\exp\Big(-\int_{t_0}^{t_1}\sum_{j\sim v}W_{v,j}\,\frac{\Phi_j(u)\langle\sigma_j\rangle(u)}{\Phi_v(u)\langle\sigma_v\rangle(u)}\,du\Big).$$
Second, conditionally on $(\check Z_u,\,u\le t)$, the probability that $\check Z$ jumps from $\check Z_t=i$ to $j$ in the time interval $[t,t+dt]$ is
$$W_{ij}\,\frac{\Phi_j(t)\langle\sigma_j\rangle(t)}{\Phi_i(t)\langle\sigma_i\rangle(t)}\,dt.$$
Therefore the probability that, at time $t$, $\check Z$ has followed a path $\check Z_0=x_0,x_1,\ldots,\check Z_t=x_n$ with jump times respectively in $[t_i,t_i+dt_i]$, $i=1,\ldots,n$, where $t_0=0<t_1<\ldots<t_n<t=t_{n+1}$, with $t\le\check T$, is
$$\exp\Big(-\int_0^t\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Phi_j(u)\langle\sigma_j\rangle(u)}{\Phi_{\check Z_u}(u)\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{\Phi_{x_i}(t_i)\langle\sigma_{x_i}\rangle(t_i)}{\Phi_{x_{i-1}}(t_i)\langle\sigma_{x_{i-1}}\rangle(t_i)}\,dt_i$$
$$=\exp\Big(-\int_0^t\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Phi_j(u)\langle\sigma_j\rangle(u)}{\Phi_{\check Z_u}(u)\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big)\,\frac{\check G_{x_0}(0)}{\check G_{\check Z_t}(t)}\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{\langle\sigma_{x_i}\rangle(t_i)}{\langle\sigma_{x_{i-1}}\rangle(t_i)}\,dt_i,$$
where we use that $\check G_{x_{i-1}}(t_{i-1})=\check G_{x_{i-1}}(t_i)$, since $\check Z$ stays at site $x_{i-1}$ on the time interval $[t_{i-1},t_i]$. We now use that
$$\prod_{i=1}^n\frac{\langle\sigma_{x_i}\rangle(t_i)}{\langle\sigma_{x_{i-1}}\rangle(t_i)}=\frac{\langle\sigma_{\check Z_t}\rangle(t)}{\langle\sigma_{x_0}\rangle(0)}\prod_{i=1}^{n+1}\frac{\langle\sigma_{x_{i-1}}\rangle(t_{i-1})}{\langle\sigma_{x_{i-1}}\rangle(t_i)}=\frac{\langle\sigma_{\check Z_t}\rangle(t)}{\langle\sigma_{x_0}\rangle(0)}\exp\Big(-\int_0^t\frac{\partial_u\langle\sigma_{\check Z_u}\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big).$$
Finally, set
$$H(t)=F(\check D^{-1}(t))=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(t)\Phi_j(t)\sigma_i\sigma_j\Big)$$
and
$$K(t)=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(t)\Phi_j(t)\sigma_i\sigma_j\Big)\,\sigma_{\check Z_t}.$$
We have $\langle\sigma_{\check Z_t}\rangle(t)=K(t)/H(t)$, so that
$$\frac{\partial_u\langle\sigma_{\check Z_u}\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}=\frac{\partial_u K(u)}{K(u)}-\frac{\partial_u H(u)}{H(u)}.$$
Now, since
$$\partial_u\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(u)\Phi_j(u)\sigma_i\sigma_j=-\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Phi_j(u)}{\Phi_{\check Z_u}(u)}\,\sigma_{\check Z_u}\sigma_j,$$
we have that
$$\frac{\partial_u K(u)}{K(u)}=-\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Phi_j(u)}{\Phi_{\check Z_u}(u)}\,\frac{\langle\sigma_j\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}.$$
These identities imply that the probability that, at time $t$, $\check Z$ has followed the path above equals the probability that the MJP $X$ has followed the same path, multiplied by $N^\Phi_t/N^\Phi_0$, by the expression of $N^\Phi_t$ in (i); this proves (iii). $\square$

Proof of Theorem 5.
Let $\psi\big((X_t)_{t\in[0,\tau_u]},\varphi\big)\overset{\text{notation}}{=}\psi(X,\varphi)$ and $G(\Phi)$ be test functions. We are interested in the following expectation:
$$E_{x_0}\otimes P^{G,U}\big(\psi(X,\varphi)G(\Phi)\big)=E_{x_0}\Big(\int_{\mathbb{R}^{V\setminus\{x_0\}}}\psi(X,\varphi)\,G(\Phi)\,C\,e^{-\mathcal{E}(\varphi,\varphi)}\,d\varphi\Big), \tag{4.3}$$
where, as in the proof of Theorem 1, $C$ is the normalizing constant of the Gaussian free field. Recall that $\Phi=\sqrt{\varphi^2+2\ell(\tau_u)}$ and set $\sigma=\mathrm{sign}(\varphi)$. As in the proof of Theorem 1 we change to the variables $\Phi$. Following the computation at the beginning of the proof of Theorem 1 up to equation (2.6), we deduce that (4.3) is equal to
$$C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,E_{x_0}\Big(\sum_\sigma\psi\big(X,\sigma\Phi(T)\big)\,M^{\sigma\Phi}_T\,\mathbb{1}_{\{X_T=x_0\}}\Big)\,d\Phi. \tag{4.4}$$
If $X_T=x_0$ then, using that $\sigma_{X_T}=\sigma_{x_0}=1$ and the expansion in the proof of Lemma 5 (i), we deduce that
$$M^{\sigma\Phi}_T=N^\Phi_T\,\frac{e^{\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(T)\Phi_j(T)\sigma_i\sigma_j}}{F(\check D^{-1}(T))}$$
and, therefore,
$$\sum_\sigma\psi\big(X,\sigma\Phi(T)\big)\,M^{\sigma\Phi}_T=\frac{N^\Phi_T}{F(\check D^{-1}(T))}\sum_\sigma\psi\big(X,\sigma\Phi(T)\big)\,e^{\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(T)\Phi_j(T)\sigma_i\sigma_j}=N^\Phi_T\,\big\langle\psi\big(X,\sigma\Phi(T)\big)\big\rangle(T).$$
This implies that
$$(4.4)=C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,E_{x_0}\Big(\big\langle\psi\big(X,\sigma\Phi(T)\big)\big\rangle(T)\,N^\Phi_T\,\mathbb{1}_{\{X_T=x_0\}}\Big)\,d\Phi=C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,N^\Phi_0\,E^{\check Z}_{\Phi,x_0}\Big(\big\langle\psi\big(\check Z,\sigma\Phi(T)\big)\big\rangle(T)\Big)\,d\Phi,$$
where in the last equality we used Lemma 5 (ii)-(iii). Since
$$N^\Phi_0=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\big\{-\mathcal{E}(\sigma\Phi,\sigma\Phi)\big\},$$
$CN^\Phi_0$ is the density of $\Phi$, since by Theorem 1 we have $\Phi\overset{\text{law}}{=}|\varphi+\sqrt{2u}|$ where $\varphi$ has the law of the Gaussian free field $P^{G,U}$. This exactly means that
$$E_{x_0}\otimes P^{G,U}\big(\psi(X,\varphi)\,|\,\Phi\big)=E^{\check Z}_{\Phi,x_0}\Big(\big\langle\psi\big(\check Z,\sigma\Phi(T)\big)\big\rangle(T)\Big). \qquad\square$$

Proof of Theorem 2.
From Theorem 5, we know that, conditionally on $\Phi$, $(\ell,\varphi)$ has the law of $\big(\ell(T),\sigma\sqrt{\Phi^2-2\ell(T)}\big)$, where $\ell(T)$ is the local time of $\check Z$ under $P^{\check Z}_{\Phi,x_0}$. If we change back to the process $\check Y$ we have, using (4.1), $\check L(S)=\sqrt{\Phi^2-2\ell(T)}$; hence $\mathcal{L}\big((\ell,\varphi)\,|\,\Phi\big)$ is the law of $\big(\frac12(\Phi^2-\check L(S)^2),\,\sigma\check L(S)\big)$ for the initial conditions $(\Phi,x_0)$. $\square$

5. Inversion of the generalized first Ray-Knight theorem
We use the same notation as in the first section. The generalized first Ray-Knight theorem concerns the local time of the Markov jump process starting at a point $z\neq x_0$, stopped at its first hitting time of $x_0$. Denote by
$$H_{x_0}=\inf\{t\ge0:\ X_t=x_0\}$$
the first hitting time of $x_0$.

Theorem 6.
For any $z\in V$ and any $s>0$,
$$\Big(\ell_x(H_{x_0})+\frac12(\varphi_x+s)^2\Big)_{x\in V}\quad\text{under } P_z\otimes P^{G,U}$$
has the same "law" as
$$\Big(\frac12(\varphi_x+s)^2\Big)_{x\in V}\quad\text{under } \Big(1+\frac{\varphi_z}{s}\Big)P^{G,U}.$$

Remark 2.
This theorem is in general stated for $s=0$, but obviously we do not lose generality by restricting to $s>0$. It formally means that, for any test function $g$,
$$\int g\Big(\Big(\ell_x(H_{x_0})+\frac12(\varphi_x+s)^2\Big)_{x\in V}\Big)\,dP_z\otimes P^{G,U}=\int g\Big(\Big(\frac12(\varphi_x+s)^2\Big)_{x\in V}\Big)\Big(1+\frac{\varphi_z}{s}\Big)\,dP^{G,U}. \tag{5.1}$$
Remark that the measure $(1+\varphi_z/s)P^{G,U}$ has mass $1$ (since $\varphi_z$ is centered) but is not positive. In fact, since the integrand depends only on $|\varphi_x+s|$, $x\in V$, everything can be written in terms of a positive measure. Indeed, if $\sigma_x=\mathrm{sign}(\varphi_x+s)$, then, conditionally on $(|\varphi_x+s|)_{x\in V}$, $\sigma$ has the law of an Ising model with interaction $J_{i,j}=W_{i,j}|\varphi_i+s|\,|\varphi_j+s|$ and boundary condition $\sigma_{x_0}=+1$. This implies that the right-hand side of (5.1) can be written equivalently as
$$\int g\Big(\Big(\frac12(\varphi_x+s)^2\Big)_{x\in V}\Big)\,\frac{\langle\sigma_z\rangle}{s}\,|s+\varphi_z|\,dP^{G,U},$$
where $\langle\sigma_z\rangle$ denotes the expectation of $\sigma_z$ with respect to the Ising model described above. Since $\sigma_{x_0}=+1$, we have $\langle\sigma_z\rangle/s>0$, and $\frac{\langle\sigma_z\rangle}{s}|s+\varphi_z|\,dP^{G,U}$ is a probability measure.

We now give a counterpart of Theorem 2 for the generalized first Ray-Knight theorem. Consider the process $\check Y$ defined in Section 1, starting from a point $z$. Denote by $\check H_{x_0}$ the first hitting time of $x_0$ by the process $\check Y$. Obviously, Lemma 1 implies the following lemma.

Lemma 6.
Almost surely $\check H_{x_0}\le S$, where $S$ is defined in (1.2).

Theorem 7.
With the notation of Theorem 6, let $\Phi_x=\sqrt{(\varphi_x+s)^2+2\ell_x(H_{x_0})}$, $x\in V$. Under $P_z\otimes P^{G,U}$, we have
$$\mathcal{L}(\varphi+s\,|\,\Phi)\overset{\text{law}}{=}\big(\sigma\check L(\check H_{x_0})\big),$$
where $\check L(\check H_{x_0})$ is distributed under $P^{\check Y}_{\Phi,z}$ and, conditionally on $\check L(\check H_{x_0})$, $\sigma$ is distributed according to the distribution of the Ising model with interaction $J_{i,j}(\check H_{x_0})=W_{i,j}\check L_i(\check H_{x_0})\check L_j(\check H_{x_0})$ and boundary condition $\sigma_{x_0}=+1$.

Similarly as for the generalized second Ray-Knight theorem, Theorem 7 is a consequence of the following more precise result. Let us consider, as in Section 4, the time-changed version $\check Z$ of the process $\check Y$.

Theorem 8.
With the notation of Theorem 7, under $P_z(\cdot\,|\,\Phi)$, $\big((X_t)_{t\in[0,H_{x_0}]},\varphi+s\big)$ has the law of $\big((\check Z(t))_{t\in[0,\check H_{x_0}]},\sigma\Phi(\check H_{x_0})\big)$, where $\check Z$ is distributed under $P^{\check Z}_{\Phi,z}$, $\check H_{x_0}$ is the first hitting time of $x_0$ by $\check Z$, and $\sigma$ is distributed according to the Ising model with interaction $W_{i,j}\Phi_i(\check H_{x_0})\Phi_j(\check H_{x_0})$ and boundary condition $\sigma_{x_0}=+1$.

Proof. We only sketch the proof, since it is very similar to the proof of Theorem 5. Let $\psi\big((X_t)_{t\in[0,H_{x_0}]},\varphi+s\big)\overset{\text{notation}}{=}\psi(X,\varphi+s)$ and $G(\Phi)$ be positive test functions. We are interested in the following expectation:
$$E_z\otimes P^{G,U}\big(\psi(X,\varphi+s)G(\Phi)\big)=E_z\Big(\int_{\mathbb{R}^{V\setminus\{x_0\}}}\psi(X,\varphi+s)\,G(\Phi)\,C\,e^{-\mathcal{E}(\varphi,\varphi)}\,d\varphi\Big), \tag{5.2}$$
where, as in the proof of Theorem 1, $C$ is the normalizing constant of the Gaussian free field. Recall that $\Phi=\sqrt{(\varphi+s)^2+2\ell(H_{x_0})}$, set $\sigma=\mathrm{sign}(\varphi+s)$ and define $T'=T\wedge H_{x_0}$. As in the proof of Theorem 1 we change to the variables $\Phi$. An easy adaptation of the computation in the proof of Theorem 1 up to equation (2.6) yields that (5.2) is equal to
$$C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,E_z\Big(\sum_\sigma\psi\big(X,\sigma\Phi(T')\big)\,M^{\sigma\Phi}_{T'}\,\mathbb{1}_{\{X_{T'}=x_0\}}\Big)\,d\Phi. \tag{5.3}$$
As in the proof of Theorem 5 we have that, if $X_{T'}=x_0$, then
$$\sum_\sigma\psi\big(X,\sigma\Phi(T')\big)\,M^{\sigma\Phi}_{T'}=\frac{N^\Phi_{T'}}{F(\check D^{-1}(T'))}\sum_\sigma\psi\big(X,\sigma\Phi(T')\big)\,e^{\sum_{\{i,j\}\in E}W_{i,j}\Phi_i(T')\Phi_j(T')\sigma_i\sigma_j}=N^\Phi_{T'}\,\big\langle\psi\big(X,\sigma\Phi(T')\big)\big\rangle(T').$$
This implies that
$$(5.3)=C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,E_z\Big(\big\langle\psi\big(X,\sigma\Phi(T')\big)\big\rangle(T')\,N^\Phi_{T'}\,\mathbb{1}_{\{X_{T'}=x_0\}}\Big)\,d\Phi=C\int_{\mathbb{R}^{V\setminus\{x_0\}}_+}G(\Phi)\,N^\Phi_0\,E^{\check Z}_{\Phi,z}\Big(\big\langle\psi\big(\check Z,\sigma\Phi(T')\big)\big\rangle(T')\Big)\,d\Phi,$$
using in the last equality an easy adaptation of Lemma 5 (ii)-(iii) for the time $T'$. Now
$$N^\Phi_0=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\Big(\frac{\sigma_z\Phi_z}{s}\Big)\exp\big\{-\mathcal{E}(\sigma\Phi,\sigma\Phi)\big\},$$
which implies that $CN^\Phi_0$ is the density of $\Phi$, since by Theorem 6 we have $\Phi\overset{\text{law}}{=}|\varphi+s|$ under $(1+\varphi_z/s)P^{G,U}$. This exactly means that
$$E_z\otimes P^{G,U}\big(\psi(X,\varphi+s)\,|\,\Phi\big)=E^{\check Z}_{\Phi,z}\Big(\big\langle\psi\big(\check Z,\sigma\Phi(T')\big)\big\rangle(T')\Big). \qquad\square$$

Acknowledgment.
We are grateful to Alain-Sol Sznitman and Jay Rosen for several useful comments on a first version of the manuscript. We also thank Yuval Peres for interesting discussions.
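The identity used at the end of the proof, that $CN^{\Phi}_{0}$ is the density of $\Phi$, rests on the substitution $\varphi + s = \sigma\Phi$ together with the shift invariance of the Dirichlet form: summing the tilted Gaussian weight $(1+\varphi_z/s)\,e^{-\mathcal{E}(\varphi,\varphi)}$ over the sign patterns $\sigma$ gives $\sum_\sigma (\sigma_z\Phi_z/s)\,e^{-\mathcal{E}(\sigma\Phi,\sigma\Phi)}$, since $\mathcal{E}(\sigma\Phi - s, \sigma\Phi - s) = \mathcal{E}(\sigma\Phi, \sigma\Phi)$ and $1 + \varphi_z/s = \sigma_z\Phi_z/s$. The following Python sketch checks this term-by-term identity by brute force on a toy three-vertex graph; the graph, conductances and field values are arbitrary illustrative choices, not taken from the text.

```python
import itertools
import math

# Toy setup (illustrative only): V = {x, a, b}, pinned point x, z = a,
# arbitrary conductances W, and a positive field Phi with Phi_x = s
# (since ell_x(H_x) = 0 and phi_x = 0).
V = ["x", "a", "b"]
W = {("x", "a"): 1.0, ("a", "b"): 2.0, ("x", "b"): 0.5}
s = 1.5
Phi = {"x": s, "a": 0.7, "b": 1.3}

def dirichlet(f):
    # Dirichlet form E(f, f) = sum over edges of W_ij (f_i - f_j)^2
    # (equal to (1/2) * sum over ordered pairs, as in the introduction).
    return sum(w * (f[i] - f[j]) ** 2 for (i, j), w in W.items())

lhs = 0.0  # sum over sign patterns of the tilted weight of phi = sigma*Phi - s
rhs = 0.0  # N^Phi_0 as written in the proof
for signs in itertools.product([1, -1], repeat=2):
    sigma = {"x": 1, "a": signs[0], "b": signs[1]}  # boundary condition sigma_x = +1
    phi = {v: sigma[v] * Phi[v] - s for v in V}     # phi + s = sigma * Phi, so phi_x = 0
    lhs += (1 + phi["a"] / s) * math.exp(-dirichlet(phi))
    rhs += (sigma["a"] * Phi["a"] / s) * math.exp(
        -dirichlet({v: sigma[v] * Phi[v] for v in V})
    )

# The two sums agree term by term: E is invariant under adding the constant s,
# and 1 + phi_z/s = sigma_z * Phi_z / s.
assert abs(lhs - rhs) < 1e-12
```

The identity holds pointwise in $\Phi$, which is what makes $CN^{\Phi}_{0}$ a density rather than merely a normalizing constant.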
Université de Lyon, Université Lyon 1, Institut Camille Jordan, CNRS UMR 5208, 43, Boulevard du 11 novembre 1918, 69622 Villeurbanne Cedex, France
E-mail address: [email protected]

Ceremade, CNRS UMR 7534 and Université Paris-Dauphine, Place de Lattre de Tassigny, 75775 Paris Cedex 16, France.
E-mail address: