Inverting the coupling of the signed Gaussian free field with a loop soup
TITUS LUPU, CHRISTOPHE SABOT, AND PIERRE TARRÈS
Abstract.
Lupu introduced a coupling between a random walk loop soup and a Gaussian free field, where the sign of the field is constant on each cluster of loops. This coupling is a signed version of isomorphism theorems relating the square of the GFF to the occupation field of Markovian trajectories. His construction starts with a loop soup and, by adding additional randomness, samples a GFF out of it. In this article we provide the inverse construction: starting from a signed free field and using a self-interacting random walk related to this field, we construct a random walk loop soup. Our construction relies on the previous work by Sabot and Tarrès, which inverts the coupling from the square of the GFF rather than the signed GFF itself. As a consequence, we also deduce an inversion of the coupling between the random current and the FK-Ising random cluster models introduced by Lupu and Werner.

1. Introduction
Let G = (V, E) be a connected undirected graph, with V at most countable and each vertex x ∈ V of finite degree. We do not allow self-loops; the edges, however, may be multiple. Given an edge e ∈ E, we denote by e+ and e− its end-vertices, even though e is non-oriented and one can interchange e+ and e−. Each edge e ∈ E is endowed with a conductance W_e > 0, and there is a killing measure κ = (κ_x)_{x∈V} on vertices. We consider (X_t)_{t≥0} the Markov jump process on V which, being at x ∈ V, jumps along an adjacent edge e with rate W_e. Moreover, if κ_x > 0, the process is killed at x with rate κ_x (the process is not defined after that time). ζ will denote the time up to which X_t is defined. If ζ < +∞, then either the process has been killed by the killing measure κ (and κ ≢ 0), or it has gone off to infinity in finite time (and V is infinite). We will assume that the process X is transient, which means, if V is finite, that κ ≢ 0. P_x will denote the law of X started from x. Let (G(x, y))_{x,y∈V} be the Green function of X_t:

G(x, y) = G(y, x) = E_x[ ∫_0^ζ 1_{{X_t = y}} dt ].

Let E be the Dirichlet form defined on functions f on V with finite support:

(1.1)    E(f, f) = Σ_{x∈V} κ_x f(x)² + Σ_{e∈E} W_e ( f(e+) − f(e−) )².

P^φ will be the law of (φ_x)_{x∈V}, the centred Gaussian free field (GFF) on V with covariance E^φ[φ_x φ_y] = G(x, y). In case V is finite, the density of P^φ is

1 / ( (2π)^{|V|/2} √(det G) ) · exp( −½ E(f, f) ) Π_{x∈V} df_x.

Mathematics Subject Classification. primary 60J27; 60J55; secondary 60K35; 82B20; 81T25; 81T60.
Key words and phrases.
Gaussian free field; Ray-Knight identity; self-interacting processes; loop soups; random currents; Ising model.
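The finite-graph setup above can be made concrete numerically. The following is a minimal sketch, not part of the paper (the toy conductances and killing rates are made up): the matrix of the Dirichlet form (1.1) has diagonal entries κ_x + Σ_{y∼x} W_{xy} and off-diagonal entries −W_{xy}; its inverse is the Green function G, and a centred Gaussian vector with covariance G samples the GFF P^φ.

```python
import numpy as np

# Toy graph: V = {0, 1, 2}, edges 0-1 and 1-2 with conductances W,
# and a killing measure kappa (hypothetical values).
W = {(0, 1): 1.5, (1, 2): 0.7}
kappa = np.array([0.3, 0.0, 0.2])
n = 3

# Matrix of the Dirichlet form: A[x][x] = kappa_x + sum of adjacent
# conductances, A[x][y] = -W_{xy}.  Its inverse is the Green function.
A = np.diag(kappa).astype(float)
for (x, y), w in W.items():
    A[x, x] += w
    A[y, y] += w
    A[x, y] -= w
    A[y, x] -= w

G = np.linalg.inv(A)                      # Green function G(x, y)

# Sample the centred GFF with covariance G and compare the empirical
# covariance to G (Monte Carlo check).
rng = np.random.default_rng(0)
phi = rng.multivariate_normal(np.zeros(n), G, size=200_000)
emp_cov = phi.T @ phi / len(phi)
print(np.abs(emp_cov - G).max())          # Monte Carlo error, close to 0
```

The transience assumption corresponds to A being positive definite, so that G is well defined.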
Given U a finite subset of V and f a function on U, P^{U,f}_φ will denote the law of the GFF φ conditioned to be equal to f on U. (ℓ_x(t))_{x∈V, t∈[0,ζ]} will denote the family of local times of X:

ℓ_x(t) = ∫_0^t 1_{{X_s = x}} ds.

For all x ∈ V and u > 0, let

τ^x_u = inf{ t ≥ 0 : ℓ_x(t) > u }.

Recall the generalized second Ray-Knight theorem on discrete graphs by Eisenbaum, Kaspi, Marcus, Rosen and Shi [2] (see also [8, 10]):
Generalized second Ray-Knight theorem.
For any u > 0 and x_0 ∈ V,

( ℓ_x(τ^{x_0}_u) + ½ φ_x² )_{x∈V}  under  P_{x_0}( · | τ^{x_0}_u < ζ ) ⊗ P^{{x_0},0}_φ

has the same law as

( ½ φ_x² )_{x∈V}  under  P^{{x_0},√(2u)}_φ.

Sabot and Tarrès showed in [9] that the so-called "magnetized" reverse Vertex-Reinforced Jump Process provides an inversion of the generalized second Ray-Knight theorem, in the sense that it enables one to retrieve the law of (ℓ_x(τ^{x_0}_u), φ_x)_{x∈V} conditioned on (ℓ_x(τ^{x_0}_u) + ½φ_x²)_{x∈V}. The jump rates of the latter process can be interpreted as the two-point functions of the Ising model associated to the time-evolving weights. However, in [9] the link with the Ising model is only implicit, and a natural question is whether the Ray-Knight inversion can be described in a simpler form if we enlarge the state space of the dynamics, and in particular include the "hidden" spin variables.

The answer is positive, and goes through an extension of the Ray-Knight isomorphism introduced by Lupu [6], which couples the sign of the GFF to the path of the Markov chain. The Ray-Knight inversion will turn out to take a rather simple form in Theorem 3 of the present paper, where it will be defined not only through the spin variables but also through random currents associated to the field via an extra Poisson point process.

The paper is organised as follows. In Section 2 we recall some background on loop soup isomorphisms and on related couplings, and we state and prove a signed version of the generalized second Ray-Knight theorem. We begin in Section 2.1 with a statement of Le Jan's isomorphism, which couples the square of the Gaussian free field to loop soups, and recall how the generalized second Ray-Knight theorem can be seen as its corollary; for more details see [4]. In Subsection 2.2 we state Lupu's isomorphism, which extends Le Jan's isomorphism and couples the sign of the GFF to the loop soups, using a cable graph extension of the GFF and of the Markov chain.
Lupu's isomorphism yields an interesting realisation of the well-known FK-Ising coupling, and provides as well a "Current+Bernoulli=FK" coupling lemma [7], which occurs in the relationship between the discrete and cable graph versions. We briefly recall those couplings in Sections 2.3 and 2.4, as they are implicit in this paper. In Section 2.5 we state and prove the generalized second Ray-Knight "version" of Lupu's isomorphism, which we aim to invert.

Section 3 is devoted to the statements of the inversions of those isomorphisms. We state in Section 3.1 a signed version of the inversion of the generalized second Ray-Knight theorem through an extra Poisson point process, namely Theorem 3. In Section 3.2 we provide a discrete-time description of the process, whereas in Section 3.3 we give an alternative description of that process through jump rates, which can be seen as an annealed version of the first one.
We deduce a signed inversion of Le Jan's isomorphism for loop soups in Section 3.4, and an inversion of the coupling of random current with FK-Ising in Section 3.5. Finally, Section 4 is devoted to the proof of Theorem 3: Section 4.1 deals with the case of a finite graph without killing measure, and Section 4.2 deduces the proof in the general case.

2. Le Jan's and Lupu's isomorphisms

2.1. Loop soups and Le Jan's isomorphism.
The loop measure associated to the Markov jump process (X_t)_{0≤t<ζ} is defined as follows. Let P^t_{x,y} be the bridge probability measure from x to y in time t (conditioned on t < ζ). Let p_t(x, y) be the transition probabilities of (X_t)_{0≤t<ζ}. Let μ_loop be the measure on time-parametrised nearest-neighbour based loops (i.e. loops with a starting site)

μ_loop = Σ_{x∈V} ∫_{t>0} P^t_{x,x} p_t(x, x) dt/t.

The loops will be considered here up to rotation of the parametrisation (with the corresponding pushforward measure induced by μ_loop); that is to say, a loop (γ(t))_{0≤t≤t_γ} will be the same as (γ(T+t))_{0≤t≤t_γ−T} ∘ (γ(T+t−t_γ))_{t_γ−T≤t≤t_γ}, where ∘ denotes the concatenation of paths. A loop soup of intensity α > 0, denoted L_α, is a Poisson random measure of intensity α μ_loop. We see it as a random collection of loops in G. Observe that a.s. above each vertex x ∈ V, L_α contains infinitely many trivial "loops" reduced to the vertex x. There are also, with positive probability, non-trivial loops that visit several vertices. Let L(L_α) be the occupation field of L_α on V, i.e., for all x ∈ V,

L_x(L_α) = Σ_{(γ(t))_{0≤t≤t_γ} ∈ L_α} ∫_0^{t_γ} 1_{{γ(t)=x}} dt.

In [3] Le Jan shows that for transient Markov jump processes, L_x(L_α) < +∞ for all x ∈ V a.s. For α = 1/2 he identifies the law of L(L_{1/2}):

Le Jan's isomorphism. L(L_{1/2}) = (L_x(L_{1/2}))_{x∈V} has the same law as ½φ² = (½φ_x²)_{x∈V} under P^φ.

Let us briefly recall how Le Jan's isomorphism enables one to retrieve the generalized second Ray-Knight theorem stated in Section 1; for more details, see for instance [4]. We assume that κ is supported by x_0: the general case can be dealt with by an argument similar to the proof of Proposition 4.6. Let D = V ∖ {x_0}, and note that the isomorphism in particular implies that L(L_{1/2}), conditionally on L_{x_0}(L_{1/2}) = u, has the same law as ½φ² conditionally on ½φ_{x_0}² = u.

On the one hand, given the classical energy decomposition, we have φ = φ^D + φ_{x_0}, with φ^D the GFF associated to the restriction of E to D, where φ^D and φ_{x_0} are independent. Now ½φ², conditionally on ½φ_{x_0}² = u, has the law of ½(φ^D + η√(2u))², where η is the sign of φ_{x_0}, which is independent of φ^D. But φ^D is symmetric, so that the latter also has the law of ½(φ^D + √(2u))². On the other hand, L_{1/2} can be decomposed into the two independent loop soups L^D_{1/2}, contained in D, and L^{(x_0)}_{1/2}, hitting x_0. Now L(L^D_{1/2}) has the law of ½(φ^D)², and L(L^{(x_0)}_{1/2}), conditionally on L_{x_0}(L^{(x_0)}_{1/2}) = u, has the law of the occupation field of the Markov chain ℓ(τ^{x_0}_u) under P_{x_0}( · | τ^{x_0}_u < ζ ), which enables us to conclude.
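Le Jan's identity can be sanity-checked in the simplest possible case: a single vertex x_0 with killing rate κ and no edges, so that G(x_0, x_0) = 1/κ. The occupation field L_{x_0}(L_{1/2}) is then a Gamma(1/2, G(x_0, x_0)) variable (shape 1/2, scale G), which should match ½φ² with φ ∼ N(0, 1/κ). The sketch below simply compares the two laws from the statement by Monte Carlo; it is not a loop-soup simulation, and the numerical values are hypothetical.

```python
import numpy as np

# One vertex x0, killing rate kappa, no edges: G(x0, x0) = 1/kappa.
rng = np.random.default_rng(1)
kappa = 2.0
N = 400_000

# Law of (1/2) phi^2 with phi ~ N(0, 1/kappa) ...
half_phi_sq = 0.5 * rng.normal(0.0, np.sqrt(1.0 / kappa), size=N) ** 2
# ... versus Gamma(shape 1/2, scale G(x0, x0)) for the occupation field.
occupation = rng.gamma(shape=0.5, scale=1.0 / kappa, size=N)

print(half_phi_sq.mean(), occupation.mean())  # both close to 1/(2*kappa) = 0.25
print(half_phi_sq.var(), occupation.var())    # both close to 1/(2*kappa**2) = 0.125
```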
2.2. Lupu's isomorphism.
As in [6], we consider the metric graph G̃ associated to G. Each edge e is replaced by a continuous line of length (2W_e)^{−1}. The GFF φ on G with law P^φ can be extended to a GFF φ̃ on G̃ as follows. Given e ∈ E, one considers inside e a conditionally independent Brownian bridge, actually a bridge of a √2 × standard Brownian motion, of length (2W_e)^{−1}, with end-values φ_{e−} and φ_{e+}. This provides a continuous field on the metric graph which satisfies the spatial Markov property. Similarly, one can define a standard Brownian motion (B^{G̃}_t)_{0≤t≤ζ̃} on G̃, whose trace on V, indexed by the local times at V, has the same law as the Markov jump process (X_t)_{t≥0} on V with jump rate W_e along an adjacent edge e, up to time ζ, as explained in Section 2 of [6]. One can associate a measure μ̃ on time-parametrized continuous loops, and let L̃ be the Poisson point process of loops of intensity μ̃/2: the discrete-time loop soup L_{1/2} can be obtained from L̃ by taking the print of the latter on V.

Lupu introduced in [6] an isomorphism linking the GFF φ̃ and the loop soup L̃ on G̃.

Theorem 1 (Lupu's Isomorphism, [6]). There is a coupling between the Poisson ensemble of loops L̃ and (φ̃_y)_{y∈G̃} defined above, such that the two following constraints hold:
• for all y ∈ G̃, L_y(L̃) = ½φ̃_y²;
• the clusters of loops of L̃ are exactly the sign clusters of (φ̃_y)_{y∈G̃}.
Conditionally on (|φ̃_y|)_{y∈G̃}, the sign of φ̃ on each of its connected components is distributed independently and uniformly in {−1, +1}.

Lupu's isomorphism and the idea of using metric graphs were applied in [5] to show that on the discrete half-plane Z × N, the scaling limits of outermost boundaries of clusters of loops in loop soups are the Conformal Loop Ensembles CLE₄.

Let O(φ̃) (resp. O(L̃)) be the set of edges e ∈ E such that φ̃ (resp. L̃) does not touch 0 on e, in other words such that the whole edge e remains in the same sign cluster of φ̃ (resp. the same cluster of L̃). Let O(L_{1/2}) be the set of edges e ∈ E that are crossed (i.e. visited consecutively) by the trace of the loops L_{1/2} on V. In order to translate Lupu's isomorphism back onto the initial graph G, one needs to describe on one hand the distribution of O(φ̃) conditionally on the values of φ, and on the other hand the distribution of O(L̃) conditionally on L_{1/2} and the crossed edges O(L_{1/2}) on the discrete graph G. These two distributions are described respectively in Subsections 2.3 and 2.4, and provide realisations of the FK-Ising coupling and of the "Current+Bernoulli=FK" coupling lemma [7].

2.3. The FK-Ising distribution of O(φ̃) conditionally on |φ|.

Lemma 2.1. Conditionally on (φ_x)_{x∈V}, (1_{e∈O(φ̃)})_{e∈E} is a family of independent random variables, and

P( e ∉ O(φ̃) | φ ) = 1                          if φ_{e−} φ_{e+} < 0,
P( e ∉ O(φ̃) | φ ) = exp( −2 W_e φ_{e−} φ_{e+} )   if φ_{e−} φ_{e+} > 0.

Proof.
Conditionally on (φ_x)_{x∈V}, the interpolating bridges are constructed as independent Brownian bridges on each edge, so that the (1_{e∈O(φ̃)})_{e∈E} are independent random variables, and it follows from the reflection principle that, if φ_{e−} φ_{e+} > 0, then

P( e ∉ O(φ̃) | φ ) = exp( −½ W_e (φ_{e−} + φ_{e+})² ) / exp( −½ W_e (φ_{e−} − φ_{e+})² ) = exp( −2 W_e φ_{e−} φ_{e+} ). □
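The conditional law of Lemma 2.1 is straightforward to sample. The sketch below (illustrative helper names, toy values; not from the paper) returns, given the discrete field, a random set of edges distributed as O(φ̃):

```python
import math
import random

def open_prob(W_e, phi_minus, phi_plus):
    """Conditional probability that e is in O(phi~) given phi (Lemma 2.1):
    edges whose end-values have opposite signs are never open; otherwise
    the edge is open with probability 1 - exp(-2 W_e phi_- phi_+)."""
    if phi_minus * phi_plus < 0:
        return 0.0
    return 1.0 - math.exp(-2.0 * W_e * phi_minus * phi_plus)

def sample_O(edges, W, phi, rng):
    """Sample a random edge set with the law of O(phi~) given phi.
    `edges` is a list of vertex pairs; W and phi are dicts (toy data)."""
    return {e for e in edges
            if rng.random() < open_prob(W[e], phi[e[0]], phi[e[1]])}

phi = {0: 1.2, 1: 0.8, 2: -0.5}
W = {(0, 1): 1.0, (1, 2): 2.0}
# Edge (1, 2) joins opposite signs, so it can never be open.
print(sample_O([(0, 1), (1, 2)], W, phi, random.Random(0)))
```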
Let us now recall how the conditional probability in Lemma 2.1 yields a realisation of the FK-Ising coupling. Assume V is finite. Let (J_e)_{e∈E} be a family of positive weights. An Ising model on V with interaction constants (J_e)_{e∈E} is a probability on configurations of spins (σ_x)_{x∈V} ∈ {+1, −1}^V such that

P^Isg_J( (σ_x)_{x∈V} ) = (1/Z^Isg_J) exp( Σ_{e∈E} J_e σ_{e+} σ_{e−} ).

An FK-Ising random cluster model with weights (1 − e^{−2J_e})_{e∈E} is a random configuration of open (value 1) and closed (value 0) edges such that

P^{FK-Isg}_J( (ω_e)_{e∈E} ) = (1/Z^{FK-Isg}_J) 2^{♯clusters} Π_{e∈E} (1 − e^{−2J_e})^{ω_e} (e^{−2J_e})^{1−ω_e},

where ♯clusters denotes the number of clusters created by the open edges. The well-known FK-Ising and Ising coupling reads as follows.

Proposition 2.2 (FK-Ising and Ising coupling). Given an FK-Ising model, sample on each cluster an independent uniformly distributed spin. The spins are then distributed according to the Ising model. Conversely, given a spin configuration σ̂ following the Ising distribution, declare each edge e such that σ̂_{e−} σ̂_{e+} < 0 closed, and each edge e such that σ̂_{e−} σ̂_{e+} > 0 open with probability 1 − e^{−2J_e}. Then the open edges are distributed according to the FK-Ising model. The two couplings between FK-Ising and Ising are the same.

Consider the GFF φ on G distributed according to P^φ. Let J_e(|φ|) be the random interaction constants

J_e(|φ|) = W_e |φ_{e−} φ_{e+}|.

Conditionally on |φ|, (sign(φ_x))_{x∈V} follows an Ising distribution with interaction constants (J_e(|φ|))_{e∈E}: indeed, the Dirichlet form (1.1) can be written as

E(φ, φ) = Σ_{x∈V} κ_x φ(x)² + Σ_{x∈V} φ(x)² ( Σ_{y∼x} W_{x,y} ) − 2 Σ_{e∈E} J_e(|φ|) sign(φ(e+)) sign(φ(e−)).
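The FK-Ising-to-Ising direction of Proposition 2.2 amounts to assigning one independent uniform spin per cluster of open edges. A minimal union-find sketch (hypothetical helper names, not part of the paper):

```python
import random

def clusters(n_vertices, open_edges):
    """Union-find: root label of each vertex's cluster under the open edges."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    for x, y in open_edges:
        parent[find(x)] = find(y)
    return [find(x) for x in range(n_vertices)]

def spins_from_fk(n_vertices, open_edges, rng):
    """FK-Ising -> Ising direction of Proposition 2.2: one independent
    uniform spin per cluster, constant on that cluster."""
    roots = clusters(n_vertices, open_edges)
    spin_of_root = {r: rng.choice([-1, 1]) for r in set(roots)}
    return [spin_of_root[r] for r in roots]

sigma = spins_from_fk(5, [(0, 1), (1, 2)], random.Random(3))
print(sigma)  # spins agree on the cluster {0, 1, 2}; vertices 3 and 4 are free
```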
Similarly, when φ ∼ P^{{x_0},√(2u)}_φ has boundary condition √(2u) ≥ 0 at x_0, (sign(φ_x))_{x∈V} has an Ising distribution with interaction constants (J_e(|φ|))_{e∈E}, conditioned on σ_{x_0} = +1. Now, conditionally on φ, O(φ̃) has FK-Ising distribution with weights (1 − e^{−2J_e(|φ|)})_{e∈E}. Indeed, the probability that e ∈ O(φ̃) conditionally on φ is 1 − e^{−2J_e(|φ|)}, by Lemma 2.1, as in Proposition 2.2. Note that, given that O(φ̃) has FK-Ising distribution, the fact that the sign of φ̃ on its connected components is distributed independently and uniformly in {−1, +1} can be seen either as a consequence of Proposition 2.2, or from Theorem 1. Given φ = (φ_x)_{x∈V} on the discrete graph G, we introduce in Definition 2.1 a random set of edges which has the distribution of O(φ̃) conditionally on φ = (φ_x)_{x∈V}.

Definition 2.1.
We let O(φ) be a random set of edges which has the distribution of O(φ̃) conditionally on φ = (φ_x)_{x∈V}, given by Lemma 2.1.

2.4. Distribution of O(L̃) conditionally on L_{1/2}. The distribution of O(L̃) conditionally on L_{1/2} can be retrieved from Corollary 3.6 in [6], which reads as follows.
Lemma 2.3 (Corollary 3.6 in [6]). Conditionally on L_{1/2}, the events {e ∉ O(L̃)}, for e ∈ E ∖ O(L_{1/2}), are independent and have probability

(2.1)    exp( −2 W_e √( L_{e+}(L_{1/2}) L_{e−}(L_{1/2}) ) ).

This result gives rise, together with Theorem 1, to the following discrete version of Lupu's isomorphism, which is stated without any recourse to the cable graph induced by G.

Definition 2.2. Let (ω_e)_{e∈E} ∈ {0, 1}^E be a percolation defined as follows: conditionally on L_{1/2}, the random variables (ω_e)_{e∈E} are independent, and ω_e equals 0 with the conditional probability given by (2.1). Let O+(L_{1/2}) be the set of edges

O+(L_{1/2}) = O(L_{1/2}) ∪ { e ∈ E : ω_e = 1 }.

Proposition 2.4 (Discrete version of Lupu's isomorphism, Theorem 1 bis in [6]). Given a loop soup L_{1/2}, let O+(L_{1/2}) be as in Definition 2.2. Let (σ_x)_{x∈V} ∈ {−1, +1}^V be random spins taking constant values on the clusters induced by O+(L_{1/2}) (σ_{e−} = σ_{e+} if e ∈ O+(L_{1/2})), and such that the values on each cluster, conditionally on L_{1/2} and O+(L_{1/2}), are independent and uniformly distributed. Then

( σ_x √( 2 L_x(L_{1/2}) ) )_{x∈V}

is a Gaussian free field distributed according to P^φ.

Proposition 2.4 induces the following coupling between FK-Ising and random currents. If V is finite, a random current model on G with weights (J_e)_{e∈E} is a random assignment to each edge e of a non-negative integer n̂_e such that for all x ∈ V,

Σ_{e adjacent to x} n̂_e is even,

which is called the parity condition. The probability of a configuration (n_e)_{e∈E} satisfying the parity condition is

P^RC_J( ∀e ∈ E, n̂_e = n_e ) = (1/Z^RC_J) Π_{e∈E} (J_e)^{n_e} / n_e!,

where in fact Z^RC_J = 2^{−|V|} Z^Isg_J. Let O(n̂) = { e ∈ E : n̂_e > 0 }. The open edges in O(n̂) induce clusters on the graph G. Given a loop soup L_α, we denote by N_e(L_α) the number of times the loops in L_α cross the non-oriented edge e ∈ E. The transience of the Markov jump process X implies that N_e(L_α) is a.s. finite for all e ∈ E. If α = 1/2, we have the following identity (see for instance [11]):

Loop soup and random current.
Assume V is finite and consider the loop soup L_{1/2}. Conditionally on the occupation field (L_x(L_{1/2}))_{x∈V}, (N_e(L_{1/2}))_{e∈E} is distributed as a random current model with weights

( 2 W_e √( L_{e−}(L_{1/2}) L_{e+}(L_{1/2}) ) )_{e∈E}.

If φ is the GFF on G given by Le Jan's or Lupu's isomorphism, then these weights are (J_e(|φ|))_{e∈E}.
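The parity condition above is easy to verify programmatically, and the coupling with FK-Ising stated in Proposition 2.5 below amounts to taking the union of the current-carrying edges with an independent Bernoulli percolation. A sketch under these assumptions (toy configurations, illustrative names):

```python
import math
import random
from collections import defaultdict

def is_sourceless(n_e):
    """Parity condition of a random current: at every vertex, the total
    multiplicity of the adjacent edges is even."""
    deg = defaultdict(int)
    for (x, y), n in n_e.items():
        deg[x] += n
        deg[y] += n
    return all(d % 2 == 0 for d in deg.values())

def current_plus_bernoulli(n_e, J, rng):
    """Sketch of the 'Current+Bernoulli=FK' coupling (Proposition 2.5
    below): the edges carrying current, together with an independent
    Bernoulli(1 - e^{-J_e}) percolation, form an FK-Ising configuration."""
    opened = {e for e, n in n_e.items() if n > 0}
    extra = {e for e in n_e if rng.random() < 1.0 - math.exp(-J[e])}
    return opened | extra

n_e = {(0, 1): 2, (1, 2): 0, (2, 0): 0}   # a doubled edge: every degree even
print(is_sourceless(n_e))                  # True
print(is_sourceless({(0, 1): 1}))          # False: vertices 0 and 1 are odd
```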
Conditionally on the occupation field (L_x(L_{1/2}))_{x∈V}, O(L_{1/2}) are the edges occupied by a random current and O+(L_{1/2}) the edges occupied by FK-Ising. Lemma 2.1 and Proposition 2.4 imply the following coupling, as noted by Lupu and Werner in [7].

Proposition 2.5 (Random current and FK-Ising coupling, [7]). Assume V is finite. Let n̂ be a random current on G with weights (J_e)_{e∈E}. Let (ω_e)_{e∈E} ∈ {0, 1}^E be an independent percolation, each edge being opened (value 1) independently with probability 1 − e^{−J_e}. Then O(n̂) ∪ { e ∈ E : ω_e = 1 } is distributed like the set of open edges in an FK-Ising model with weights (1 − e^{−2J_e})_{e∈E}.

2.5. Generalized second Ray-Knight "version" of Lupu's isomorphism.
We are now in a position to state the coupled version of the second Ray-Knight theorem.
Theorem 2.
Let x_0 ∈ V. Let (φ^{(0)}_x)_{x∈V} have distribution P^{{x_0},0}_φ, and define O(φ^{(0)}) as in Definition 2.1. Let X be an independent Markov jump process started from x_0. Fix u > 0. If τ^{x_0}_u < ζ, we let O_u be the random subset of E which contains O(φ^{(0)}), the edges used by the path (X_t)_{0≤t≤τ^{x_0}_u}, and additional edges e, opened conditionally independently with probability

1 − exp( W_e |φ^{(0)}_{e−} φ^{(0)}_{e+}| − W_e √( (φ^{(0)2}_{e−} + 2ℓ_{e−}(τ^{x_0}_u)) (φ^{(0)2}_{e+} + 2ℓ_{e+}(τ^{x_0}_u)) ) ).

We let σ ∈ {−1, +1}^V be random spins sampled uniformly independently on each cluster induced by O_u, pinned at x_0, i.e. σ_{x_0} = 1, and define

φ^{(u)}_x := σ_x √( φ^{(0)2}_x + 2 ℓ_x(τ^{x_0}_u) ).

Then, conditionally on τ^{x_0}_u < ζ, φ^{(u)} has distribution P^{{x_0},√(2u)}_φ, and O_u has the distribution of O(φ^{(u)}) conditionally on φ^{(u)}.

Remark 2.6.
One consequence of this coupling is that the path (X_s)_{s≤τ^{x_0}_u} stays in the positive connected component of x_0 for φ^{(u)}. This yields a coupling between the range of the Markov chain and the sign component of x_0 inside a GFF P^{{x_0},√(2u)}_φ.

Proof of Theorem 2: The proof is based on [6]. Let D = V ∖ {x_0}, and let L̃ be the loop soup of intensity μ̃/2 on the metric graph G̃, which we decompose into L̃^{(x_0)} (resp. L̃^D), the loop soup hitting (resp. not hitting) x_0; the two are independent. We let L_{1/2} and L^{(x_0)} (resp. L^D) be the prints of these loop soups on V (resp. on D = V ∖ {x_0}). We condition on L_{x_0}(L_{1/2}) = u. Theorem 1 implies (recall also Definition 2.1) that we can couple L̃^D with φ^{(0)} so that L_x(L̃^D) = φ^{(0)2}_x / 2 for all x ∈ V, and O(L̃^D) = O(φ^{(0)}).

Define φ^{(u)} = (φ^{(u)}_x)_{x∈V} from L̃ by, for all x ∈ V, |φ^{(u)}_x| = √( 2 L_x(L_{1/2}) ) and φ^{(u)}_x = σ_x |φ^{(u)}_x|, where σ ∈ {−1, +1}^V are random spins sampled uniformly independently on each cluster induced by O(L̃), pinned at x_0, i.e. σ_{x_0} = 1. Then, by Theorem 1, φ^{(u)} has distribution P^{{x_0},√(2u)}_φ.

For all x ∈ V, we have

L_x(L_{1/2}) = φ^{(0)2}_x / 2 + L_x(L^{(x_0)}).

On the other hand, conditionally on L(L_{1/2}),

P( e ∉ O(L̃) | e ∉ O(L̃^D) ∪ O(L_{1/2}) )
= P( e ∉ O(L̃) ) / P( e ∉ O(L̃^D) ∪ O(L_{1/2}) )
= P( e ∉ O(L̃) | e ∉ O(L_{1/2}) ) / P( e ∉ O(L̃^D) | e ∉ O(L_{1/2}) )
= P( e ∉ O(L̃) | e ∉ O(L_{1/2}) ) / P( e ∉ O(L̃^D) | e ∉ O(L^D) )
= exp( −2 W_e √( L_{e−}(L_{1/2}) L_{e+}(L_{1/2}) ) + 2 W_e √( L_{e−}(L^D) L_{e+}(L^D) ) ),

where we use in the third equality that the event {e ∉ O(L̃^D)} is measurable with respect to the σ-field generated by L̃^D, which is independent of L̃^{(x_0)}, and where we use Lemma 2.3 in the fourth equality, for L̃ and for L̃^D. We conclude the proof by observing that L^{(x_0)}, conditionally on L_{x_0}(L^{(x_0)}) = u, has the law of the occupation field of the Markov chain ℓ(τ^{x_0}_u) under P_{x_0}( · | τ^{x_0}_u < ζ ). □

3. Inversion of the signed isomorphism
In [9], Sabot and Tarrès give a new proof of the generalized second Ray-Knight theorem, together with a construction that inverts the coupling between the square of a GFF conditioned by its value at a vertex x_0 and the excursions of the jump process X from and to x_0. In this paper we are interested in inverting the coupling of Theorem 2 with the signed GFF: more precisely, we want to describe the law of (X_t)_{0≤t≤τ^{x_0}_u} conditionally on φ^{(u)}. We present in Section 3.1 an inversion involving an extra Poisson process. We provide in Section 3.2 a discrete-time description of the process, and in Section 3.3 an alternative description via jump rates. Sections 3.4 and 3.5 are respectively dedicated to a signed inversion of Le Jan's isomorphism for loop soups, and to an inversion of the coupling of random current with FK-Ising.

3.1. A description via an extra Poisson point process.
Let (φ̌_x)_{x∈V} be a real function on V such that φ̌_{x_0} = +√(2u) for some u > 0. Set

Φ̌_x = |φ̌_x|,  σ_x = sign(φ̌_x).

We define a self-interacting process (X̌_t, (ň_e(t))_{e∈E}) living on V × N^E as follows. The process X̌ starts at X̌(0) = x_0. For t ≥ 0, we set

Φ̌_x(t) = √( Φ̌_x² − 2 ℓ̌_x(t) ),  ∀x ∈ V,  and  J_e(Φ̌(t)) = W_e Φ̌_{e−}(t) Φ̌_{e+}(t),  ∀e ∈ E,

where ℓ̌_x(t) = ∫_0^t 1_{{X̌_s = x}} ds is the local time of the process X̌ up to time t. Let (N_e(u))_{u≥0} be independent Poisson point processes on R₊ with intensity 1, one for each edge e ∈ E. We set

ň_e(t) = N_e( 2 J_e(Φ̌(t)) ) if σ_{e−}σ_{e+} = +1,  and  ň_e(t) = 0 if σ_{e−}σ_{e+} = −1.

We also denote by Č(t) ⊂ E the configuration of edges e such that ň_e(t) > 0. As time increases, the interaction parameters J_e(Φ̌(t)) decrease for the edges neighbouring X̌_t, and at some random times ň_e(t) may drop by 1. The process (X̌_t)_{t≥0} is defined as the process that jumps only at the times when one of the ň_e(t) drops by 1, as follows:
• if ň_e(t) decreases by 1 at time t, but does not create a new cluster in Č(t), then X̌_t crosses the edge e with probability 1/2, or stays in place with probability 1/2;
• if ň_e(t) decreases by 1 at time t, and does create a new cluster in Č(t), then X̌_t moves to, or stays at, with probability 1, the unique extremity of e which is in the cluster of the origin x_0 in the new configuration.
We set

Ť := inf{ t ≥ 0 : ∃x ∈ V such that Φ̌_x(t) = 0 };

clearly, the process is well-defined up to time Ť.

Proposition 3.1.
For all 0 ≤ t ≤ Ť, X̌_t is in the connected component of x_0 in the configuration Č(t). If V is finite, the process ends at x_0, i.e. X̌_Ť = x_0.

Theorem 3.
Assume that V is finite. With the notation of Theorem 2, conditionally on φ^{(u)} = φ̌, (X_t)_{t≤τ^{x_0}_u} has the law of (X̌_{Ť−t})_{0≤t≤Ť}. Moreover, conditionally on φ^{(u)} = φ̌, (φ^{(0)}, O(φ^{(0)})) has the law of ((σ′_x Φ̌_x(Ť))_{x∈V}, Č(Ť)), where (σ′_x)_{x∈V} ∈ {−1, +1}^V are random spins sampled uniformly independently on each cluster induced by Č(Ť), with the condition that σ′_{x_0} = +1. If V is infinite, then P^{{x_0},√(2u)}_φ-a.s., X̌_t (with the initial condition φ̌ = φ^{(u)}) ends at x_0, i.e. Ť < +∞ and X̌_Ť = x_0. All previous conclusions for the finite case still hold.

3.2. Discrete time description of the process.
We give a discrete time description of the process (X̌_t, (ň_e(t))_{e∈E}) that appears in the previous section. Let t_0 = 0 and 0 < t_1 < ⋯ < t_{j_0} be the stopping times at which one of the stacks ň_e(t) decreases by 1, where t_{j_0} is the time at which one of the stacks is completely depleted. It is elementary to check the following:

Proposition 3.2.
The discrete time process (X̌_{t_i}, (ň_e(t_i))_{e∈E})_{0≤i≤j_0} is a stopped Markov process. The transition from time i−1 to i is the following:
• first choose an edge e adjacent to the vertex X̌_{t_{i−1}}, with probability proportional to ň_e(t_{i−1});
• decrease the stack ň_e(t_{i−1}) by 1;
• if decreasing ň_e(t_{i−1}) by 1 does not create a new cluster in Č(t_{i−1}), then X̌_{t_{i−1}} crosses the edge e with probability 1/2, or does not move with probability 1/2;
• if decreasing ň_e(t_{i−1}) by 1 does create a new cluster in Č(t_{i−1}), then X̌_{t_{i−1}} moves to, or stays at, with probability 1, the unique extremity of e which is in the cluster of the origin x_0 in the new configuration.

3.3. An alternative description via jump rates.
We provide an alternative description of the process (X̌_t, Č(t)) that appears in Section 3.1.

Proposition 3.3.
The process (X̌_t, Č(t)) defined in Section 3.1 can alternatively be described by its jump rates: conditionally on its past at time t, if X̌_t = x, y ∼ x and {x, y} ∈ Č(t), then

(1) X̌ jumps to y, without modification of Č(t), at rate

W_{x,y} Φ̌_y(t) / Φ̌_x(t);

(2) the edge {x, y} is closed in Č(t) at rate

2 W_{x,y} ( Φ̌_y(t) / Φ̌_x(t) ) ( e^{2 W_{x,y} Φ̌_x(t) Φ̌_y(t)} − 1 )^{−1},

and, conditionally on that last event:
– if y is still connected to x in the configuration Č(t) ∖ {x, y}, then X̌ simultaneously jumps to y with probability 1/2 and stays at x with probability 1/2;
– otherwise, X̌ moves to, or stays at, with probability 1, the unique extremity of {x, y} which is in the cluster of the origin x_0 in the new configuration.

Remark 3.4.
It is clear from this description that the joint process (X̌_t, Č(t), Φ̌(t)) is a Markov process, well defined up to the time Ť := inf{ t ≥ 0 : ∃x ∈ V s.t. Φ̌_x(t) = 0 }.

Remark 3.5.
One can also retrieve the process of Section 3.1 from the representation in Proposition 3.3, as follows. Consider the representation of Proposition 3.3 on the graph where each edge e is replaced by a large number N of parallel edges with conductance W_e/N. Consider now ň^{(N)}_{x,y}(t), the number of parallel edges between x and y that are open in the configuration Č(t). Then, as N → ∞, (ň^{(N)}(t))_{t≥0} converges in law to (ň(t))_{t≥0} defined in Section 3.1.

Proof of Proposition 3.3: Assume X̌_t = x, fix y ∼ x and let e = {x, y}. Recall that {x, y} ∈ Č(t) iff ň_e(t) ≥ 1. Then

P( X̌ jumps to y during [t, t+Δt] without modification of Č(t) | {x, y} ∈ Č(t) )
= ½ P( ň_e(t) − ň_e(t+Δt) = 1, ň_e(t+Δt) ≥ 1 | ň_e(t) ≥ 1 ) + o(Δt)
= ½ ( 2 J_e(Φ̌(t)) − 2 J_e(Φ̌(t+Δt)) ) + o(Δt)
= W_{x,y} ( Φ̌_y(t) / Φ̌_x(t) ) Δt + o(Δt).

Similarly, (2) follows from the following computation:

P( {x, y} closed in Č(t+Δt) | {x, y} ∈ Č(t) ) = P( ň_e(t+Δt) = 0 | ň_e(t) ≥ 1 )
= P( ň_e(t) = 1, ň_e(t+Δt) = 0 ) / P( ň_e(t) ≥ 1 )
= ( e^{−2 J_e(Φ̌(t))} / ( 1 − e^{−2 J_e(Φ̌(t))} ) ) ( 2 J_e(Φ̌(t)) − 2 J_e(Φ̌(t+Δt)) ) + o(Δt). □

We easily deduce from Proposition 3.3 and Theorem 3 the following alternative inversion of the coupling in Theorem 2.
Theorem 4.
With the notation of Theorem 2, conditionally on (φ^{(u)}, O_u), (X_t)_{t≤τ^{x_0}_u} has the law of the self-interacting process (X̌_{Ť−t})_{0≤t≤Ť} defined by the jump rates of Proposition 3.3, starting with

Φ̌_x = √( φ^{(0)2}_x + 2 ℓ_x(τ^{x_0}_u) )  and  Č(0) = O_u.

Moreover, (φ^{(0)}, O(φ^{(0)})) has the same law as (σ′ Φ̌(Ť), Č(Ť)), where (σ′_x)_{x∈V} is a configuration of signs obtained by picking a sign at random, independently, on each connected component of Č(Ť), with the condition that the component of x_0 has a + sign.
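One transition of the discrete-time dynamics of Proposition 3.2 (the same elementary step drives the discrete processes of Sections 3.4 and 3.5) can be sketched as follows. The cluster test uses a plain graph search; all names are illustrative and the stacks are toy data:

```python
import random
from collections import defaultdict

def component(v, open_edges):
    """Vertices reachable from v through the open edges (plain DFS)."""
    adj = defaultdict(list)
    for x, y in open_edges:
        adj[x].append(y)
        adj[y].append(x)
    seen, stack = {v}, [v]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def step(X, n, x0, rng):
    """One transition of the discrete-time process of Proposition 3.2.
    `n` maps edges (vertex pairs) to stack heights; assumes X still has
    an adjacent non-empty stack.  Returns the new position of the walker."""
    adjacent = [e for e in n if X in e and n[e] > 0]
    e = rng.choices(adjacent, weights=[n[e] for e in adjacent])[0]
    n[e] -= 1                                  # pop one unit off the chosen stack
    other = e[0] if e[1] == X else e[1]
    open_edges = [f for f in n if n[f] > 0]
    if other in component(X, open_edges):      # the decrement did not split a cluster
        return other if rng.random() < 0.5 else X
    # a new cluster was created: move with probability 1 to the extremity
    # of e lying in the cluster of the origin x0
    return X if X in component(x0, open_edges) else other

n = {(0, 1): 1}
print(step(1, n, x0=0, rng=random.Random(0)))  # forced back to the origin: 0
```

In the last line, depleting the only stack disconnects the walker from the origin's side, so it must move to the extremity in the cluster of x0, deterministically.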
3.4. A signed version of Le Jan's isomorphism for loop soups.
Let us first recall how the loops in L_α are connected to the excursions of the jump process X.

Proposition 3.6 (From excursions to loops). Let α > 0 and x_0 ∈ V. L_{x_0}(L_α) is distributed according to a Gamma(α, G(x_0, x_0)) law, where G is the Green function. Let u > 0, and consider the path (X_t)_{0≤t≤τ^{x_0}_u} conditioned on τ^{x_0}_u < ζ. Let (Y_j)_{j≥1} be an independent Poisson–Dirichlet partition PD(0, α) of [0, 1]. Let S_0 = 0 and S_j = Σ_{i=1}^{j} Y_i. Let τ_j = τ^{x_0}_{u S_j}. Consider the family of paths

( (X_{τ_{j−1}+t})_{0≤t≤τ_j−τ_{j−1}} )_{j≥1}.

It is a countable family of loops rooted at x_0, and it has the same law as the family of all the loops in L_α that visit x_0, conditioned on L_{x_0}(L_α) = u.

Next we describe how to invert the discrete version of Lupu's isomorphism (Proposition 2.4) for the loop soup, in the same way as in Theorem 3. Let (φ̌_x)_{x∈V} be a real function on V such that φ̌_{x_0} = +√(2u) for some u > 0. Set

Φ̌_x = |φ̌_x|,  σ_x = sign(φ̌_x).

Let (x_i)_{1≤i≤|V|} be an enumeration of V (which may be infinite). We define by induction the self-interacting processes ((X̌_{i,t})_{1≤i≤|V|}, (ň_e(t))_{e∈E}). Ť_i will denote the end-time of X̌_{i,·}, and Ť⁺_i = Σ_{1≤j≤i} Ť_j; by definition, Ť⁺_0 = 0. L(t) will denote

L_x(t) := Σ_{1≤i≤|V|} ℓ̌_x(i, 0 ∨ (t − Ť⁺_{i−1})),

where the ℓ̌_x(i, t) are the occupation times of X̌_{i,·}. For t ≥ 0, we set

Φ̌_x(t) = √( Φ̌_x² − 2 L_x(t) ),  ∀x ∈ V,  and  J_e(Φ̌(t)) = W_e Φ̌_{e−}(t) Φ̌_{e+}(t),  ∀e ∈ E.

The end-times Ť_i are defined by induction as

Ť_i = inf{ t ≥ 0 : Φ̌_{X̌_{i,t}}(t + Ť⁺_{i−1}) = 0 }.

Let (N_e(u))_{u≥0} be independent Poisson point processes on R₊ with intensity 1, one for each edge e ∈ E. We set

ň_e(t) = N_e( 2 J_e(Φ̌(t)) ) if σ_{e−}σ_{e+} = +1,  and  ň_e(t) = 0 if σ_{e−}σ_{e+} = −1.

We also denote by Č(t) ⊂ E the configuration of edges e such that ň_e(t) > 0. X̌_{i,·} starts at x_i. For t ∈ [Ť⁺_{i−1}, Ť⁺_i]:
• if ň_e(t) decreases by 1 at time t, but does not create a new cluster in Č(t), then X̌_{i,t−Ť⁺_{i−1}} crosses the edge e with probability 1/2, or stays in place with probability 1/2;
• if ň_e(t) decreases by 1 at time t, and does create a new cluster in Č(t), then X̌_{i,t−Ť⁺_{i−1}} moves to, or stays at, with probability 1, the unique extremity of e which is in the cluster of the origin x_i in the new configuration.
By induction, using Theorem 3, we deduce the following:

Theorem 5.
Let φ be a GFF on G with the law P^φ. If one sets φ̌ = φ in the preceding construction, then for all i ∈ {1, …, |V|}, Ť_i < +∞, X̌_{i,Ť_i} = x_i, and the path (X̌_{i,t})_{t≤Ť_i} has the same law as a concatenation at x_i of all the loops in a loop soup L_{1/2} that visit x_i but none of x_1, …, x_{i−1}. To retrieve the loops out of each path (X̌_{i,t})_{t≤Ť_i}, one has to partition it according to a Poisson–Dirichlet partition, as in Proposition 3.6. The coupling between the GFF φ and the loop soup obtained from ((X̌_{i,t})_{1≤i≤|V|}, (ň_e(t))_{e∈E}) is the same as in Proposition 2.4.

3.5. Inverting the coupling of random current with FK-Ising.
By combining Theorem 5 and the discrete-time description of Section 3.2, and by conditioning on the occupation field of the loop-soup, one deduces an inversion of the coupling of Proposition 2.5 between the random current and FK-Ising.

We consider the graph $\mathcal{G} = (V,E)$, whose edges are endowed with weights $(J_e)_{e\in E}$. Let $(x_i)_{1\le i\le |V|}$ be an enumeration of $V$. Let $\check C(0)$ be a subset of open edges of $E$. Let $(\check n_e(0))_{e\in E}$ be a family of random integers such that $\check n_e(0) = 0$ if $e\notin\check C(0)$, and $(\check n_e(0)-1)_{e\in\check C(0)}$ are independent Poisson random variables, where $\mathbb{E}[\check n_e(0)-1] = 2J_e$.

We will consider a family of discrete-time self-interacting processes $((\check X_{i,j})_{1\le i\le |V|}, (\check n_e(j))_{e\in E})$. $\check X_{i,j}$ starts at $j=0$ at $x_i$ and is defined up to an integer time $\check T_i$. Let $\check T^+_i = \sum_{1\le k\le i}\check T_k$, with $\check T^+_0 = 0$. The end-times $\check T_i$ are defined by induction as
$$\check T_i = \inf\Big\{ j\ge 0 \ \Big|\ \sum_{e \text{ edge adjacent to } \check X_{i,j}} \check n_e(j+\check T^+_{i-1}) = 0 \Big\}.$$
For $j\ge 1$, $\check C(j)$ will denote
$$\check C(j) = \{e\in E \mid \check n_e(j) \ge 1\},$$
which is consistent with the notation $\check C(0)$. The evolution is the following. For $j\in\{\check T^+_{i-1}+1,\dots,\check T^+_i\}$, the transition from time $j-1$ to $j$ is the following:
• first choose an edge $e$ adjacent to the vertex $\check X_{i,\,j-1-\check T^+_{i-1}}$ with probability proportional to $\check n_e(j-1)$,
• decrease the stack $\check n_e(j-1)$ by $1$,
• if decreasing $\check n_e(j-1)$ by $1$ does not create a new cluster in $\check C(j-1)$, then $\check X_{i,\cdot}$ crosses $e$ with probability $1/2$, and stays in place with probability $1/2$,
• if decreasing $\check n_e(j-1)$ by $1$ does create a new cluster in $\check C(j-1)$, then $\check X_{i,\cdot}$ moves with probability $1$ to the unique extremity of $e$ which is in the cluster of the origin $x_i$ in the new configuration (possibly staying in place).

Denote by $\hat n_e$ the number of times the edge $e$ has been crossed, in both directions, by all the walks $((\check X_{i,j})_{0\le j\le \check T_i})_{1\le i\le |V|}$.

Proposition 3.7.
A.s., for all $i\in\{1,\dots,|V|\}$, $\check T_i < +\infty$ and $\check X_{i,\check T_i} = x_i$. If the initial configuration of open edges $\check C(0)$ is random and follows an FK-Ising distribution with weights $(1-e^{-2J_e})_{e\in E}$, then the family of integers $(\hat n_e)_{e\in E}$ is distributed like a random current with weights $(J_e)_{e\in E}$. Moreover, the coupling between the random current and the FK-Ising obtained this way is the same as the one given by Proposition 2.5.

Proof of Theorem 3
Case of finite graph without killing measure.
Here we will assume that $V$ is finite and that the killing measure $\kappa \equiv 0$.
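As a concrete aside before the proof, the discrete-time dynamics of Section 3.2 are easy to simulate. The sketch below is only illustrative (the function names and the toy graph are ours, not from the paper; clusters are recomputed by breadth-first search at every step, which is wasteful but keeps the rules explicit):

```python
import random
from collections import deque

def cluster_reps(vertices, edges, n):
    """Map each vertex to a representative of its cluster in {e : n_e >= 1} (BFS)."""
    adj = {v: [] for v in vertices}
    for (a, b) in edges:
        if n[(a, b)] >= 1:
            adj[a].append(b)
            adj[b].append(a)
    rep = {}
    for v in vertices:
        if v in rep:
            continue
        rep[v] = v
        queue = deque([v])
        while queue:
            w = queue.popleft()
            for z in adj[w]:
                if z not in rep:
                    rep[z] = v
                    queue.append(z)
    return rep

def step(x, x0, vertices, edges, n, rng):
    """One transition of the self-interacting walk started at x0 (None at the end-time)."""
    incident = [e for e in edges if x in e and n[e] > 0]
    if not incident:
        return None                       # all adjacent stacks are empty: the walk stops
    e = rng.choices(incident, weights=[n[f] for f in incident])[0]
    y = e[0] if e[1] == x else e[1]
    before = len(set(cluster_reps(vertices, edges, n).values()))
    n[e] -= 1                             # decrease the chosen stack by 1
    rep = cluster_reps(vertices, edges, n)
    if len(set(rep.values())) == before:  # no new cluster: cross with probability 1/2
        return y if rng.random() < 0.5 else x
    # a new cluster was created: move to the extremity lying in the cluster of x0
    return y if rep[y] == rep[x0] else x
```

Since every transition removes one unit from some stack, the walk necessarily stops after at most $\sum_e \check n_e(0)$ transitions.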
In order to prove Theorem 3, we first enlarge the state space of the process $(X_t)_{t\ge 0}$. We define a process $(X_t, (n_e(t)))_{t\ge 0}$ living on the space $V\times\mathbb{N}^E$ as follows. Let $\varphi^{(0)} \sim \mathbb{P}^{\{x_0\},\varphi}$ be a GFF pinned at $x_0$. Let $\sigma_x = \operatorname{sign}(\varphi^{(0)}_x)$ be the signs of the GFF, with the convention that $\sigma_{x_0} = +1$. The process $(X_t)_{t\ge 0}$ is as usual the Markov jump process starting at $x_0$ with jump rates $(W_e)$. We set
$$(4.1)\qquad \Phi_x = |\varphi^{(0)}_x|, \qquad \Phi_x(t) = \sqrt{\Phi_x^2 + 2\ell_x(t)}, \quad\forall x\in V, \qquad J_e(\Phi(t)) = W_e\,\Phi_{e-}(t)\,\Phi_{e+}(t), \quad\forall e\in E.$$
The initial values $(n_e(0))$ are chosen independently on each edge with distribution
$$n_e(0) \sim \begin{cases} 0 & \text{if } \sigma_{e-}\sigma_{e+} = -1,\\ \mathcal{P}(2J_e(\Phi)) & \text{if } \sigma_{e-}\sigma_{e+} = +1,\end{cases}$$
where $\mathcal{P}(2J_e(\Phi))$ is a Poisson random variable with parameter $2J_e(\Phi)$. Let $((N_e(u))_{u\ge 0})_{e\in E}$ be independent Poisson point processes on $\mathbb{R}_+$ with intensity $1$. We define the process $(n_e(t))$ by
$$n_e(t) = n_e(0) + N_e(J_e(\Phi(t))) - N_e(J_e(\Phi)) + K_e(t),$$
where $K_e(t)$ is the number of crossings of the edge $e$ by the Markov jump process $X$ before time $t$.

Remark 4.1.
Note that, compared to the process defined in Section 3.1, the speed of the Poisson process is related to $J_e(\Phi(t))$ and not $2J_e(\Phi(t))$. We will use the following notation:
$$C(t) = \{e\in E,\ n_e(t) > 0\}.$$
Recall that $\tau^x_u = \inf\{t\ge 0,\ \ell_x(t) = u\}$ for $u > 0$. To simplify notation, we will write $\tau_u$ for $\tau^{x_0}_u$ in the sequel. We define $\varphi^{(u)}$ by
$$\varphi^{(u)}_x = \sigma_x \Phi_x(\tau_u), \quad\forall x\in V,$$
where $(\sigma_x)_{x\in V}\in\{-1,+1\}^V$ are random spins sampled uniformly independently on each cluster induced by $C(\tau_u)$, with the condition that $\sigma_{x_0} = +1$.

Lemma 4.2.
The random vector $(\varphi^{(0)}, C(0), \varphi^{(u)}, C(\tau^{x_0}_u))$ thus defined has the same distribution as $(\varphi^{(0)}, O(\varphi^{(0)}), \varphi^{(u)}, O_u)$ defined in Theorem 2.

Proof. It is clear from the construction that $C(0)$ has the same law as $O(\varphi^{(0)})$ (cf. Definition 2.1), the FK-Ising configuration coupled with the signs of $\varphi^{(0)}$ as in Proposition 2.2. Indeed, for each edge $e\in E$ such that $\varphi^{(0)}_{e-}\varphi^{(0)}_{e+} > 0$, the probability that $n_e(0) > 0$ is $1-e^{-2J_e(\Phi)}$. Moreover, conditionally on $C(0) = O(\varphi^{(0)})$, $C(\tau^{x_0}_u)$ has the same law as $O_u$ defined in Theorem 2. Indeed, $C(\tau^{x_0}_u)$ is the union of the set $C(0)$, the set of edges crossed by the process $(X_t)_{t\le \tau^{x_0}_u}$, and the additional edges such that $N_e(J_e(\Phi(\tau^{x_0}_u))) - N_e(J_e(\Phi)) > 0$. Clearly $N_e(J_e(\Phi(\tau^{x_0}_u))) - N_e(J_e(\Phi)) > 0$ with probability $1 - e^{-(J_e(\Phi(\tau^{x_0}_u)) - J_e(\Phi))}$, which coincides with the probability given in Theorem 2. □

We will prove the following theorem that, together with Lemma 4.2, contains the statements of both Theorems 2 and 3.
Theorem 6.
The random vector $\varphi^{(u)}$ is a GFF distributed according to $P^{\{x_0\},\sqrt{2u}}_\varphi$. Moreover, conditionally on $\varphi^{(u)} = \check\varphi$, the process $(X_t, (n_e(t))_{e\in E})_{t\le\tau^{x_0}_u}$ has the law of the process $(\check X_{\check T - t}, (\check n_e(\check T - t))_{e\in E})_{t\le \check T}$ described in Section 3.1.

Proof.
Step 1 :
We start with a simple lemma.
Lemma 4.3.
The distribution of $(\Phi := |\varphi^{(0)}|, n_e(0))$ is given by the following formula, for any bounded measurable test function $h$:
$$\mathbb{E}\big(h(\Phi, n(0))\big) = \sum_{(n_e)\in\mathbb{N}^E} \int d\Phi\ h(\Phi, n)\, e^{-\frac{1}{2}\sum_{x\in V} W_x \Phi_x^2 - \sum_{e\in E} J_e(\Phi)} \Big(\prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!}\Big)\, 2^{C(n)-1},$$
where the integral is on the set $\{(\Phi_x)_{x\in V},\ \Phi_x > 0\ \forall x \neq x_0,\ \Phi_{x_0} = 0\}$, $d\Phi = \prod_{x\in V\setminus\{x_0\}} \frac{d\Phi_x}{\sqrt{2\pi}^{\,|V|-1}}$, and $C(n)$ is the number of clusters induced by the edges such that $n_e > 0$.

Proof. Indeed, by construction, summing on the possible signs of $\varphi^{(0)}$, we have
$$(4.2)\qquad \mathbb{E}\big(h(\Phi, n(0))\big) = \sum_{\sigma} \sum_{n \ll \sigma} \int d\Phi\ h(\Phi, n)\, e^{-E(\sigma\Phi)} \prod_{e\in E,\ \sigma_{e+}\sigma_{e-} = +1} e^{-2J_e(\Phi)} \frac{(2J_e(\Phi))^{n_e}}{n_e!},$$
where the first sum is on the set $\{\sigma\in\{+1,-1\}^V,\ \sigma_{x_0} = +1\}$ and the second sum is on the set $\{(n_e)\in\mathbb{N}^E,\ n_e = 0 \text{ if } \sigma_{e-}\sigma_{e+} = -1\}$ (we write $n \ll \sigma$ to mean that $n_e$ vanishes on the edges such that $\sigma_{e-}\sigma_{e+} = -1$). Since
$$E(\sigma\Phi) = \frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 - \sum_{e\in E} J_e(\Phi)\,\sigma_{e-}\sigma_{e+} = \frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 + \sum_{e\in E} J_e(\Phi) - 2\sum_{\substack{e\in E\\ \sigma_{e-}\sigma_{e+} = +1}} J_e(\Phi),$$
we deduce that the integrand in (4.2) is equal to
$$h(\Phi, n)\, e^{-E(\sigma\Phi)} \prod_{e\in E,\ \sigma_{e+}\sigma_{e-}=+1} e^{-2J_e(\Phi)} \frac{(2J_e(\Phi))^{n_e}}{n_e!} = h(\Phi, n)\, e^{-E(\sigma\Phi)}\, e^{-2\sum_{e:\ \sigma_{e+}\sigma_{e-}=+1} J_e(\Phi)} \prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!} = h(\Phi, n)\, e^{-\frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 - \sum_{e\in E} J_e(\Phi)} \prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!},$$
where we used in the first equality that $n_e = 0$ on the edges such that $\sigma_{e+}\sigma_{e-} = -1$. Thus,
$$\mathbb{E}\big(h(\Phi, n(0))\big) = \sum_{\sigma} \sum_{n \ll \sigma} \int d\Phi\ h(\Phi, n)\, e^{-\frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 - \sum_{e\in E} J_e(\Phi)} \prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!}.$$
Inverting the sums on $\sigma$ and $n$, and counting the number of possible signs which are constant on the clusters induced by the configuration of edges $\{e\in E,\ n_e > 0\}$, we deduce Lemma 4.3. □

Step 2 :
We denote by $Z_t = (X_t, \Phi(t), n_e(t))$ the process defined previously and by $E_{x_0,\Phi,n}$ its law with initial condition $(x_0, \Phi, n)$.
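As an aside, the factor $2^{C(n)-1}$ used in the proof of Lemma 4.3 simply counts the sign vectors that are constant on each cluster of $\{e\in E,\ n_e > 0\}$ and equal to $+1$ at $x_0$. This can be checked by brute force on a toy graph (the helper names below are ours, not from the paper):

```python
from itertools import product

def num_clusters(vertices, edges, n):
    """Number of clusters (singletons included) induced by the edges with n_e > 0."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for (a, b) in edges:
        if n[(a, b)] > 0:
            parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

def admissible_signs(vertices, edges, n, x0):
    """Sign vectors constant on each cluster of {n_e > 0}, with sigma_{x0} = +1."""
    found = []
    for values in product([-1, 1], repeat=len(vertices)):
        sigma = dict(zip(vertices, values))
        if sigma[x0] == 1 and all(
            sigma[a] == sigma[b] for (a, b) in edges if n[(a, b)] > 0
        ):
            found.append(sigma)
    return found
```

On a four-cycle with two open edges the count is $2^{2-1} = 2$, and with no open edge it is $2^{4-1} = 8$, in agreement with the formula.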
We now introduce a process $\tilde Z_t$, which is a "time reversal" of the process $Z_t$. This process will be related to the process defined in Section 3.1 in Step 4, Lemma 4.5. For $(\tilde n_e)\in\mathbb{N}^E$ and $(\tilde\Phi_x)_{x\in V}$ such that
$$\tilde\Phi_{x_0} = \sqrt{2u}, \qquad \tilde\Phi_x > 0\ \ \forall x\neq x_0,$$
we define the process $\tilde Z_t = (\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ with values in $V\times\mathbb{R}_+^V\times\mathbb{Z}^E$ as follows. The process $(\tilde X_t)$ is a Markov jump process with jump rates $(W_e)$ (so that $\tilde X \overset{law}{=} X$), and $\tilde\Phi(t)$, $\tilde n_e(t)$ are defined by
$$(4.3)\qquad \tilde\Phi_x(t) = \sqrt{\tilde\Phi_x^2 - 2\tilde\ell_x(t)}, \quad\forall x\in V,$$
where $(\tilde\ell_x(t))$ is the local time of the process $\tilde X$ up to time $t$, and
$$(4.4)\qquad \tilde n_e(t) = \tilde n_e - \big(N_e(J_e(\tilde\Phi)) - N_e(J_e(\tilde\Phi(t)))\big) - \tilde K_e(t),$$
where $((N_e(u))_{u\ge 0})_{e\in E}$ are independent Poisson point processes on $\mathbb{R}_+$ with intensity $1$, one for each edge $e$, and $\tilde K_e(t)$ is the number of crossings of the edge $e$ by the process $\tilde X$ before time $t$. We set
$$(4.5)\qquad \tilde Z_t = (\tilde X_t, (\tilde\Phi_x(t)), (\tilde n_e(t))).$$
This process is well-defined up to the time
$$\tilde T = \inf\big\{t\ge 0,\ \exists x\in V,\ \tilde\Phi_x(t) = 0\big\}.$$
We denote by $\tilde E_{x_0,\tilde\Phi,\tilde n}$ its law. Clearly $\tilde Z_t = (\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ is a Markov process; we will later on make its generator explicit. We have the following change of variable lemma.

Lemma 4.4.
For all bounded measurable test functions $F$, $G$, $H$,
$$\sum_{(n_e)\in\mathbb{N}^E} \int d\Phi\ F(\Phi,n)\, E_{x_0,\Phi,n}\Big( G\big((Z_{\tau^{x_0}_u - t})_{0\le t\le \tau^{x_0}_u}\big)\, H\big(\Phi(\tau^{x_0}_u), n(\tau^{x_0}_u)\big) \Big)$$
$$= \sum_{(\tilde n_e)\in\mathbb{N}^E} \int d\tilde\Phi\ H(\tilde\Phi, \tilde n)\, \tilde E_{x_0,\tilde\Phi,\tilde n}\Big( \mathbb{1}_{\{\tilde X_{\tilde T} = x_0,\ \tilde n_e(\tilde T)\ge 0\ \forall e\in E\}}\ G\big((\tilde Z_t)_{t\le\tilde T}\big)\, F\big(\tilde\Phi(\tilde T), \tilde n(\tilde T)\big) \prod_{x\in V\setminus\{x_0\}} \frac{\tilde\Phi_x}{\tilde\Phi_x(\tilde T)} \Big),$$
where the integral on the l.h.s. is on the set $\{(\Phi_x)\in\mathbb{R}_+^V,\ \Phi_{x_0} = 0\}$ with $d\Phi = \prod_{x\in V\setminus\{x_0\}} \frac{d\Phi_x}{\sqrt{2\pi}^{\,|V|-1}}$, and the integral on the r.h.s. is on the set $\{(\tilde\Phi_x)\in\mathbb{R}_+^V,\ \tilde\Phi_{x_0} = \sqrt{2u}\}$ with $d\tilde\Phi = \prod_{x\in V\setminus\{x_0\}} \frac{d\tilde\Phi_x}{\sqrt{2\pi}^{\,|V|-1}}$.

Proof.
We start from the left-hand side, i.e. the process $(X_t, n_e(t))_{0\le t\le \tau^{x_0}_u}$. We define
$$\tilde X_t = X_{\tau_u - t}, \qquad \tilde n_e(t) = n_e(\tau_u - t), \qquad \tilde\Phi_x = \Phi_x(\tau_u), \qquad \tilde\Phi_x(t) = \Phi_x(\tau_u - t).$$
(The law of the processes thus defined will later be identified with the law of the processes $(\tilde X_t, \tilde\Phi(t), \tilde n(t))$ defined at the beginning of Step 2, cf. (4.3) and (4.4).) We also set
$$\tilde K_e(t) = K_e(\tau_u) - K_e(\tau_u - t),$$
which is also the number of crossings of the edge $e$ by the process $\tilde X$ between time $0$ and $t$. With these notations we clearly have
$$\tilde\Phi_x(t) = \sqrt{\tilde\Phi_x^2 - 2\tilde\ell_x(t)},$$
where $\tilde\ell_x(t) = \int_0^t \mathbb{1}_{\{\tilde X_s = x\}}\, ds$ is the local time of $\tilde X$ at time $t$, and
$$\tilde n_e(t) = \tilde n_e(0) + \big(N_e(J_e(\tilde\Phi(t))) - N_e(J_e(\tilde\Phi(0)))\big) - \tilde K_e(t).$$
By time reversal, the law of $(\tilde X_t)_{0\le t\le \tilde\tau_u}$ is the same as the law of the Markov jump process $(X_t)_{0\le t\le \tau_u}$, where $\tilde\tau_u = \inf\{t\ge 0,\ \tilde\ell_{x_0}(t) = u\}$. Hence, we see that up to the time $\tilde T = \inf\{t\ge 0,\ \exists x,\ \tilde\Phi_x(t) = 0\}$, the process $(\tilde X_t, (\tilde\Phi_x(t))_{x\in V}, (\tilde n_e(t)))_{t\le\tilde T}$ has the same law as the process defined at the beginning of Step 2.

Then, following [9], we make, conditionally on the processes $(X_t, (N_e(t)))$, the change of variables
$$(\mathbb{R}_+^*)^V \times \mathbb{N}^E \to (\mathbb{R}_+^*)^V \times \mathbb{N}^E, \qquad \big((\Phi_x), (n_e)_{e\in E}\big) \mapsto \big((\tilde\Phi_x), (\tilde n_e)_{e\in E}\big),$$
which is bijective onto the set
$$\big\{(\tilde\Phi_x),\ \tilde\Phi_{x_0} = \sqrt{2u},\ \tilde\Phi_x > \sqrt{2\ell_x(\tau^{x_0}_u)}\ \forall x\neq x_0\big\} \times \big\{(\tilde n_e),\ \tilde n_e \ge K_e(\tau_u) + \big(N_e(J_e(\Phi(\tau_u))) - N_e(J_e(\Phi))\big)\big\}.$$
(Note that we always have $\tilde\Phi_{x_0} = \sqrt{2u}$.) The last conditions on $\tilde\Phi$ and $\tilde n_e$ are equivalent to the conditions $\tilde X_{\tilde T} = x_0$ and $\tilde n_e(\tilde T) \ge 0$. The Jacobian of the change of variables is given by
$$\prod_{x\in V\setminus\{x_0\}} d\Phi_x = \prod_{x\in V\setminus\{x_0\}} \frac{\tilde\Phi_x}{\Phi_x} \prod_{x\in V\setminus\{x_0\}} d\tilde\Phi_x. \qquad\square$$

Step 3:
With the notations of Theorem 6, we consider the following expectation, for $g$ and $h$ bounded measurable test functions:
$$(4.6)\qquad \mathbb{E}\Big( g\big((X_{\tau_u - t}, n_e(\tau_u - t))_{0\le t\le \tau_u}\big)\, h(\varphi^{(u)}) \Big).$$
By definition, we have $\varphi^{(u)} = \sigma\Phi(\tau_u)$, where $(\sigma_x)_{x\in V}\in\{\pm 1\}^V$ are random signs sampled uniformly independently on the clusters induced by $\{e\in E,\ n_e(\tau_u) > 0\}$ and conditioned on the fact that $\sigma_{x_0} = +1$. Hence, we define, for $(\Phi_x)\in\mathbb{R}_+^V$ and $(n_e)\in\mathbb{N}^E$,
$$(4.7)\qquad H(\Phi, n) = 2^{-C(n)+1} \sum_{\sigma \ll n} h(\sigma\Phi),$$
where $\sigma \ll n$ means that the signs $(\sigma_x)$ are constant on the clusters of $\{e\in E,\ n_e > 0\}$ and such that $\sigma_{x_0} = +1$. Hence, setting
$$F(\Phi, n) = e^{-\frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 - \sum_{e\in E} J_e(\Phi)} \Big(\prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!}\Big)\, 2^{C(n)-1}, \qquad G\big((Z_{\tau_u - t})_{t\le\tau_u}\big) = g\big((X_{\tau_u - t}, n_e(\tau_u - t))_{t\le\tau_u}\big),$$
and using Lemma 4.3 in the first equality and Lemma 4.4 in the second equality, we deduce that (4.6) is equal to
$$(4.8)\qquad \mathbb{E}\big( G((Z_{\tau_u - t})_{0\le t\le\tau_u})\, H(\Phi(\tau_u), n(\tau_u)) \big) = \sum_{(n_e)\in\mathbb{N}^E} \int d\Phi\ F(\Phi, n)\, E_{x_0,\Phi,n}\big( G((Z_{\tau_u - t})_{t\le\tau_u})\, H(\Phi(\tau_u), n(\tau_u)) \big)$$
$$= \sum_{(\tilde n_e)\in\mathbb{N}^E} \int d\tilde\Phi\ H(\tilde\Phi, \tilde n)\, \tilde E_{x_0,\tilde\Phi,\tilde n}\Big( \mathbb{1}_{\{\tilde X_{\tilde T} = x_0,\ \tilde n_e(\tilde T)\ge 0\ \forall e\in E\}}\ F\big(\tilde\Phi(\tilde T), \tilde n(\tilde T)\big)\, G\big((\tilde Z_t)_{t\le\tilde T}\big) \prod_{x\in V\setminus\{x_0\}} \frac{\tilde\Phi_x}{\tilde\Phi_x(\tilde T)} \Big),$$
with the notations of Lemma 4.4.

Let $\tilde{\mathcal{F}}_t = \sigma\{\tilde X_s,\ s\le t\}$ be the filtration generated by $\tilde X$. We define the $\tilde{\mathcal{F}}$-adapted process $\tilde M_t$, defined up to time $\tilde T$, by
$$(4.9)\qquad \tilde M_t = \frac{F(\tilde\Phi(t), \tilde n(t))}{\prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t)}\, \mathbb{1}_{\{\tilde X_t \in \mathcal{C}(x_0, \tilde n(t))\}}\, \mathbb{1}_{\{\tilde n_e(t)\ge 0\ \forall e\in E\}} = e^{-\frac{1}{2}\sum_{x\in V} W_x\tilde\Phi_x(t)^2 - \sum_{e\in E} J_e(\tilde\Phi(t))} \Big(\prod_{e\in E} \frac{(2J_e(\tilde\Phi(t)))^{\tilde n_e(t)}}{\tilde n_e(t)!}\Big) \frac{2^{C(\tilde n(t))-1}}{\prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t)}\, \mathbb{1}_{\{\tilde X_t\in\mathcal{C}(x_0,\tilde n(t)),\ \tilde n_e(t)\ge 0\ \forall e\in E\}},$$
where $\mathcal{C}(x_0, \tilde n(t))$ denotes the cluster of the origin $x_0$ induced by the configuration $C(\tilde n(t))$. Note that at time $t = \tilde T$ we also have
$$(4.10)\qquad \tilde M_{\tilde T} = \frac{F(\tilde\Phi(\tilde T), \tilde n(\tilde T))}{\prod_{x\in V\setminus\{x_0\}} \tilde\Phi_x(\tilde T)}\, \mathbb{1}_{\{\tilde X_{\tilde T} = x_0,\ \tilde n_e(\tilde T)\ge 0\ \forall e\in E\}},$$
since $\tilde M_{\tilde T}$ vanishes on the event where $\tilde X_{\tilde T} = x$ with $x\neq x_0$. Indeed, if $\tilde X_{\tilde T} = x \neq x_0$, then $\tilde\Phi_x(\tilde T) = 0$ and $J_e(\tilde\Phi(\tilde T)) = 0$ for $e\in E$ such that $x\in e$. It means that $\tilde M_{\tilde T}$ is equal to $0$ if $\tilde n_e(\tilde T) > 0$ for some $e$ neighboring $x$. Thus, $\tilde M_{\tilde T}$ is null unless $\{x\}$ is a cluster in $C(\tilde n(\tilde T))$. Hence, $\tilde M_{\tilde T} = 0$ if $x\neq x_0$, since $\tilde M_{\tilde T}$ contains the indicator of the event that $\tilde X_{\tilde T}$ and $x_0$ are in the same cluster. Hence, using the identities (4.8) and (4.10), we deduce that (4.6) is equal to
$$(4.11)\qquad (4.6) = \sum_{(\tilde n_e)\in\mathbb{N}^E} \int d\tilde\Phi\ H(\tilde\Phi, \tilde n)\, F(\tilde\Phi, \tilde n)\, \tilde E_{x_0,\tilde\Phi,\tilde n}\Big( \frac{\tilde M_{\tilde T}}{\tilde M_0}\, G\big((\tilde Z_t)_{t\le\tilde T}\big) \Big).$$

Step 4 :
We denote by $\check Z_t = (\check X_t, \check\Phi(t), \check n(t))$ the process defined in Section 3.1, which is well defined up to the stopping time $\check T$, and set $\check Z^T_t = \check Z_{t\wedge\check T}$. We denote by $\check E_{x_0,\check\Phi,\check n}$ the law of the process $\check Z$ conditionally on the initial value $\check n(0)$, i.e. conditionally on $(N_e(2J_e(\check\Phi))) = (\check n_e)$. The last step of the proof goes through the following lemma.

Lemma 4.5. i) Under $\check E_{x_0,\check\Phi,\check n}$, $\check X$ ends at $\check X_{\check T} = x_0$ a.s., and $\check n_e(\check T)\ge 0$ for all $e\in E$.
ii) Let $\tilde P^{\le t}_{x_0,\tilde\Phi,\tilde n}$ and $\check P^{\le t}_{x_0,\check\Phi,\check n}$ be the laws of the processes $(\tilde Z^T_s)_{s\le t}$ and $(\check Z^T_s)_{s\le t}$; then
$$\frac{d\check P^{\le t}_{x_0,\tilde\Phi,\tilde n}}{d\tilde P^{\le t}_{x_0,\tilde\Phi,\tilde n}} = \frac{\tilde M_{t\wedge\tilde T}}{\tilde M_0}.$$

Using this lemma, we obtain that, in the right-hand side of (4.11),
$$\tilde E_{x_0,\tilde\Phi,\tilde n}\Big( \frac{\tilde M_{\tilde T}}{\tilde M_0}\, G\big((\tilde Z_t)_{t\le\tilde T}\big) \Big) = \check E_{x_0,\tilde\Phi,\tilde n}\Big( G\big((\check Z_t)_{t\le\check T}\big) \Big).$$
Hence we deduce, using formula (4.7) and proceeding as in Lemma 4.3, that (4.6) is equal to
$$\int d\tilde\varphi\ e^{-E(\tilde\varphi)}\, h(\tilde\varphi) \sum_{(\tilde n_e)\ll(\tilde\varphi_x)}\ \prod_{e\in E,\ \tilde\varphi_{e-}\tilde\varphi_{e+}\ge 0} e^{-2J_e(|\tilde\varphi|)} \frac{(2J_e(|\tilde\varphi|))^{\tilde n_e}}{\tilde n_e!}\ \check E_{x_0,|\tilde\varphi|,\tilde n}\Big( G\big((\check Z_t)_{t\le\check T}\big) \Big),$$
where the last integral is on the set $\{(\tilde\varphi_x)\in\mathbb{R}^V,\ \tilde\varphi_{x_0} = \sqrt{2u}\}$, $d\tilde\varphi = \prod_{x\in V\setminus\{x_0\}} \frac{d\tilde\varphi_x}{\sqrt{2\pi}^{\,|V|-1}}$, and where $(\tilde n_e)\ll(\tilde\varphi_x)$ means that $(\tilde n_e)\in\mathbb{N}^E$ and $\tilde n_e = 0$ if $\tilde\varphi_{e-}\tilde\varphi_{e+} \le 0$. Finally, we conclude that
$$\mathbb{E}\Big[ g\big((X_{\tau^{x_0}_u - t}, n_e(\tau^{x_0}_u - t))_{0\le t\le\tau^{x_0}_u}\big)\, h(\varphi^{(u)}) \Big] = \mathbb{E}\Big[ g\big((\check X_t, \check n_e(t))_{0\le t\le\check T}\big)\, h(\check\varphi) \Big],$$
where in the right-hand side $\check\varphi \sim P^{\{x_0\},\sqrt{2u}}_\varphi$ is a GFF and $(\check X_t, \check n(t))$ is the process defined in Section 3.1 from the GFF $\check\varphi$. This exactly means that $\varphi^{(u)} \sim P^{\{x_0\},\sqrt{2u}}_\varphi$ and that
$$\mathcal{L}\Big( \big(X_{\tau^{x_0}_u - t}, n_e(\tau^{x_0}_u - t)\big)_{0\le t\le\tau^{x_0}_u} \ \Big|\ \varphi^{(u)} = \check\varphi \Big) = \mathcal{L}\Big( \big(\check X_t, \check n(t)\big)_{t\le\check T} \Big).$$
This concludes the proof of Theorem 6. □
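The proof of Lemma 4.5 that follows differentiates a function $\Theta$ in the $\Phi$ variable, with $\tilde M_{t\wedge\tilde T} = \Theta(\tilde Z_{t\wedge\tilde T})$, and uses the identity $\big(\tfrac{1}{\Theta}\tfrac{1}{\Phi_x}\partial_{\Phi_x}\Theta\big)(x,\Phi,n) = -W_x + \sum_{y\sim x}\big(-W_{x,y}\Phi_y/\Phi_x + n_{x,y}/\Phi_x^2\big)$. As a numerical sanity check, one can compare this formula with a finite-difference derivative on a made-up triangle graph with $\kappa\equiv 0$ (all numbers below are arbitrary test values of ours):

```python
import math

# Arbitrary triangle graph: vertex 0 plays the role of x_0; all stacks are positive,
# so {n_e > 0} has a single cluster and the factor 2^{C(n)-1} = 1 (constant in Phi anyway).
V = [0, 1, 2]
W = {(0, 1): 1.3, (1, 2): 0.7, (0, 2): 2.1}    # conductances W_e
n = {(0, 1): 2, (1, 2): 1, (0, 2): 3}           # integer stacks n_e

def theta(x, Phi):
    """Theta(x, Phi, n) with kappa = 0, so W_x = sum of the conductances adjacent to x."""
    Wx = {v: sum(w for e, w in W.items() if v in e) for v in V}
    J = {e: w * Phi[e[0]] * Phi[e[1]] for e, w in W.items()}
    val = math.exp(-0.5 * sum(Wx[v] * Phi[v] ** 2 for v in V) - sum(J.values()))
    for e, ne in n.items():
        val *= (2.0 * J[e]) ** ne / math.factorial(ne)
    for v in V:
        if v != x:
            val /= Phi[v]
    return val

x, h = 1, 1e-6
Phi = {0: 0.9, 1: 1.4, 2: 0.6}
up, dn = dict(Phi), dict(Phi)
up[x] += h
dn[x] -= h
# left-hand side: (1/Theta) (1/Phi_x) dTheta/dPhi_x by central finite differences
lhs = (theta(x, up) - theta(x, dn)) / (2 * h) / (theta(x, Phi) * Phi[x])
# right-hand side: -W_x + sum over neighbours of (-W_{x,y} Phi_y/Phi_x + n_{x,y}/Phi_x^2)
rhs = -sum(w for e, w in W.items() if x in e)
for e, w in W.items():
    if x in e:
        y = e[0] if e[1] == x else e[1]
        rhs += -w * Phi[y] / Phi[x] + n[e] / Phi[x] ** 2
```

The two sides agree to the accuracy of the finite-difference scheme.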
Proof of Lemma 4.5.
The generator of the process $\tilde Z_t$ defined in (4.5) is given, for any test function $f$ bounded and $C^1$ in the second component, by
$$(4.12)\qquad (\tilde L f)(x, \tilde\Phi, \tilde n) = -\frac{1}{\tilde\Phi_x}\Big(\frac{\partial}{\partial\tilde\Phi_x} f\Big)(x, \tilde\Phi, \tilde n) + \sum_{y,\ y\sim x}\Big( W_{x,y}\big( f(y, \tilde\Phi, \tilde n - \delta_{\{x,y\}}) - f(x, \tilde\Phi, \tilde n) \big) + W_{x,y}\frac{\tilde\Phi_y}{\tilde\Phi_x}\big( f(x, \tilde\Phi, \tilde n - \delta_{\{x,y\}}) - f(x, \tilde\Phi, \tilde n) \big) \Big),$$
where $\tilde n - \delta_{\{x,y\}}$ is the value obtained by removing $1$ from $\tilde n$ at the edge $\{x,y\}$. Indeed, since $\tilde\Phi_x(t) = \sqrt{\tilde\Phi_x(0)^2 - 2\tilde\ell_x(t)}$, we have
$$(4.13)\qquad \frac{\partial}{\partial t}\tilde\Phi_x(t) = -\frac{\mathbb{1}_{\{\tilde X_t = x\}}}{\tilde\Phi_x(t)},$$
which explains the first term in the expression. The second term is obvious from the definition of $\tilde Z_t$, and corresponds to the term induced by the jumps of the Markov process $\tilde X_t$. The last term corresponds to the decrease of $\tilde n$ due to the increase of the process $N_e(J_e(\tilde\Phi)) - N_e(J_e(\tilde\Phi(t)))$. Indeed, on the interval $[t, t+dt]$, the probability that $N_e(J_e(\tilde\Phi(t))) - N_e(J_e(\tilde\Phi(t+dt)))$ is equal to $1$ is of order
$$-\frac{\partial}{\partial t} J_e(\tilde\Phi(t))\, dt = \mathbb{1}_{\{\tilde X_t\in e\}}\, W_e\, \frac{\tilde\Phi_{e-}(t)\,\tilde\Phi_{e+}(t)}{\tilde\Phi_{\tilde X_t}(t)^2}\, dt,$$
using the identity (4.13).
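The rate computation above (and the analogous one for $\check L$ below) boils down to differentiating $t\mapsto J_e(\tilde\Phi(t))$ through (4.13). A quick numerical check, with arbitrary values of ours for $W_e$, the frozen endpoint $\tilde\Phi_{e+}$, and $\tilde\Phi_{e-}(0)^2 = a$:

```python
import math

W_e, Phi_y, a = 1.3, 0.8, 4.0     # arbitrary conductance and field values

def Phi_x(t):
    # Phi_x(t) = sqrt(Phi_x(0)^2 - 2 l_x(t)), with l_x(t) = t while the walk sits at x
    return math.sqrt(a - 2.0 * t)

def J(t):
    # J_e(Phi(t)) = W_e Phi_{e-}(t) Phi_{e+}(t), the other endpoint being frozen
    return W_e * Phi_x(t) * Phi_y

t, h = 0.7, 1e-6
# identity (4.13): d/dt Phi_x(t) = -1 / Phi_x(t)
dPhi = (Phi_x(t + h) - Phi_x(t - h)) / (2 * h)
# consequence: -d/dt J_e(Phi(t)) = J_e(Phi(t)) / Phi_x(t)^2 = W_e Phi_y / Phi_x(t)
dJ = -(J(t + h) - J(t - h)) / (2 * h)
```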
Let $\check L$ be the generator of the Markov jump process $\check Z_t = (\check X_t, (\check\Phi_x(t)), (\check n_e(t)))$. For any smooth test function $f$, the generator is equal to
$$(\check L f)(x, \Phi, n) = -\frac{1}{\Phi_x}\Big(\frac{\partial}{\partial\Phi_x} f\Big)(x, \Phi, n) + \frac{1}{2}\sum_{y,\ y\sim x} \frac{n_{x,y}}{\Phi_x^2}\,\mathbb{1}_{A_1(x,y)}\big( f(y, \Phi, n - \delta_{\{x,y\}}) + f(x, \Phi, n - \delta_{\{x,y\}}) - 2f(x, \Phi, n) \big)$$
$$+ \sum_{y,\ y\sim x} \frac{n_{x,y}}{\Phi_x^2}\,\mathbb{1}_{A_2(x,y)}\big( f(y, \Phi, n - \delta_{\{x,y\}}) - f(x, \Phi, n) \big) + \sum_{y,\ y\sim x} \frac{n_{x,y}}{\Phi_x^2}\,\mathbb{1}_{A_3(x,y)}\big( f(x, \Phi, n - \delta_{\{x,y\}}) - f(x, \Phi, n) \big),$$
where the $A_i(x,y)$ correspond to the following disjoint events:
• $A_1(x,y)$ if the number of connected clusters induced by $n - \delta_{\{x,y\}}$ is the same as that induced by $n$;
• $A_2(x,y)$ if a new cluster is created in $n - \delta_{\{x,y\}}$ compared with $n$, and if $y$ is in the connected component of $x_0$ in the configuration induced by $n - \delta_{\{x,y\}}$;
• $A_3(x,y)$ if a new cluster is created in $n - \delta_{\{x,y\}}$ compared with $n$, and if $x$ is in the connected component of $x_0$ in the configuration induced by $n - \delta_{\{x,y\}}$.

Indeed, conditionally on the value of $\check n_e(t) = N_e(2J_e(\check\Phi(t)))$ at time $t$, the point process $N_e$ on the interval $[0, 2J_e(\check\Phi(t))]$ has the law of $\check n_e(t)$ independent points with uniform distribution on $[0, 2J_e(\check\Phi(t))]$. Hence, the probability that a point lies in the interval $[2J_e(\check\Phi(t+dt)), 2J_e(\check\Phi(t))]$ is of order
$$-\check n_e(t)\,\frac{1}{2J_e(\check\Phi(t))}\,\frac{\partial}{\partial t}\, 2J_e(\check\Phi(t))\, dt = \mathbb{1}_{\{\check X_t\in e\}}\, \check n_e(t)\, \frac{1}{\check\Phi_{\check X_t}(t)^2}\, dt.$$
We define the function
$$\Theta(x, (\Phi_x), (n_e)) = e^{-\frac{1}{2}\sum_{x\in V} W_x\Phi_x^2 - \sum_{e\in E} J_e(\Phi)} \Big(\prod_{e\in E} \frac{(2J_e(\Phi))^{n_e}}{n_e!}\Big) \frac{2^{C(n)-1}}{\prod_{z\in V\setminus\{x\}} \Phi_z}\, \mathbb{1}_{\{x\in\mathcal{C}(x_0,n),\ n_e\ge 0\ \forall e\in E\}},$$
so that $\tilde M_{t\wedge\tilde T} = \Theta(\tilde Z_{t\wedge\tilde T})$. To prove the lemma it is sufficient to prove ([1], Chapter 11) that, for any bounded smooth test function $f$,
$$(4.14)\qquad \frac{1}{\Theta}\,\tilde L(\Theta f) = \check L(f).$$
Let us first consider the first term in (4.12). A direct computation gives
$$\Big(\frac{1}{\Theta}\,\frac{1}{\Phi_x}\,\frac{\partial}{\partial\Phi_x}\Theta\Big)(x, \Phi, n) = -W_x + \sum_{y\sim x}\Big( -W_{x,y}\frac{\Phi_y}{\Phi_x} + \frac{n_{x,y}}{\Phi_x^2} \Big).$$
For the second part, remark that the indicators $\mathbb{1}_{\{x\in\mathcal{C}(x_0,n)\}}$ and $\mathbb{1}_{\{n_e\ge 0\ \forall e\in E\}}$ imply that $\Theta(y, \Phi, n - \delta_{x,y})$ vanishes if $n_{x,y} = 0$ or if $y\notin\mathcal{C}(x_0, n - \delta_{x,y})$. By inspection of the expression of $\Theta$, we obtain, for $x\sim y$,
$$\Theta(y, \Phi, n - \delta_{x,y}) = \mathbb{1}_{\{n_{x,y} > 0\}}\big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_2}\big)\, \frac{n_{x,y}}{2J_{x,y}(\Phi)}\, \frac{\Phi_y}{\Phi_x}\, \Theta(x, \Phi, n) = \big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_2}\big)\, \frac{n_{x,y}}{2W_{x,y}\Phi_x^2}\, \Theta(x, \Phi, n).$$
Similarly, for $x\sim y$,
$$\Theta(x, \Phi, n - \delta_{x,y}) = \mathbb{1}_{\{n_{x,y} > 0\}}\big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_3}\big)\, \frac{n_{x,y}}{2J_{x,y}(\Phi)}\, \Theta(x, \Phi, n) = \big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_3}\big)\, \frac{n_{x,y}}{2W_{x,y}\Phi_x\Phi_y}\, \Theta(x, \Phi, n).$$
Combining these three identities with the expression (4.12), we deduce
$$\frac{1}{\Theta}\,\tilde L(\Theta f)(x, \Phi, n) = -\frac{1}{\Phi_x}\,\frac{\partial}{\partial\Phi_x} f(x, \Phi, n) - \sum_{y\sim x}\frac{n_{x,y}}{\Phi_x^2}\, f(x, \Phi, n) + \sum_{y\sim x}\big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_2}\big)\frac{n_{x,y}}{2\Phi_x^2}\, f(y, \Phi, n - \delta_{\{x,y\}}) + \sum_{y\sim x}\big(\mathbb{1}_{A_1} + 2\,\mathbb{1}_{A_3}\big)\frac{n_{x,y}}{2\Phi_x^2}\, f(x, \Phi, n - \delta_{\{x,y\}}).$$
It exactly coincides with the expression for $\check L$, since $1 = \mathbb{1}_{A_1} + \mathbb{1}_{A_2} + \mathbb{1}_{A_3}$. □

General case.

Proposition 4.6.
The conclusion of Theorem 3 still holds if the graph $\mathcal{G} = (V, E)$ is finite and the killing measure is non-zero ($\kappa \not\equiv 0$).

Proof. Let $h$ be the function on $V$ defined as
$$h(x) = P_x(X \text{ hits } x_0 \text{ before } \zeta).$$
By definition, $h(x_0) = 1$. Moreover, for all $x\in V\setminus\{x_0\}$,
$$-\kappa_x h(x) + \sum_{y\sim x} W_{x,y}\big(h(y) - h(x)\big) = 0.$$
Define the conductances $W^h_{x,y} := W_{x,y}\, h(x)\, h(y)$, the corresponding jump process $X^h$, and the GFFs $\varphi^{(0)}_h$ and $\varphi^{(u)}_h$ with conditions $0$, respectively $\sqrt{2u}$, at $x_0$. Theorem 3 holds for the graph $\mathcal{G}$ with conductances $(W^h_e)_{e\in E}$ and with zero killing measure. But the process $(X^h_t)_{t\le \tau^{x_0}_u}$ has the same law as the process $(X_s)_{s\le\tau^{x_0}_u}$, conditioned on $\tau^{x_0}_u < \zeta$, after the change of time
$$dt = h(X_s)^{-2}\, ds.$$
This means in particular that, for the occupation times,
$$(4.15)\qquad \ell^h_x(t) = h(x)^{-2}\, \ell_x(s).$$
Moreover, we have the equalities in law
$$\varphi^{(0)}_h \overset{law}{=} h^{-1}\varphi^{(0)}, \qquad \varphi^{(u)}_h \overset{law}{=} h^{-1}\varphi^{(u)}.$$
Indeed, at the level of energy functions, we have:
$$E(hf, hf) = \sum_{x\in V} \kappa_x h(x)^2 f(x)^2 + \sum_e W_e\big( h(e+)f(e+) - h(e-)f(e-) \big)^2 = \sum_{x\in V}\Big[ \kappa_x h(x)^2 f(x)^2 + \sum_{y\sim x} W_{x,y}\, h(y)f(y)\big( h(y)f(y) - h(x)f(x) \big) \Big]$$
$$= \sum_{x\in V}\Big[ \kappa_x h(x)^2 f(x)^2 - \sum_{y\sim x} W_{x,y}\big( h(y) - h(x) \big) h(x) f(x)^2 \Big] - \sum_{\substack{x\in V\\ y\sim x}} W_{x,y}\, h(x)h(y)\big( f(y) - f(x) \big) f(x)$$
$$= \Big[ \kappa_{x_0} - \sum_{y\sim x_0} W_{x_0,y}\big( h(y) - h(x_0) \big) \Big] f(x_0)^2 + \sum_e W^h_e\big( f(e+) - f(e-) \big)^2 = \mathrm{Cste}(f(x_0)) + E^h(f, f),$$
where we used the harmonicity relation satisfied by $h$ on $V\setminus\{x_0\}$, and where $\mathrm{Cste}(f(x_0))$ means that this term does not depend on $f$ once the value of the function at $x_0$ is fixed.

Let $\check X^h_t$ be the inverse process for the conductances $(W^h_e)_{e\in E}$ and the initial condition for the field given by $\varphi^{(u)}_h$, as provided by Theorem 3. By applying the time change (4.15) to the process $\check X^h_t$, we obtain an inverse process for the conductances $W_e$ and the field $\varphi^{(u)}$. □

Proposition 4.7.
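The energy identity above can be verified numerically on the smallest possible example, a graph with two vertices $x_0$ and $a$, one conductance, and killing only at $a$ (the numbers and helper names below are arbitrary choices of ours). There, $h(a) = W/(W+\kappa_a)$ is the probability of jumping to $x_0$ before being killed:

```python
W, ka = 1.7, 0.6                      # conductance W_{x0,a}; killing rate kappa_a at a
h = {'x0': 1.0, 'a': W / (W + ka)}    # h(x) = P_x(X hits x0 before zeta)

def E(g):                             # Dirichlet form with killing at a
    return ka * g['a'] ** 2 + W * (g['x0'] - g['a']) ** 2

Wh = W * h['x0'] * h['a']             # conductance W^h_{x0,a}
def Eh(f):                            # Dirichlet form for the conductance W^h, no killing
    return Wh * (f['x0'] - f['a']) ** 2

def defect(f):                        # E(hf, hf) - E^h(f, f): should depend on f(x0) only
    hf = {v: h[v] * f[v] for v in f}
    return E(hf) - Eh(f)
```

Two functions with the same value at $x_0$ give the same defect, equal to $\big[\kappa_{x_0} - W(h(a)-1)\big]f(x_0)^2 = W(1-h(a))\,f(x_0)^2$ here since $\kappa_{x_0}=0$, and $h$ satisfies the harmonicity relation at $a$.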
Assume that the graph $\mathcal{G} = (V, E)$ is infinite. The killing measure $\kappa$ may be non-zero. Then the conclusion of Theorem 3 holds.

Proof. Consider an increasing sequence of connected sub-graphs $\mathcal{G}_i = (V_i, E_i)$ of $\mathcal{G}$ which converges to the whole graph. We assume that $V_1$ contains $x_0$. Let $\mathcal{G}^*_i = (V^*_i, E^*_i)$ be the graph obtained by adding to $\mathcal{G}_i$ an abstract vertex $x^*$, and, for every edge $\{x, y\}$ where $x\in V_i$ and $y\in V\setminus V_i$, adding an edge $\{x, x^*\}$, with the equality of conductances $W_{x,x^*} = W_{x,y}$. $(X_{i,t})_{t\ge 0}$ will denote the Markov jump process on $\mathcal{G}^*_i$, started from $x_0$. Let $\zeta_i$ be the first hitting time of $x^*$ or the first killing time by the measure $\kappa_{|V_i}$. $\varphi^{(0)}_i$, $\varphi^{(u)}_i$ will denote the GFFs on $\mathcal{G}^*_i$ with condition $0$, respectively $\sqrt{2u}$, at $x_0$, with condition $0$ at $x^*$, and taking into account the possible killing measure $\kappa_{|V_i}$. The limits in law of $\varphi^{(0)}_i$, respectively $\varphi^{(u)}_i$, are $\varphi^{(0)}$, respectively $\varphi^{(u)}$.

We consider the process $(\check X_{i,t}, (\check n_{i,e}(t))_{e\in E^*_i})_{0\le t\le\check T_i}$, the inverse process on $\mathcal{G}^*_i$ with initial field $\varphi^{(u)}_i$. $(X_{i,t})_{t\le\tau^{x_0}_{i,u}}$, conditioned on $\tau^{x_0}_{i,u} < \zeta_i$, has the same law as $(\check X_{i,\check T_i - t})_{t\le\check T_i}$. Taking the limit in law as $i$ tends to infinity, we conclude that $(X_t)_{t\le\tau^{x_0}_u}$, conditioned on $\tau^{x_0}_u < \zeta$, has the same law as $(\check X_{\check T - t})_{t\le\check T}$ on the infinite graph $\mathcal{G}$. The same holds for the clusters. In particular,
$$P\big(\check T \le t,\ \check X_{[0,\check T]} \text{ stays in } V_j\big) = \lim_{i\to+\infty} P\big(\check T_i \le t,\ \check X_{i,[0,\check T_i]} \text{ stays in } V_j\big) = \lim_{i\to+\infty} P\big(\tau^{x_0}_{i,u} \le t,\ X_{i,[0,\tau^{x_0}_{i,u}]} \text{ stays in } V_j \ \big|\ \tau^{x_0}_{i,u} < \zeta_i\big) = P\big(\tau^{x_0}_u \le t,\ X_{[0,\tau^{x_0}_u]} \text{ stays in } V_j \ \big|\ \tau^{x_0}_u < \zeta\big),$$
where in the first two probabilities we also average over the values of the free fields. Hence
$$P\big(\check T = +\infty \text{ or } \check X_{\check T} \neq x_0\big) = 1 - \lim_{\substack{t\to+\infty\\ j\to+\infty}} P\big(\tau^{x_0}_u \le t,\ X_{[0,\tau^{x_0}_u]} \text{ stays in } V_j \ \big|\ \tau^{x_0}_u < \zeta\big) = 0. \qquad\square$$

Acknowledgements
TL acknowledges the support of Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zurich Foundation.
References

[1] K. L. Chung and J. B. Walsh. Markov Processes, Brownian Motion, and Time Symmetry, volume 249 of Grundlehren der mathematischen Wissenschaften. Springer, 2005.
[2] N. Eisenbaum, H. Kaspi, M. B. Marcus, J. Rosen, and Z. Shi. A Ray-Knight theorem for symmetric Markov processes. Ann. Probab., 28(4):1781–1796, 2000.
[3] Y. Le Jan. Markov paths, loops and fields, volume 2026 of Lecture Notes in Mathematics. Springer, 2011.
[4] Y. Le Jan. Markov loops, free field and Eulerian networks. J. Math. Soc. Japan, 67(4):1671–1680, 2015.
[5] T. Lupu. Convergence of the two-dimensional random walk loop soup clusters to CLE. arXiv:1502.06827, 2015.
[6] T. Lupu. From loop clusters and random interlacements to the free field. Ann. Probab., 44(3):2117–2146, 2016.
[7] T. Lupu and W. Werner. A note on Ising random currents, Ising-FK, loop-soups and the Gaussian free field. Electron. Commun. Probab., 21: Paper No. 13, 7 pp., 2016.
[8] M. B. Marcus and J. Rosen. Markov processes, Gaussian processes and local times, volume 100 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1st edition, 2006.
[9] C. Sabot and P. Tarrès. Inverting Ray-Knight identity. Probab. Theory Related Fields, 165(3):559–580, 2015.
[10] A.-S. Sznitman. Topics in occupation times and Gaussian free fields. Zurich Lectures in Advanced Mathematics. European Mathematical Society, 2012.
[11] W. Werner. On the spatial Markov property of soups of unoriented and oriented loops. In Séminaire de Probabilités XLVIII, volume 2168 of Lecture Notes in Mathematics, pages 481–503. Springer, 2016.
Institute for Theoretical Studies, ETH Zürich, Clausiusstr. 47, 8092 Zürich, Switzerland
E-mail address: [email protected]

Institut Camille Jordan, Université Lyon 1, 43 bd. du 11 nov. 1918, 69622 Villeurbanne cedex, France
E-mail address: [email protected]

CNRS and Université Paris-Dauphine, PSL Research University, Ceremade, UMR 7534, Place du Maréchal de Lattre de Tassigny, 75775 Paris cedex 16, France, and Courant Institute of Mathematical Sciences, New York, NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai
E-mail address: