On the spatial Markov property of soups of unoriented and oriented loops
WENDELIN WERNER
Abstract.
We describe simple properties of some soups of unoriented Markov loops and of some soups of oriented Markov loops that can be interpreted as a spatial Markov property of these loop-soups. This property of the latter soup is related to well-known features of uniform spanning trees (such as Wilson's algorithm) while the Markov property of the former soup is related to the Gaussian Free Field and to identities used in the foundational papers of Symanzik, Nelson, and of Brydges, Fröhlich and Spencer or Dynkin, or more recently by Le Jan.

1. Introduction
Symanzik and then Nelson pioneered the study of Euclidean field theory more than forty years ago [17, 12]. In their approach, measures on random paths and loops play an important role and led to further important developments such as in the work of Brydges, Fröhlich and Spencer [1] (see also Dynkin [4, 5]). In all these papers, a gas of closed loops is used to represent partition functions and correlation structures of random fields.

The present note will be in the same spirit, but the focus will be on this random gas of loops itself as the main object of interest, rather than viewing it as a combinatorial diagrammatic tool to evaluate quantities related to fields. We will in particular focus on the role of the orientation of loops and describe a particularly simple property of such random configurations of unoriented loops as well as of random configurations of oriented loops. These properties are very directly related to the combinatorial features used in the aforementioned papers as well as to some features in the more recent study by Le Jan [9], which focused more on properties of the occupation times of these soups. In particular, some of the observations in Sections 7 and 9 of [9] can be viewed as describing some of the features that we will try to highlight.

These gases of loops, or loop-soups (as they have been called in [8]), are random Poissonian (i.e. non-interacting) collections of random unrooted loops in a domain, that can be associated naturally to a Markov process or a discrete-time Markov chain (see [9] and the references therein). When one discovers the configuration of the loop-soup within a given sub-domain $U$ of the entire domain in which the soup is defined, one observes on the one hand the loops that are entirely contained in $U$ (which form a loop-soup in $U$), and on the other hand, excursions in $U$ that are parts of loops that do not entirely stay in $U$.
Note that different such excursions can belong to the same loop or not, depending on the configuration outside of $U$. The Markov property that we shall discuss basically describes how to randomly complete the missing pieces into loops, i.e. it describes the conditional distribution of the loop-soup outside of $U$ when conditioning on these excursions of the loop-soup in $U$. As we shall see, this takes a nice "Markovian form" in two special cases:

• When one considers the loops to be oriented, and the intensity of the loop-soup to be the one that relates it to the partition function of uniform spanning trees, i.e. to the number of spanning trees (and to Wilson's algorithm [20] to generate them uniformly at random, see e.g. [20, 6, 7, 19]).

• In the case where the chain is reversible, if one considers the loops to be unoriented, and chooses the intensity to be the one that relates the loop-soup to the Gaussian Free Field (for instance via their partition functions – and in fact the occupation time of a continuous-time version of the loop-soup then corresponds exactly to the square of the GFF, see [9]).

In those two cases, the only relevant information needed in order to complete the excursions in $U$ into loops is the family of all endpoints of the excursions on $\partial U$, and not how these endpoints are connected by the excursions within $U$ (nor which excursion endpoint is connected to which other by an excursion). In other words, the traces of the discrete loop-soup inside $U$ and outside of $U$ are conditionally independent given their trace on $\partial U$ (more precisely, given their trace on the edges between $U$ and the complement of $U$).

Let us illustrate another instance of the spatial Markov property in an impressionistic and heuristic way via the following figures. We consider a loop-soup of unoriented loops in the inside of the rectangle, of well-chosen intensity (related to the partition function of the GFF).
In this loop-soup, only finitely many loops touch the two circles, and in each such loop, there is an even number of "crossings" from one circle to the other. The statement in the caption of Figure 2 is the type of result that we will derive.

Figure 1.
The unoriented loop(s) in the soup that touch both circles, and the endpoints of their (four in this case) crossings between the two circles.
Figure 2.
Conditionally on the set of endpoints of crossings on each of the two circles, these three pictures, corresponding to different parts of the loops that touch both circles, are independent.

To conclude this introduction, let us briefly mention that one of the motivations for the present work is to explore the relation between the natural "Markovian" structures emerging from loop-soups and the theory of local sets for the discrete and continuous GFF, as defined by Schramm and Sheffield in [15].

2. Background and definitions
In this section, we recall standard facts about Markov loops and loop-soups, make some elementary comments about the orientation/non-orientation of loops, and we define the natural measures on Markov bridges that we will need.

2.1. The measure on unrooted oriented loops.
Let us consider a discrete oriented graph $\Gamma$, where each vertex $x$ has a finite number $d(x)$ of outgoing edges, so that it is possible to define simple random walk on $\Gamma$ ($d(x)$ is however not necessarily the same for all $x$). Note that there could be several parallel edges from a vertex $x$ to a vertex $y$. Also, as opposed to the unoriented case, there could be an edge from $x$ to $y$ but no edge from $y$ to $x$.

We say that $l = (l_0, e_1, l_1, l_2, \ldots, l_{n-1}, e_n, l_n)$ is a rooted loop with $n = |l| \ge 1$ steps in $\Gamma$ if $l_0, l_1, \ldots, l_{n-1}$ are sites of the graph, if $l_0 = l_n$ and if for all $i \in \{1, \ldots, n\}$, $e_i$ denotes an edge from $l_{i-1}$ to $l_i$ in the graph. Let us notice that in the case of parallel edges in the graph, the information about which oriented edges were used is part of the information contained in the loop.

We can note that the probability $p(l)$ that a random walk starting from $l_0$ follows exactly this loop during its first $n$ steps is exactly $1/\prod_{i=0}^{n-1} d(l_i)$. We define the measure $\rho$ on rooted loops by $\rho(l) = p(l)/n$. Note that this is not a probability measure (a loop $l$ might for instance contain another loop as its first steps if it visits $l_0$ several times before time $n$; furthermore, we sum over all possible starting points $l_0$ in the graph).

The quantity $\rho(l)$ remains unchanged if one changes the root of the loop (if one considers the loop $(l_i, e_{i+1}, l_{i+1}, \ldots, l_n, e_1, l_1, \ldots, e_i, l_i)$ instead of $l$), which leads naturally to the definition of an unrooted loop $L$ as an equivalence class of rooted loops, where two loops are equivalent as soon as they are obtained from one another by rerooting. The measure $\mu$ on unrooted oriented loops is then the image of the measure $\rho$ under the mapping that maps each rooted oriented loop to its equivalence class of unrooted loops.
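As an elementary illustration of these definitions, the weight $\rho(l)$ and the induced mass $\mu(L)$ of an unrooted class can be computed mechanically for short loops. The sketch below uses hypothetical helper names and assumes a $g$-regular graph without parallel edges, so that a loop is determined by its cyclic site sequence $l_0, \ldots, l_{n-1}$ (the step back to $l_0$ being implicit); it obtains $\mu(L)$ by summing $\rho$ over the distinct rerootings of a loop:

```python
from fractions import Fraction

def rho(sites, g):
    """rho(l) = p(l)/n for a rooted loop given by its site sequence
    l_0,...,l_{n-1} on a g-regular graph without parallel edges,
    so that p(l) = g^{-n}."""
    n = len(sites)
    return Fraction(1, g ** n) / n

def mu_of_class(sites, g):
    """mu(L): image of rho under forgetting the root, i.e. the sum of
    rho over the *distinct* rooted loops obtained by rerooting."""
    n = len(sites)
    distinct_rerootings = {tuple(sites[i:] + sites[:i]) for i in range(n)}
    return len(distinct_rerootings) * rho(sites, g)
```

For the two-step loop $0 \to 1 \to 0$ with $g = 3$, this gives $\mu(L) = 2 \times (1/9)/2 = 1/9$, and for its two-fold repetition $0 \to 1 \to 0 \to 1 \to 0$ only two of the four rerootings are distinct, so $\mu(L) = 1/162$; both values agree with the general formula $\mu(L) = p(l)/J(l)$ discussed below.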
This is the loop measure that has been used and studied extensively in recent years, in connection with loop-erased random walks, Gaussian Free Fields, Dynkin's isomorphism theorems and, in the continuous two-dimensional (Brownian) setting, with conformal loop ensembles and SLE curves (see e.g. [9, 19] and the references therein).

In many cases, the number of different rooted loops in the same equivalence class of unrooted loops is the length $n(l) = |l|$ of the loop (one possible root per step on the loop). However, when for some $J \ge 2$ and some $n'$ with $n = Jn'$, the loop $l$ is exactly the concatenation of $J$ copies of $(l_0, \ldots, l_{n'})$ (and $J = J(l)$ is the maximal such number – note that this number is also invariant under rerooting of $l$ so that we can view it as a function of $L$), then the number of rooted loops that give rise to the same unrooted loop as $l$ is $n/J(L)$. Hence, the general formula for $\mu$ is $\mu(L) = p(l)/J(l)$, where $l$ is any loop in the equivalence class $L$.

In the sequel, we will refer to loops $l = (l_0, e_1, \ldots, l_n)$ (or their equivalence class) such that $J(l) = 1$ as single loops, and we say that the loop $l^k$ defined as the concatenation of $k$ copies of $l$, i.e. as $(l_0, e_1, \ldots, l_{n-1}, e_n, l_0, e_1, \ldots, l_{n-1}, e_n, \ldots, l_{n-1}, e_n, l_0)$, with $J(l^k) = k$, is its $k$-fold multiple.

2.2. The measure on unrooted unoriented loops.
In the previous subsection, the graph was oriented, and all our loops (rooted and unrooted) were oriented. Let us now consider an unoriented graph, where each vertex $x$ has a finite number $d(x)$ of outgoing edges (here a single edge from $x$ to $x$ would be counted twice, and we also allow parallel edges between two sites $x$ and $y$). Then the previous quantity $p(l)$ remains unchanged when one changes the orientation of the loop; indeed, if one defines the time-reversal $\hat l := (l_n, e_n, l_{n-1}, \ldots, l_1, e_1, l_0)$, then $p(l) = 1/\prod_{i=1}^{n} d(l_i) = p(\hat l)$.

We now define an unrooted unoriented loop as the equivalence class of oriented rooted loops, where two such loops are said to be equivalent as soon as they are obtained from one another by rerooting and possibly by time-reversal. Or alternatively, we say that an unrooted unoriented loop is the equivalence class of unrooted oriented loops, modulo time-reversal.

We then define the measure $\nu$ on unrooted unoriented loops to be the image of $\rho/2$ (so that $\nu$ is just the unoriented projection of $\mu/2$). If the time-reversal $\hat l$ of a rooted oriented loop $l$ is not in the same unrooted oriented class of loops as $l$, then there will be twice as many rooted oriented loops in the class $\tilde L$ of unoriented unrooted loops of $l$ as in its class $L$ of oriented unrooted loops, so that $\nu(\tilde L) = \mu(L)$. It can however happen that $l$ and $\hat l$ define the same oriented unrooted loop $L$ (for instance when the loop $l$ is the concatenation of a loop with its time-reversal). In that case, $\nu(\tilde L) = \mu(L)/2$. We define $\tilde J(\tilde L) = J(L)$ or $2J(L)$ depending on whether $L \ne \hat L$ or not, so that $\nu(\tilde L) = p(l)/\tilde J(\tilde L)$ for all $\tilde L$.

All the previous definitions also have straightforward counterparts and generalizations for general Markov processes (not necessarily random walks) – the processes would need to be reversible for the unoriented loops – and in continuous time and/or in continuous space. Note that as soon as one deals with continuous time, the multiplicity issues (raised by the fact that $J$ is not constant) do not exist. One fundamental example is of course the Brownian loop measure that gives rise to the Brownian loop-soup, as introduced in [8]. Other examples include the Brownian loops on cable systems associated to discrete graphs, as studied in [10].

Since our purpose here is to give an elementary presentation of the resampling property of loop-soups, we have opted in the present paper to state and explain things in the most transparent settings (random walk loops on regular graphs, where all points in $\Gamma$ have the same number $g$ of outgoing edges – which we will from now on assume – and Brownian loops). The generalization of the proofs to continuous-time and discrete-space Markov processes does not require any new idea.
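The dichotomy in the definition of $\tilde J$ can be made concrete on small examples: a loop contributes $\nu(\tilde L) = p/(2J)$ when it defines the same unrooted oriented loop as its time-reversal, and $p/J$ otherwise. A minimal sketch, with hypothetical names, under the same simplifying assumption as before (a $g$-regular graph without parallel edges, loops given by their site sequences):

```python
from fractions import Fraction

def J_oriented(sites):
    """J(l): maximal J such that l is the J-fold concatenation of a
    shorter loop."""
    n = len(sites)
    for J in range(n, 0, -1):
        if n % J == 0 and all(sites[i] == sites[i % (n // J)] for i in range(n)):
            return J
    return 1

def J_unoriented(sites):
    """J~(L~) = 2 J(L) if l and its time-reversal define the same
    unrooted oriented loop, and J(L) otherwise."""
    rev = [sites[0]] + sites[:0:-1]  # time-reversal, rerooted at l_0
    n = len(sites)
    cls = lambda s: min(tuple(s[i:] + s[:i]) for i in range(n))
    return (2 if cls(rev) == cls(sites) else 1) * J_oriented(sites)

def nu(sites, g):
    """nu(L~) = p(l) / J~(L~)."""
    return Fraction(1, g ** len(sites)) / J_unoriented(sites)
```

Consistently with the text, the back-and-forth loop $0 \to 1 \to 0$ is orientation-reversible and gets $\nu(\tilde L) = \mu(L)/2 = 1/18$ (for $g = 3$), while a triangle $0 \to 1 \to 2 \to 0$ and its time-reversal are two distinct oriented loops, so $\nu(\tilde L) = \mu(L)$.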
2.3. Loop-soups. For a given graph, one can define simple natural random objects out of the measures on loops. For each $\alpha > 0$, one can define a Poisson point process of loops, with intensity given by $\alpha$ times the measure $\mu$ on loops. This is the loop-soup, as introduced in the Brownian setting in [8] and studied more recently in the discrete setting in [9]. It is also the gas of loops that was already used in [17, 1].

Of course, when one samples a soup of (unrooted) oriented loops according to the loop measure $\alpha\mu$, and one forgets about the orientation of the loops, one gets a soup of unrooted unoriented loops with intensity $2\alpha\nu$, and conversely, one can recover the former by choosing at random the orientation of each loop. In order to avoid confusion, we will use the letter $\alpha$ to denote the intensity of soups of oriented loops (i.e. with intensity measure $\alpha\mu$) and $c$ to denote the intensity of soups of unoriented loops (i.e. with intensity measure $c\nu$). The natural relation between $c$ and $\alpha$ is then $c = 2\alpha$.

We will not recall all the properties of these loop-soups, but we would like to stress the following points:

• The soup of oriented loops with intensity $\alpha = 1$ is very closely related to uniform spanning trees. In particular, the loops in such a loop-soup correspond exactly to the family of loops that have been erased when performing Wilson's algorithm to sample a uniform spanning tree in $\Gamma$. And in this context, it is somewhat more natural to consider oriented loops.

• The soup of unoriented loops with intensity $c = 1$ is very closely related to the Gaussian Free Field on $\Gamma$ and its square. In this context, because one looks only at the cumulated occupation times of the loops, it is in fact somewhat more natural to consider unoriented loops (as the orientation is not needed to define the occupation time measure).

With this notation, the UST is related to $c = 2$ and the GFF to $c = 1$, and more generally, in two dimensions, in the conformal field theory language, the value of $c$ corresponds to the absolute value of the central charge of the corresponding models.
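To make the first bullet point concrete, here is a minimal sketch of Wilson's algorithm (hypothetical helper names; the graph is passed as an adjacency-list map). It returns both the sampled spanning tree and the loops erased along the way; as recalled above, these erased loops are precisely the ones that the $\alpha = 1$ soup of oriented loops describes (see [20, 19] for precise statements):

```python
import random

def loop_erased_walk(start, targets, neighbors, rng):
    """Walk from `start` until hitting `targets`, erasing loops as they
    are created; return (self-avoiding path, list of erased loops)."""
    path, erased = [start], []
    while path[-1] not in targets:
        nxt = rng.choice(neighbors[path[-1]])
        if nxt in path:
            i = path.index(nxt)
            erased.append(path[i:])  # erased loop, rooted at nxt
            path = path[:i + 1]
        else:
            path.append(nxt)
    return path, erased

def wilson(vertices, root, neighbors, rng=random):
    """Wilson's algorithm: successive loop-erased walks toward the
    current tree yield a uniform spanning tree; the erased loops are
    collected as well."""
    in_tree, loops, tree_edges = {root}, [], []
    for v in vertices:
        if v in in_tree:
            continue
        branch, erased = loop_erased_walk(v, in_tree, neighbors, rng)
        loops.extend(erased)
        tree_edges.extend(zip(branch[:-1], branch[1:]))
        in_tree.update(branch)
    return tree_edges, loops
```

On a connected graph with $n$ vertices, the returned tree always has $n - 1$ oriented edges pointing toward the root, while the number and lengths of the erased loops are random.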
Suppose now that $L_1, \ldots, L_k$ are $k$ different oriented unrooted loops. Let $U_1, \ldots, U_k$ denote the respective numbers of occurrences of these loops in a loop-soup with intensity $\alpha\mu$. These are $k$ independent Poisson random variables with respective means $\alpha\mu(L_1), \ldots, \alpha\mu(L_k)$, so that
$$P(U_1 = u_1, \ldots, U_k = u_k) = \prod_{j=1}^{k} \frac{(\alpha\mu(L_j))^{u_j} e^{-\alpha\mu(L_j)}}{u_j!}.$$
In the special case where $\alpha = 1$, the $\alpha^{u_j}$ terms disappear, and we get
$$\frac{P(U_1 = u_1, \ldots, U_k = u_k)}{P(U_1 = \cdots = U_k = 0)} = \prod_{j=1}^{k} \frac{(p(L_j)/J(L_j))^{u_j}}{u_j!}.$$
Similarly, if we consider instead a loop-soup of unoriented loops with intensity $\nu$ (i.e. for $c = 1$), the very same formula holds, i.e. if $\tilde L_1, \ldots, \tilde L_k$ are $k$ different unoriented loops, and if $\tilde U_1, \ldots, \tilde U_k$ denote the respective numbers of occurrences of these loops in a soup of unoriented loops with intensity $\nu$, then
$$\frac{P(\tilde U_1 = u_1, \ldots, \tilde U_k = u_k)}{P(\tilde U_1 = \cdots = \tilde U_k = 0)} = \prod_{j=1}^{k} \frac{(p(L_j)/\tilde J(\tilde L_j))^{u_j}}{u_j!}.$$
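The displayed ratio can be checked numerically: the exponential normalisations cancel between numerator and denominator, leaving only the product of the $\mu(L_j)^{u_j}/u_j!$ factors. A small sketch with hypothetical names, where the $\mu$-masses are passed in as numbers:

```python
import math

def count_probability(mu_masses, counts, alpha=1.0):
    """P(U_1=u_1,...,U_k=u_k): independent Poisson(alpha*mu(L_j)) counts."""
    return math.prod(
        (alpha * m) ** u * math.exp(-alpha * m) / math.factorial(u)
        for m, u in zip(mu_masses, counts)
    )

def count_ratio(mu_masses, counts):
    """At alpha = 1, P(U = u) / P(U = 0) = prod_j mu(L_j)^{u_j} / u_j!."""
    return math.prod(m ** u / math.factorial(u)
                     for m, u in zip(mu_masses, counts))
```

For instance, with masses $(1/2, 1/4)$ and counts $(2, 1)$, both sides equal $(1/2)^2/2! \times (1/4)/1! = 1/32$.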
Recall that in order to slightly simplify notations and some of our consid-erations, we are from now going to assume that (both in the oriented and in the unoriented cases),the graph Γ will be such that each site has the same number g of outgoing edges. Note that this isnot really a restriction, because it is for instance always possible starting from an unoriented graphΓ where each site x has d ( x ) outgoing edges, with sup x d ( x ) ≤ g , to add ( g − d ( x )) stationary edgesfrom x to x to the graph, without changing the behavior of the random walks (and this leads tothe natural way to extend the results to the case of graphs with non-constant degree).Let us first suppose that Γ is an oriented graph. Consider now a subgraph D ⊂ Γ and two points x and y in D . We say that a bridge b from x to y in D is a finite nearest-neighbour path (keepingtrack of the oriented edges used) in D starting at x and finishing at y . We call n ( b ) the length(number of jumps) of b . A bridge from x to x is allowed to have a zero length.Suppose now that the Green’s function G D ( x, y ) is positive and finite. Recall that this is themean number of visits at y before exiting D , by a random walk starting at x . In other words, it isthe sum over all bridges from x to y in D of g − n ( b ) . We can therefore define a probability measureon bridges from x to y in D , that assigns a probability g − n ( b ) /G D ( x, y ) to each bridge b .Suppose now that we are given N points x , . . . , x N and N points y , . . . , y N in D . We say that athe family of paths b , . . . , b N is an ordered bridge in D from X = ( x , . . . , x N ) onto Y = ( y , . . . , y N )if each b j is a bridge from x j to y j in D . 
We also define $G_D(X, Y) = \prod_{j=1}^{N} G_D(x_j, y_j)$ and, when this quantity is neither equal to zero nor infinite, we define the probability measure on ordered bridges from $X$ to $Y$ in $D$ to be the one obtained by taking $N$ independent bridges from $x_j$ to $y_j$ respectively.

An unordered bridge from $X$ to $Y$ is defined to be the knowledge of a permutation $s$ of $\{1, \ldots, N\}$ and of an ordered bridge from $X$ to $Y_s = (y_{s(1)}, \ldots, y_{s(N)})$. We now define a probability measure $B^D_{X,Y}$ on unordered bridges from $X$ to $Y$ in $D$ as follows:

(1) First sample a permutation $\sigma$ so that the probability of $\sigma = s$ is proportional to $G_D(X, Y_s)$.
(2) Then, conditionally on $\sigma$, sample the ordered bridge from $X$ to $Y_\sigma$ according to the probability measure on ordered bridges in $D$ described above.

For this to make sense, we need that for at least one $s$, $G_D(X, Y_s) >$
$0$. This procedure basically samples an unordered bridge from $X$ to $Y$ in such a way that the probability of a given unordered bridge is proportional to $g^{-K}$, where $K$ denotes the sum of the lengths of the $N$ bridges that form the generalized bridge. Mind that in the present setting, when $y_1 = y_2$ say, we do count the same collection of $N$ bridges (corresponding to interchanging $y_1$ and $y_2$) twice in our partition function, because they correspond to different permutations.

Let us now suppose that the graph $\Gamma$ is not oriented. In the previous definition, each bridge has an implicit orientation (from $x$ to $y$). On the other hand, the image under time-reversal (i.e. considering $\hat b_j = b_{n-j}$) of the bridge probability measure from $x$ to $y$ in $D$ is exactly the bridge probability measure from $y$ to $x$ in $D$ (note that we use here the fact that $x$ and $y$ have the same number $g$ of outgoing edges). One can therefore define the probability measure on unoriented bridges in $D$ joining $x$ and $y$ to be the law obtained by considering $B^D_{x,y}$ and then forgetting about the time-orientation.

Suppose now that $Z = (z_1, \ldots, z_{2N})$ are $2N$ points in $D$. An unoriented $Z$-bridge is the knowledge of a pairing $t$ of $\{1, \ldots, 2N\}$ (this is a permutation that contains only cycles of length exactly 2 – and we say that $i$ and $t(i)$ are paired – we will denote the $N$ pairs of $t$ by $(t_1, \bar t_1), \ldots, (t_N, \bar t_N)$ using some lexicographic rule), and of $N$ unoriented bridges joining the $N$ pairs $(z_{t_k}, z_{\bar t_k})$ for $k \le N$. For each $Z$, we then define the measure $B^D_Z$ on unoriented unordered $Z$-bridges as follows:

(1) We first sample a pairing $\tau$ in such a way that the probability of a given pairing $t$ is proportional to $\prod_{k=1}^{N} G_D(z_{t_k}, z_{\bar t_k})$.
(2) When $\tau = t$, we then sample $N$ independent (unoriented) bridges in $D$ joining the two points of each of the $N$ pairs $(z_{t_k}, z_{\bar t_k})$.

Again, this only makes sense if for at least one pairing $t$, $\prod_k G_D(z_{t_k}, z_{\bar t_k})$ is positive.
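Step (1) of these constructions is an ordinary weighted choice over permutations (or pairings). A sketch for the oriented case $B^D_{X,Y}$, with hypothetical names: on a finite graph, the Green's function of the walk killed outside $D$ is $G_D = (I - Q)^{-1}$, where $Q$ is the substochastic transition matrix restricted to $D$ (entries equal to the number of edges from $x$ to $y$ divided by $g$), and $\sigma$ is then drawn with probability proportional to $\prod_j G_D(x_j, y_{s(j)})$:

```python
import itertools
import random
import numpy as np

def green_matrix(Q):
    """G_D = sum_{k>=0} Q^k = (I - Q)^{-1}, with Q the substochastic
    transition matrix of the walk killed when it leaves D."""
    Q = np.asarray(Q, dtype=float)
    return np.linalg.inv(np.eye(len(Q)) - Q)

def sample_permutation(G, xs, ys, rng=random):
    """Step (1) of B^D_{X,Y}: P(sigma = s) proportional to
    prod_j G[x_j, y_{s(j)}].  (Step (2) would then draw independent
    bridges from x_j to y_{sigma(j)}.)"""
    perms = list(itertools.permutations(range(len(ys))))
    weights = [float(np.prod([G[xs[j], ys[s[j]]] for j in range(len(xs))]))
               for s in perms]
    return rng.choices(perms, weights=weights)[0]
```

For example, for two sites with $Q = \begin{pmatrix} 0 & 1/3 \\ 1/3 & 0 \end{pmatrix}$ (two neighbouring sites of a $3$-regular graph, killed elsewhere), one gets $G_D(x,x) = 9/8$ and $G_D(x,y) = 3/8$, and the identity permutation is favoured accordingly.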
Then, the definition just means that we sample a $Z$-bridge in such a way that the probability of a given $Z$-bridge is just proportional to $g^{-K}$, where $K$ denotes the sum of the lengths of the $N$ bridges that form this $Z$-bridge.

These definitions of bridges can be extended in a straightforward way to the Brownian settings (both in $d$-dimensional space as well as on cable systems), provided that no two $z_j$'s coincide (for the unoriented bridges) and that no $x_i$ is equal to a $y_j$ (for the oriented bridges), so that the Green's functions involved are all finite. The only difference is that the sampling of an individual bridge from $x$ to $y$ is done in two steps:

(1) First, sample the time-length $T$ of the Brownian bridge according to the probability measure $p_{D,t}(x, y)\,dt/G_D(x, y)$, where $p_{D,t}(x, y)$ is the density at $y$ of the law at time $t$ of a Brownian motion starting from $x$ and killed upon exiting $D$.
(2) Then, conditionally on $T$, sample a usual Brownian bridge from $x$ to $y$ with time-length $T$, conditioned to stay in $D$.

3. Partial resampling of soups, and spatial Markov properties
We now describe various instances of the partial resampling properties of loop-soups, and discuss some consequences.

3.1. Partial resampling of soups of oriented loops at $\alpha = 1$. Let us suppose that $\Gamma$ is an oriented graph of degree $g$ as before, and that $D \subset \Gamma$ is a subgraph of $\Gamma$ where the Green's function is finite. We are going to describe the resampling property of the soup of oriented loops with intensity $\alpha = 1$. Suppose that $F_1$ and $F_2$ are two disjoint finite sets of vertices in our graph. When one considers a loop-soup in $D$, the number of loops in the loop-soup that intersect both $F_1$ and $F_2$ is a Poisson random variable $M = M(F_1, F_2)$ with finite mean equal to the $\mu$-mass of the set of loops that intersect both $F_1$ and $F_2$. We denote the family of $M$ loops that intersect both $F_1$ and $F_2$ by $\mathcal{L}$ (the information in $\mathcal{L}$ includes how many occurrences there are of any given oriented unrooted loop that intersects $F_1$ and $F_2$). We will write $\mathcal{L} = (L_1, \ldots, L_M)$, where the chosen order of the loops in the family follows some lexicographic (deterministic) rule, so that the information provided by $\mathcal{L}$ and by $(L_1, \ldots, L_M)$ is identical.

When $L$ is an unrooted loop that intersects $F_1$ and $F_2$, we can consider the finitely many portions of $L$ that are of the type $(a_0, e_1, a_1, a_2, \ldots, a_k)$ with $a_0, a_k \in F_1$, $\{a_1, \ldots, a_{k-1}\} \cap F_1 = \emptyset$ and at least one of the $a_i$ in $F_2$. In other words, these are the excursions of $L$ away from $F_1$ that do reach $F_2$. We allow $a_0 = a_k$, or the excursion to be the entire loop (which happens if $L$ visits $F_1$ only once), and it can also happen that the same excursion occurs several times in the same loop.

When we sample $\mathcal{L}$, we call $\eta$ the collection of all excursions of its loops. We can again decide to order them in some lexicographic predetermined deterministic way, so that we can write $\eta = (\eta_1, \ldots, \eta_N)$ (again, it is important that if a given piece appears several times in the loop-soup, then it appears several times in this list as well). Note that $N \ge M$ because each loop that intersects $F_1$ and $F_2$ contains at least one such excursion. The pieces $\eta_1, \ldots, \eta_N$ might be parts of $N$ different loops (in which case $N = M$), but they could also all be parts of the same loop (in which case $M = 1$). Of course, the probability that $N = M = 0$ is also positive.

Observe that one intuitive way to discover all these excursions is in fact to explore all the loops "starting" from their intersection points with $F_2$, in both the positive time-direction and the negative time-direction, until reaching $F_1$ in both directions.

Each of the pieces $\eta_j$ is naturally oriented as part of an oriented loop, and we can define their respective starting points $Y_j$ and endpoints $X_j$ (note that all these points are in $F_1$). The missing parts of the loops that the $\eta$'s are parts of will therefore be bridges in the complement of $F_2$ that join each of the $X_j$'s to a $Y_{\sigma(j)}$ for some permutation $\sigma$, i.e. the missing part will be an unordered bridge $\beta$ from the vector $X = (X_1, \ldots, X_N)$ to the vector $Y = (Y_1, \ldots, Y_N)$ in $D \setminus F_2$. Now, the resampling result in this case goes as follows:

Proposition 1.
The conditional distribution of $\beta$ given $\eta$ is exactly the unordered bridge measure $B^{D \setminus F_2}_{X,Y}$.

Figure 3.
Discovering (i) the oriented excursions away from the right part that reach the small square, (ii) sampling the three oriented bridges in the complement of the small square.

Note that this conditional distribution is fully described by the vectors $X$ and $Y$ (i.e. it depends on $\eta$ only via $X$ and $Y$), which is one of the main features of this result. In other words, conditionally on $X$ and $Y$, $\eta$ and $\beta$ are independent. In particular, the number of actual loops that are being created by $\beta$ when one concatenates it with $\eta$ does not intervene in the conditional distribution, which is a specific feature of this $\alpha = 1$ case.

Let us comment on the case where $F_2 = D \setminus F_1$: If one then conditions on the number of jumps of the loop-soup on each edge from a point of $F_2$ to a point of $F_1$ (one then gets a collection $(X'_j, X_j)_{j \le N}$ of jumps from $X'_j \in F_2$ to $X_j \in F_1$), and on the number of jumps of the loop-soup on each edge from a point of $F_1$ to a point of $F_2$ (one then gets a collection $(Y_j, Y'_j)_{j \le N}$ of jumps from $Y_j \in F_1$ to $Y'_j \in F_2$), then the conditional distributions of the missing pieces in $F_1$ and in $F_2$ are independent, and they are respectively the unordered bridge measure in $F_1$ from $X$ to $Y$ (this corresponds to $\beta$), and the unordered bridge measure from $Y'$ to $X'$ in $F_2$ (this corresponds to $\eta$ without the first and last jumps of each excursion). This can be interpreted as a spatial Markov property of the occupation field on oriented edges (the random function that assigns to each oriented edge the total number of jumps of the soup along this edge) of the $\alpha = 1$ soup of oriented loops. We will discuss this again at the end of this section.

In the same spirit, we can in fact also "symmetrize" Proposition 1 when $F_2$ is a subset of the complement of $F_1$. Let us then define the collection of crossings $\eta^{1 \to 2}$ to be the parts of the loops in the loop-soup of the type $(a_0, e_1, \ldots, a_n)$ with $a_0 \in F_1$, $a_n \in F_2$ and $a_1, \ldots, a_{n-1} \in D \setminus (F_1 \cup F_2)$. We also define $\eta^{2 \to 1}$ similarly, and note that there are as many crossings from $F_1$ to $F_2$ as there are crossings from $F_2$ to $F_1$. Let $X$ (resp. $X'$) denote the vector of endpoints of $\eta^{2 \to 1}$ (resp. $\eta^{1 \to 2}$) and $Y$ (resp. $Y'$) the vector of starting points of $\eta^{1 \to 2}$ (resp. $\eta^{2 \to 1}$). Then, we can note that $X$ and $Y$ are exactly the same as the ones defined in Proposition 1, while $X'$ and $Y'$ correspond to those that one obtains when interchanging $F_1$ and $F_2$. Furthermore, $\eta^{1 \to 2}$ and $\eta^{2 \to 1}$ are fully determined by $\eta$ (or alternatively by the symmetric family $\eta'$ of excursions away from $F_2$ that do reach $F_1$). It follows readily from Proposition 1 that:

Proposition 2.
Conditionally on $\eta^{1 \to 2}$ and on $\eta^{2 \to 1}$, the missing parts of the loops that they are parts of (these are the loops of the $\alpha = 1$ soup of oriented loops that intersect both $F_1$ and $F_2$) are described by two independent unordered bridges with conditional distributions $B^{D \setminus F_2}_{X,Y}$ and $B^{D \setminus F_1}_{X',Y'}$.

Note that the other loops in the loop-soup (i.e. the loops that do not intersect at least one of the two sets $F_1$ or $F_2$) are just described by a loop-soup in the complement of $F_1$ and a loop-soup in the complement of $F_2$, that are coupled so as to share exactly the same loops that stay in $D \setminus (F_1 \cup F_2)$.

Let us now prove Proposition 1.

Proof.
Let us consider a family $E$ of $N$ excursions such that $P(\eta = E) > 0$, and suppose first that the $N$ excursions $E_1, \ldots, E_N$ of $E$ are all different. Then, if $\eta = E$ and $\mathcal{L} = \bar L$, all the loops in $\bar L$ are single loops, and they necessarily occur exactly once (and not more). Hence, for such an $\bar L$, the probability that $\mathcal{L} = \bar L$ is proportional to $g^{-n(\bar L)}$, where $n(\bar L)$ is the sum of the lengths of the loops in $\bar L$ (and the proportionality constant does not depend on $\bar L$).

On the other hand, if $X$ and $Y$ are the vectors of endpoints of $E$, the $B^{D \setminus F_2}_{X,Y}$-probability to sample an unordered bridge that gives rise exactly to $\bar L$ when concatenating it to $E$ is proportional to $g^{-K}$ (where $K = n(\bar L) - n(E)$ is the total length of the generalized bridge), because there is just one permutation per bridge that works. It therefore follows immediately that conditionally on $\eta = E$, the distribution of the missing bridges is indeed $B_{X,Y}$ in $D \setminus F_2$.

Instead of treating directly the case of multiple occurrences of the same excursion in $\eta$, we will use the following trick (a similar idea can be used to show that the loops erased during Wilson's algorithm do correspond exactly to an oriented loop-soup, see for instance [19]). We choose a very large integer $W$ (that is going to tend to infinity), and we decide to replace the graph $\Gamma$ by the graph $\Gamma_W$, which is obtained by keeping the same set of vertices as $\Gamma$, but where each edge of $\Gamma$ is replaced by $W$ copies of itself. In this way, each site now has $gW$ outgoing edges instead of $g$. There is of course a straightforward correspondence between random walks, loops and bridges on $\Gamma_W$ and on $\Gamma$. For instance, a loop-soup (resp. bridge, resp. excursion) on $\Gamma_W$ is directly projected onto a loop-soup (resp. bridge, resp. excursion) on $\Gamma$.

Let us couple loop-soups with intensity $\alpha = 1$ in all of the $\Gamma_W$'s on the same probability space, in such a way that the projections of the loop-soups in $\Gamma_W$ onto $\Gamma$ (in the sense described above) are the same for all $W$'s.
We also fix $F_1$, $F_2$, and define (with obvious notation) $\mathcal{L}^W$, $\eta^W$, $\beta^W$ etc. Note that the vectors of extremal points $X$ and $Y$ are then the same for all $\eta^W$'s.

We can also note that the probability that some edge is used more than once in the loop-soup tends to 0 as $W \to \infty$. The probability that all excursions in $\eta^W$ are different therefore tends to 1 as $W \to \infty$.

But conditionally on the fact that all excursions in $\eta^W$ are different (applying our previous result to $\Gamma_W$), we know that the conditional distribution of $\mathcal{L}^W \setminus \eta^W$ given $\eta^W$ is the bridge probability measure from $X$ to $Y$ in $D_W \setminus F_2$. Projecting this onto $\Gamma$, we get that the conditional distribution of $\beta$ given $\eta^W$ (on the event that in $\eta^W$, no two excursions are the same) is the unordered bridge measure $B_{X,Y}$ in $D \setminus F_2$.

If $U(W)$ is the event that no excursion of $\eta^W$ appears twice, we therefore get that, conditionally on $\eta = E$ and $U(W)$, the conditional distribution of $\beta$ is the unordered bridge measure $B_{X,Y}$ in $D \setminus F_2$. We now just let $W \to \infty$, which concludes the proof of the proposition. □

3.2. Partial resampling of soups of unoriented loops at $c = 1$. Let us now come back to the setting where the graph $\Gamma$ is unoriented. When one considers a soup of unoriented loops with intensity $\nu$ (recall that this corresponds to $c = 1$, i.e. to $\alpha = 1/2$, as $\nu$ is the unoriented projection of $\mu/2$), we denote as before the collection of loops that intersect both $F_1$ and $F_2$ by $\mathcal{L} = (\tilde L_1, \ldots, \tilde L_M)$, the corresponding collection of (unoriented) excursions by $\eta = (\tilde\eta_1, \ldots, \tilde\eta_N)$ and the endpoints of these $N$ excursions by $Z = (Z_1, \ldots, Z_{2N})$. The missing parts of the (unoriented) loops are unoriented paths that join each $Z_i$ to exactly one other $Z_j$, so that $\beta$ is an unordered $Z$-bridge in $D \setminus F_2$.

Note again that it is intuitively possible to explore the excursions $\tilde\eta_j$ "starting" from their intersections with $F_2$ in both directions, until hitting $F_1$ (and in this way, one does not yet discover the missing parts $\beta$).

Proposition 3.
The conditional distribution of $\beta$ given $\eta$ is exactly the unordered unoriented bridge measure $B_Z$ in $D \setminus F_2$.

Figure 4.
Discovering (i) the unoriented excursions away from the right part that reach the small square, (ii) sampling the three unoriented bridges in the complement of the small square.

Just as in the oriented case, we stress that an important feature of this statement is that this conditional distribution is a measurable function of the vector $Z$ (the other information about the excursions is not needed). We will further comment on this in the next subsection.

Proof.
We will follow the same idea as in the proof of the oriented case. As in the oriented case, when the $N$ pieces $\tilde E_1, \ldots, \tilde E_N$ of $E$ are all different, the statement is almost immediate (for each good ordered bridge, only one pairing works in order to complete $E$ into $\mathcal{L}$, and the probability to complete these $N$ pieces into $\mathcal{L}$ is therefore proportional to $g^{-K}$, where $K$ is the difference between the total number of jumps in the loop-configuration and in $E$).

We then use the same trick of copying each edge a large number of times. The very same argument then works, almost word for word. □

3.3. Spatial Markov properties.
The particular case where F_2 is the complement of F_1 is also of interest for the soup of unoriented loops. Let us for instance describe how things work for the occupation times of loop-soups (which is the main focus of the papers of Le Jan [9]). If one then conditions on the numbers of jumps of the loop-soup on all edges between a point of F_1 and a point of F_2 (in either direction – the loops being unoriented, there is anyway no direction), then the conditional distribution of the parts β in F_1 of the loops that intersect both F_1 and F_2 is described by Proposition 3: it is an unordered unoriented bridge in F_1 (and it is in fact fully described by the knowledge of the numbers of jumps along the edges between F_1 and F_2, i.e. this conditional distribution is a function of these numbers of jumps). But the situation is symmetric and we can interchange the roles of F_1 and F_2; we therefore conclude that, given β and the numbers of jumps along the edges between F_1 and F_2, the conditional distribution of η′ (defined to be the collection η from which one has removed the two extremal jumps of each η_j, i.e. the jumps between F_1 and F_2) is that of an unordered unoriented bridge in F_2 (and the law of this bridge is also fully described by the numbers of jumps between F_1 and F_2).

In other words, when one conditions on these numbers of jumps along the edges between F_1 and F_2, we can enumerate these jumps (using some deterministic lexicographic rule) by (Z′_j, Z_j)_{j ≤ N}, where Z′_j ∈ F_2 and Z_j ∈ F_1. Then η′ and β are conditionally independent unordered bridges, respectively following the unordered bridge measures B^{D \ F_1}_{Z′} and B^{D \ F_2}_{Z}. In particular, when adding on top of this the loop-soups in F_1 and the loop-soups in F_2, it follows that conditionally on the occupation times (i.e.
on the numbers of jumps N_e across each edge) on the edges between F_1 and F_2, the occupation times on sites and edges in F_1 are independent of the occupation times on sites and edges in F_2. We can rephrase this property in the following sentence: the occupation time field on edges of the soup of unoriented loops for c = 1 satisfies the spatial Markov property.

We can note that if U is a non-negative function of the occupation time field on the edges, of the form U((N_e)) = ∏_e u_e(N_e), such that the expectation of U (for the c = 1 loop-soup) is equal to one, and if we define the new probability measure Q on occupation times on edges by dQ/dP((N_e)) = U((N_e)), then the spatial Markov property also holds for Q. This can be used to represent a modification of the Markov chain (i.e. different walks with non-uniform jump probabilities).

If we consider an unoriented graph that we interpret as an oriented graph (each unoriented edge defines an oriented edge in each direction), on which we define an α = 1 soup of oriented loops, then we can also reformulate the results of Subsection 3.1 in a similar way. More precisely, for each edge e, we can define the total number N_+(e) of jumps by the soup in one direction of e, and N_−(e), the number of jumps in the opposite direction. Then, if we define N_e := (N_+(e), N_−(e)), this two-component occupation time field on edges of the α = 1 soup of oriented loops satisfies the spatial Markov property in the same sense as above.

Let us now come back to the study of the loops themselves, and not just of the cumulated occupation time of the soup. As in the oriented case, we can also (when F_2 is only assumed to be a subset of the complement of F_1) rephrase Proposition 3 in a more symmetric way, involving the crossings between F_1 and F_2.
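The stability of the spatial Markov property under such product reweightings dQ/dP = ∏_e u_e(N_e) boils down to an elementary factorization. The following toy computation illustrates it: the three variables stand for numbers of jumps on an edge inside F_1 (A), an edge inside F_2 (B) and an edge between F_1 and F_2 (C); the specific distributions and weights are our own illustrative assumptions, not data from the paper.

```python
# Toy check: if A and B are conditionally independent given C under P,
# then they remain so under the tilted measure dQ/dP proportional to
# u_a(A) u_b(B) u_c(C).  All numerical values are illustrative.

vals = [0, 1, 2]

def p(a, b, c):
    # a joint pmf under which A and B are conditionally independent given C
    pc = [0.5, 0.3, 0.2][c]
    pa = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]][c][a]
    pb = [[0.3, 0.4, 0.3], [0.7, 0.2, 0.1], [0.2, 0.6, 0.2]][c][b]
    return pc * pa * pb

u_a = {0: 0.5, 1: 1.0, 2: 2.0}   # arbitrary positive per-edge weights
u_b = {0: 1.5, 1: 0.7, 2: 1.0}
u_c = {0: 2.0, 1: 1.0, 2: 0.5}

def q(a, b, c):
    # unnormalized tilted measure; the normalization E_P[U] = 1 plays no
    # role in the conditional independence check below
    return p(a, b, c) * u_a[a] * u_b[b] * u_c[c]

# A and B remain conditionally independent given C under Q
for c in vals:
    zc = sum(q(a, b, c) for a in vals for b in vals)
    for a in vals:
        for b in vals:
            qa = sum(q(a, bb, c) for bb in vals) / zc
            qb = sum(q(aa, b, c) for aa in vals) / zc
            assert abs(q(a, b, c) / zc - qa * qb) < 1e-12
```

The same factorization works verbatim for any number of edges and any positive weights with E_P[U] = 1, which is why the reweighted soup inherits the spatial Markov property.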
We define η↔ to be the set of (unoriented) parts of loops in the c = 1 loop-soup that join a point of F_1 to a point of F_2 and otherwise stay in the complement of F_1 ∪ F_2, and we denote by Z the vector of endpoints of these crossings in F_1, and by Z′ the vector of endpoints in F_2. Then: Proposition 4.
Conditionally on η↔, the missing parts of the unoriented loops that these crossings are part of (these are the loops in the loop-soup that intersect both F_1 and F_2) are described by two independent unordered unoriented bridges with respective conditional distributions B^{D \ F_2}_{Z} and B^{D \ F_1}_{Z′}.

Figure 5, which illustrates the corresponding result in the Brownian case, can also be used to illustrate this result. It is also easy to generalize Proposition 4 and Proposition 2 to more than two sets F_1 and F_2 (and have instead n disjoint sets F_1, ..., F_n). For instance, in the unoriented case, one then conditions on the set η↔ of all crossings from any F_i to any other F_j that also stay in the complement of all the other F_k's. These crossings define n vectors Z_1, ..., Z_n (where Z_j is the list of the even number of endpoints on F_j of the aforementioned crossings). Conditionally on η↔, the missing parts of the loops (that is, of the loops in the loop-soup that touch at least two different F_j's) are described by n conditionally independent unordered unoriented bridges with respective distributions B^{D′ ∪ F_j}_{Z_j} (where D′ = D \ ∪_i F_i) for j ≤ n.

Such decompositions of the loops in the soup that intersect disjoint compact sets into crossings + conditionally independent unordered bridges can be immediately transcribed to the case of Brownian loops on the cable system associated to this graph as studied in [10]; we leave this as a simple exercise to the reader. This is of course all closely related to the Markov property of the Gaussian Free Field, as well as to Dynkin's isomorphism theorem [4] via the relation between the square of the GFF and the loop-soup (see e.g. [9] and the references therein for background). With such Markovian-type properties in hand, a natural next step is to define random sets that play the role of stopping times for one-dimensional Markov processes.
In the setting of the discrete GFF, these are the local sets as defined in [15], which turned out to be a very useful concept. Just as for one-dimensional stopping times, there are several possible ways to define them, depending on what precise filtration one considers. In the present case (we describe here the definitions for the unoriented loop-soup with c = 1, but the oriented case would be almost identical), one can for instance say that:

• A random set of points 𝓕 is a stopping set for the occupation time field filtration if, for any F, the event {𝓕 = F} is measurable with respect to the occupation time field on all edges adjacent to F.

• A random set of points 𝓕 is a stopping set for the loop-soup filtration if, for any F, the event {𝓕 = F} is measurable with respect to the trace of the loop-soup on all edges adjacent to F (i.e. it is measurable with respect to the set of loops that are fully contained in F and the set of excursions η defined above, when F_2 is the complement of F).

• A random set of points 𝓕 is a stopping set for the loop-soup if, for any F such that P(𝓕 = F) >
0, conditionally on the event {𝓕 = F}, the distribution of the loop-soup outside of F consists of the union of an independent loop-soup in the complement of F and of a set of bridges in F, with law described as above via the end-points of the excursions η in the complement of F.

Clearly, the first definition implies the second one, which implies the third one by Proposition 3 (the third property for the first two definitions can be viewed as a "strong Markov property" of these fields), but the converse is not true (the last definition allows the use of "external randomness" in the definition of 𝓕, while the second does not, and the second one allows one to use features of individual loops, while the first does not).

3.4. Brownian loop-soup decompositions.
The previous results have almost identical counterparts in the setting of oriented Brownian loop-soups with intensity α = 1 and unoriented Brownian loop-soups with intensity c = 1. Suppose that D is an open subset of R^d such that the (Dirichlet) Green's function in D is finite (away from the diagonal). Suppose that F_1 and F_2 are two disjoint compact sets in D that are both non-polar for Brownian motion (i.e. Brownian motion started away from these sets has a non-zero probability to hit them). Then, we can again define:

(1) The law of unordered oriented Brownian bridges in D \ F_2 from a finite family X = (x_1, ..., x_n) of points to another such family Y = (y_1, ..., y_n), and the law of unordered unoriented Z-Brownian bridges in D \ F_2 from a finite family of points Z = (z_1, ..., z_n) to itself (in the latter case, points of Z are paired, as in the random walk case). This works as long as all the Green's functions involved are finite (which is the case as soon as x_i ≠ y_j for all i, j, and z_i ≠ z_j for all i ≠ j).

(2) The set η of the N oriented (resp. unoriented) excursions away from F_1 that reach F_2, of the loops in an oriented (resp. unoriented) loop-soup with intensity α = 1 (resp. c = 1). In the oriented case, we call X = (X_1, ..., X_N) their endpoint vector and Y their starting-point vector, and in the unoriented case, we call Z = (Z_1, ..., Z_N) the extremity vector.

Then, the Brownian counterparts of Proposition 1 and of Proposition 3 go as follows:

Proposition 5. - For the soup of oriented Brownian loops with α = 1: Conditionally on η, the missing pieces of the loops (that the pieces of η are part of) are distributed like an unordered Brownian bridge from X to Y in D \ F_2.
- For the soup of unoriented Brownian loops with c = 1: Conditionally on η, the missing pieces of the loops are distributed like an unordered unoriented Z-Brownian bridge in D \ F_2.
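For concreteness, here is a hedged sketch (in our own notation, and assuming the standard normalization for such pairing measures; the paper's own conventions may differ) of how one samples from the unordered unoriented Z-bridge measure that appears here: one first picks the pairing of the extremities, with weights given by Green's functions, and then samples independent bridges.

```latex
% Sketch (our notation, assumed normalization): unordered unoriented
% Z-bridge measure in D \setminus F_2, for Z = (z_1, \dots, z_{2m}).
% Step 1: pick a perfect pairing \pi of \{1, \dots, 2m\} with probability
\mathbf{P}(\pi) \;=\;
  \frac{\prod_{\{i,j\} \in \pi} G_{D \setminus F_2}(z_i, z_j)}
       {\sum_{\pi'} \prod_{\{i,j\} \in \pi'} G_{D \setminus F_2}(z_i, z_j)}\,,
% Step 2: given \pi, sample an independent unoriented Brownian bridge
% from z_i to z_j in D \setminus F_2 for each pair \{i, j\} \in \pi.
```

The finiteness requirement on the Green's functions in (1) is exactly what makes these pairing weights well defined.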
And as before, one can derive the more symmetric results: for instance, if F_1 and F_2 are two disjoint compact subsets of D, we can define the crossings from F_1 to F_2 and vice-versa in the oriented case, and the crossings between F_1 and F_2 in the unoriented case. When one conditions on these crossings, one can then complete the picture with two conditionally independent unordered oriented bridges (in the oriented case) or with two conditionally independent unordered unoriented bridges (in the unoriented case). We illustrate this result in Figures 5 and 6 (here we consider the oriented case, D is the rectangle, F_1 is the small circle and F_2 the large circle). Conditionally on the points (and their status – square or circle, depending on the orientation of the loops) on the two circles, the three pictures in Figure 6 are independent (this is the oriented version of Figure 2).

In the context of two-dimensional continuous systems, clusters of loops in a loop-soup are interesting to study, as pointed out in [18]; it has been proved in [16] that, for c ≤ 1, the boundaries of such clusters are Conformal Loop Ensembles CLE_κ for κ = κ(c), where κ(1) = 4. The CLE_4 (and the SLE_4 curves) is also known (see [15, 3]) to be related quite directly to the Gaussian Free Field. The role of the c = 1 clusters of loops in the framework of cable-systems and in relation to the Gaussian Free Field has been pointed out by Lupu [10] (the clusters provide a direct link between the loop-soups and the Gaussian Free Field itself, rather than just to its square). The present result sheds some light on the recently derived [14] decomposition of critical 2d loop-soup clusters (for c = 1) in terms of Poisson point processes of Brownian excursions (we refer to [14] for comments and questions).

Figure 5. Sketch of the oriented Brownian case: (i) the two oriented loops that touch the two circles, (ii) keeping only the endpoints of these crossings on each circle, with trace of the orientation
Figure 6. (iii) The outer bridges joining each circle point to a square point, (iv) sampling the inner bridges joining each circle point to a square point, (v) the six crossings, joining a circle point to a square point. The final loops are oriented so that the crossings from the small to the large circle go from a circle point to a square point.

4. Resampling for continuous-time loop-soups, the GFF and random currents
We now devote a short separate section to the case of discrete continuous-time loop-soups, as studied by Le Jan [9]. As we shall see, in that setting, it is natural to consider the conditional distribution of the loop-soup (unoriented for c = 1, i.e. α = 1/2, or oriented for α = 1) given the value of its local times on a given family of sites. Some of the results are very closely related to Dynkin's isomorphism theorem (i.e. we will obtain a pathwise version of a generalization of it). Just as previously, we will describe the case of the simple random walk on a graph where each point x has the same number g of outgoing edges, but the results can easily be generalized to the case of general Markov chains. Some of the following considerations will be reminiscent of the arguments in [9] (Sections 7 and 9 in particular). In the first subsections, we will focus on the case of unoriented loop-soups, and we will briefly indicate the similar type of results that one gets in the oriented case.

4.1. Slight reformulation of the resampling property of the discrete loop-soup.
We can start with the same setting as before, with the graphs Γ and D ⊂ Γ, the random walk on this graph killed upon hitting Γ \ D, and its Green's function G_D(·,·). In the previous sections, we chose for expository reasons (as this was for instance the natural preparation for the Brownian case) to study loops in the loop-soup that visit two different sets of sites F_1 and F_2. But in fact, the following setting is a little more natural and more general: consider now a family e_1, ..., e_n of edges of D, and the graph D′ obtained by removing these n edges from D. We can now sample an unoriented loop-soup (for c = 1), and observe the numbers N_1, ..., N_n of jumps along those n unoriented edges. We now want to know the conditional distribution of the entire loop-soup given this information. In particular, we would like to know how these N_1 + ... + N_n jumps are hooked together into loops (clearly, the loop-soup in D′ consisting of the loops that use none of these n edges is independent of N := (N_1, ..., N_n)).

We can associate to N the vector Z consisting of the 2N_1 + ... + 2N_n endpoints of these jumps. Once we label them, we can define as before the collection β of pairings and bridges that join them in the loop-soup. Note that a bridge is allowed to contain no jump, when one pairs two identical end-points. We can also define the unordered bridge measures in D′ (corresponding to paths that use no edge of D \ D′) as before. Then, exactly as before, one can prove the following version of the resampling result: Proposition 6.
The conditional distribution of β given N_1, ..., N_n is exactly the unordered unoriented bridge measure B_Z in D′.

Note that for some choices of the family of edges e_1, ..., e_n, it can happen that an even number of endpoints of the discovered jumps lie at a vertex for which no neighboring edge is in D′. In that case, the bridge measure pairs these jumps at random, and each corresponding bridge is anyway the empty bridge from x to x. A trivial example is of course the case where e_1, ..., e_n are all the edges of D. Then, the proposition just says that the conditional distribution of the loops given the occupation time measure is obtained by just pairing at random the incoming edges at each site: "loops can exchange their hats uniformly at random at each site".

This reformulation makes it clear that in the discrete-time setting, the Markov property of the occupation time field is really a Markov property on the edges (which is not surprising, given that the field is actually naturally defined on the edges).

4.2. Continuous-time loops.
Following Le Jan's approach [9], we now introduce the associated continuous-time Markov chain: at each site x, the chain stays an exponential waiting time of mean 1/g before jumping along one of the g outgoing edges chosen uniformly at random (for expository reasons, we describe this in the case where each site has the same number g of outgoing edges). Note that we allowed stationary edges (i.e. edges from a site to itself) in the graph, so that the continuous-time Markov chain can also "jump" along those (and we can keep track of these jumps, even if they do not affect the occupation times at sites). As pointed out by Le Jan, the loop-soup of such continuous-time loops for α = 1/2 is then the one directly related to the square of the GFF. In what follows, an excursion from x to y will be a path that jumps out of x at time 0 and jumps into y at the endpoint of the excursion. One can then introduce the natural excursion measure µ^A_{x,y}, which is the natural measure on the set of unoriented excursions that go from x to y while avoiding all the points in A (it corresponds to the discrete excursion measure that puts a mass g^{-n} on such an excursion with n jumps, to which one then adds the exponential holding times at the n − 1 points inside the excursion).

One can view the continuous-time Markov chain as the limit as M → ∞ of the discrete-time Markov chain on a graph D_M, where one has added, to each site x, M stationary edges from x to itself (when one renormalizes time by 1/M, the geometric number of successive jumps along these added stationary edges from x to x before jumping along another edge converges to the exponential random variables) – this approach is for instance used in [19] in order to derive the properties of the continuous-time chains and loop-soups from the properties of the discrete-time loop-soups. Let us now consider a finite set of points x_1, ..., x_n in the graph and, for a given M, condition on the numbers N_1, ..., N_n of jumps by the loop-soup along the added stationary unoriented edges at these points. More precisely, N_i will denote the total number of jumps in the loop-soup along the M added stationary edges from x_i to x_i. Note that, because both end-points of a stationary edge are the same, these N_i jumps correspond to 2N_i jump-endpoints, that are all at x_i. We can now apply Proposition 6 to this case; this describes the distribution of how to complete and hook up these N_1 + ... + N_n jumps into unoriented loops in order to recover the loops in the loop-soup that they correspond to. One has to pair all these 2N_1 + ... + 2N_n endpoints.

Mind that as M gets large, the mass of the trivial excursion from x_i to x_i with zero life-time is always 1, while the mass of the (unoriented) excursions with at least one jump along the "non-added" edges, from x_i to some x_j, that stay away from {x_1, ..., x_n} during their entire positive lifetime, will be of order 1/M (unless all neighbors of x_i are in {x_1, ..., x_n}, in which case this quantity is zero), and the mass of the set of excursions from x_i to x_j that visit at least one of the points of {x_1, ..., x_n} during their positive life-time is O(1/M²).
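The renormalization behind this M → ∞ limit can be checked by hand: if a site has g ordinary outgoing edges and M added self-loops, the number G of successive jumps along the added edges before an ordinary jump is geometric with success probability p = g/(M + g), so G/M has mean exactly 1/g and its tail converges to that of an exponential variable of mean 1/g. The value g = 3 below is an arbitrary illustrative choice.

```python
import math

# Deterministic check of the geometric-to-exponential limit: G counts the
# failures (self-loop jumps) before the first success (ordinary jump), so
# E[G] = (1-p)/p = M/g and P(G/M > t) = (1-p)^floor(M t) -> exp(-g t).

g = 3
t = 0.7   # a fixed rescaled time

for M in [10, 1000, 100000]:
    p = g / (M + g)
    mean_time = ((1 - p) / p) / M           # E[G]/M, equal to 1/g for every M
    tail = (1 - p) ** math.floor(M * t)     # P(G/M > t), up to rounding
    print(M, mean_time, tail)               # tail approaches exp(-g*t)
```

For M = 100000 the tail probability already differs from exp(-g t) by less than 10^{-3}, consistent with the renormalization argument invoked above.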
It is a simple exercise that we safely leave to the reader to check that in the M → ∞ limit, the discrete Markovian description becomes the following: Proposition 7.
If we consider the continuous-time Markov chain loop-soup and condition on the total occupation times ℓ(x_1), ..., ℓ(x_n) at the n points x_1, ..., x_n, then the unoriented excursions away from this set of points by the loop-soup will be distributed exactly like a Poisson point process of excursions with intensity µ_ℓ = (1/2) × Σ_{i≤j} ℓ(x_i) ℓ(x_j) µ^{x_1,...,x_n}_{x_i,x_j}, conditioned on the event that the number of excursions starting or ending at each of the n points x_1, ..., x_n is even.

The particular case where the set of points {x_1, ..., x_n} is the whole vertex set is again of some interest: the conditional distribution of the numbers of unoriented jumps on the edges given the occupation time field on the vertices is a collection of independent Poisson random variables with respective means ℓ(x_i)ℓ(x_j), conditioned by the event that, for each site x, the total number of jumps on the incoming edges at x is even. This is exactly the random current distribution associated with the Ising model. For some further comments on this relation between random currents, the GFF and Ising, we refer to [11].

4.3. Relation with Dynkin's isomorphism.
It should of course be noted that this decomposition is closely related to Dynkin's isomorphism (see [4, 5, 13] and the references therein), except that here one conditions on the value of the square of the GFF instead of the value of the GFF itself. The previous result implies (when one only looks at occupation times and not at the loop-soup itself) that, conditionally on the value of the square of the GFF at the set of points {x_1, ..., x_n}, the square of the value of the GFF at the other points is the sum of the occupation times of the conditioned Poisson point process of excursions with an independent squared GFF in the remaining (smaller) domain.

If one however conditions the GFF at the n sites to be all equal to the same value t, then one can instead consider a graph where all these points are identified into a single point, and note that the GFF on the new graph conditioned to have value t at that point is distributed as the GFF on the initial graph conditioned to have value t at each of the n points. One can apply the previous statement to that new graph and note that the conditioning on the event that the number of excursion-extremities at each boundary site is even then disappears, because when there is just one such site, this number is anyway even (each excursion from this point to itself has two endpoints). Here it is however essential that the signs of all these values are the same (because if one identifies them into a single point, then they will anyway correspond to the same value of the GFF, not just to the same value of its square).

In summary, conditioning on the value of the square of the GFF gives rise to the parity conditioning, but it is also possible to condition on the actual value of the GFF, and the parity conditioning then becomes irrelevant when one looks at the occupation times only.
Note that Dynkin's isomorphism then follows, because in the latter case, the conditional distribution of the square of the GFF at the other points (which is therefore the square of the GFF in this smaller domain with boundary conditions given by these conditioned boundary values) will be the sum of the contribution of the loops that avoid those points (which is a squared GFF in the remaining domain) and of the occupation time of the Poisson point process of excursions, while the conditioned GFF is a GFF with some prescribed boundary conditions, which can be viewed as the sum of a GFF in the complement of the set of marked points and of the deterministic harmonic extension of these boundary values.

4.4. The oriented case.
One can follow almost word for word the same strategy to study the conditional distribution of oriented continuous-time loop-soups at α = 1 given their cumulated local times at sites. In that case, the excursions will be oriented, and the conditional distribution of the excursions away from these points will be a Poisson point process conditioned on the event that, for each site, the number of incoming excursions is equal to the number of outgoing ones.

The particular case where the set of points is the whole vertex set is again interesting: the conditional distribution of the set of jumps will be independent Poisson on each oriented edge, conditioned on the fact that the number of incoming jumps at each site is equal to its number of outgoing jumps. We leave all the details and further results to the interested reader. Note.
We found out that the recently posted preprint [2] describes some ideas that are similar to those of the present paper. Our work was carried out totally independently of [2].
Acknowledgements.
The support and hospitality of SNF grant SNF-155922, NCCR-Swissmap,of the Clay foundation and the Isaac Newton Institute in Cambridge (where the present work hasbeen carried out) are gratefully acknowledged.
References

[1] D. Brydges, J. Fröhlich, T. Spencer. The random walk representation of classical spin systems and correlation inequalities. Commun. Math. Phys. 83, 123-150, 1982.
[2] F. Camia and M. Lis. Non-backtracking loop soups and statistical mechanics on spin networks, preprint, 2015.
[3] J. Dubédat. SLE and the free field: partition functions and couplings. J. Amer. Math. Soc. 22, 995-1054, 2009.
[4] E.B. Dynkin. Markov processes as a tool in field theory. J. Funct. Anal. 50, 167-187, 1983.
[5] E.B. Dynkin. Gaussian and non-Gaussian random fields associated with Markov processes. J. Funct. Anal. 55, 344-376, 1984.
[6] G.F. Lawler. The loop-erased random walk, in Perplexing Problems in Probability (Kesten Festschrift), Progress in Probability 44, 197-217, 1999.
[7] G.F. Lawler and V. Limic. Random Walks: A Modern Introduction. CUP, 2010.
[8] G.F. Lawler and W. Werner. The Brownian loop soup. Probab. Theory Relat. Fields 128, 565-588, 2004.
[9] Y. Le Jan. Markov Paths, Loops and Fields. Lecture Notes in Mathematics 2026, 2011.
[10] T. Lupu. From loop clusters and random interlacement to the free field. Ann. Probab., to appear.
[11] T. Lupu and W. Werner. A note on Ising random currents, Ising-FK, loop-soups and the GFF, preprint, 2015.
[12] E. Nelson. The free Markoff field. J. Funct. Anal. 12, 211-227, 1973.
[13] M.B. Marcus and J. Rosen. Markov Processes, Gaussian Processes, and Local Times. Cambridge University Press, 2006.
[14] W. Qian and W. Werner. Decomposition of two-dimensional loop-soup clusters, preprint.
[15] O. Schramm and S. Sheffield. A contour line of the continuous Gaussian free field. Probab. Theory Relat. Fields 157, 47-80, 2013.
[16] S. Sheffield and W. Werner. Conformal Loop Ensembles: the Markovian characterization and the loop-soup construction. Ann. Math. 176, 1827-1917, 2012.
[17] K. Symanzik. Euclidean quantum field theory. In: Local Quantum Theory, R. Jost (ed.), Academic Press, 1969.
[18] W. Werner. SLEs as boundaries of clusters of Brownian loops. C. R. Math. Acad. Sci. Paris 337, 481-486, 2003.
[19] W. Werner. Topics on the Gaussian Free Field, Lecture Notes, 2014.
[20] D.B. Wilson. Generating random spanning trees more quickly than the cover time. Proceedings of the 28th Annual ACM Symposium on Theory of Computing, ACM, 296-303, 1996.
Department of Mathematics, ETH Z¨urich, R¨amistr. 101, 8092 Z¨urich, Switzerland
E-mail address: [email protected]