Free boundary dimers: random walk representation and scaling limit
Nathanaël Berestycki∗   Marcin Lis∗   Wei Qian†

February 26, 2021
Abstract
We study the dimer model on subgraphs of the square lattice in which vertices on a prescribed part of the boundary (the free boundary) are possibly unmatched. Each such unmatched vertex is called a monomer and contributes a fixed multiplicative weight z > 0 to the total weight of the configuration. We show that for any z > 0, the scaling limit of the height function is the Gaussian free field with Neumann (or free) boundary conditions, thereby answering a question of Giuliani et al.
∗Universität Wien
†CNRS and Laboratoire de Mathématiques d'Orsay, Université Paris-Saclay

Let G = (V, E) be a finite, connected, planar bipartite graph (in our analysis we will actually only consider subgraphs of the square lattice Z²). Let ∂G be the set of boundary vertices, i.e., vertices adjacent to the unique unbounded external face, and let ∂_free G ⊆ ∂G be a fixed set called the free boundary. A boundary monomer-dimer cover of G is a set M ⊆ E such that
• each vertex in V \ ∂_free G belongs to exactly one edge in M,
• each vertex in ∂_free G belongs to at most one edge in M.
We write mon(M) ⊆ ∂_free G for the set of vertices that do not belong to any edge in M, and call its elements monomers. Let MD(G) be the set of all boundary monomer-dimer covers of G. We will often call such configurations simply monomer-dimer covers, keeping in mind that monomers are only allowed on the free boundary. Finally, let D(G) be the set of all dimer covers, i.e., monomer-dimer covers M such that mon(M) = ∅.

We assign to each edge e ∈ E a weight w_e ≥ 0, and to each vertex v ∈ ∂_free G a weight z_v ≥ 0. The dimer model with a free boundary (or free boundary dimer model) is a random choice of a boundary monomer-dimer cover from MD(G) according to the following probability measure:

P(M) = (1/Z) ∏_{e ∈ M} w_e ∏_{v ∈ mon(M)} z_v,

where Z is the normalizing constant called the partition function. For convenience we will always assume that the graph is dimerable, meaning that D(G) ≠ ∅. In this work we will only focus on the homogeneous case w_e = 1 for all e ∈ E, and z_v = z > 0 for all v ∈ ∂_free G (with the exception of the technical assumption on the weight of corner monomers described in the next section). Then

P(M) = (1/Z) z^{|mon(M)|}.   (1.1)

The dimer model on G can now be defined as the free boundary dimer model conditioned on D(G), i.e., the event that there are no monomers.

The main observable of interest for us will be the height function of a boundary monomer-dimer cover, which is an integer-valued function defined (up to a constant) on the bounded faces of G. Its definition is identical to the one in the dimer model (see [38]). We simply note that the presence of monomers on the boundary does not lead to any topological complication (i.e., the height function is not multivalued): if u and u′ are two faces of the graph, and γ and γ′ are two distinct paths in the dual graph connecting u and u′, the loop formed by connecting γ and γ′ (in the reverse direction) does not enclose any monomer. More precisely, we view a configuration M ∈ MD(G) as an antisymmetric flow (in other words a 1-form) ω_M on the directed edges of G in the following manner: if e = {w, b} ∈ M, then ω_M(w, b) = 1 and ω_M(b, w) = −1, where b is the black vertex of e and w its white vertex (since G is bipartite, a choice of black and white vertices can be made in advance). Otherwise, we set ω_M(e) = 0.
Equivalently, we may view ω_M as an antisymmetric flow on the directed dual edges, where if e† is the dual edge of e (obtained by a counterclockwise rotation of e by π/2), then ω_M(e†) = ω_M(e). To define the height function we still need to fix a reference flow ω₀, which we define to be ω₀ = E[ω_M], i.e., the expected flow of M under the free boundary dimer measure. Now, if u and u′ are two distinct (bounded) faces of G, we simply define

h(u) − h(u′) = ∑_{e† ∈ γ} (ω_M(e†) − ω₀(e†)),

where γ is any path (of dual edges) connecting u to u′. This definition does not depend on the choice of the path since the flow ω_M(e†) − ω₀(e†) is closed (sums over closed dual paths vanish), and hence yields a function h up to an additive constant, as desired. Note that our choice of the reference flow automatically guarantees that the height function is centered, i.e., E(h(u) − h(u′)) = 0 for all faces u and u′.

We finish this short introduction to the free boundary dimer model with a few words on its history and nomenclature. In the original model studied in [17, 18] monomers could occupy any vertex of the graph, hence the name monomer-dimer model. This generalization poses two major complications from our point of view. First of all, the height function is not well defined, and secondly the model does not admit a Kasteleyn solution, as was shown in [19]. From this point of view, it would therefore be natural if the version of the model studied here were called the boundary-monomer-dimer model. However, we choose to use the less cumbersome name of free boundary dimers.

We now state conditions on the graph G = (V, E) which will be enforced throughout this paper. First, we assume that G is a subgraph of the square lattice Z², and without loss of generality that 0 ∈ V and is a black vertex. This fixes a unique black/white bipartite partition of V.
We also assume that:
• V is contained in the upper half plane H = {z ∈ C : ℑ(z) ≥ 0}.
• ∂_free G = V ∩ R, so the monomers are allowed only on the real line. Furthermore, we assume ∂_free G is a connected set of vertices. The leftmost and rightmost vertices of V ∩ R = ∂_free G will be referred to as the monomer-corners of G.
• G has at least one black dimer-corner and one white dimer-corner (where a dimer-corner is a vertex v ∈ V that is not a monomer-corner, is adjacent to the outer face of G, and has degree either 2 or 4 in G).
See Figure 2 for an example of a domain satisfying these assumptions (ignore the bottom row of triangles for now, which will be described later). We make a few comments on the role of the last assumption that there are corners of both colours. For this it is useful to make a parallel with Kenyon's definition of Temperleyan domains [21, 22]. In that case, this condition ensured that the associated random walk on one of the four possible sublattices of Z² (the two types of black and the two types of white vertices) was killed somewhere on the boundary. As we will see, in our case the random walk may change lattice from black to white when it is near the real line, resulting in only two different types of walks. The role of the third assumption (at least one dimer-corner of each type) is then to ensure that each of the two walks is killed on at least some portion of the boundary (possibly a single vertex). This follows from the observation that the boundary condition of a walk on a black (resp. white) sublattice changes from Neumann to Dirichlet (and vice versa) at a white (resp. black) corner. See Figure 4 for an example of a vertex with Neumann and Dirichlet boundary conditions.

The free boundary dimer model as defined above was discussed (with minor modifications) in a paper of Giuliani, Jauslin and Lieb [15]. It was shown there that the partition function Z can be computed as a Pfaffian of a certain matrix.
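Before any Pfaffian machinery, the measure (1.1) can be computed by brute force on small examples. The following toy instance is our own (a single 2×2 block of Z² with the bottom row as the free boundary, and an arbitrary choice of monomer weight z = 2); it is only meant to make the objects MD(G), mon(M) and Z concrete:

```python
from itertools import combinations

# Toy example (ours, not from the paper): the free boundary dimer model on a
# single 2x2 block of Z^2, with the bottom row as the free boundary.
V = [(0, 0), (1, 0), (0, 1), (1, 1)]
E = [((0, 0), (1, 0)), ((0, 1), (1, 1)),
     ((0, 0), (0, 1)), ((1, 0), (1, 1))]
free = {(0, 0), (1, 0)}   # free boundary: the vertices on the real line
z = 2.0                   # monomer weight

def deg(M, v):
    return sum(v in e for e in M)

# boundary monomer-dimer covers: non-free vertices matched exactly once,
# free vertices matched at most once
covers = [M for k in range(len(E) + 1) for M in combinations(E, k)
          if all(deg(M, v) == 1 for v in V if v not in free)
          and all(deg(M, v) <= 1 for v in free)]

weights = [z ** sum(deg(M, v) == 0 for v in free) for M in covers]
Z = sum(weights)          # partition function: here z^2 + 2 = 6
probs = [w / Z for w in weights]
```

On this graph there are exactly three covers: both free vertices left as monomers (weight z²), and the two perfect matchings (weight 1 each), so Z = z² + 2.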
Furthermore, a bijection was provided to a non-bipartite dimer model (the authors indicate that this bijection was suggested by an anonymous referee). Hence, using Kasteleyn theory, the correlation functions can be expressed as Pfaffians of the inverse Kasteleyn matrix K⁻¹. The bijection, which is a central tool of our analysis, will be defined in Section 2, where we will also recall the precise definition of the Kasteleyn matrix K.

We will now state our first main result, which gives a random walk representation for K⁻¹. Suppose that G is a graph satisfying the assumptions from the previous section. Fix z > 0, and assign the weight z to every monomer on ∂_free G except at either monomer-corner, where (for technical reasons which will become clear in the proof) we choose the weight to be

z′ = z √(⋯).   (1.2)

For k ∈ N = {0, 1, . . .}, let V_k = V_k(G) = {v ∈ V : ℑ(v) = k}, so ∂_free G = V₀, where ℑ(v) denotes the imaginary part of the vertex v seen as a complex number given by the embedding of the graph. Let V_even = V_even(G) = V₀ ∪ V₂ ∪ . . . and V_odd = V_odd(G) = V₁ ∪ V₃ ∪ . . ..

Theorem 1.1 (Random walk representation of the inverse Kasteleyn matrix). There exist two random walks Z_even and Z_odd on the state spaces V_even(G) and V_odd(G) respectively, whose transition probabilities will be described in Section 2.6 (see (2.23) and (2.26)), such that the following holds. Consider the monomer-dimer model on G where the monomer weight is z > 0 on V₀(G) except at its monomer-corners, where the monomer weight is z′ as defined in (1.2). Let K be the associated Kasteleyn matrix, and D = K*K, so that K⁻¹ = D⁻¹K*. Then for all u, v ∈ V, we have

D⁻¹(u, v) =  G_odd(u, v)                  if u, v ∈ V_odd,
             (−1)^{ℜ(u−v)} G_even(u, v)   if u, v ∈ V_even,
             0                            otherwise,       (1.3)

where G_even, G_odd are the Green's functions of Z_even and Z_odd respectively, normalised by D(v, v).
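The normalised Green's functions in Theorem 1.1 are, up to the normalisation by D(v, v), expected visit counts of a walk with absorption; such quantities are computable as a matrix inverse. A one-dimensional toy check of this principle (ours, with the normalisation set to 1 and with nothing specific to the walks Z_even, Z_odd):

```python
import numpy as np

# Toy illustration (not the paper's walk): the Green's function of simple
# random walk on {0,...,L} killed at 0 and L, computed as a matrix inverse.
L = 6
interior = range(1, L)                 # transient states 1..L-1
Q = np.zeros((L - 1, L - 1))           # substochastic transition matrix
for i in range(L - 1):
    for j in range(L - 1):
        if abs(i - j) == 1:
            Q[i, j] = 0.5
G = np.linalg.inv(np.eye(L - 1) - Q)   # G[u-1, v-1] = E_u[# visits to v]

# classical closed form for this walk: G(u,v) = 2*min(u,v)*(L - max(u,v))/L
for u in interior:
    for v in interior:
        assert abs(G[u - 1, v - 1] - 2 * min(u, v) * (L - max(u, v)) / L) < 1e-12
```

The inverse (I − Q)⁻¹ = I + Q + Q² + ⋯ sums the contributions of walks of every length, which is exactly the expected visit count in the definition below.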
Here, by the normalised Green's function of a random walk (with at least one absorbing state), we mean

G(u, v) = (1/D(v, v)) E_u [ ∑_{k=0}^{∞} 1{Z_k = v} ],

where Z is the corresponding random walk. We now describe a few properties of the random walks Z_even and Z_odd which may be interesting to the reader already, even though the exact definition is postponed until Section 2.6. Both Z_even and Z_odd behave like simple random walk away (at distance more than 2) from the boundary vertices, but with jumps of size ±2, so the parity of the walk does not change. Both have nontrivial boundary conditions, including some reflecting and absorbing boundary arcs along the non-monomer part of the boundary ∂G \ ∂_free G. Furthermore, both walks are allowed to make additional jumps along their bottommost rows of vertices (V₀ for Z_even and V₁ for Z_odd). These jumps are symmetric, bounded in the even case but not in the odd case (although they do have exponentially decaying tails). Hence in the scaling limit, these walks would converge to Brownian motion in the upper half plane H with reflection on the real axis and with whatever boundary conditions are inherited from the Neumann/Dirichlet parts of the other boundary arcs.

This intuition is what guides us to the next result, which however requires us to first take an infinite volume (thermodynamic) limit in which an increasing sequence of graphs eventually covers H ∩ Z². We first show that the monomer-dimer model converges in such a limit. For this we need to specify a topology: we view a monomer-dimer configuration on H ∩ Z² as an element of {0, 1}^{E(H)}, where E(H) is the edge set of Z² ∩ H, and equip this space with the product topology (so convergence in this space corresponds to convergence of local observables).

To state the result we fix a sequence G_n of graphs such that G_n satisfies the assumptions of Section 1.2, and moreover G_n ↑ Z² ∩ H. For simplicity of the arguments and ease of presentation, we have chosen G_n to be a concrete approximation of rectangles, although the result is in fact true much more generally; we have not tried to find the most general setting in which this applies.

Theorem 1.2 (Infinite volume limit). Let G_n be rectangles of diverging odd sidelengths (number of vertices on a side) whose ratio remains bounded away from zero and infinity as n → ∞, and such that in the top row the right-hand half of the vertices is removed.
Let µ_n denote the law of the free boundary dimer model on G_n with monomer weight z > 0, except at the monomer-corners where the weight is z′, as in (1.2). Then µ_n converges weakly as n → ∞ to a law µ which describes a.s. a random monomer-dimer configuration on Z² ∩ H.

We note that the particular type of domains chosen in this statement guarantees that both the odd and even walks mentioned above are killed on a macroscopic part of the upper rows of G_n. We also note that the limit µ depends on the monomer weight z > 0.

As mentioned before, we can associate to the monomer-dimer configuration in the infinite half-plane a height function which is defined on the faces of H ∩ Z², up to a global additive constant. The last main result of this paper shows that in the scaling limit, this height function converges to a Gaussian free field with Neumann (or free) boundary conditions, denoted by Φ^Neu. We will not define this in complete generality here (see [7] for a comprehensive treatment). We will simply point out what is concretely relevant for the theorem below to make sense. Given a simply connected domain Ω with a smooth boundary, Φ^Neu_Ω may be viewed as a stochastic process indexed by the space D(Ω) of smooth test functions f : Ω → R with compact support and with zero average (meaning ∫_Ω f(z) dz = 0). The latter requirement corresponds to the fact that Φ is only defined modulo a global additive constant. The law of this stochastic process is characterised by a requirement of linearity (i.e., (Φ^Neu_Ω, af + bg) = a(Φ^Neu_Ω, f) + b(Φ^Neu_Ω, g) a.s. for any f, g ∈ D(Ω) and a, b ∈ R), and moreover (Φ^Neu_Ω, f), (Φ^Neu_Ω, g) follow centered Gaussian distributions with covariance

Cov((Φ^Neu_Ω, f), (Φ^Neu_Ω, g)) = ∫_Ω ∫_Ω f(x) g(y) G^Neu_Ω(x, y) dx dy,

where G^Neu_Ω(x, y) is a Green's function in Ω with Neumann boundary conditions. (Note that by contrast to the Dirichlet case, such Green's functions are not unique and are defined only up to a constant.) In the case of the upper half plane Ω = H, the Green's function is given explicitly by

G^Neu_H(x, y) = − log |x − y| − log |x − ȳ|.

Informally, pointwise differences Φ^Neu_H(a) − Φ^Neu_H(b) for a, b ∈ H (which do not depend on the choice of the global additive constant) are centered Gaussian random variables with covariances

E[(Φ^Neu_H(a_i) − Φ^Neu_H(b_i))(Φ^Neu_H(a_j) − Φ^Neu_H(b_j))]
  = − log | (a_i − a_j)(b_i − b_j)(ā_i − a_j)(b̄_i − b_j) / ((a_i − b_j)(b_i − a_j)(ā_i − b_j)(b̄_i − a_j)) |.   (1.4)

Note that our Green's function is normalised so that it behaves like 1 × log(1/|x − y|) as y − x → 0.

Fix δ > 0 and let h_δ denote the height function (defined up to a constant) of the free boundary dimer model µ with weight z in the infinite half-plane H ∩ δZ² (rescaled by δ). We identify h_δ with a function defined almost everywhere on H by taking the value of h_δ to be constant on each face, and view h_δ as a random distribution (also called a random generalized function) acting on smooth compactly supported functions f on H with zero average, i.e., satisfying ∫_H f(a) da = 0 (see Section 5.5 for details).

Theorem 1.3 (Scaling limit). Let f₁, . . . , f_k ∈ D(H) be arbitrary test functions. Then for all z > 0, as δ → 0,

(h_δ, f_i)_{i=1}^{k} → ( (1/√π) Φ^Neu_H, f_i )_{i=1}^{k}

in distribution.

Note that, maybe surprisingly, the scaling limit does not depend on the value of z > 0. Observe also the constant 1/(√π) in front of Φ on the right-hand side of Theorem 1.3. It is equal to the one appearing in the usual dimer model in which the centered height function has zero (Dirichlet) boundary conditions. We note that comparisons with other works such as [22, 5] should be made carefully, since the normalisation of the Green's function and of the height function may not be the same: for instance, Kenyon takes the Green's function to be normalised so that G(x, y) ∼ 1/(2π) log 1/|x − y| as y → x, so his GFF is 1/√(2π) times ours (ignoring different boundary conditions).
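Formula (1.4) is exactly the four-term combination G(a_i, a_j) − G(a_i, b_j) − G(b_i, a_j) + G(b_i, b_j) of the Neumann Green's function above; a quick numerical sanity check of this (our own, on arbitrarily chosen sample points of H):

```python
import math

# Check (ours, not from the paper) that (1.4) agrees with the combination of
# G(x,y) = -log|x-y| - log|x - conj(y)| on four sample points of H.
def G_neu(x, y):
    return -math.log(abs(x - y)) - math.log(abs(x - y.conjugate()))

def rhs_14(a, b, ap, bp):
    """Right-hand side of (1.4) with (a_i, b_i) = (a, b), (a_j, b_j) = (ap, bp)."""
    num = (a - ap) * (b - bp) * (a.conjugate() - ap) * (b.conjugate() - bp)
    den = (a - bp) * (b - ap) * (a.conjugate() - bp) * (b.conjugate() - ap)
    return -math.log(abs(num / den))

a, b, ap, bp = 1 + 2j, -0.5 + 1j, 2 + 0.3j, -1 + 1.5j
lhs = G_neu(a, ap) - G_neu(a, bp) - G_neu(b, ap) + G_neu(b, bp)
assert abs(lhs - rhs_14(a, b, ap, bp)) < 1e-12
```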
Also, in Kenyon's work [21], the height function is such that the total flow out of a vertex is 4 instead of 1 here (so his height function is 4 times ours), while it is 2π in [5] (so their height function is 2π times ours). Adjusting for these differences, there is no discrepancy between the constant 1/(√π) on the right-hand side of Theorem 1.3 and the one in [21] and [22].

As noted before, Theorem 1.3 may be surprising at first sight, when we consider the behaviour of the model in the two extreme cases z = 0 and z = ∞. Indeed, when z = 0, the free boundary dimer model obviously reduces to the dimer model on H, in which case the limit is a Dirichlet GFF. When z = ∞, all vertices of V₀ are monomers, so the model reduces to a dimer model on (V₁ ∪ V₂ ∪ . . .) ≅ H ∩ Z². Hence, the limit is also a Dirichlet GFF in this case. However, the result above says that for any z strictly in between these two extremes, the limit is a Neumann GFF.

The result (and the reason for this arguably surprising behaviour) may be heuristically understood through the following reflection argument. Let G be a large finite graph approximating H and satisfying the assumptions of Section 1.2. Let ˜G be a copy of G shifted by i/2, so with a small abuse of notation, ˜G = G + i/2, and let ¯G be the same graph to which we add its conjugate (reflection through the real axis). We also add vertical edges crossing the real axis of the form (k − i/2, k + i/2) for each k ∈ V₀. Given a monomer-dimer configuration on G, we can readily associate a monomer-dimer configuration on ¯G by reflecting it in the same manner. In this way, a monomer in k + i/2 is paired with a monomer in k − i/2 for k ∈ V₀. Such a pair of monomers can be interpreted as a dimer on the edge (k − i/2, k + i/2), and once we have phrased it this way the resulting configuration is just an ordinary dimer configuration on ¯G (which however has the property that it is reflection symmetric). It follows that its height function (defined on the faces of ¯G) is even, i.e., h(f) = h(¯f) for every face f (where ¯f is the symmetric image of f about the real axis). Moreover, a moment of thought shows that monomer-dimer configurations on G are in bijection in this manner with the set ED(¯G) of even (symmetric) dimer configurations on ¯G, and that under this bijection the image of the law (1.1) is given by

P(M) = (1/¯Z) z^{|mon(M)|}   (1.5)

(where for a dimer configuration M ∈ D(¯G), mon(M) is the set of vertical edges of M crossing the real axis), conditioned on the event ED(¯G) of being even, where ¯Z is the partition function of the dimer model on ¯G.

Now, suppose e.g. that G is such that ¯G is piecewise Temperleyan [34] (meaning that ¯G has two more white convex corners than white concave corners). This happens for instance if G is a large rectangle with appropriate dimensions. By a result of Russkikh [34], in this case and if z = 1, the (centered) height function associated with the dimer model (1.5) converges to a Gaussian free field with Dirichlet boundary condition in the scaling limit. It is reasonable to believe that this convergence holds true even when z ≠ 1. For instance, when the monomer weights alternate between z and 1 every second vertex, then whatever the value of z, the dimer model has a Temperleyan representation (see [25], [4]). Then by considerations related to the imaginary geometry approach (see [5]), this convergence to the Dirichlet GFF is universal provided that the underlying random walk converges to Brownian motion (this will be rigorously proved in the forthcoming work [6]).
In particular, given these results, we should get convergence to the Dirichlet GFF for the height function even when z ≠ 1: indeed, when we modify the weight of all the edges crossing the real line, the random walk will still converge to Brownian motion. So far, this discussion concerned the (unconditioned) dimer model on ¯G defined in (1.5). Once we start conditioning on ED(¯G) it might be natural to expect that the scaling limit should be a "Dirichlet GFF conditioned to be even", though this is a highly degenerate conditioning. Nevertheless, this conditioning makes sense in the continuum, and in fact its restriction to the upper half plane gives the Neumann GFF, as we are about to argue. Indeed, for a full plane GFF Φ_C restricted to H, it is easy to check that one has the decomposition

Φ_C = (1/√2)(Φ^Neu_H + Φ^Dir_H)   (1.6)

where Φ^Neu_H, Φ^Dir_H are independent fields on H with Neumann and Dirichlet boundary conditions on R respectively. This follows immediately from the fact that any test function can be written as the sum of an even and an odd function, and this decomposition is orthogonal for the Dirichlet inner product (·, ·)_∇ on D(C). Therefore, conditioning Φ_C to be even amounts to conditioning Φ^Dir_H to vanish everywhere, meaning that Φ_C (restricted to the upper half plane) is exactly equal to Φ^Neu_H/√2. (See Exercise 1 of Chapter 5 in [7] for details.)

We note that while this argument correctly predicts the Neumann GFF as a scaling limit of the height function, it is however also somewhat misleading, as it suggests that the limit of h_δ is not (1/√π)Φ^Neu_H as in Theorem 1.3, but is smaller by a factor 1/√2, i.e., 1/√(2π) Φ^Neu_H. To understand this discrepancy, we now explain why the additional factor turns out to be an artifact of a Gaussian computation and does not arise in the discrete setup. A convincing one-dimensional parallel is that of Gaussian and simple random walk bridges. Indeed, consider bridges of 2n steps starting and ending at 0, with symmetric Bernoulli and Gaussian jump distributions with variance one. Now condition the walks to be symmetric around time n, i.e., X(n ± k) = X(n ∓ k). Again, the Gaussian conditioning is singular but can easily be made sense of using Gaussian integrals. Restricted to the time interval [0, n], the conditioned simple random walk bridge is just a simple random walk with the same step distribution as the original bridge. However, the conditioned Gaussian walk has step distribution with variance 1/2 instead of 1.

In the study of the dimer model, a well known conjecture of Kenyon concerns the superposition of two independent dimer configurations. It is easy to check that such a superposition results in a collection of loops (including double edges) covering every vertex. This observation is attributed to Percus [31]. These loops are the level lines of the difference of the two corresponding dimer height functions. Kenyon's conjecture (stated somewhat informally in [24] for instance) is that the loops converge in the scaling limit to CLE₄, the conformal loop ensemble with parameter κ = 4 (defined in [35], see also [36]). This is strongly supported by the fact that in the continuum, CLE₄ can be viewed as the level lines of a (Dirichlet) GFF with a specified variance (a consequence of a well known coupling between the GFF and CLE₄ of Miller and Sheffield, see [1] for a complete statement and proof). Major progress has been made recently on this conjecture through the work of Kenyon [23], Dubédat [11] and Basok and Chelkak [3], and the only remaining ingredient of the full proof is to show precompactness of the family of loops in a suitable metric space.

It is natural to ask if any similar phenomenon occurs when we superpose two independent monomer-dimer configurations sampled according to the free boundary dimer model, say in the upper half-plane. For topological reasons, this gives rise to a gas of loops as above but also a collection of curves connecting monomers to monomers (and hence the real line to the real line). See Figure 1 for an example. An obvious question is to describe the law of this collection of curves in the scaling limit. By analogy with the above, and in view of our result (Theorem 1.3), it is natural to expect that these curves converge in the scaling limit to the level lines of a GFF with Neumann boundary conditions on the upper half-plane.

Figure 1: Left: A superposition of two monomer-dimer configurations, respectively blue and red. Double edges are in purple. The boundary-touching level lines of the height function are the collection of arcs joining monomers to monomers marked in bold black. Right: A simulation of ALE by B. Werness.
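Percus's observation can be seen in the smallest possible example (our own toy, not from the paper): superposing two perfect matchings of any graph gives a 2-regular multigraph, i.e. a union of loops, with doubled edges counting as degenerate loops. Here, on an 8-cycle:

```python
from collections import Counter

# Percus's observation in miniature (ours): the two perfect matchings of an
# 8-cycle superpose into a 2-regular multigraph, here a single loop.
n = 8
cycle = [frozenset({i, (i + 1) % n}) for i in range(n)]
M1 = cycle[0::2]          # matching using the even-indexed edges
M2 = cycle[1::2]          # matching using the odd-indexed edges

deg = Counter()
for e in M1 + M2:
    for v in e:
        deg[v] += 1

# every vertex has degree exactly 2 in the superposition, so it lies on a loop
assert all(deg[v] == 2 for v in range(n))
```

Taking M2 = M1 instead produces four doubled edges, the degenerate loops mentioned above.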
The law of these curves was determined by Qian and Werner [33] to be the ALE process (ALE stands for Arc Loop Ensemble; it is a collection of arcs that can be connected into loops, but here we will not be interested in this aspect and will only see them as arcs). ALE is one possible name for this set, but more precisely it is equal to the branching SLE₄(−1, −1) exploration tree targeting all boundary points, and is also equal to the (gasket of) BCLE₄(−1) in [30] and A_{−λ,λ} in [1]. This leads us to the following conjecture:

Conjecture 1.4.
For any z > 0, in the scaling limit, the collection of boundary-touching curves resulting from superimposing two independent free boundary dimer models converges to the Arc Loop Ensemble ALE in the upper half-plane.

1.6 Folding the dimer model onto itself

The discussion in Sections 1.4 and 1.5 leads naturally to another conjecture which we now spell out. In Section 1.5 we explained a conjecture pertaining to the superposition of two independent monomer-dimer configurations sampled according to the free boundary dimer model. But there is at least one other natural way to superpose two such configurations that are not independent: namely, when they come from the same full plane dimer model. In fact, there are two ways to do the folding, depending on whether we shift by i/2 or not. Consider the graph ˆG which is obtained by adding to G its reflection with respect to the real axis. The vertices of G on the real axis (i.e., V₀) are not reflected: we only keep one copy of them in ˆG. (By contrast, in the graph ¯G, G is first shifted by i/2.) Consider a dimer configuration M on ˆG, viewed as a subset of edges where every vertex has degree 1, and consider the superposition ˆΣ obtained by superposing M with itself via a reflection through the real line: thus,

ˆΣ = M|_H ∪ (−M)|_{−H}.

Then ˆΣ is a subgraph of degree two (including double edges), except for vertices on V₀ ⊂ R which in M are connected to a vertical edge. Thus ˆΣ is exactly of the same nature as the graph in Figure 1. It is not hard to see that the "height function" h_ˆΣ (really defined only up to a global additive constant) naturally associated with ˆΣ converges in the fine mesh size limit to (1/π)Φ^Neu_H: this is because at the discrete level, the corresponding height function h_ˆΣ(f) at a face f ⊂ H can be viewed as h_M(f) + h_M(¯f) (where h_M is the height function associated with M), and h_M is known to converge to (1/√π)Φ_C [9]. These considerations lead us to the following conjecture:

Conjecture 1.5.
In the scaling limit, the collection of boundary-touching curves in ˆΣ converges to the Arc Loop Ensemble ALE in the upper half-plane.
We remark that it is also meaningful to fold a dimer configuration on ¯G (rather than ˆG above) onto itself via reflection through the real line. In that case, one must erase the vertical edges straddling the real line and view the corresponding dimers as pairs of monomers. The resulting superposition ¯Σ is a subgraph of degree two, including multiple edges and double points (on the row of vertices in R + i/2). The height function h_¯Σ associated to ¯Σ may be viewed as h_M(f) − h_M(¯f), and so converges in the scaling limit towards (1/π)Φ^Dir_H. Analogously to Conjecture 1.5, we conjecture that the loops of ¯Σ converge to CLE₄.

This paper is organized as follows. In Section 2 we first describe and slightly generalize the bijection from [15] between the free boundary dimer model on G and the standard dimer model on an augmented (nonbipartite) graph (as in Figure 2), which is the starting point of our paper. We then define a Kasteleyn orientation on this graph, the associated Kasteleyn matrix ˜K, and finally we choose a convenient complex-valued gauge-changed Kasteleyn matrix K (this gauge is closely related to the one of Kenyon [21] and allows one to interpret K as a discrete Dirac operator). Kasteleyn theory (which we recall later on in the paper) says that the correlations of the dimer model on the augmented graph (and hence also of the free boundary dimer model on G) can be computed from the inverse Kasteleyn matrix K⁻¹.

Section 2. With the intention of developing its random walk representation, we begin by analyzing the inverse Kasteleyn matrix when G is a subgraph of the square lattice with the appropriate boundary conditions described in Section 1.2. To this end we look at the matrix D = K*K, whose off-diagonal entries we interpret as (signed) transition weights.
These weights away from ∂_free G (which is a subset of the real line) are positive and hence define proper random walks as in [21]. However, the description of D as a Laplacian matrix associated to a random walk breaks down completely for vertices on the three bottommost rows of G (as in Figure 3). We stress the fact that the level of complication is considerably higher for transitions between odd rows (which will lead to the definition of the walk Z_odd). Indeed, as mentioned in Figure 3, for even rows the arising walk Z_even can be relatively easily understood as a proper random walk reflected on the real line, after taking into account a global sign factor appearing in D (which leads to the formula in the second line of (1.3)).

Therefore the remainder of Section 2 is devoted to the random walk representation for K⁻¹, which is one of the main contributions of this paper. The main idea is to "forget" the steps of the signed walk induced by D taken along the row V₋₁, or more precisely to only specify the trajectory of a path away from V₋₁ and combine together all paths that agree with this choice. The hope is that the resulting projected signed measure on trajectories contained in V_odd = V₁ ∪ V₃ ∪ . . ., with (unbounded) steps from V₁ to V₁, is actually a true probability measure. Remarkably (in our opinion), we show that this is indeed the case; this phenomenon is what really lies behind the random walk representation of Theorem 1.1. To achieve this, an additional (intermediate) limiting procedure is required. To be precise, we first pretend that the rows V₀ and V₋₁ of G are infinite. This is done by defining graphs G_N, where 2N additional triangles are appended on both sides of G, and then taking the limit N → ∞. This allows us to perform exact computations for the transition weights from V₁ to V₁ by analysing the potential kernel of the auxiliary one-dimensional walk on Z defined in Section 2.5.
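The "forget and combine" operation just described is, in matrix language, a Schur complement (this is also how it is implemented in Section 2.6). A toy illustration of the mechanism on a three-state chain of our own choosing, not the paper's walk: eliminating states from I − P yields the Laplacian of the walk watched only on the kept states.

```python
import numpy as np

# Toy chain (ours): state 0 -> 1 surely; 1 -> 0 or 2 with prob 1/2 each;
# 2 absorbing.  Eliminate state 1 by a Schur complement of L = I - P.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
L = np.eye(3) - P
keep, drop = [0, 2], [1]
L_eff = (L[np.ix_(keep, keep)]
         - L[np.ix_(keep, drop)] @ np.linalg.inv(L[np.ix_(drop, drop)]) @ L[np.ix_(drop, keep)])
P_eff = np.eye(2) - L_eff

# watched on {0, 2}: from 0 the walk returns to 0 or reaches 2, prob 1/2 each
assert np.allclose(P_eff, [[0.5, 0.5], [0.0, 1.0]])
```

The nontrivial content of the paper's Lemma 2.3 is that in its (infinite, signed) setting the analogous effective weights are still nonnegative and sum to one; in this finite stochastic toy that is automatic.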
The required positivity of the combined weights, and the identity stating that these weights sum to one when we sum over all possible jump locations, stated in Lemma 2.3, is the result of an exact (and rather long) computation involving the potential kernel of this auxiliary walk. This intermediary limit is also the technical reason for the introduction of the modified monomer weight z′, which arises as the limiting weight of the peripheral monomers on G_N. Finally, in Section 2.6 we use the notion of the Schur complement of a matrix as a convenient tool to implement the idea of combining all the walks with given excursions away from (the now infinite) row V₋₁. All in all, at the end of Section 2 a random walk representation of K⁻¹ is developed and Theorem 1.1 is proved.

Section 3.
The goal of this section is to prove Theorem 1.2, i.e., to establish the infinite volume limit of the model when a sequence of graphs exhausts H ∩ Z². By Kasteleyn theory, it is enough to show that the inverse Kasteleyn matrix has a limit. This will be shown using the random walk representation established in Section 2. Essentially, the main goal is to show that in the infinite volume limit, the differences of the Green's function associated to the random walk Z_even or Z_odd at two fixed vertices x, y converge to the differences of the potential kernel of the corresponding infinite volume walk. In fact, the very definition of this potential kernel is far from clear and occupies us for a sizeable part of this section. For the usual simple random walk on the square lattice, the definition of the potential kernel (see e.g. [28]) relies on precise estimates for the random walk coming from the exact computation of the Fourier transform of the law of the walk. Such an exact computation is clearly impossible here, since the effective walks cannot be viewed as sums of i.i.d. random variables. We overcome this obstacle by developing a general method (which we think may be of independent interest) to define the potential kernel of a recurrent random walk and prove convergence of Green's function differences towards it. The main idea is to proceed by coupling. We note that a similar idea has also been recently advocated by Popov (see Section 3.2 of [32]); but the approach in [32] also takes advantage of some properties and symmetries which are not available here. Instead, our starting point is the robust estimate of Nash (see e.g. [2]) characterising the heat kernel decay. With our approach, only a weak (polynomial of any order) bound for the probability of non-coupling suffices to show the existence of the potential kernel.
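The object being constructed can be illustrated in one dimension, where everything is exact (our own toy, with none of the difficulties of the effective walks): for simple random walk killed outside (−n, n), the Green function differences reproduce the potential kernel a(x) = |x| already at finite n.

```python
import numpy as np

# Toy version (ours) of "potential kernel = limit of Green function
# differences": for 1-d simple random walk killed outside (-n, n), the
# difference G_n(0,0) - G_n(x,0) equals a(x) = |x| exactly.
def green(n):
    m = 2 * n - 1                                  # interior states -n+1 .. n-1
    Q = 0.5 * (np.eye(m, k=1) + np.eye(m, k=-1))   # nearest-neighbour steps
    return np.linalg.inv(np.eye(m) - Q)

n = 20
G = green(n)
idx = lambda x: x + n - 1                          # state x -> matrix index
for x in range(-5, 6):
    assert abs((G[idx(0), idx(0)] - G[idx(x), idx(0)]) - abs(x)) < 1e-9
```

In two dimensions, and a fortiori for the walks Z_even, Z_odd, no such closed form is available, which is why the coupling method above is needed.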
An immediate byproduct of our quantitative approach (which is crucial for us) is the proof of the desired convergence of Green's function differences towards the differences of the potential kernel, obtained in Proposition 3.7.

Section 4
In Section 4 we move on to describe the scaling limit (now in the limit of fine mesh size) of the potential kernel of the effective walks Z^even and Z^odd. A key idea is that when such a walk hits the real line, it will hit it many times, and therefore has probability roughly 1/2 to end up at a vertex with even (resp. odd) horizontal coordinate once it is reasonably far away from the real line. This idea eventually leads us to asymptotic formulae for the potential kernel which depend on the parity of the horizontal coordinate of a point (see Theorem 4.1). To achieve this, we introduce an intermediary process which we call the coloured random walk, which is a random walk on (twice) the usual square lattice, but which can also carry a colour (representing, roughly speaking, the actual parity of the effective walk). This colour may change only when the walk hits the real line, and then does so with a fixed probability p. The proof of Theorem 4.1 relies on first comparing our effective walk to the coloured random walk (Proposition 4.3) and then comparing the coloured walk to half of the potential kernel of the usual simple random walk (Proposition 4.4).

Section 5
We are now finally in a position to start the proof of Theorem 1.3. From Theorem 4.1 we obtain a scaling limit for the inverse Kasteleyn matrix of the (infinite volume) free boundary dimer model. After recalling Kasteleyn theory in the nonbipartite setting, we then compute the scaling limit of the pointwise moments of height function differences on H in Section 5.4. The argument is based on Kenyon's original computation [21] but with substantial modifications coming from the fact that we use Pfaffian formulas instead of the determinantal formulas available for bipartite graphs. This leads to different expressions which fortunately simplify asymptotically (for reasons that are related to but distinct from those in [21]). This leads to the formula in Proposition 5.6, which is an asymptotic expression for the limiting joint moments of pointwise height differences, with an explicit quantification of the validity of the limiting formula (needed in what follows). To finish the proof of the result, we transfer this result in Section 5.5 into one about the scaling limit of the height function as a random distribution. This is essentially obtained by integrating the result of Proposition 5.6, but extra arguments are needed for the case when some of the variables of integration are close to one another.

Here we list several open problems that arise naturally from our considerations. As will become apparent later on, the true random walk representation for the inverse Kasteleyn matrix is the result of an exact computation carried out in the infinite volume limit (for a one-dimensional walk on Z). The fact that the transition rates for the effective walk are positive and sum up to one is therefore mysterious. This leads us to the first question.

Problem 1.6.
Find a conceptual argument for the true random walk representation of the inverse Kasteleyn matrix for the free boundary dimer model.
To talk about conformal invariance of the scaling limit, one clearly needs to consider more than just one domain. The first step in that direction would be answering the following question.
Problem 1.7.
Consider square lattice approximations (e.g. with piecewise Temperleyan boundary conditions) of a bounded domain Ω contained in the upper half-plane whose boundary has a nontrivial intersection with the real line (containing at least one interval). Show that the scaling limit of the centered free boundary dimer model height function is the Gaussian free field with zero (Dirichlet) boundary conditions on ∂Ω \ R and Neumann boundary conditions on ∂Ω ∩ R.

Of course, considering boundary conditions where monomers are allowed on a non-straight (but still smooth) part of the boundary is also very interesting, but at this point the influence of the geometry of the boundary on the (conjectural) limiting Gaussian free field is not clear to us. It is reasonable to expect that solving Problem 1.6 would give insight into this matter. In the same vein, the solution of the next problem (which follows the already classical path of proofs of conformal invariance of lattice models [21, 37]) will probably yield solutions to the two questions already raised.

Problem 1.8.
Define and analyse the correct discrete and continuum boundary value problem for the inverse Kasteleyn matrix seen as a discretely holomorphic function.
Last but not least, the dimer model on special families of bipartite planar graphs is famously related, through various measure-preserving maps, to other classical models of statistical mechanics like spanning trees (see e.g. [26]), the double Ising model [10, 8] or the closely related double random current model [12]. This indicates the following direction of study.
Problem 1.9.
Analyse the boundary conditions in these classical lattice models induced by the presence of monomers in their dimer model representations.
Acknowledgements.
N.B.'s work was supported by: EPSRC grant EP/L018896/1, a University of Vienna start-up grant, and FWF grant P33083 on "Scaling limits in random conformal geometry". The authors are also grateful for an invitation to visit the Département de Mathématiques et Applications at the École Normale Supérieure in April 2018, where part of this work took place, and during which N.B. was an invited professor. The hospitality of that department, and the stimulating atmosphere of the discussions (including with D. Chelkak, who helped us with the computation (2.10)) are gratefully acknowledged.
In [15] a representation of the free boundary dimer model was given in terms of a dimer model on an augmented (nonbipartite) graph where a row of triangles is appended to ∂_free G (see Figure 2). (We thank Rick Kenyon for raising this question.)

Figure 2: An augmented non-bipartite graph and its Kasteleyn orientation. The graph is constructed from a piece of the square lattice G with ∂_free G = V_0 by adding the bottom row of triangles. In this case the graph has two black monomer-corners, two black dimer-corners, and two white dimer-corners. The additional row of triangles (here T_k with k = 9) simulates the presence of monomers in the free boundary dimer model by means of a standard dimer model. This particular choice of k forces an even number of monomers as k − ⌊k/2⌋ + 1 is even.

This is expressed as a measure-preserving bijection between the dimer covers of the augmented graph and MD(G) with a proper choice of weights. We first slightly generalize the result contained in [15] in order to account for the case when ∂_free G is not the whole boundary of G. To this end, for k >
0, let T'_k be the graph composed of k triangles glued together in the manner of the bottom part of the graph in Figure 2, where we assume that the left-most triangle is ∇ for k even and △ for k odd. Let T_k be T'_k with all top horizontal edges removed, and let T_0 be a single edge interpreted as one non-horizontal side of a triangle. Let ∂T_k be the upper row of vertices in T_k, and note that |∂T_k| = ⌊k/2⌋ + 1.

Lemma 2.1.
For every k ≥ 0 and every choice of W ⊆ ∂T_k with |W| ≡ |∂T_k| = ⌊k/2⌋ + 1 (mod 2), there exists exactly one dimer cover of T_k \ W, where, with a slight abuse of notation, T_k \ W is T_k with the vertices in W and all adjacent edges removed.

Proof. The statement for small k can be checked directly. For the induction step, let {v_1, v_2} be the first two vertices of ∂T_k. We now consider cases for T_{k+5}:

case I. |W ∩ {v_1, v_2}| ∈ {0, 2}. Then, there is exactly one choice of dimers on the first two triangles which corresponds to either situation. One hence reduces the problem to the case of T_{k+1}.

case II. |W ∩ {v_1, v_2}| = 1. Then, there is exactly one choice of dimers on the first triangle for k even and on the first three triangles for k odd which corresponds to either situation. One hence reduces the problem to the case T_{k+2} for k even and T_k for k odd.

This lemma implies that by gluing the graph T_k (with a proper choice of k) to G so that ∂_free G and ∂T_k match, and by considering the dimer model on this extended graph (with dimer weight z_v for the two non-horizontal edges of the triangle incident on a vertex v ∈ ∂_free G) one can simulate the free boundary dimer model on G with monomers on ∂_free G and with monomer weight z_v for v ∈ ∂_free G. Indeed, it is enough to interpret the set W from the lemma above as the set of vertices of G that belong to a dimer, and ∂T_k \ W as the set of monomers. In other words, there is a measure-preserving bijection between the dimer covers of the extended graph and MD(G). Note that if G has a dimer cover (which we assume in this article), then one has to take k − ⌊k/2⌋ + 1 even, as there has to be an equal number of white and black monomers.

A Kasteleyn orientation of a planar graph is an assignment of orientations to its edges such that for each face of the graph, as we traverse the edges surrounding this face one by one in a counterclockwise direction, we encounter an odd number of edges oriented in the opposite direction (see e.g.
[39]). For graphs as defined in Section 1.2 we make the following choice (see Figure 2): every vertical line is oriented downwards (including the non-horizontal sides of triangular faces at the bottom), and the orientation of horizontal edges alternates from row to row, starting at row V_{-1}. The Kasteleyn matrix K̃(x, y) is taken to be the signed, weighted adjacency matrix: that is, K̃(x, y) = ±1_{x∼y} w_{(x,y)}, where the sign is + if and only if the edge is oriented from x to y, and the weight w_{(x,y)} is 1 for horizontal and vertical edges (including on V_{-1}), and z for the non-horizontal sides of triangular faces. However, it will be useful to perform a change of gauge, as follows. For every even k ≥ 0 and every x ∈ V_k, we multiply by i the weight of every edge adjacent to x. In particular, every horizontal edge in V_k with k even receives a factor of i twice, coming from both of its endpoints, whereas each vertical edge receives a factor of i exactly once. We define the gauge-changed Kasteleyn matrix K(x, y) to be the resulting matrix. Formally,

K(x, y) = K̃(x, y) · i^{1_{x ∈ V_even} + 1_{y ∈ V_even}}.  (2.1)

For instance, if x ∈ V_0 is not on the boundary, then x has five neighbours. Starting from the vertical edge and moving counterclockwise, the weights K(x, y) are given by −i, −1, iz, iz, 1.

Let D = K*K. In this section we explain the key idea involved in computing D^{-1}, and thus ultimately K^{-1}. The matrix D already played a crucial role in [21], where Kenyon observed that it reduces to the Laplacian on the four types of sublattices of the square grid. We will follow a similar approach but, as we will see, the immediate interpretation of D as a Laplacian breaks down in the rows V_{-1}, V_0 and V_1. Nevertheless, admitting the formal sum-over-all-paths identity (2.2), we will be able to make a guess about the structure of D^{-1}.
This will ultimately lead us to the identification of D^{-1} as the Green's function of a certain effective random walk (or, in fact, a pair of effective random walks) which appears in the statement of Theorem 1.1. Therefore, the purpose of this section is mostly to explain the heuristic principles guiding the proof, and to introduce the relevant objects and notation. Once this framework is defined, we will start with the actual proof in Section 2.4. We will complete the rigorous computation of D^{-1} (and therefore the proof of Theorem 1.1) in Section 2.6.

We now fix an arbitrary finite graph G that satisfies the conditions of Section 1.2. We first compute D explicitly. Note that if x ∈ V_k with k ≥
2, the entries of D are computed in the same way as in the bulk of the square lattice.

Figure 3: Three types of vertices x where the transition weights of D = K*K are signed. We marked the values of D(x, y) next to an arrow from x to y (recall that the transition weight for the walk is −D(x, y)). We make two observations that are crucial for our analysis: First, in the rightmost case (when x ∈ V_0), the absolute values of the transition weights sum up to the diagonal term. Moreover, the transition weight is negative if and only if the size of the step is odd (more precisely equal to one). This simplifies the analysis of the walk Z^even compared to Z^odd, as it immediately becomes a true random walk reflected on the real line (V_0), with a global sign factor appearing in the Green's function which counts the parity of the difference in the x-coordinates of the starting point and the endpoint of the walk. This results in the form of the second line of (1.3). A similar observation can be made for the central picture (when x ∈ V_{-1}) if one ignores the transition weights that lead back to V_0. This is the basis for the construction of Section 2.3 and the definition of the auxiliary random walk on Z from (2.7). Our approach is to "forget" what the walk does when it stays in V_{-1} and to resum over all trajectories contained in V_{-1} with the same endpoints in V_0. The mentioned observation is then key to analyse the effective transition weight between two vertices in V_0 by means of the potential kernel of the auxiliary random walk on V_{-1}.

In this case we get

D(x, x) = K*K(x, x) = Σ_{y∼x} K*(x, y) K(y, x) = Σ_{y∼x} |K(y, x)|² = deg(x).

Moreover, the off-diagonal terms are nonzero if and only if y is at distance two from x, but not diagonally (the diagonal cancellation is a consequence of the Kasteleyn orientation), i.e., if y is a neighbour of x on one of the sublattices 2Z × 2Z, (2Z+1) × (2Z+1), 2Z × (2Z+1) or (2Z+1) × 2Z, in which case one can check as above that D(x, y) = −
1. Therefore, away from the boundary ∂_free G, in the same way as in [21], D is the Laplace operator associated to a simple random walk on each of the sublattices, up to a multiplicative constant. The Temperleyan boundary conditions are then naturally associated with certain boundary conditions for D on these sublattices.

Complications for such an interpretation arise when x ∈ V_{-1} ∪ V_0 ∪ V_1. See Figure 3 for the nonzero entries of D in these cases. Notice that now it is not necessarily true that the diagonal term D(x, x) is (up to a sign) the same as the sum of the off-diagonal entries on the row corresponding to x; in other words, the transition weights d_{x,y} in (2.3) do not sum up to 1. Moreover, some of them are negative. While this seems like a very serious obstacle for describing the behaviour of the operator D^{-1} in the scaling limit, we nevertheless show in the next section how we can recover an effective random walk for which D really is the Laplacian.

More precisely, D^{-1} can be formally viewed as a sum of weights of paths of all possible lengths, where the weight of a path is the product of (signed) transition weights of individual jumps. That is, formally,

D^{-1}(u, v) = (1/D(v, v)) Σ_{π: u → v} w(π),  (2.2)

where

w(π) = ∏_{(x,y) ∈ π} d_{x,y} with d_{x,y} = −D(x, y)/D(x, x).  (2.3)

For x in the bulk, d_{x,y} = 1/4 for each y which is a neighbour of x on the sublattice of twice larger mesh size containing x, and is 0 otherwise, which is the same as the transition probability of a simple random walk on that sublattice.

Let us now point out that the transition weights between an even row and an odd row are always 0. Compared to the odd rows, the construction for even rows is much simpler. As seen in Figure 3, for x ∈ V_0 which is not an extremity of the row V_0 or at distance one from the extremities, D(x, x) is in fact equal to the sum of |D(x, y)| for all y ≠ x.
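The formal identity (2.2)–(2.3) is the familiar path expansion of a matrix inverse: with T(x, y) = d_{x,y} off the diagonal and 0 on it, one has D = Diag(D)(I − T), hence D^{-1} = (Σ_{k≥0} T^k) Diag(D)^{-1} whenever the series converges. A minimal numerical sketch of this mechanism on a toy, strictly diagonally dominant matrix (not the dimer matrix D, for which convergence is exactly the delicate point):

```python
# Toy illustration of the path-sum identity (2.2)-(2.3); the matrix here is a
# generic strictly diagonally dominant matrix, not the dimer matrix D.
import numpy as np

rng = np.random.default_rng(1)
n = 6
D = rng.uniform(-1, 1, (n, n))
np.fill_diagonal(D, np.abs(D).sum(axis=1) + 1)   # force strict diagonal dominance

# T(x, y) = d_{x,y} = -D(x, y)/D(x, x) off the diagonal, 0 on the diagonal
T = np.eye(n) - D / np.diag(D)[:, None]

# sum over path lengths 0, 1, 2, ... : T^k (u, v) collects all paths u -> v of length k
path_sum = np.zeros((n, n))
P = np.eye(n)
for _ in range(500):
    path_sum += P
    P = P @ T

# (2.2): D^{-1}(u, v) = (1/D(v, v)) * (sum over paths u -> v of prod d_{x,y})
assert np.allclose(path_sum / np.diag(D)[None, :], np.linalg.inv(D))
```

For the dimer matrix the weights d_{x,y} near the free boundary are signed and need not sum below one, which is why a specific order of summation has to be fixed in what follows.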
We can therefore view |d_{x,y}| for x ∈ V_0 as the transition weights of a random walk that is reflected on row V_0 (and can make jumps of size one and two on that row). When x is one of the extremities of V_0 or is at distance one from the extremities, the values of D(x, x) and D(x, y) allow us to interpret it as a killing or a reflection of the random walk at the boundary (see Section 2.7 and Figure 4 for more details). When we take into account the signs of d_{x,y} in (2.3), this gives rise to a global sign factor which depends only on u and v, as can be seen in the second line of (1.3).

The rest of this section is devoted to the more complicated task of giving a random walk representation to D^{-1} restricted to the vertices in odd rows V_odd. We now describe the main idea. We will manage to give a meaning to the right hand side of (2.2) by fixing a specific order of summation. We will later on prove that this definition really does give us the inverse of D, and we will also find a random walk interpretation of this definition. We emphasise this because the signs are not constant, and hence the order of summation is a priori relevant to the value of the sum. Essentially we will compute the sum in (2.2) by ignoring the details of what the path does when it visits V_{-1}. That is, we will identify two paths if they enter V_{-1} at the same places in V_0 and leave V_{-1} at the same places in V_0 for each visit to V_{-1}, and we will be able to estimate the contributions to (2.2) coming from each such equivalence class.

An important observation (see Figure 3) here is that for each x ∈ V_{-1} which is not an extremity of V_{-1} or at distance one from the extremities, the diagonal term D(x, x) is equal to the sum of |D(x, y)| for all y ∈ V_{-1} not equal to x. Note that D(x, y) is nonzero for y = x ± 1 and y = x ± 2 in V_{-1}. This allows us to interpret the weight of an excursion in V_{-1} as the weight of a random walk with steps of size ±1 and ±2 on V_{-1}.
For x ∈ V_{-1} which is an extremity of V_{-1} or at distance one from the extremities, the values of D(x, x) and D(x, y) again allow us to interpret it as a killing or a reflection of the random walk at the boundary (see Section 2.7 and Figure 4 for more details). One can therefore associate a Green's function g(·,·) with the random walk on V_{-1} with transition probabilities

p_{x,y} = |d_{x,y}|.  (2.4)

For x ∈ V_0 = V_0(G), let x^- and x^+ be the left and right vertices in V_{-1} = V_{-1}(G) two steps away from x. We fix u, v ∈ V_0 and let u• ∈ {u^-, u^+} and v• ∈ {v^-, v^+}. We define P_{u•,v•} to be the set of paths from u• to v• which are contained in V_{-1}. Observe that if π ∈ P_{u•,v•}, then π makes jumps of size ±1 and ±
2, and that each odd jump contributes a negative weight to (2.2) whereas each even jump contributes a positive weight. Since π goes from u• to v•, the parity of the numbers of even and odd jumps is fixed and depends only on the distance between u• and v• in V_{-1}. Hence

w(π) = (−1)^{ℜ(v• − u•)} ∏_{(x,y) ∈ π} |d_{x,y}|,

where d_{x,y} is defined in (2.3). Going further: if P_{u•,v} is the set of paths going from u• to v and staying in V_{-1} (except for the last step, which must be from v^± to v), then

Σ_{π ∈ P_{u•,v}} w(π) = (−1)^{ℜ(v^+ − u•)} (g(u•, v^+) − g(u•, v^-)) · z/(2 + 2z²),  (2.5)

where the last term accounts for the weight −D(v^±, v)/D(v^±, v^±) of the last step from V_{-1} to V_0. Finally, let P_{u,v} be the set of paths from u to v which stay in V_{-1} except for the first and last step (which necessarily are from V_0 to V_{-1} and vice versa). Using (2.5) we have

Σ_{π ∈ P_{u,v}} w(π) = (z²/(2 + 2z²)) (−1)^{ℜ(v − u)} (g(u^+, v^+) − g(u^+, v^-) − g(u^-, v^+) + g(u^-, v^-)) =: q_{u,v},  (2.6)

where the additional factor z compared to (2.5) accounts for the weight of the first step from V_0 to V_{-1}. The exact normalisation in the definition of q_{u,v} is chosen for later convenience.

Recall that our intention is to interpret the quantities q_{u,v} as transition probabilities between vertices in V_0. In particular we would wish q_{u,v} to be positive and sum up to (something less than) one (since the other three transition weights induced by D from a vertex in the bulk of V_0

Figure 4: A graph G_N (there are additional triangles appended on each side of G) with different types of vertices and transitions marked. The transition weights given by D(·,·) = D_N(·,·) are −1 for long arrows and z for short arrows. The diagonal terms are D(x_1, x_1) = 1 + z², D(y_1, y_1) = 1 + 2z², D(x_2, x_2) = D(y_2, y_2) = 2 + 2z², D(z_1, z_1) = 3, D(z_2, z_2) = 4.
The black vertex z_2 has Neumann boundary conditions for the associated walk, since the total weight of the outgoing transitions is also equal to the diagonal term. The white vertex z_1 has Dirichlet boundary conditions since the total outgoing weight is strictly smaller than the diagonal term.

are equal to 1/(3 + 2z²) each, i.e., 3/(3 + 2z²) in total; moreover, an exact computation of the Green's function of the walk on V_{-1}(G) with its particular boundary conditions is not easy). However, a nice solution to this problem, which effectively gets rid of the boundary conditions, is the following construction. We note that this construction is the reason for the appearance of the special monomer weight z' at the monomer-corners in the statement of our results.

To overcome the issue raised above, we introduce an intermediate limiting procedure in our model. To this end, let G_N be the graph G to which we append 2N triangles on either side of G along V_{-1} and V_0 (see Figure 4 for an example). We assign weight 1 to every edge except if it belongs to a triangle and is not horizontal, in which case we assign weight z. Since we assumed that G has a dimer cover, it is easy to see that G_N also has at least one dimer cover. We can hence talk about the dimer model on G_N with the specified weights.

Using Lemma 2.1, we can also rephrase this dimer model as a free boundary dimer model on G to which we add a segment of N edges to the left and right of ∂_free G. The first observation is that the monomer-dimer configuration on G_N restricted to G, in the limit N → ∞, has the law of the free boundary dimer model with weight z' from (1.2) at the monomer-corners. This is an immediate consequence of the following elementary lemma.

Lemma 2.2. Let Z_N be the partition function of the monomer-dimer model on a segment of Z of length N with monomer weight z and edge weight 1. Then, as N → ∞,

Z_{N+1}/Z_N → z', where z' = z/2 + √(1 + z²/4).

Proof.
It is enough to solve the recursion Z_{N+1} = z Z_N + Z_{N−1} to get that

Z_N = (1/2 − z/(4β))(z/2 − β)^N + (1/2 + z/(4β))(z/2 + β)^N, where β = √(1 + z²/4).

Let K_N be the Kasteleyn matrix of G_N and let D_N = (K_N)* K_N. The statement above and Kasteleyn theory imply that the inverse Kasteleyn matrix K_N^{-1} restricted to G converges as N → ∞ to the inverse Kasteleyn matrix (K')^{-1} for the free boundary dimer model on G with monomer weights z' at the monomer-corners.

2.5 An auxiliary random walk on Z

It will be convenient to consider a random walk on V_{-1} ≅ Z with transition probabilities given by

p^∞_{x,x±1} = z²/(2 + 2z²) =: 1/2 − p,  p^∞_{x,x±2} = 1/(2 + 2z²) =: p.  (2.7)

In other words, this is the infinite volume version of the walk from (2.4). Now, while the Green's function of this walk is infinite since the walk is recurrent, its differences make sense in the form of the potential kernel (see [28], Section 4.4.3) given by

α_k = Σ_{n=0}^∞ (p_n(0) − p_n(k)) = lim_{N→∞} (Σ_{n=0}^N p_n(0) − Σ_{n=0}^N p_n(k)),  (2.8)

where p_n(k) = P(X_n = k) with X being the random walk with jump distribution (2.7) started at 0. Using the potential kernel, for u, v ∈ V_0 ≅ Z, we can now define the infinite volume version of the transition weight q_{u,v} from (2.6) by

q^∞_{u,v} = (z²/(2 + 2z²)) (−1)^{k+1} (2α_k − α_{k+1} − α_{k−1}),  (2.9)

where k = ℜ(v − u). Note that the sign is opposite to that in (2.6). This is due to the −p_n(k) term in the definition of the potential kernel.

The next result is one of the crucial observations in this work.

Lemma 2.3 (Effective transition probabilities). For all z > 0 and any pair of vertices u, v ∈ V_0, we have q^∞_{u,v} ≥ 0 and

Σ_{v ∈ V_0} q^∞_{u,v} = 1.

Moreover, q^∞_{u,v} → 0 exponentially fast as |u − v| → ∞.

The lemma asserts in particular the positivity and the normalisation of the weights q^∞_{u,v}. Together they imply that we can think of q^∞_{u,v} as the step distribution of some effective random walk on V_0. Later in Proposition 2.8, we will prove that q^∞_{u,v} is the limit of q_{u,v} from (2.6) on G_N as N → ∞.
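Both lemmas lend themselves to a quick numerical sanity check, which is not part of the argument. The sketch below iterates the recursion from the proof of Lemma 2.2, and computes the potential kernel (2.8) through the standard Fourier representation α_k = (1/2π) ∫_{−π}^{π} (1 − cos kt)/(1 − φ(t)) dt of a symmetric aperiodic recurrent walk, with φ(t) = (1 − 2p) cos t + 2p cos 2t the characteristic function of (2.7); the weights (2.9) then indeed look like a probability distribution:

```python
# Numerical sanity check of Lemma 2.2 and Lemma 2.3 (not a proof). Assumptions:
# Z_0 = 1, Z_1 = z for the monomer-dimer recursion, and the standard Fourier
# representation of the potential kernel of a symmetric aperiodic walk.
import math

def zprime(z):
    # limiting ratio in Lemma 2.2: the larger root of x^2 = z x + 1
    return z / 2 + math.sqrt(1 + z * z / 4)

def ratio_limit(z, n=200):
    a, b = 1.0, z                     # Z_0, Z_1
    for _ in range(n):
        a, b = b, z * b + a           # Z_{N+1} = z Z_N + Z_{N-1}
    return b / a

def alpha(k, p, m=20000):
    # alpha_k = (1/2pi) int (1 - cos kt)/(1 - phi(t)) dt, midpoint rule avoids t = 0
    s = 0.0
    for j in range(m):
        t = -math.pi + (j + 0.5) * (2 * math.pi / m)
        phi = (1 - 2 * p) * math.cos(t) + 2 * p * math.cos(2 * t)
        s += (1 - math.cos(k * t)) / (1 - phi)
    return s / m

def q_weights(z, kmax=30):
    # effective weights (2.9): q_k = (-1)^(k+1) (1/2 - p)(2 a_k - a_{k+1} - a_{k-1})
    p = 1 / (2 + 2 * z * z)
    a = [alpha(k, p) for k in range(kmax + 2)]
    return [(-1) ** (k + 1) * (0.5 - p)
            * (2 * a[k] - a[k + 1] - (a[1] if k == 0 else a[k - 1]))
            for k in range(kmax + 1)]

for z in (0.7, 1.0):
    assert abs(ratio_limit(z) - zprime(z)) < 1e-9   # Lemma 2.2
    q = q_weights(z)
    assert all(qk > -1e-7 for qk in q)              # positivity (Lemma 2.3)
    assert abs(q[0] + 2 * sum(q[1:]) - 1) < 1e-3    # total weight one (Lemma 2.3)
```

The truncation at kmax is harmless because the q_k decay exponentially, in line with the last assertion of Lemma 2.3.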
As mentioned before, the proof of Lemma 2.3 is just an exact computation of the potential kernel α, and a conceptual understanding of why it is true is the subject of Problem 1.6.

Proof of Lemma 2.3.
The proof is based on an exact formula for the potential kernel α of the walk on Z defined by (2.7). To start with, by Theorem 4.4.8 from [28] we know that

α_k = |k|/σ² + A + O(e^{−β|k|})

for some constants β > 0, A ∈ R, and where σ² = 1 + 6p is the variance of the walk with p as in (2.7). Moreover, α is harmonic (except at k = 0) with respect to the Laplacian of the walk (2.7). This implies that the O(e^{−β|k|}) term is of the form Σ_i B_i γ_i^{|k|} for some constants B_i and γ_i satisfying |γ_i| < 1 and

1 = (1/2 − p)(γ_i + γ_i^{−1}) + p(γ_i² + γ_i^{−2}).

We solve and get only one such γ = γ_i, equal to

γ = √((1/2 + 1/(4p))² − 1) − 1/2 − 1/(4p) ∈ (−1, 0)  (2.10)

(the second solution is γ = 1 and does not satisfy |γ| < 1). Hence

α_k = |k|/σ² + A + Bγ^{|k|}

for some constants A and B. Using that α_0 = 0 by definition, we get A = −B and hence

α_k = |k|/(1 + 6p) − B + Bγ^{|k|}.  (2.11)

We still need to compute B, which is equivalent to computing α_1. Let X be the walk with transition probabilities (2.7). Let τ = inf{n > 0 : X_n > 0}, and q = P(X_τ = 1) and 1 − q = P(X_τ = 2). Then, by considering the possible four different first steps (+1, −1, +2, −
2) of X and using translation invariance and the strong Markov property, we get that

q = (1/2 − p) + (1/2 − p)((1 − q) + q²) + p(q(1 − q) + q³ + (1 − q)q),

which simplifies to

pq² + (1/2 − p)(q − 1) = 0.  (2.12)

One can check that q = γ + 1. Moreover, using the symmetry of jumps of X and the Markov property for the walk, we get the equation (again considering the first four steps in the same order)

α_1 = 1 + (1/2 − p)(−α_1) + (1/2 − p)[qα_1 + (1 − q)(−α_1)] + p[(1 − q)α_1 + q(−α_1)] + p[(q² + (1 − q))α_1 + q(1 − q)(−α_1)].  (2.13)

To justify (2.13), one starts from the definition of α_1 in (2.8) as the limit as N → ∞ of the expected difference of the number of visits by time N to the sites 0 and 1. We first apply the simple Markov property at the first step, and depending on the outcome of the first step, apply the strong Markov property at the next time τ (after time 1) that the walk returns to 0 or 1, taking care of the contribution coming from the event {τ > N}. We then let N → ∞. There is no problem in doing so, first because the sequence of truncated expectations is bounded, which lets us use the dominated convergence theorem, and second because the contribution coming from the event {τ > N} to the difference between the number of visits at 0 and 1 by time N is bounded by 1. Details are left to the reader. Together with (2.12), (2.13) gives

α_1 = 1/((1 + 2p(q − 1))(2 − q)) = 1/((1 + 2pγ)(1 − γ)),  (2.14)

and hence from (2.11) we obtain

B = (1/(1 + 6p) − α_1)/(1 − γ) ≤ 0.  (2.15)

We can now define

q_k = (−1)^{k+1} (z²/(2 + 2z²)) ∆α_k = (−1)^{k+1} (1/2 − p) ∆α_k,

where ∆α_k = 2α_k − α_{k+1} − α_{k−1} is the Laplacian of simple random walk applied to α. Then q_k = q^∞_{u,v} whenever |u − v| = k. Using (2.11), we have

(−1)^{k+1} ∆α_k = −B|γ|^{|k|}(2 − γ − γ^{−1}) ≥ 0 for k ≠ 0, and (−1)^{k+1} ∆α_k = 2/(1 + 6p) − 2B(1 − γ) ≥ 0 for k = 0,  (2.16)

and hence the total transition weight is

Σ_{k ∈ Z} q_{|k|} = (1/2 − p)(−B(2 − γ − γ^{−1}) · 2|γ|/(1 − |γ|) − 2B(1 − γ) + 2/(1 + 6p)) = (1/2 − p)(−4B(1 − γ)/(1 + γ) + 2/(1 + 6p)).  (2.17)

Using (2.10) and (2.15), it can be checked that the last expression is equal to one for all 0 < p < 1/2, i.e., for all z > 0. The positivity of q_k is clear from (2.16).
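The algebra of the exact computation can be checked mechanically. The sketch below evaluates γ from (2.10), checks that q = γ + 1 solves (2.12) and that (2.13) collapses to (2.14), and then verifies that the total weight (2.17) equals one; it is a check of the algebra as reconstructed here, not a substitute for the proof:

```python
# Mechanical check (not a proof) of the closed forms in the proof of Lemma 2.3.
import math

def check_total_weight(p):
    g = math.sqrt((0.5 + 1 / (4 * p)) ** 2 - 1) - 0.5 - 1 / (4 * p)   # (2.10)
    assert -1 < g < 0
    # gamma^{|k|} is harmonic for the walk (2.7) away from 0
    assert abs((0.5 - p) * (g + 1 / g) + p * (g ** 2 + g ** -2) - 1) < 1e-9
    q = g + 1
    assert abs(p * q * q + (0.5 - p) * (q - 1)) < 1e-9                # (2.12)
    a1 = 1 / ((1 + 2 * p * g) * (1 - g))                              # (2.14)
    # collecting the alpha_1 terms in (2.13) gives alpha_1 * (1 - c) = 1 with
    c = (1 - 2 * p) * (q - 1) + 2 * p * (q - 1) ** 2
    assert abs(a1 * (1 - c) - 1) < 1e-9
    B = (1 / (1 + 6 * p) - a1) / (1 - g)                              # (2.15)
    assert B <= 0
    # total transition weight (2.17)
    return (0.5 - p) * (-4 * B * (1 - g) / (1 + g) + 2 / (1 + 6 * p))

for z in (0.3, 1.0, 2.5):
    p = 1 / (2 + 2 * z * z)
    assert abs(check_total_weight(p) - 1) < 1e-9
```

The loop runs over a few values of z; any 0 < p < 1/2, equivalently any z > 0, gives total weight one.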
2.6 Random walk representation of D^{-1}

Here we finally establish a rigorous version of (2.2) using the ingredients from the previous sections. Recall that K_N is the Kasteleyn matrix of the graph G_N and D_N = (K_N)* K_N. We will be mostly interested in the restriction of D_N^{-1} to the vertices of G. Observe that D_N can be written as a block-diagonal matrix if we consider vertices respectively in the odd or even rows. Hence to invert D_N it will suffice to invert each of these blocks separately. We call D_N^odd (resp. D_N^even) the matrix D_N restricted to V_odd(G_N) ∪ V_{-1}(G_N) (resp. V_even(G_N)).

We first focus on the odd case (the even case is much easier, as explained before), and for now we will write D_N for D_N^odd. The key idea will be to use the Schur complement formula. To be more precise, we observe that D_N has the block structure

D_N = [ A  B ; B^T  C ],

where A is indexed by the special row V_{-1}, and C is indexed by all the other rows V_odd. Hence B and B^T can be thought of as "transition matrices" between V_{-1} and V_odd. Note that these matrices depend on N but we do not write this explicitly to lighten the notation. We define the Schur complement of A to be the matrix

D_N/A := C − B^T A^{-1} B.  (2.18)

With this definition, the restriction of D_N^{-1} to V_odd is simply given by

D_N^{-1}|_{V_odd} = (D_N/A)^{-1}.  (2.19)

We now outline how we proceed.

• We first write A^{-1} in terms of the Green's function for the random walk on V_{-1}(G_N) with transition probabilities as in (2.4).

• This gives us a formula for the Schur complement D_N/A via (2.18). We then use that for N sufficiently large, this Schur complement can be viewed as a (genuine) Laplacian for a random walk. The proof of this statement is postponed until Section 2.7.

• As a consequence of (2.19), this gives a formula for the inverse of D_N as the Green's function of a genuine random walk.
• Finally, as the number N of triangles appended to G tends to infinity, on the one hand, the above analysis shows that the inverse Kasteleyn matrix (restricted to V_odd) can be written in terms of the Green's function of a random walk with jumps along the boundary. On the other hand, as mentioned before, the free boundary dimer model becomes equivalent to the same model on G with modified monomer weights z' as in (1.2) at the monomer-corners.

• The results of this section are summarised below as Corollary 2.7.

We start with the computation of A^{-1}. To this end let

g_N(u, v) = Σ_{γ: u → v, γ ⊆ V_{-1}(G_N)} ∏_{e=(x,y) ∈ γ} p^N_{x,y}

be the Green's function of the random walk on V_{-1}(G_N) with transition probabilities p^N_{x,y} defined for G_N as in (2.4). Note that this is well defined since the walk is killed on both the left and right extremities of V_{-1}(G_N) (see Figure 4 for the exact form of the transition probabilities at the extremities x_1, x_2).

Lemma 2.4.
Let u, v ∈ V_{-1} = V_{-1}(G_N). Then

A^{-1}(u, v) = (1/A(v, v)) (−1)^{ℜ(u−v)} g_N(u, v).

Proof. This follows from the fact that |A| is the Laplacian for the random walk described above, and moreover (as mentioned before) the sign of the transition weights induced by A is negative if and only if the step is of size ±1, by the definition of D_N and the Kasteleyn matrix.

We now explain how this yields an interpretation for the Schur complement D_N/A as a (genuine) Laplacian for a random walk in the bulk V_odd(G) with jumps along the boundary V_0(G). For u, v ∈ V_0 = V_0(G_N) = V_0(G), we define

q^N_{u,v} = (B^T A^{-1} B)(u, v).  (2.20)

Recalling that D_N(v, v) = A(v, v) = 2 + 2z² for v ∈ V_{-1}(G) and N ≥ 1, a straightforward computation using Lemma 2.4 shows that

q^N_{u,v} = (z²/(2 + 2z²)) (−1)^{ℜ(v−u)} ((g_N(u^+, v^+) − g_N(u^+, v^-)) − (g_N(u^-, v^+) − g_N(u^-, v^-))),  (2.21)

where again u^±, v^± are the left and right vertices in V_{-1} at distance two from u and v respectively. Recall the definition of q^∞_{u,v} from (2.9), and note that q^N_{u,v} coincides with the transition weight defined by (2.6) for the graph G_N. The next result, whose proof will be given in Proposition 2.8 of the next section, implies that for N large enough, the q^N_{u,v} become actual transition probabilities.
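The Schur complement identities (2.18)–(2.19) used here are a general linear-algebra fact, which can be illustrated on a toy matrix (generic positive definite blocks, nothing dimer-specific):

```python
# Generic illustration of (2.18)-(2.19): for an invertible symmetric block
# matrix M = [[A, B], [B^T, C]], the restriction of M^{-1} to the C-block
# equals the inverse of the Schur complement C - B^T A^{-1} B.
import numpy as np

rng = np.random.default_rng(0)
n_a, n_c = 4, 6
X = rng.standard_normal((n_a + n_c, n_a + n_c))
M = X @ X.T + (n_a + n_c) * np.eye(n_a + n_c)    # symmetric positive definite

A, B, C = M[:n_a, :n_a], M[:n_a, n_a:], M[n_a:, n_a:]
schur = C - B.T @ np.linalg.inv(A) @ B           # (2.18)

# (2.19): lower-right block of M^{-1} equals inv(schur)
assert np.allclose(np.linalg.inv(M)[n_a:, n_a:], np.linalg.inv(schur))
```

In the paper's setting, A is the V_{-1} block, so A^{-1} (Lemma 2.4) feeds the Green's function g_N into the Schur complement via (2.20)–(2.21).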
Lemma 2.5.
Let u, v ∈ V_0 = V_0(G_N) = V_0(G). Then q^N_{u,v} → q^∞_{u,v} as N → ∞ pointwise. In particular, for N sufficiently large, q^N_{u,v} > 0 and

Σ_{v ∈ V_0} q^N_{u,v} < 1.  (2.22)

Proof.
The convergence follows immediately from (2.21) and the convergence result in Proposition 2.8. Condition (2.22) is a consequence of Lemma 2.3. Note that the second inequality is strict since the sum is taken over V_0(G) ⊊ V_0(Z² ∩ H).

Now let N be sufficiently large that (2.22) holds true, and consider a transition matrix between vertices u, v ∈ V_odd given by

R_N(u, v) = I(u, v) − (1/C(u, u)) (C(u, v) − q^N_{u,v} 1_{u,v ∈ V_0}),  (2.23)

where I is the identity. Note that R_N(u, v) ≥ 0 and Σ_v R_N(u, v) ≤ 1, i.e., R_N is a substochastic matrix. Indeed, this follows from the definition of C = D_N|_{V_odd} and (2.22). Also note that this holds even when u is one of the two corners, i.e., the left and right extremities of V_0(G). In other words, we may add a cemetery absorbing point ∂ to the state space and declare R_N(x, ∂) = 1 − Σ_y R_N(x, y) ≥ 0. This turns R_N into the transition matrix of a proper random walk on the augmented state space V_odd ∪ {∂}, which is absorbed at ∂. We let Z^N be the random walk on V_odd ∪ {∂} whose transition probabilities are given by R_N(x, y). We call this random walk the effective (odd) bulk random walk.

The interest of introducing the transition matrix R_N of this effective bulk random walk is that its associated Laplacian gives us the Schur complement D_N/A: that is, for u, v ∈ V_odd, we have

(D_N/A)(u, v) = C(u, u)(I(u, v) − R_N(u, v)),  (2.24)

which follows from the definition of the Schur complement (2.18), (2.20) and the definition of R_N. From this formula and the Schur complement formula (2.19), it is immediate to deduce the following proposition, which says that the inverse of D_N^odd = D_N (i.e., the inverse of (K_N)* K_N restricted to bulk odd vertices) is given by the Green's function of the effective bulk random walk. Recall that C(v, v) = D_N(v, v).

Proposition 2.6.
Let u, v ∈ V_odd(G). Then for all N sufficiently large, we have
(D^odd_N)^{−1}(u, v) = G^N_odd(u, v),
where G^N_odd is the (normalised) Green's function associated to R_N, i.e.,
G^N_odd(u, v) = (1/D_N(v, v)) E_u( Σ_{t=0}^∞ 1{Z^N_t = v} ). (2.25)

We now address the even case, and write D_N = D^even_N. We introduce a "sign" diagonal matrix S(x, x) = (−1)^{ℜ(x)}. Then the matrix D̃_N := S^{−1} D_N S is positive on the diagonal and negative off the diagonal. Moreover, we have
D̃^{−1}_N(u, v) = G^N_even(u, v),
where
G^N_even(u, v) = (1/D_N(v, v)) E_u( Σ_{t=0}^∞ 1{Z̃_t = v} ),
and Z̃ is a random walk on V_even(G_N) with the transition probabilities
R̃_N(x, y) = |D_N(x, y)| / D_N(x, x), x ≠ y. (2.26)
The fact that the even case is much simpler than the odd one can be seen here, since R̃_N(x, y) is actually a transition matrix of a true random walk on V_even(G_N). Indeed (see Figure 4 for an illustration):
• in the bulk of V_even(G_N) \ V(G_N), the walk jumps by ±2 horizontally or vertically, each with probability 1/4,
• on the boundary ∂G ∩ V_even(G_N), the walk makes jumps according to the local boundary conditions, which are either Dirichlet or Neumann,
• on V(G_N) ∩ V(G) it may jump horizontally by ±2 with probability z²/(3 + 2z²) or by ±4 with probability 1/(3 + 2z²), and vertically by +2 also with probability 1/(3 + 2z²). This is consistent with the fact that D(x, x) = 3 + 2z² for x ∈ V(G),
• on V(G_N) \ V(G), except at its endpoints, it may jump horizontally by ±2 with probability z²/(2 + 2z²) or by ±4 with probability 1/(2 + 2z²). This is consistent with the fact that D(x, x) = 2 + 2z² for x ∈ V(G_N) \ V(G),
• at the endpoints of V(G_N), it has the transition probabilities of the vertices y₁, y₂ in Figure 4.
All in all we obtain that
D^{−1}_N(u, v) = (−1)^{ℜ(v−u)} G^N_even(v, u).
(2.27)
Now a moment of thought shows that there is no problem in letting N → ∞ in this expression. This is because the random walk associated with R_N is absorbed on some portion of the boundary ∂G \ ∂_free G, as described in Section 1.2. Hence we deduce that
lim_{N→∞} D^{−1}_N(u, v) = (−1)^{ℜ(u−v)} G_even(u, v), (2.28)
where G_even(u, v) is the Green's function on G∞ (that is, the graph G to which infinitely many triangles have been added on either side of V) associated with the random walk on G∞ whose transition probabilities are given by (2.26).

At the same time, when N → ∞, the free boundary dimer model on G_N, restricted to G, becomes equivalent to a free boundary dimer model on G where the monomer weights on the extreme vertices (corners) of V have been given the weight z′ > 0.

Corollary 2.7.
Consider the free boundary dimer model on G where the monomer weight is z > 0 on V(G), except at its monomer-corners where the monomer weight is z′ as in (1.2). Let K be the associated Kasteleyn matrix, and D = K*K. Then for all u, v ∈ V(G), we have
D^{−1}(u, v) = G_odd(u, v) if u, v ∈ V_odd(G),
D^{−1}(u, v) = (−1)^{ℜ(v−u)} G_even(u, v) if u, v ∈ V_even(G),
D^{−1}(u, v) = 0 otherwise,
where G_odd, G_even are the Green's functions associated with the effective (odd and even) bulk random walks described in (2.23) and (2.26) respectively, normalised by D(v, v). In particular, the inverse Kasteleyn matrix is given by K^{−1} = D^{−1} K*.

This result implies Theorem 1.1 with the walks Z_even and Z_odd explicitly defined as above.

2.7 Convergence to potential kernel of the auxiliary walk

In this section we prove the convergence statement from Lemma 2.5. To this end, let N ≥ 1 and let X̃ = (X̃_n, n ≥ 0) be the Markov chain on [−N−1, . . . , N+1] (we note that the role of N is slightly different here compared to the definition of q^N_{u,v}, as for simplicity of notation we do not account for the length of V(G)) whose transition probabilities p̃_{u,v} in [−N+1, N−1] coincide with those of the random walk X from (2.7). At ±N and ±(N+1) the walk has the following boundary conditions (see the vertices x₁, x₂ in Figure 4):
• the chain is absorbed at ±(N+1),
• at ±N the transitions are those of X but reflected (e.g., at u = N, the only possible transitions are to v = N−1 and v = N−2), that is, p̃_{u,v} = p̃_{|u|,|v|} if sgn(u) sgn(v) = 1.
Let
g̃_N(u, v) = E_u( L̃_{T_∂}(v) ),
where L̃_t(v) = Σ_{s=1}^t 1{X̃_s = v} is the local time of X̃ at v by time t, and T_∂ is the killing time of X̃ (the first hitting time of ±(N+1)). We will check here that Green's function differences converge to the potential kernel, in the following sense:

Proposition 2.8. As N → ∞,
g̃_N(u, v) − g̃_N(u′, v) → −(α(u, v) − α(u′, v)), (2.29)
where α(u, v) is the potential kernel from (2.8) (that is, α(u, v) = α_{|u−v|}).

Proof. To begin, recall that if (X_n, n ≥
0) is the walk on Z with transitions given by p_{u,v} in (2.7), and if v is fixed, then α(x, v) is harmonic (for X) in x except at x = v. More precisely, M_n = α(X_n, v) − L_n(v) is a martingale.

Suppose first that v = 0, and consider the walk X̃ instead of X. We claim that by symmetry,
M̃_n = α(X̃_n, 0) − L̃_n(0) + Ã_n, 0 ≤ n ≤ T_∂, (2.30)
is a martingale, where for some constant c ∈ R,
Ã_n = c( L̃_n(−N) + L̃_n(N) ) = c L^{|X̃|}_n(N)
is c times the local time of X̃ at the reflecting part of the boundary, or equivalently c times the local time of the absolute value |X̃| at N by time n; note that by assumption |X̃| is itself a Markov chain. To be more precise, c can be computed as
c = E_N( α(X̃_1, 0) ) − α(N, 0).
Applying to (2.30) the optional stopping theorem at T_∂ (which is allowed since this only involves a finite number of possible values for X̃_n), and since α(−N−1, 0) = α(N+1, 0) = α_{N+1} by the symmetry of α, we get
Ẽ_u(M̃_{T_∂}) = α_{N+1} − g̃_N(u, 0) + Ẽ_u(Ã_{T_∂}).
On the other hand, starting from u, M̃_0 = α(u, 0), so that
α(u, 0) = α_{N+1} − g̃_N(u, 0) + c Ẽ_{|u|}( L^{|X̃|}_{T_∂}(N) ). (2.31)
Therefore, applying (2.31) also at a different vertex u′ and taking the difference, we get
α(u, 0) − α(u′, 0) = −( g̃_N(u, 0) − g̃_N(u′, 0) ) + c [ Ẽ_{|u|}( L^{|X̃|}_{T_∂}(N) ) − Ẽ_{|u′|}( L^{|X̃|}_{T_∂}(N) ) ]. (2.32)
We now aim to take N → ∞ and show that the last term of (2.32) vanishes, which would imply the result of Proposition 2.8 in the case v = 0. Crucially, by the Markov property of |X̃|, the last term on the right-hand side of (2.31) depends on u only through the probability of reaching N before T_∂. That is,
Ẽ_{|u|}( L^{|X̃|}_{T_∂}(N) ) = P̃_u(T_N < T_∂) E_N( L^{|X̃|}_{T_∂}(N) ).
Furthermore, note that
• Each time the walk is at ±N, there is a fixed positive chance that the walk will be absorbed before returning to the boundary (and a vanishing chance that it will reach the other end of the boundary); hence E_N( L^{|X̃|}_{T_∂}(N) ) converges to a fixed limit as N → ∞.
• As N → ∞ with u, u′ fixed, lim_{N→∞} P̃_u(T_N < T_∂) exists and does not depend on u. This can be seen e.g. from renewal theory.
Taken together, these two points imply that the limit as N → ∞ of Ẽ_{|u|}( L^{|X̃|}_{T_∂}(N) ) exists and does not depend on u. This concludes the case v = 0.

In the general case where v is arbitrary and fixed, we still get a martingale
M̃_n = α(X̃_n, v) − L̃_n(v) + Ã′_n, 0 ≤ n ≤ T_∂,
but the form of the error Ã′_n needs to be slightly adjusted compared to Ã_n, since the values
E_x( α(X̃_1, v) ) − α(x, v) (2.33)
are no longer the same for x = N and x = −N. To deal with this and later arguments, we remark that we can replace v by 0 in the following manner:

Lemma 2.9. As |x| → ∞,
α(x, v) = α(x, 0) − sgn(x)|v|σ + o(1).

Proof. This is straightforward from (2.11).

In particular, using Lemma 2.9, the limits of (2.33) for x = −N and x = N exist and coincide with c. Consequently, we deduce that the error term in the martingale M̃_n has the form
Ã′_n = (c + o(1))( L̃_n(−N) + L̃_n(N) ) = (c + o(1)) L^{|X̃|}_n(N) = (1 + o(1)) Ã_n, (2.34)
where the o(1) term tends to 0 as N → ∞ (but is not random and does not depend on n). Applying the optional stopping theorem at T_∂ to the martingale M̃_n, we deduce (using (2.34) and Lemma 2.9 one more time) that
α(u, v) = E_u( α(X̃_{T_∂}, v) ) − g̃_N(u, v) + Ẽ_u(Ã′_{T_∂})
= E_u( α(X̃_{T_∂}, 0) − sgn(X̃_{T_∂})|v|σ + o(1) ) − g̃_N(u, v) + (1 + o(1)) E_u(Ã_{T_∂})
= α_{N+1} − |v|σ E_u( sgn(X̃_{T_∂}) ) + o(1) − g̃_N(u, v) + (1 + o(1)) E_u(Ã_{T_∂}). (2.35)
Now, it is clear that as N → ∞, E_u( sgn(X̃_{T_∂}) ) → 0: indeed, for X̃ there is probability tending to one to hit zero before T_∂, after which the sign is equally likely to be positive or negative by symmetry. We deduce from (2.35) that as N → ∞,
α(u, v) = α_{N+1} + o(1) − g̃_N(u, v) + (1 + o(1)) E_u(Ã_{T_∂}). (2.36)
Since we have already verified that the limit of E_u(Ã_{T_∂}) as N → ∞ exists and does not depend on u, we conclude the proof of Proposition 2.8 by taking the difference of (2.36) for u and u′ and letting N → ∞.

3 Infinite volume limit
In the previous section we showed that D^{−1}_N (and hence K^{−1}_N) has a limit as N → ∞ which is given in terms of two Green's functions G_odd and G_even associated to random walks on V_odd(G) and V_even(G) which may jump along V(G), and with various boundary conditions (Dirichlet or mixed Neumann–Dirichlet) on ∂G \ V(G). Let us also denote these Green's functions by G^G_odd and G^G_even to emphasize their dependence on G.

The purpose of this section is to take an infinite volume limit as G tends to the upper half-plane. In this limit the Green's functions G^G_odd and G^G_even diverge (corresponding to the fact that the limiting bulk effective random walk is recurrent). However, we can still make sense of its potential kernel. Hence the inverse Kasteleyn matrix, which is obtained as a derivative of these Green's functions, has a well-defined pointwise limit.

The arguments for this convergence as G increases to the upper half-plane are essentially the same for both the odd and even walks. As will be clear from the proof below, the arguments rely only on the facts that (a) the two walks coincide with the usual simple random walk (with jumps of size 2) away from the real line, (b) they are reflected on the real line with some jump probabilities that decay exponentially fast with the jump size (in fact, in the even case the jumps are bounded), and (c) they can 'switch colour' with positive probability along the real line. This terminology will be explained below. For these reasons, and in order to avoid unnecessarily cumbersome notation, we focus in this section solely on the odd walk (the argument works in literally the same way for the even case, and can in fact be made a little easier).

We write Γ for the weighted graph corresponding to the odd effective random walk.
Thus, the vertex set V of Γ can be identified (after translation so that V ⊂ R²) with (Z × Z) ∩ H, and its edges E are those of (2Z)², plus those of (2Z + 1) × (2Z), plus additional edges connecting these two lattices along the real line. In reality, it will be easier to consider a symmetrised version of Γ obtained by taking the vertex set to be V ∪ V̄ and the edges to be E ∪ Ē, where V̄ and Ē are the complex conjugates of V and E. We will still denote this graph by Γ. Throughout this and the next section, the random walks we consider will take values in this symmetrised graph. Note that Γ is not locally finite: any vertex on the real line has infinite degree, but the total weight out of every vertex is finite (and is equal to 1). We recall that away from the real line, the random walk on Γ looks like simple random walk on the square lattice up to a factor of 2: the transitions from a point x ∈ Z² away from R are to the four points x ± 2e₁ and x ± 2e₂, where (e₁, e₂) is the standard basis of Z². On the real line, the effective random walk can make jumps of any size, but the jumps are symmetric and the transition probabilities have an exponential tail. Note that the odd effective random walk only jumps between vertices of the same colour in the bulk, and can possibly change colour only on the real line. In the current section, we will also use the word class to denote the notion of colour. Finally, we say that two vertices in Γ have the same parity (or periodicity) if the differences of their vertical and horizontal coordinates are multiples of 4.

Our first goal will be to show that differences of Green's functions evaluated at two different vertices of the same class, for the walk killed when leaving a large box, converge (when the box tends to infinity) to differences of the potential kernel of the walk on the infinite graph Γ. Our first task will be to define this potential kernel.
For the usual simple random walk on Z² this is an easy task because the asymptotics of the transition probabilities are known with great precision. In turn, this is because simple random walk can be written as a sum of i.i.d. random variables, making it possible to use tools from Fourier analysis: see Chapter 4 of [28] for a thorough introduction. The walk on Γ obviously does not have this structure, and in fact it seems that there are few general tools for the construction of the potential kernel for walks on a planar graph beyond the i.i.d. case. The coupling arguments we introduce below may therefore be of independent interest.

Let P denote the transition matrix of the random walk on Γ, and let P̃ = (I + P)/2 denote the transition matrix of the associated lazy walk. Note that G̃(x, y) = 2G(x, y) for any x, y, if G and G̃ are the corresponding Green's functions (this is because the jump chains are the same, and the lazy chain stays on average twice as long at any vertex as the non-lazy chain).

The basic idea for the definition of the potential kernel will be the following. Let X and X′ denote (lazy) random walks started respectively from two vertices x and x′ of the same class, and suppose that they are coupled in a certain way so that after a random time T (which may be infinite), X and X′ remain equal forever on the event that T < ∞: that is,
X_{T+s} = X′_{T+s}, s ≥ 0. (3.1)
We will define a coupling (its precise definition will be given below) that depends on a time-parameter t such that for this particular value of t,
P(T > t) ≲ (log t)^a t^{−1/2} (3.2)
for some a > 0 (we write T rather than T_t, although T depends on t; indeed T might be infinite with positive probability). In fact, a much weaker control of the form P(T > t) ≲ t^{−ε}, for some ε > 0, would suffice. The coupling will be used to compare p̃_t(x, o) to p̃_t(x′, o), which is why T is allowed to depend on t, and why we only require T to be less than t with high probability (but we do not care what happens on the event {T > t}).
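The doubling relation G̃ = 2G for the lazy chain is a purely algebraic fact and can be checked on any finite substochastic matrix. The toy example below (a random killed chain of our own, not the walk on Γ) verifies it numerically: since I − (I + P)/2 = (I − P)/2, the lazy Green's function is exactly twice the original one.

```python
import numpy as np

# Toy check (random substochastic chain, not the walk on Γ) that the lazy
# chain P~ = (I + P)/2 has Green's function G~ = 2G.  Algebraically,
# I - (I + P)/2 = (I - P)/2, so (I - P~)^{-1} = 2 (I - P)^{-1}.
rng = np.random.default_rng(1)
n = 6
P = rng.random((n, n))
P = 0.9 * P / P.sum(axis=1, keepdims=True)   # rows sum to 0.9: killed walk

G = np.linalg.inv(np.eye(n) - P)             # Green's function of P
P_lazy = (np.eye(n) + P) / 2
G_lazy = np.linalg.inv(np.eye(n) - P_lazy)   # Green's function of the lazy chain

assert np.allclose(G_lazy, 2 * G)
```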
Here and later on, o denotes an arbitrary fixed vertex.

We first argue that we can get a good a priori control on the transition probabilities p̃_t(x, o). Let A ⊂ Z × Z be a finite set. By ignoring the long-range edges which may leave A through the real line, and using the standard discrete isoperimetric inequality on Z² (Loomis–Whitney inequality, Theorem 6.22 in [29]), it is clear that
Σ_{x ∈ A, y ∈ A^c} w_{x,y} ≳ |A|^{1/2},
where w_{x,y} is the weight of the edge (x, y) in Γ. This means that Γ satisfies the two-dimensional isoperimetric inequality (I₂) (we here use the notation of [2]). Consequently, by Theorem 3.7, Lemma 3.9 and Theorem 3.14 of [2], Γ satisfies the two-dimensional Nash inequality (N₂). Therefore, if q^x_s(·) denotes the transition probabilities of the continuous-time walk on Γ, normalised by its invariant measure, we have by Theorem 4.3 in [2] that q^x_s(x) ≲ 1/s. Since q^x_s is maximised on the diagonal, we deduce that
p̃_s(x, o) ≲ 1/s, (3.3)
where the implied constant is uniform in x, o and s ≥ 1. It follows that the series
Σ_{t=0}^∞ ( p̃_t(x, o) − p̃_t(x′, o) ) (3.4)
converges. We couple the walks starting from x, x′ according to (3.1). Obviously, on the event {T ≤ t/2}, X_t = o if and only if X′_t = o, and thus
|p̃_t(x, o) − p̃_t(x′, o)| ≤ P(T ≥ t/2) max_y p̃_{t/2}(y, o) ≲ t^{−3/2} (log t)^a, (3.5)
which is summable, whence the series (3.4) converges.

Definition 3.1.
We set
ã(x, o) − ã(x′, o) = −Σ_{t=0}^∞ ( p̃_t(x, o) − p̃_t(x′, o) ).
By convention we define ã(o, o) = 0, and so this recipe may be used to define ã(x, o) provided that x and o are of the same class (by summing increments along a given path from x to o). (As the choice of a path from x to o does not matter before the limit in the series is taken, this is well defined.) Since x and o are arbitrary vertices of the same class, this defines ã(·, ·) everywhere on this class. If also (3.2) holds for one pair x, x′ not of the same class, then this defines ã(·, ·) over the entire graph. Note also that, due to the fact that π(x) = 1 is a constant reversible measure on Γ (hence p̃_k(x, y) = p̃_k(y, x)), the potential kernel is symmetric: ã(x, y) = ã(y, x) for any x, y. We will not, however, need this property in what follows.

In the next subsection we describe a concrete coupling which will be used for the construction of the potential kernel. We call this the coordinatewise mirror coupling; it is a variation on a classical coupling for Brownian motion in R^d. We will then use this coupling again to obtain a priori estimates on the potential kernel.

Before describing this coupling and justifying (3.2), we first state and prove a lemma which will be useful in many places in what follows and which gives a subdiffusive estimate on the walk. Let dist denote the usual ℓ¹ distance (graph distance) on Z².

Lemma 3.2.
Let x be a vertex of Γ and let T_R = inf{n ≥ 0 : dist(X_n, x) ≥ R}. Then for every c₁ > 0 there exists c₂ > 0 such that for any n ≥ 1 and any R ≥ c₁ √n log n,
P(T_R ≤ n) ≲ exp( −c₂ (log n)² ).

(The arguments in this section rely on thinking of ã(·, ·) as a function of the first variable while the second is frozen, which is why we prefer to use x for the first variable and o for the second. In the next section, both variables will start playing a more symmetric role and we will switch to x and y.)

Proof. One possibility would be to use a result of Folz [13] (based on work of Grigor'yan [16] in the continuum) which shows that an on-diagonal bound on the heat kernels p_t(x, x) and p_t(y, y) implies a Gaussian upper bound on the off-diagonal term p_t(x, y). However, it is more elementary to use the following martingale argument. We may write X_n = (u_n, v_n) in coordinate form. Since (v_n) is a lazy simple random walk on the integers, the proof is elementary for this coordinate (and of course also follows from the more complicated estimate below). We therefore concentrate on bounding Σ_{i=1}^n P(|u_i| ≥ R). We bound P(|u_i| ≥ R) for 1 ≤ i ≤ n as follows: either there is one jump larger than, say, K = (log n)² by time n (this has probability at most n exp(−c(log n)²) by a union bound and the exponential tail of the jumps), or, if all the jumps are less than K, then u coincides with a martingale ū such that all its jumps are bounded by K in absolute value: indeed, we simply replace every jump of u greater than K in absolute value by a jump of the same sign and of length K. Since the jump distribution (2.6) is symmetric, the resulting sum ū_n is again a martingale. Furthermore, ū_n is a martingale with bounded jumps. We may apply Freedman's inequality [14, Proposition (2.1)] to it, which implies (since the quadratic variation of ū at time 1 ≤ i ≤ n is bounded by b ≲ n),
P( |ū_i| ≳ √n log n ) ≲ exp( −c n (log n)² / ( (log n)² √n log n + n ) ) ≲ exp( −c′ (log n)² ). (3.6)
The result follows by summing over 1 ≤ i ≤ n.

Let x, x′ be two vertices of the graph Γ of the same class, and let X̃, X̃′ be two (lazy) effective random walks started from x and x′ respectively. In the coupling we will describe below, it will be important to first fix the vertical coordinate (stage 1). The coupling ends when we also fix the horizontal coordinate (stage 4). In between, we have two short stages (possibly instantaneous), where we make sure the class is correct (stage 2), followed by a so-called "burn-in" phase where the walks get far away from the real line in parallel (stage 3). This depends on a parameter r, which is a free choice. (When we prove (3.2), we will choose r to be slightly smaller, by logarithmic factors, than √t.)

We need to do so while respecting the natural parity (i.e., periodicity) of the coordinates we are trying to match. We will use the laziness to our advantage in order to deal with the potential issues arising from the walks not being of the same parity.

Note the following important property of P̃. At each step, the walk moves with probability 1/2. Conditionally on moving, the horizontal coordinate moves with probability 1/2, and otherwise the vertical coordinate moves (and in that case it is equally likely to go up or down by two); since we symmetrised Γ, note also that p̃(x, x + y) and p̃(x, x − y) are always equal, for all x, y ∈ Z² (i.e., the jump distribution is symmetric). We will need a fair coin C to decide which of the two Coordinates moves (if moving), and another fair coin L to decide whether the walk is Lazy or moves in this step.
Suppose that X̃_t = (u_t, v_t), X̃′_t = (u′_t, v′_t) are given. We now describe one step of the coupling. If v_t = v′_t, move to stage 2. If v_t ≠ v′_t, then we consider the following two cases. In any case, we start by tossing C. If heads, then we plan for both X̃ and X̃′ to move their horizontal coordinates, and if tails, for both their vertical coordinates.
1. Case 1: v_t − v′_t = 2 mod 4. Suppose C is tails, so the parity of the vertical coordinates has a chance to be improved. Then we toss L. Depending on the result, one walk stays put and the other moves, or vice versa (either way, the vertical coordinates are of the same parity afterwards, and will stay so forever after). If instead C was heads, so the horizontal coordinates move for both walks, then they move simultaneously or stay put simultaneously, and move independently of one another if at all.
2. Case 2: v_t − v′_t = 0 mod 4. Suppose C is tails, so the vertical coordinates have a chance to be improved or even matched. Then we toss L, and according to the result they both move simultaneously or stay put simultaneously. If moving at all, we declare the change in v_t and the change in v′_t to be opposite one another: thus, v_{t+1} = v_t ± 2 and v′_{t+1} = v′_t ∓ 2. If however C is heads (so the horizontal coordinates move), then the walks move simultaneously or stay put simultaneously, and move independently of one another if at all.
We leave it to the reader to check that this is a valid coupling (all moves are balanced and according to the transition probabilities P if moving, and altogether each walk moves or stays put with probability 1/2, as desired). As mentioned, once the parity of the vertical coordinates of the walks is matched (meaning the difference in vertical coordinates is even), it will remain matched forever.

Note also that once the vertical parity is matched (v_t − v′_t = 0 mod 4), conditionally on the vertical coordinate moving (which is then the case for both walks simultaneously), the directions of movement are opposite: in other words, the positions of the vertical coordinates v_t and v′_t throughout time and until they match are mirrors of one another, with a reflection axis which is a horizontal line L. This line can be described as having a vertical coordinate equal to the average of v_t and v′_t at the first time t that the parity of v_t and v′_t matches (note that L goes via (2Z)²). In particular, the two coordinates v_t and v′_t will match after the first hitting time T of the line L. By the end of the first stage, the two walks sit on the same horizontal line. This will remain so forever.
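The mirror mechanism of stage 1 can be sketched in isolation. The toy simulation below (vertical coordinates only, in the matched-parity case v − v′ = 0 mod 4; starting values and names are ours, not the paper's) checks that the two coordinates remain mirror images across the fixed line until they meet, and that the match is never destroyed afterwards.

```python
import random

# Sketch of the stage-1 mirror coupling for the vertical coordinates only
# (matched-parity case v - v' = 0 mod 4); a toy illustration, not the
# full four-stage coupling of the paper.
random.seed(7)
coupled_count = 0
for _ in range(200):
    v, vp = 8, -4                 # v - vp = 12 = 0 mod 4
    mid = (v + vp) // 2           # height of the reflection line L
    coupled = False
    for _ in range(10000):
        if not coupled and v == vp:
            coupled = True
        if random.random() < 0.5: # lazy step: both stay put
            continue
        step = random.choice((-2, 2))
        if coupled:
            v += step; vp += step         # after meeting: identical moves
        else:
            v += step; vp -= step         # mirror moves across L
            assert v + vp == 2 * mid      # mirror symmetry is preserved
    if not coupled and v == vp:
        coupled = True
    if coupled:
        assert v == vp            # the match is never destroyed
        coupled_count += 1

# with 10000 steps the coordinates meet in the vast majority of runs
assert coupled_count >= 150
```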
Stage 2: setting class and/or periodicity.
We now aim to match the horizontal coordinate. If also u_t = u′_t, the coupling is over and we let X̃′_{t+1} = X̃_{t+1}, chosen according to P̃(X̃_t, ·). However, the two walks might not be in the same class at that point, even if they started in the same class at the beginning of stage 1 (their class might change if one hits the real line but not the other during that stage). During stage 2, we will make sure the walks become of the same class if they were not at the beginning of that stage (amounting to u_t − u′_t even), and we will also make sure that they become of the same "parity" or "periodicity", meaning u_t − u′_t = 0 mod 4. If that is the case already at the beginning of this stage, we can immediately move on to the next stage.

Otherwise, as before, suppose that X̃_t = (u_t, v_t), X̃′_t = (u′_t, v′_t) are given, and suppose that v_t = v′_t. (In particular, v_t = 0 if and only if v′_t = 0.) We proceed as follows. As before, in any case we start by tossing C. If heads, then we plan for both X̃ and X̃′ to move their horizontal coordinates, and if tails, for both their vertical coordinates. In the latter case, we will use the same moves for both X̃ and X̃′, so we only describe what happens if the move is horizontal.
1. If u_t − u′_t is odd, and v_t = v′_t ≠ 0, then the walks move simultaneously and in parallel.
2. In all other situations, one walk will stay put while the other moves, or vice versa, depending on the outcome of L.
We make a few comments. First, note that with every visit to the real line there is a fixed positive chance to have u_t − u′_t = 0 mod 4 and hence to end this stage. Also, if u_t − u′_t is even to begin with, then there is also a fixed positive chance to end the stage right away.

Stage 3: burn-in.
In stage 3 of the coupling, we let the walks evolve in parallel (i.e., with thesame jumps) until they are at distance r from the real line. We will later choose r as a function of t (see (3.7)), which explains our comment under (3.2) that T depends on t . This is a valid choiceof coupling since they will hit the real line simultaneously. At the end of stage 2, the walks are onthe same horizontal line and of the same “periodicity” meaning that they are 0 mod 4 apart. Thiswill remain so until the end of stage 3. Stage 4: horizontal coordinate.
As before, suppose that X̃_t = (u_t, v_t), X̃′_t = (u′_t, v′_t) are given, and suppose that v_t = v′_t. (In particular, v_t = 0 if and only if v′_t = 0.) If also u_t = u′_t, we let X̃′_{t+1} = X̃_{t+1}, chosen according to P̃(X̃_t, ·). Otherwise we proceed as follows; we only describe a way of coupling the walks until hitting the real line; if coupling has not occurred before then, we say that T = ∞. As before, in any case we start by tossing C. If tails, we let both walks evolve vertically in parallel. Otherwise, the walks will move their horizontal coordinates or stay put simultaneously, depending on the result of L. If both walks move horizontally, then we let u_t and u′_t move in opposite manners, i.e., u_{t+1} − u_t = −(u′_{t+1} − u′_t). This is possible by the symmetry of the jump distribution P (even on the real line).

Again, we leave it to the reader to check that what we have described in stages 2, 3 and 4 forms a valid coupling. We note that any movement in the vertical coordinate is replicated across both walks, whatever the case, and so the match created in stage 1 is never destroyed. Note also that once the walks are 0 mod 4 apart, this remains the case until hitting the real line. Therefore the movements of the horizontal coordinates of both walks in stage 4 of this coupling will also be mirrors of one another, with the mirror being a vertical line L whose horizontal coordinate is the average of u_t and u′_t at the end of stage 3. We call T₁, . . . , T₄ the ends of the four stages respectively (with T₄ = ∞ if the walks hit the real line first).

Proof of (3.2). In order to use the above coupling to construct the potential kernel of the walk on Γ, we need to verify two points. We will consider two cases: the main one is that x and x′ are of the same class and dist(x, x′) = 2.
The other case is if x, x′ are on the real line and dist(x, x′) = 1. By Definition 3.1, these two cases allow us to define the potential kernel over the entire graph. We will focus on the first case, since it is a bit more involved than the second (which can be checked in a similar manner). We will first need to verify (3.2), which requires that the two walks coincide with high probability at time t. We will check that each stage lasts less than t/4 with high probability.

Stage 1.
We may assume without loss of generality that v₀ − v′₀ = 0 mod 4, since otherwise it takes a geometric number of attempts until that is the case. Note then that P(T₁ > t/4) is bounded by the probability that the random walk avoids the (horizontal) reflection line L of stage 1 for time t/4. As the vertical coordinate performs a lazy simple random walk on the integers (with laziness parameter 3/4), this is bounded by the probability that a random walk on the integers, started from a fixed value (or more generally from a random value with geometric tails, as discussed above), avoids 0 for at least ≳ t steps, which is ≲ 1/√t by gambler's ruin arguments (see e.g. Proposition 5.1.5 in [28]).

Stage 2.
Let k = dist(x, R) (= |v₀|). If the walks remained of the same class during the first stage, then stage 2 is over in a time which has a geometric tail, so (3.2) holds trivially. On the other hand, if they did change class during the first stage, it is necessary to hit the real line again (and then wait for an extra time with geometric tail).

At the end of stage 1, the walk is on the reflection line L, which has vertical coordinate v₀ + O(1), and so is again at distance k + O(1) from the real line. Let T_R denote the hitting time of R. Then by Proposition 5.1.5 in [28] again,
P(T_R > t/4) ≲ k/√t.
On the other hand, the probability that X̃_{T₁} and X̃′_{T₁} changed class during the first phase is bounded by ≲ 1/k, again by gambler's ruin (since it requires touching the real line before the reflection line L), and so
P(T₂ − T₁ > t/4; X̃_{T₁} ≁ X̃′_{T₁}) ≲ (1/k) × k/√t = 1/√t,
where ≁ denotes being of different classes. This implies (3.2) for T₂ − T₁.

Stage 3.
Here we will need to choose the parameter r appropriately. We will take it to be
r = √t / (log t)^b, (3.7)
where b > 1/2. Note that in r² units of time, if the walk starts in the strip S of width r around the real line, it has a positive probability, say p, of leaving S (where p does not depend on the starting point of the walk). Thus for j ≥ 1,
P(T₃ − T₂ > jr²) ≤ (1 − p)^j.
Hence P(T₃ − T₂ > t/4) ≤ exp(−ct/r²), so that since b > 1/2 this is ≲ 1/√t, as desired.

Stage 4.
To prove the corresponding bound in stage 4, we need the following lemma, which shows that (up to unimportant logarithmic terms) the horizontal displacement accumulated in the first stage has a Cauchy tail. This corresponds, of course, to the well-known fact that the density of Brownian motion when it first hits a fixed line is exactly a Cauchy distribution.
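The Cauchy behaviour referred to here is easy to observe numerically. Below is a minimal Monte Carlo sketch (illustrative only, not part of the argument; the starting height, sample size and step cap are arbitrary choices): for simple random walk on $\mathbb{Z}^2$ started at height $m$ above a horizontal line, the horizontal displacement at the first hitting of the line is approximately Cauchy with scale $m$, so $|u| \geq m$ occurs with probability close to $1/2$ — in sharp contrast with a Gaussian tail.

```python
import random

def horizontal_hit_displacement(m, max_steps=100_000, rng=random):
    """Run simple random walk on Z^2 from (0, m) until it hits the line y = 0.

    Returns the horizontal coordinate at the hitting time,
    or None if the line was not hit within max_steps.
    """
    x, y = 0, m
    for _ in range(max_steps):
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if y == 0:
            return x
    return None

def tail_frequency(m, n_samples, seed=0):
    """Fraction of samples with |displacement| >= m (a Cauchy law predicts ~1/2)."""
    rng = random.Random(seed)
    hits = [horizontal_hit_displacement(m, rng=rng) for _ in range(n_samples)]
    hits = [u for u in hits if u is not None]
    return sum(1 for u in hits if abs(u) >= m) / len(hits)
```

For instance, `tail_frequency(4, 1000)` is close to $1/2$, whereas for a displacement with Gaussian tails the corresponding frequency at this distance would already be far smaller.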
Lemma 3.3.
We have
$$\mathbb{P}\Big( \sup_{t \leq T_1} |u_t - u_0| \geq k \Big) \lesssim \frac{(\log k)^2}{k}. \qquad (3.8)$$

(The factor of $(\log k)^2$ in the right-hand side of (3.8) is not optimal, but is sufficient for our purposes.)

We now use this to derive a bound for $\mathbb{P}(T_4 - T_3 > t/4)$. At time $T_3$, the distance of the walk to the reflection line $L$ has the same tail as in Lemma 3.3 (this is because the additional discrepancy accumulated during stage 2 is easily shown to have a geometric tail). Let us condition on everything before time $T_3$, and call $k$ the distance of the walk $X$ at time $T_3$ to the reflection line. Let $T_{\mathbb{R}}$ denote the hitting time of the real line and let $T_L$ denote the hitting time of the reflection line $L$. Set $s = t/4$. We can estimate $T_L$ in terms of the usual simple random walk on the lattice (without extra jumps on the real line). Indeed, until time $T_{\mathbb{R}}$, the walk coincides with the usual lazy simple random walk on the square lattice. Writing $Q$ for the law of the latter random walk, we have
$$\mathbb{P}(T_4 - T_3 > s \mid \mathcal{F}_{T_3}) \leq \mathbb{P}(T_L > s,\ T_{\mathbb{R}} > s \mid \mathcal{F}_{T_3}) + \mathbb{P}(T_L > T_{\mathbb{R}},\ T_{\mathbb{R}} \leq s \mid \mathcal{F}_{T_3})$$
$$\leq Q(T_L > s,\ T_{\mathbb{R}} > s) + Q(T_{\mathbb{R}} < T_L) \lesssim Q(T_L > s) + \frac{k}{r} \lesssim k\Big( \frac{1}{\sqrt{t}} + \frac{1}{r} \Big).$$
To go from the second line to the third, we used that the walk starts at distance $r$ from the real line and Proposition 5.1.5 in [28]; to get the last bound we used that same result again. Taking expectations (we only use the above bound if $k \leq r$, so that the right-hand side is less than one, and we use the trivial bound $1$ for the probability on the left-hand side otherwise), we see that
$$\mathbb{P}(T_4 - T_3 > t/4) \leq \mathbb{E}\big( X \mathbf{1}_{\{X \leq r\}} \big)\Big( \frac{1}{\sqrt{t}} + \frac{1}{r} \Big) + \mathbb{P}(X > r),$$
where $X$ has a tail bounded as in Lemma 3.3. By Fubini's theorem,
$$\mathbb{P}(T_4 - T_3 > t/4) \lesssim \frac{(\log t)^{b+3}}{\sqrt{t}}, \qquad (3.9)$$
so we get (3.2) with $a = b + 3$. Since $b > 1/2$, $a > 7/2$.

Proof of Lemma 3.3.
Let $L = L_1$ be the (horizontal) reflection line. We wish to show that $\mathbb{P}(\sup_{t \leq T_L} |u_t - u_0| \geq k) \lesssim (\log k)^2/k$. Without loss of generality we assume that $v_0 < v_0'$, so that $\tilde{X}$ starts below $L$, and that $u_0 = 0$. Let $L'$ be a line parallel to $L$ and below it, at distance $A$ from it, where $A = \lfloor k/(\log k)^2 \rfloor$. Let $S$ denote the infinite strip in between these two lines. Let $T = T_L$ denote the hitting time of $L$, let $T'$ denote the hitting time of $L'$, and let $\tau = T \wedge T'$ denote the time at which the walk leaves the inside of the strip $S$. Let $T_k$ denote the first time at which $|u_t| \geq k$. Then
$$\mathbb{P}\big( \sup_{t \leq T} |u_t| \geq k \big) \leq \mathbb{P}\big( T' < T,\ \sup_{t \leq T} |u_t| \geq k \big) + \mathbb{P}\big( T' > T,\ \sup_{t \leq T} |u_t| \geq k \big) \leq \mathbb{P}(T' < T) + \mathbb{P}(T_k \leq \tau).$$
Now, the event $T' < T$ concerns only the vertical coordinate, which (ignoring the times at which it does not move, which are irrelevant here) is a simple random walk on $\mathbb{Z}$. Hence $\mathbb{P}(T' < T) \lesssim 1/A \lesssim (\log k)^2/k$ by the gambler's ruin estimate in one dimension for simple random walk.

It remains to show that $\mathbb{P}(T_k \leq \tau) = o((\log k)^2/k)$. We split the event into two events and show that both are overwhelmingly unlikely. We observe that for $T_k \leq \tau$ to occur, one of the following two events must occur: either (i) $T_k \leq n := k^2/(\log k)^2$, or (ii) $\tau > n$. Let $E_1$ be the first event and let $E_2$ be the second one. Then by Lemma 3.2,
$$\mathbb{P}(E_1) \lesssim \exp\big( -c(\log k)^2 \big) \qquad (3.10)$$
for some constant $c > 0$. As for the second event $E_2$, we note that every $A^2$ units of time there is a positive chance to leave $S$ (this is a trivial consequence of the fact that the vertical coordinate is a lazy random walk on $\mathbb{Z}$, with laziness parameter equal to $3/4$). Hence
$$\mathbb{P}(E_2) \leq \exp\Big( -c \frac{n}{A^2} \Big) = \exp\big( -c(\log k)^2 \big),$$
since $A = k/(\log k)^2$ and $n = k^2/(\log k)^2$. Thus
$$\mathbb{P}(T_k \leq \tau) \leq \mathbb{P}(E_1) + \mathbb{P}(E_2) \lesssim \exp\big( -c(\log k)^2 \big) = o(k^{-1}),$$
and (3.8) follows.

The purpose of this section is to show the following estimate. This will be useful both for proving that differences of Green's functions converge to differences of the potential kernel in the limit of the large box $G_n$, $n \to \infty$, and as an input to the proof of the scaling limit result for the height function, where such an a priori estimate is needed for the inverse Kasteleyn matrix.

Proposition 3.4.
Let $o, x, x'$ be any vertices of $\Gamma$ such that $\mathrm{dist}(o,x) = R$ and such that $x, x'$ are of the same class with $\mathrm{dist}(x,x') = 2$. Then for $R \geq 2$,
$$|\tilde{a}(x,o) - \tilde{a}(x',o)| \lesssim \frac{(\log R)^c}{R},$$
where $c = a + 2 > 0$, and $a > 0$ is as in (3.2).

Proof. Let $t_0 = R^2/(\log R)^4$. We note that for $s \leq t_0$,
$$\tilde{p}_s(x,o) \leq \mathbb{P}_x\big( d(X_s, x) \geq R \big) \leq \mathbb{P}_x\big( |u_s - u_0| \geq R/2 \big) + \mathbb{P}_x\big( |v_s - v_0| \geq R/2 \big).$$
Both terms are easily estimated. The first term is estimated by Lemma 3.2, which shows it is bounded by $\exp(-(\log t)^2)$. The same estimate holds (and is of course easier) for the vertical coordinate, since this is simply a lazy simple random walk (with laziness parameter $3/4$). The same bounds hold with $x'$ in place of $x$. Thus
$$\Big| \sum_{s=0}^{t_0} \tilde{p}_s(x,o) - \tilde{p}_s(x',o) \Big| \lesssim \exp\big( -(\log R)^2 \big). \qquad (3.11)$$
On the other hand, for $s \geq t_0$, we recall that
$$|\tilde{p}_s(x,o) - \tilde{p}_s(x',o)| \leq s^{-3/2} (\log s)^a,$$
by (3.2). Summing over $s \geq t_0$,
$$\Big| \sum_{s=t_0}^{\infty} \tilde{p}_s(x,o) - \tilde{p}_s(x',o) \Big| \lesssim t_0^{-1/2} (\log t_0)^a \lesssim R^{-1} (\log R)^{a+2}. \qquad (3.12)$$
Combining with (3.11), this finishes the proof with $c = a + 2$, as desired.

3.5 Convergence of Green's function differences to gradient of potential kernel

Let $B_R = B(0,R)$. Define the unnormalised Green's function
$$\tilde{G}_R(x,o) = \mathbb{E}_x\Big( \sum_{n=0}^{\infty} \mathbf{1}\{\tilde{X}_n = o,\ \tau_R > n\} \Big),$$
where $\tau_R$ is the first time that the (lazy) walk $\tilde{X}$ leaves $B_R$. We will prove the following proposition:

Proposition 3.5. As $R \to \infty$, for any fixed $x, x'$ of the same class, and any fixed $o$,
$$\tilde{G}_R(x,o) - \tilde{G}_R(x',o) \to -\big( \tilde{a}(x,o) - \tilde{a}(x',o) \big).$$
As a consequence, the same convergence is true also for the nonlazy walk $X$ instead of $\tilde{X}$.

The proof is based on ideas similar to Proposition 4.6.3 in [28]. We first recall the following lemma (which in the case of a finite range irreducible symmetric random walk would be Proposition 4.6.2 in [28]):
Lemma 3.6.
For any $x, o$, we have $\tilde{G}_R(x,o) = \mathbb{E}_x\big( \tilde{a}(\tilde{X}_{\tau_R}, o) \big) - \tilde{a}(x,o)$.
The proof is simply an application of the optional stopping theorem to the martingale $M_n = \tilde{a}(\tilde{X}_n, o) - L^{\tilde{X}}_n(o)$, where $L^{\tilde{X}}_n(o) = \sum_{m=0}^{n} \mathbf{1}\{\tilde{X}_m = o\}$ denotes the local time of $\tilde{X}$ at $o$ by time $n$. Optional stopping is first applied at the bounded time $\tau_R \wedge n$. The limit as $n \to \infty$ can then be taken by dominated convergence for the first term and monotone convergence for the second. In fact, for the application of the dominated convergence theorem, one must be a little more careful than with simple random walk, since when leaving $B_R$ there is an unbounded set of possibilities for $\tilde{X}_{\tau_R}$. However, the jump probabilities decay exponentially and $\tilde{a}(x,o)$ grows at most like $(\log|x-o|)^{c+1}$ as $x \to \infty$ by Proposition 3.4 (note here that $x, x'$ and $o$ are fixed while $R \to \infty$). This justifies the application of dominated convergence. We give full details of this argument for the sake of completeness.

To this end, note that $|\tilde{X}_{n \wedge \tau_R}| \leq |\tilde{X}_{\tau_R}|$ almost surely. Let $B'_R \subseteq B_R$ be the set of vertices connected by an edge to the outside of $B_R$, and let $\tau'_R$ be the first hitting time of $B'_R$. By the strong Markov property we get
$$\mathbb{E}_x\big( |\tilde{X}_{\tau_R}| \big) = \sum_{z \in B'_R} \mathbb{P}_x\big( \tilde{X}_{\tau'_R} = z \big)\, \mathbb{E}_z\big( |\tilde{X}_{\tau_R}| \big)$$
and
$$\mathbb{E}_z\big( |\tilde{X}_{\tau_R}| \big) \leq \mathbb{P}_z(\tau_R = 1)\, \mathbb{E}_z\big( |\tilde{X}_{\tau_R}| \,\big|\, \tau_R = 1 \big) + \mathbb{P}_z(\tau_R > 1) \max_{w \in B_R} \mathbb{E}_w\big( |\tilde{X}_{\tau_R}| \big).$$
Plugging the latter into the former and taking the maximum over $z \in B'_R$, we obtain for all $x \in B_R$,
$$\mathbb{E}_x\big( |\tilde{X}_{\tau_R}| \big) \leq \max_{z \in B'_R} \mathbb{E}_z\big( |\tilde{X}_{\tau_R}| \,\big|\, \tau_R = 1 \big) + \max_{z \in B'_R} \mathbb{P}_z(\tau_R > 1) \max_{w \in B_R} \mathbb{E}_w\big( |\tilde{X}_{\tau_R}| \big).$$
Taking the maximum over $x \in B_R$ and rearranging, we arrive at
$$\mathbb{E}_x\big( |\tilde{X}_{\tau_R}| \big) \leq \frac{1}{1 - \max_{z \in B'_R} \mathbb{P}_z(\tau_R > 1)} \max_{z \in B'_R} \mathbb{E}_z\big( |\tilde{X}_{\tau_R}| \,\big|\, \tau_R = 1 \big).$$
The quantities on the right-hand side are clearly finite, due to the exponentially decaying probabilities for the jumps of $\tilde{X}$ and the fact that $z \in B'_R$. Moreover, the maxima are taken over a finite set. This, together with the fact that $\tilde{a}(x,o) \lesssim (\log|x-o|)^{c+1} \lesssim |x| + |o|$ as $x \to \infty$, completes the proof.

Proof of Proposition 3.5.
By Lemma 3.6, we have
$$\tilde{G}_R(x,o) - \tilde{G}_R(x',o) = -\big( \tilde{a}(x,o) - \tilde{a}(x',o) \big) + \mathbb{E}_x\big( \tilde{a}(\tilde{X}_{\tau_R}, o) \big) - \mathbb{E}_{x'}\big( \tilde{a}(\tilde{X}_{\tau_R}, o) \big),$$
so it suffices to prove that
$$\mathbb{E}_x\big( \tilde{a}(\tilde{X}_{\tau_R}, o) \big) - \mathbb{E}_{x'}\big( \tilde{a}(\tilde{X}_{\tau_R}, o) \big) \to 0 \qquad (3.13)$$
as $R \to \infty$. This will follow rather simply from our coupling arguments, where we choose the parameter $r$ in stage 3 of the coupling to be $R/(\log R)^2$.

Reasoning as in (3.9), we see that
$$\mathbb{P}_x(\tau_R < T_4) \lesssim \frac{(\log R)^d}{R} \qquad (3.14)$$
for some $d > 0$, as $R \to \infty$ while $x, x'$ are fixed and of the same class. Since we already know from Proposition 3.4 that $\tilde{a}(x,o)$ grows at most like $(\log|x-o|)^{c+1}$, (3.14) implies that the difference of expectations on the left-hand side of (3.13) is at most $O((\log R)^{c+d+1}/R)$, and so tends to zero as $R \to \infty$.

We will now consider random walks which are killed on a portion of the boundary of a large box $\Lambda_R$ but may have different (e.g., reflecting) boundary conditions on other portions of the boundary. We will show that the same result as Proposition 3.5 holds provided that the Dirichlet boundary conditions are, roughly speaking, macroscopic. More precisely, let $\Lambda_R \subset \mathbb{Z}^2$ be such that $B(0,R) \subset \Lambda_R$. Let $\partial \Lambda_R$ denote its (inner) vertex boundary, and let $\partial_D \Lambda_R$ denote a subset of $\partial \Lambda_R$. Suppose that $\tilde{X}^{\Lambda}$ is a (lazy) random walk with transitions given by $\tilde{p}(x,y)$ if $x, y \in \Lambda_R$, and suppose that the walk is absorbed on $\partial_D \Lambda_R$. We suppose that $\partial_D \Lambda_R$ is such that for every vertex $x$ in $\Lambda_R$, $\partial_D \Lambda_R$ contains a straight line segment of length $\alpha R$ at distance at most $\alpha^{-1} R$ from $x$, where $\alpha > 0$ is fixed. This assumption is satisfied by the domains $G_n$ we consider in Theorem 1.2 (treating blue and yellow vertices separately). Indeed, the (approximate rectangles) $G_n$ are constructed in such a way that both the odd and even effective bulk random walks are killed on half of the upper side of $G_n$. We do not specify the transition probabilities for $\tilde{X}^{\Lambda}$ when it is on $\partial \Lambda_R \setminus \partial_D \Lambda_R$.
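The identity of Lemma 3.6 and the mechanism behind Proposition 3.5 can be checked directly in the simplest possible setting (a sketch, not part of the argument): for simple random walk on $\mathbb{Z}$, the potential kernel is $a(x) = |x|$, and since the walk exits the interval $(-R, R)$ exactly at $\pm R$, the identity $G_R(x, 0) = \mathbb{E}_x\big(a(X_{\tau_R})\big) - a(x)$ reads $G_R(x,0) = R - |x|$. The sketch below computes the Green's function by evolving the law of the absorbed walk and summing the mass at $0$:

```python
def green_interval(x0, R, steps=20_000):
    """Expected number of visits to 0 (including time 0) by simple random
    walk on Z started at x0, before exiting (-R, R); computed by evolving
    the law of the absorbed walk and summing the probability mass at 0."""
    # p[i] = probability of being alive at site i - R, for i = 0..2R
    p = [0.0] * (2 * R + 1)
    p[x0 + R] = 1.0
    visits = p[R]  # visit at time 0 (nonzero only if x0 == 0)
    for _ in range(steps):
        q = [0.0] * (2 * R + 1)
        for i in range(1, 2 * R):  # interior sites only
            if p[i]:
                q[i - 1] += 0.5 * p[i]
                q[i + 1] += 0.5 * p[i]
        q[0] = q[2 * R] = 0.0  # absorption at -R and R
        p = q
        visits += p[R]
    return visits
```

For instance, `green_interval(3, 10)` returns (up to a truncation error that is astronomically small here) the value $R - |x| = 7$, matching $\mathbb{E}_x(a(X_{\tau_R})) - a(x) = R - 3$.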
Let $\tilde{G}_{\Lambda_R}(x,o) = \mathbb{E}_x\big( \sum_{n=0}^{\infty} \mathbf{1}\{\tilde{X}^{\Lambda_R}_n = o\} \big)$ denote the corresponding unnormalised Green's function.

Proposition 3.7. As $R \to \infty$, for any fixed $x, x'$ of the same class and any fixed $o$,
$$\tilde{G}_{\Lambda_R}(x,o) - \tilde{G}_{\Lambda_R}(x',o) \to -\big( \tilde{a}(x,o) - \tilde{a}(x',o) \big).$$
As a consequence, the same convergence is true also for the nonlazy walk $X$ instead of $\tilde{X}$.

Proof. For this proof we will need the following lemma, which says that from any point there is a good chance to hit the boundary without returning to that point, whence the expected number of visits to the point before hitting the boundary is small.
Lemma 3.8.
There exists a universal constant such that the following holds for all $k \geq 2$ and every vertex $o$ of $\Gamma$. Let $L$ be a lattice line at distance $k$ from $o$ and of the same class as $o$. Then
$$\mathbb{P}_o\big( T_L < T^+_o \big) \gtrsim (\log k)^{-1}, \qquad (3.15)$$
where $T_L$ is the hitting time of $L$ and $T^+_o$ is the return time to $o$.

Proof of Lemma 3.8. We start by noticing that, up to a factor equal to the total conductance at $o$, the probability on the left-hand side is equal to the effective conductance (the inverse of the effective resistance $R_{\mathrm{eff}}(o; L)$) between $o$ and $L$. Since the total conductance at $o$ is bounded away from $0$ and $\infty$, it suffices to show that
$$R_{\mathrm{eff}}(o; L) \lesssim \log k.$$
This can either be proved directly or by comparison with the analogous estimate on $\mathbb{Z}^2$ through Rayleigh's monotonicity principle (see Chapter II of [29]). A direct proof is to construct a unit flow $\theta$ from $o$ to $L$ and estimate its Dirichlet energy $\mathcal{E}(\theta) = \sum_e \theta(e)^2\, \mathrm{res}(e)$, where $\mathrm{res}(e)$ denotes the resistance of $e$. Such a unit flow can be constructed by the method of random paths, as discussed in (2.17) of [29]: we consider a cone of fixed aperture whose apex is at $o$ and which intersects $L$, then choose a random ray in that cone starting at $o$, whose angle is uniformly selected among the set of possibilities. We get a directed lattice path $\pi$ from $o$ to $L$ by selecting a lattice path staying as close as possible to this random ray (with ties broken in some arbitrary way), staying on the same sublattice as $o$ and $L$. Note that this path never uses the long-range edges along the real line, and in fact jumps only by $\pm e_i$, $i = 1, 2$, at any given step. A unit flow $\theta$ from $o$ to $L$ is obtained by setting $\theta(e) = \mathbb{P}(e \in \pi) - \mathbb{P}(-e \in \pi)$ (where $-e$ denotes the reverse of the edge $e$). Then if $e$ is at distance $j$ from $o$,
$$|\theta(e)| \leq \mathbb{P}(e \in \pi) + \mathbb{P}(-e \in \pi) \lesssim \frac{1}{j},$$
since there are $O(j)$ edges at distance $j$. Hence
$$\mathcal{E}(\theta) \leq \sum_{j=1}^{k} O(j) \cdot \frac{1}{j^2} \lesssim \log k.$$
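The last display is a harmonic-type sum, and its logarithmic growth is elementary; as a trivial numeric illustration (with the constant hidden in the $O(j)$ set to $1$, an arbitrary choice), the partial sums $\sum_{j \leq k} j \cdot j^{-2}$ track $\log k$ up to the Euler–Mascheroni constant, with equal increments per multiplicative decade:

```python
import math

def flow_energy_bound(k, c=1.0):
    """Partial sum sum_{j=1}^k (c*j) * (1/j)**2 -- the schematic bound on the
    Dirichlet energy of the random-path flow, with c standing in for the O(j)
    constant (chosen arbitrarily here)."""
    return sum((c * j) * (1.0 / j) ** 2 for j in range(1, k + 1))
```

With $c = 1$ this is the harmonic number $H_k \approx \log k + \gamma$, so the energy bound, and hence the effective resistance, grows only logarithmically in the distance $k$.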
Since the effective resistance is smaller than the energy of any unit flow from $o$ to $L$, we get the desired bound.

Now let us return to the proof of Proposition 3.7. We apply the full plane coordinatewise mirror coupling of Proposition 3.5 (that is, with the parameter $r$ chosen to be $R/(\log R)^2$), until the time $S_R$ at which one of the walks leaves the ball $B_R = B_R(0)$. If the walks have not coupled before $S_R$, we consider this a failure and do not try to couple them afterwards: we let them evolve independently.

Then note that (we write $\tilde{X}$ for $\tilde{X}^{\Lambda_R}$ for simplicity)
$$\tilde{G}_{\Lambda_R}(x,o) - \tilde{G}_{\Lambda_R}(x',o) = \mathbb{E}_x\big( L^{\tilde{X}}_{S_R}(o) \big) - \mathbb{E}_{x'}\big( L^{\tilde{X}'}_{S_R}(o) \big) \qquad (3.16)$$
$$\qquad\qquad + \mathbb{E}\big( L^{\tilde{X}}_{(S_R,\infty)}(o) - L^{\tilde{X}'}_{(S_R,\infty)}(o) \big). \qquad (3.17)$$
Note that the term in (3.16) converges to $-(\tilde{a}(x,o) - \tilde{a}(x',o))$ by Proposition 3.5. So it suffices to show that the term in (3.17) converges to zero. However, this is an easy consequence of the following facts:

• If the coupling was successful before $S_R$, then the random variable in the expectation of (3.17) is zero.

• The probability that the coupling has failed (i.e., that the walks did not meet before leaving $B_R$) is $\lesssim (\log R)^d/R$, by (3.14).

• Conditionally on not having coupled by time $S_R$, the expected number of visits to $o$ after that time is $\lesssim \log R$, by Lemma 3.8 and by the assumption on the Dirichlet part $\partial_D \Lambda_R$. (In fact, the lemma is stated for hitting an infinite line, but it is easily checked that the argument shows that a segment of macroscopic size is hit with the stated probability.)

This completes the proof.

We apply this to the bulk effective random walk of Section 2.6. This yields the following corollary, which also concludes the proof of Theorem 1.2.

Corollary 3.9.
Let $D_n$ be an increasing sequence of domains such that $\cup_n D_n = \mathbb{Z}^2 \cap \mathbb{H}$, as in Theorem 1.2. Consider the free boundary dimer model on $D_n$ with weights as described in Corollary 2.7. Then the inverse Kasteleyn matrix converges pointwise as $n \to \infty$ to a matrix indexed by the vertices of $\mathbb{Z}^2 \cap \mathbb{H}$, called the coupling function, and given in matrix notation by $C = -AK^*$, where
$$A(u,v) = \frac{1}{2 D(v,v)}\, \tilde{a}(u,v) = \frac{1}{D(v,v)}\, a(u,v)$$
is the normalised potential kernel associated with the effective (odd and even) bulk (nonlazy) random walks.

In particular, $\mu_n$ converges weakly as $n \to \infty$ to a law $\mu$ which describes a.s. a random monomer-dimer configuration on $\mathbb{Z}^2 \cap \mathbb{H}$.

Proof. The first part of the statement follows from the random walk representation of $K^{-1}$ in finite volume from Corollary 2.7, the interpretation of $K^*$ as a difference operator, and the convergence of differences of Green's functions of the bulk effective walk from Proposition 3.7.

The convergence in law is a standard application of Kasteleyn theory. Indeed, it follows from the fact that the local statistics of $\mu_n$ are described by local functions of the inverse Kasteleyn matrix (which we recall, for instance, in Theorem 5.3). It is also clear that $\mu$ is supported on monomer-dimer configurations on $\mathbb{Z}^2 \cap \mathbb{H}$.

Let $x, y \in \bar{G}^\delta := (\delta\mathbb{Z})^2$. The purpose of this section is to prove a scaling limit for the discrete derivatives $\tilde{a}(x,y) - \tilde{a}(x',y)$ of the potential kernel associated to the lazy (odd) effective random walk. (Contrary to the previous section, the second variable will more typically be called $y$ rather than $o$ in this section.) As mentioned at the beginning of Section 3, the same result holds for both the even and odd walk, but for convenience (and also because this is a slightly more complicated case) we write our proofs in the odd case.

Theorem 4.1.
Let $x' = x \pm \delta e_i \in \bar{G}^\delta$, $i = 1, 2$, and let $y \in \bar{G}^\delta$. Suppose $\Im(x)\Im(y) \geq 0$ and $\min(|\Im(x)|, |\Im(y)|) \geq \rho$ for some fixed $\rho > 0$. Then there exists $\varepsilon > 0$ such that as the mesh size $\delta \to 0$, uniformly over such points $x, y$,
$$\tilde{a}(x',y) - \tilde{a}(x,y) = \begin{cases} \dfrac{2}{\pi} \Re\Big( \dfrac{x'-x}{x-\bar{y}} \Big) + o(\delta^{1+\varepsilon}) & \text{if } x, y \text{ are of different class,} \\[6pt] \dfrac{4}{\pi} \Re\Big( \dfrac{x'-x}{x-y} \Big) - \dfrac{2}{\pi} \Re\Big( \dfrac{x'-x}{x-\bar{y}} \Big) + o(\delta^{1+\varepsilon}) + O\Big( \dfrac{\delta^2}{|x-y|^2} \Big) & \text{if } x, y \text{ are of the same class.} \end{cases} \qquad (4.1)$$

To prove this theorem, we will first show that the potential kernel can be compared to that of a coloured random walk on the lattice. The coloured random walk is a lazy simple random walk on the lattice $(2\delta\mathbb{Z})^2$ which carries a black or white colour (in addition to its position). Its position moves like simple random walk on the lattice. It changes colour with some fixed probability $p \in (0,1)$ each time it touches the real line, independently of the rest, and otherwise the colour remains constant. If $X$ is a coloured random walk, we will use $\sigma(X_s)$ to denote the colour of the coloured walk $X$ at time $s$ (again, this is different from the class of the vertex $X_s$): thus, we will write $\sigma(X_s) = \bullet$ if $X$ is black at time $s$, and $\sigma(X_s) = \circ$ if $X$ is white at time $s$. Although $X_s$ consists both of a position $x \in (2\delta\mathbb{Z})^2$ and a colour, we will sometimes, with an abuse of notation, refer to $X_s$ as only a position.

Remark 4.2.
We warn the reader that this colour should not be confused with the black/white colouring (which we call class, precisely to avoid confusion) of the vertices of our graph $\bar{G}^\delta$: indeed, the position of the coloured walk is in $(2\delta\mathbb{Z})^2$, and so its "class" in $\bar{G}^\delta$ remains constant.

Note that $x$ and $x'$ are necessarily of the same class (hence the same colour). However, $y$ may be of a different colour. We will choose $p$ to correspond to the probability that the odd effective walk makes a jump of odd length when it touches the real line: thus,
$$p = \sum_{k \in \mathbb{Z}} q_{\infty, (2k+1)e_1}, \qquad (4.2)$$
where $q_\infty$ is as in (2.9).

We will prove the following two results. Let $y, x \in (2\delta\mathbb{Z})^2$ and choose a colour among $\{\circ, \bullet\}$, say $\bullet$. Let $\tilde{a}_\bullet(x,y)$ denote the potential kernel of the coloured random walk, constructed as in Definition 2.8 but only counting visits to $y$ with the predetermined colour $\bullet$: that is,
$$\tilde{a}_\bullet(x,y) = \sum_{s=1}^{\infty} \mathbb{P}\big( X_s = y;\ \sigma(X_s) = \bullet \big),$$
where $X$ is a coloured walk starting from $x$ with initial colour $\bullet$. The fact that the series defining $\tilde{a}_\bullet$ converges is an immediate consequence of the arguments in Section 3.3, which apply much more directly here.

The first result below shows that the potential kernels of the lazy effective walk and of the coloured walk are quite close to one another, in the sense that the difference of their discrete derivatives is of lower order than $\delta$, our target for Theorem 4.1. In the next statement we write $y \not\sim x$ to denote that $x$ and $y$ are of different class.

Proposition 4.3. Fix $\rho > 0$. Let $x, y \in G^\delta$, and let $z = x + \delta\, \mathbf{1}_{\{y \not\sim x\}}$ (resp. $z' = x' + \delta\, \mathbf{1}_{\{y \not\sim x\}}$), so that $z$ and $z'$ are of the same class as $y$. Let us write $\nabla_x f(x)$ for $f(x') - f(x)$ (resp. $\nabla_z f(z) = f(z') - f(z)$). Then there exists $\varepsilon > 0$ such that as $\delta \to 0$,
$$\big| \nabla_x \tilde{a}(x, \bar{y}) - \nabla_z \tilde{a}_\bullet(z, \bar{y}) \big| \lesssim \delta^{1+\varepsilon},$$
uniformly over $x, y$ with $\min(\Im(x), \Im(y)) \geq \rho$.
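A feature of the coloured walk that drives the comparison below is that its colour mixes: each touch of the real line flips the colour with probability $p$, so after $n$ touches the probability of holding the initial colour is $\frac{1}{2}\big(1 + (1-2p)^n\big)$, which converges to $1/2$ geometrically fast for any $p \in (0,1)$. A minimal sketch of this elementary computation (illustrative only; the value of $p$ below is arbitrary):

```python
def prob_same_colour(p, n):
    """Probability that the colour after n independent probability-p flips
    equals the initial colour, computed by the one-step recursion."""
    q = 1.0  # probability of currently holding the initial colour
    for _ in range(n):
        q = q * (1 - p) + (1 - q) * p
    return q

def prob_same_colour_closed(p, n):
    """Closed form: (1 + (1 - 2p)^n) / 2."""
    return 0.5 * (1.0 + (1.0 - 2.0 * p) ** n)
```

The recursion has fixed point $1/2$, and the deviation from $1/2$ is multiplied by $(1 - 2p)$ at each touch — this is the geometric decorrelation of the colour from its starting value.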
The next proposition says that the potential kernel of the coloured walk is close to $1/2$ times the potential kernel of the walk without colours, uniformly over the choice of colour $\bullet$ or $\circ$. Moreover, in the above setting the walk is forced to touch the real line in order to go from $x$ to $\bar{y}$. Let $\tilde{b}(x,y) = \tilde{b}(x-y)$ denote the potential kernel of lazy simple random walk on $(2\delta\mathbb{Z})^2$.

Proposition 4.4.
In the same setting as Proposition 4.3,
$$\Big| \nabla_z \tilde{a}_\bullet(z, \bar{y}) - \tfrac{1}{2} \nabla_z \tilde{b}(z, \bar{y}) \Big| \lesssim \delta^{1+\varepsilon},$$
for some $\varepsilon > 0$.

Proof of Theorem 4.1 given Proposition 4.3 and Proposition 4.4. It is enough to combine Propositions 4.3 and 4.4 with known estimates on the potential kernel of two-dimensional simple random walk.

Let us give a few details. Suppose first that we are in the case where $x, y$ are of different class. This means that only walks going through the boundary can contribute to the potential kernel. By reflection symmetry, the walks from $x$ to $y$ going through the boundary have the same weight as the walks from $x$ to $\bar{y}$. In the full plane, for simple random walk (see e.g. Theorem 4.4.4 in [28]), the potential kernel has the form
$$b(z, 0) = \frac{2}{\pi} \log|z| + C + O(|z|^{-2})$$
for some constant $C > 0$, as $z \to \infty$. Let us rescale the lattice so that it becomes $\delta\mathbb{Z}^2$, and let us adopt complex notation, so $\log|x| = \Re(\log x)$, and let $h = x' - x = \pm\delta e_i$. Then
$$b(x, \bar{y}) - b(x', \bar{y}) = \frac{2}{\pi} \Re\big( \log(x - \bar{y} + h) - \log(x - \bar{y}) \big) + o(\delta^{1+\varepsilon}) = \frac{2}{\pi} \Re\Big( \frac{h}{x - \bar{y}} \Big) + o(\delta^{1+\varepsilon}). \qquad (4.3)$$
Now, multiplying by $2$ to account for laziness, and by $1/2$ to account for the comparison with the coloured walk in Proposition 4.4, we obtain the first line of (4.1).

When $x$ and $y$ are of the same class, there are two types of effective random walks to consider: the effective random walks going from $x$ to $y$ in the full plane without touching the boundary (type I), and those which do touch the boundary (type II). The effective random walks of type I can be written as all simple random walks going from $x$ to $y$ in the plane (type III) minus the simple random walks going from $x$ to $y$ through the boundary (type IV). By Propositions 4.3 and 4.4, the walks of type IV contribute roughly twice as much as those of type II. So we have to count walks of type III minus those of type II. Those of type III contribute $\frac{4}{\pi}\Re\big( \frac{x'-x}{x-y} \big) + O(\delta^2/|x-y|^2)$ to the gradient of the potential kernel (the factor in front is twice that of (4.3) due to laziness; the error term comes from Corollary 4.4.5 in [28]). The contribution of type II, on the other hand, is exactly counted by the first line of (4.1). This proves Theorem 4.1.

Now we derive the version which is useful later, which involves folding the plane onto itself so that the walk is reflected on the real line, and is not lazy.

Corollary 4.5.
Let us assume that $x' = x \pm \delta e_i \in \delta\mathbb{Z}^2 \cap \mathbb{H}$, $i = 1, 2$, and let $y \in \delta\mathbb{Z}^2 \cap \mathbb{H}$. Then there exists $\varepsilon > 0$ such that as the mesh size $\delta \to 0$, uniformly over points $x, y$ such that $\min(\Im(x), \Im(y)) \geq \rho > 0$,
$$a(x',y) - a(x,y) = \begin{cases} \dfrac{2}{\pi} \Re\Big( \dfrac{x'-x}{x-\bar{y}} \Big) + o(\delta^{1+\varepsilon}) & \text{if } x, y \text{ are of different class,} \\[6pt] \dfrac{2}{\pi} \Re\Big( \dfrac{x'-x}{x-y} \Big) + o(\delta^{1+\varepsilon}) + O\Big( \dfrac{\delta^2}{|x-y|^2} \Big) & \text{if } x, y \text{ are of the same class.} \end{cases} \qquad (4.4)$$

Proof of Corollary 4.5 given Theorem 4.1. As before, the first case (when $x, y$ are of different class) is the easiest to compute. Since the walk is now nonlazy, we need to multiply the values of the potential kernel by $1/2$, but we also add the walks from $x$ to $\bar{y}$; both are counted by the same formula in the first line of (4.1), and so the factor remains $2/\pi$ overall.

In the second case, when $x, y$ are of the same class, we note that the contribution to (4.1) of the walks not touching the boundary is, as observed above, $\frac{4}{\pi}\Re\big( \frac{x'-x}{x-y} \big)$, together with a term coming from the walks touching the boundary (type II). On the other hand, when we do the folding, we must add the walks that touch the boundary and go from $x$ to $y$ to those going from $x$ to $\bar{y}$. This gives us one extra group of walks of type II, and so these terms cancel. Multiplying by $1/2$ to account for laziness, we obtain the second line of (4.4).

We will prove Proposition 4.3 by coupling. We will need to compare $\nabla_x \tilde{p}_t(x,o)$ and $\nabla_z \tilde{p}^\bullet_t(z,o)$, where $\tilde{p}^\bullet_t(z,o) = \mathbb{P}_z(X_t = o,\ \sigma(X_t) = \bullet)$ for the coloured walk, where we take $o = \bar{y}$, and $z$ is a vertex chosen as in Proposition 4.3. We will see that by coupling our effective walks with coloured walks we gain an order of magnitude compared with (3.2): that is, we will show that
$$\big| \nabla_x \tilde{p}_t(x,o) - \nabla_z \tilde{p}^\bullet_t(z,o) \big| \leq t^{-3/2-\varepsilon}, \qquad t \leq \delta^{-2-\varepsilon}, \qquad (4.5)$$
for some $\varepsilon > 0$. Given (4.5), reasoning as in the proof of Proposition 3.4 (with $R = \delta^{-1}$, and using the improved bound (4.5) instead of (3.12) in the range up to $t = \delta^{-2-\varepsilon}$), we immediately deduce Proposition 4.3.

We will couple the effective walk $X$ and a coloured walk $Z$ as follows; as in the previous section, we work with lazy versions. The coupling will be similar to the one in Section 3.3, but it is simpler since we are allowed to choose the starting point of $Z$. We will choose $z$ so that $X$ and $Z$ start immediately on the same horizontal line; as in the previous coupling, this property will be preserved forever under the coupling (so essentially only the last stage, stage 4, needs to be described). More precisely, we set $z = x$ if $x$ and $\bar{y}$ are of the same class, and $z = x + \delta$ otherwise. In any case $Z$ will always be of the same class as $o$. Until hitting the real line, we take $X$ and $Z$ to evolve in parallel, with equal jumps. After hitting the real line, we may arrange the coupling so that they are always on the same horizontal line, by always first tossing the Coordinate coin, so that any movement in the vertical coordinate is replicated for both walks no matter what. Beyond the Coordinate and Laziness coins, we will need a third coin, which we use to indicate changes of sublattice (for $X$) and of colour (for $Z$). This coin is only used when the walks are on the real line and a horizontal movement is to take place. We call this coin Parity. Unlike the other two coins, Parity comes up heads with the fixed probability $p \in (0,1)$ from (4.2), which in general is not $1/2$.

We now describe the evolution when the Coordinate coin indicates a horizontal movement. To describe this, we need to introduce the following stopping times. Let $\sigma_1 = \inf\{t \geq 0 : X_t \in \mathbb{R}\}$ denote the hitting time of $\mathbb{R}$ by $X$ (or equivalently by $Z$), and let $\tau_1 = \inf\{t \geq \sigma_1 : \Im(X_t) \leq \Im(\bar{y})/2\}$ be the hitting time of the line $\Delta = \{z \in \mathbb{C} : \Im(z) = \lfloor \Im(\bar{y})/2 \rfloor\}$ by $X$ (or equivalently $Z$, since $X$ and $Z$ are always on the same horizontal line). Then define $\sigma_n, \tau_n$ inductively as follows:
$$\sigma_n = \inf\{t \geq \tau_{n-1} : X_t \in \mathbb{R}\}; \qquad \tau_n = \inf\{t \geq \sigma_n : X_t \in \Delta\}.$$
Write $X_t = (u_t, v_t)$ and $Z_t = (u'_t, v'_t)$, with $v_t = v'_t$ as explained above.

• If $X_t, Z_t \in \mathbb{R}$: toss the Parity coin. If it comes up heads, let $X_t$ take a jump from its conditional distribution given that the jump is odd, and let $Z_t$ change colour and make an independent jump. If it comes up tails, let $X_t$ take a jump from its conditional distribution given that the jump is even, and let $Z_t$ keep its current colour and make an independent jump.

• Now suppose $X_t, Z_t \notin \mathbb{R}$, and $X_t, \bar{y}$ are of different class. Then let $X_t$ and $Z_t$ evolve in parallel (with equal jumps). This will remain so until they hit the real line again, where there will be a chance to change class again.

• Finally, suppose $X_t, Z_t \notin \mathbb{R}$, and $X_t, \bar{y}$ are of the same class, and thus also of the same class as $Z_t$. In that case, the evolution depends on whether $t \in [\sigma_n, \tau_n]$ for some $n \geq 1$, or $t \in (\tau_n, \sigma_{n+1})$ for some $n$. If $t \in [\sigma_n, \tau_n]$, then the walks evolve in parallel. Otherwise, we use the Laziness coin to first ensure that $u_t - u'_t = 0 \bmod 4\delta$ after a number of steps which has a geometric tail. Once that is the case, we let $u_t$ and $u'_t$ evolve in mirror from one another, so $(u_{t+1} - u_t) = -(u'_{t+1} - u'_t)$.

In general the walks get further from each other during a phase of the form $[\sigma_n, \tau_n]$, but get closer together again during the phase $[\tau_n, \sigma_{n+1}]$.
Note that a visit to $o$ necessarily occurs during such a phase. In fact, we will see that typically the walks agree (if they are on the same sublattice) by the time they reach $2\Delta$ or return to $\mathbb{R}$. Furthermore, only a small number of phases need to be considered if $t \leq \delta^{-2-\varepsilon}$ (of order at most $\delta^{-\varepsilon}$). Let us say that a non-coupled visit to $o$ occurs at time $t$ if $\{X_t = o\} \,\triangle\, \{Z_t = o,\ \sigma(Z_t) = \bullet\}$ occurs (where $\triangle$ denotes the symmetric difference).

The coupling between $X$ and $X'$ on the one hand, and between $X$ and $Z$ on the other hand, induces a coupling between four processes: $X, X'$ (effective walks starting from $x, x'$) and $Z, Z'$ (coloured walks starting from $z, z'$). Here we take $z' - z = x' - x$, as in the statement of Proposition 4.3. The difference between the gradients of the transition probabilities can be written as an expectation:
$$\nabla_x \tilde{p}_t(x,o) - \nabla_z \tilde{p}^\bullet_t(z,o) = \mathbb{E}\big( \mathbf{1}_{\{X_t = o\}} - \mathbf{1}_{\{X'_t = o\}} - \mathbf{1}_{\{Z_t = o;\, \sigma(Z_t) = \bullet\}} + \mathbf{1}_{\{Z'_t = o;\, \sigma(Z'_t) = \bullet\}} \big). \qquad (4.6)$$
To get a nonzero contribution, it is necessary that $X$ did not couple with $X'$ by time $T_{\mathbb{R}} \wedge t/2$, or that $Z$ did not couple with $Z'$ by time $t/2$. Both events have probability $\lesssim (\log t)^a/t^{1/2}$, by a slight modification of (3.2) (in fact, since the walks start far from the real line, the proof is much simpler than the one given in Section 3.3, and follows directly from gambler's ruin). Furthermore, given this, it is also necessary that a non-coupled visit to $o$ occurs at time $t$, by $(X, Z)$ or by $(X', Z')$. To estimate the latter conditional probability, we may condition on everything which happened until time $T_{\mathbb{R}} \wedge t/2$, and we will call $s$ the remaining amount of time until time $t$, i.e., $s = t - (T_{\mathbb{R}} \wedge t/2) \in [t/2, t]$, so $s \asymp t$. Since at that time the walks have not yet touched the real line, the discrepancy between $X$ and $Z$ is equal to the initial discrepancy $z - x \in \{0, \delta e_1\}$.

Lemma 4.6.
Suppose $s \leq \delta^{-2-\varepsilon}$. Let $N_s = \max\{k : \tau_k \leq s\}$. Then there exist $c_1, c_2 > 0$ such that
$$\mathbb{P}\big( N_s \geq c_1 \delta^{-\varepsilon} \big) \leq \exp\big( -c_2 \delta^{-\varepsilon} \big).$$

Proof. Each journey between $\mathbb{R}$ and $\Delta$ and back may take more than $\delta^{-2}$ units of time with some fixed positive probability $p_0$, independently of the others. Hence the probability in the lemma is bounded by the probability that a binomial random variable with parameters $c_1 \delta^{-\varepsilon}$ and $p_0$ is less than $\delta^{-\varepsilon}$ (since when $s \leq \delta^{-2-\varepsilon}$, at most $\delta^{-\varepsilon}$ of the journeys can take more than $\delta^{-2}$ units of time). Choosing $c_1$ such that $c_1 p_0 > 1$, the result follows from straightforward large deviations for binomial random variables.

We will need to control the discrepancy between $X$ and $Z$ at the beginning of a coupling phase, of the form $\tau_k$ (for $0 \leq k \leq \delta^{-\varepsilon}$), assuming that $\sigma(Z_{\tau_k}) = \bullet$, or equivalently that $X_{\tau_k} \sim \bar{y}$. Let us say that this coupling phase succeeds if, by the time the walks next hit $\mathbb{R}$ or $2\Delta$, the discrepancy has been reduced to zero.

We note that the discrepancy between $X$ and $Z$ is typically accumulated when the two walks hit the real line; on the other hand, it tends to be reduced to zero during a coupling phase, meaning that a coupling phase is likely to be successful. However, we will not aim to control the discrepancy if at any point a coupling phase does not succeed.

The key argument will be to say that, so long as there has been no unsuccessful coupling phase, the discrepancy at the beginning of any coupling phase is small. To this end, we introduce $\rho_n$, the first time that the real line has been visited more than $n$ times by either (both) walks. We let $\Delta_n$ be the (horizontal) discrepancy accumulated by the walks at this $n$th visit: that is,
$$\Delta_n = \big\langle (X_{\rho_n + 1} - X_{\rho_n}) - (Z_{\rho_n + 1} - Z_{\rho_n});\ e_1 \big\rangle.$$
Note that by construction of the coupling, the $\Delta_n$ are i.i.d. centered random variables with exponential moments (each of them of order the mesh size $\delta$). We then introduce the martingale
$$M_n = \sum_{i=0}^{n} \Delta_i,$$
which records the accumulated discrepancy at the $n$th visit to the real line. If $0 \leq u \leq s$ is a time, let us call $n(u)$ the number of visits to $\mathbb{R}$ by time $u$. At the end of a successful coupling phase $\sigma_k$, the discrepancy is reduced to zero, so in fact in the future (until the beginning of the next coupling phase at time $\tau_k$), the discrepancy will be of the form $M_{n(u)} - M_{n(\sigma_k)}$.

Lemma 4.7.
With probability at least 1 − s^{−ε}, we have max_{0≤k≤N_s} |X_{τ_k} − Z_{τ_k}| 1_{G_k} ≤ δ s^{1/4+ε}, where G_k is the good event that there was no unsuccessful coupling by time σ_k.

Proof. Fix 0 ≤ k ≤ N_s. Let j = j(k) = max{j ≤ k : the coupling starting at τ_j was successful}. Suppose that the event G_k holds, otherwise there is nothing to prove. Then, as observed above, the discrepancy at time τ_k is given by

|X_{τ_k} − Z_{τ_k}| = |M_{n(τ_k)} − M_{n(τ_j)}| ≤ 2 max_{n ≤ n(τ_k)} |M_n|.

By Chebyshev's inequality and Doob's maximal inequality,

P( max_{0≤k≤δ^{−ε}} |X_{τ_k} − Z_{τ_k}| 1_{G_k} ≥ δ s^{1/4+ε} ) ≲ (δ^2 s^{1/2+2ε})^{−1} E( max_{n ≤ n(τ_{N_s})} M_n^2 ) ≲ (δ^2 s^{1/2+2ε})^{−1} E( M_{n(τ_{N_s})}^2 ).

Now, M_n^2 − cδ^2 n is a martingale for some constant c > 0, so (since n(τ_{N_s}) is trivially bounded by s),

E( M_{n(τ_{N_s})}^2 ) = cδ^2 E( n(τ_{N_s}) ) = cδ^2 E( L_R(s) ),

where L_R(s) denotes the number of visits to R by both (either) walks by time s. Since the vertical coordinate performs a delayed simple random walk on the integers, this is less than the expected number of visits to 0 by time s of a one-dimensional walk starting from zero, which is at most ≲ √s. Hence

P( max_{0≤k≤N_s} |X_{τ_k} − Z_{τ_k}| 1_{G_k} ≥ δ s^{1/4+ε} ) ≲ s^{−ε},

as desired. We now deduce that all coupling phases are successful with high probability. Lemma 4.8.
We have that, for ε small enough (fixed), P( ∪_{k=0}^{N_s} G_k^c ) ≲ s^{−ε}. Proof.
We may work on the event N = {N_s ≲ δ^{−ε}} of Lemma 4.6 and the event D of Lemma 4.7. On N ∩ D the probability of an unsuccessful coupling starting from time τ_k may be bounded as follows. Supposing that σ(Z_{τ_k}) = • (or equivalently X_{τ_k} ∼ ȳ), the walks X and Z start a mirror coupling at time τ_k, and they are initially spaced by no more than δ s^{1/4+ε} if G_{k−1} holds. By the gambler's ruin estimate, the probability for X to avoid the reflection line until hitting either R or 2∆ is then at most δ s^{1/4+ε} ≲ δ^{1/2−3ε}. Hence

P( G_k^c ; G_{k−1} ∩ N ∩ D ) ≤ δ s^{1/4+ε} ≲ δ^{1/2−3ε}.

Summing over k ≤ δ^{−ε}, we get

P( ∪_{k=0}^{N_s} G_k^c ; N ∩ D ) ≲ δ^{1/2−4ε}.

We conclude by Lemma 4.7 and Lemma 4.6.
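The gambler's ruin estimate invoked above is the classical fact that a simple random walk started at height d in {0, ..., R} hits R before 0 with probability exactly d/R (linear in the starting height). A quick numerical sanity check of this fact (not from the paper; the value R = 50 is illustrative) solves the discrete harmonicity equations directly:

```python
import numpy as np

def ruin_probability(R):
    """Return h with h[x-1] = P(SRW started at x hits R before 0), x = 1..R-1.

    h is discretely harmonic: h(x) = (h(x-1) + h(x+1))/2, with boundary
    values h(0) = 0 and h(R) = 1.  We solve the tridiagonal system directly.
    """
    n = R - 1
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -0.5
        if i < n - 1:
            A[i, i + 1] = -0.5
    b[n - 1] = 0.5  # boundary term coming from h(R) = 1
    return np.linalg.solve(A, b)

h = ruin_probability(50)
# The exact answer is linear in the starting height: h[x-1] = x/R.
assert all(abs(h[x - 1] - x / 50) < 1e-9 for x in range(1, 50))
```

In the proof above this is applied with d of order the initial discrepancy and R of order the distance to R or 2∆, so that the probability of avoiding the reflection line is proportional to the discrepancy itself.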
Proof of Proposition 4.3.
We estimate the right hand side of (4.6). For the random variable in the right hand side to be nonzero, it is necessary that:
• X and X′ did not couple prior to time T_R ∧ t/2;
• one of the G_k^c occurs for some k ≤ N_s;
• and still one of the four walks must visit ȳ at exactly time t.
The first event has probability ≲ 1/√t by straightforward gambler's ruin. The second has probability at most 1/t^ε by Lemma 4.8 (since s ≍ t). To bound the probability of the third event, we observe that, uniformly in the starting point w and in the time u, the probability to visit ȳ at the specific time u is small: Lemma 4.9.
We have sup_w sup_{u ≥ 0} p̃_u(w, ȳ) ≤ δ^2 (log 1/δ)^c, for some c > 0.

Proof. This follows from the facts (already used before, so we will be brief) that if u ≤ δ^{−2}/(log 1/δ)^c, then the probability to be at ȳ at time u is at most exp(−(log 1/δ)^2) by subdiffusivity, while for u ≥ δ^{−2}/(log 1/δ)^c we have a bound of the form 1/u thanks to (3.3).

All in all, putting these three events together we find

| ∇_x p̃_t(x, o) − ∇_z p̃^•_t(z, o) | ≲ t^{−1/2} × t^{−ε} × δ^2 (log 1/δ)^c.

Summing over t ∈ [δ^{−2}/(log 1/δ)^c, δ^{−2−ε}] we see that this is at most (log 1/δ)^c δ^{1+ε/2}, which is sufficient.

At this point we may work exclusively with the simple random walk on (2Z) × (2Z) or the coloured simple random walk on the same lattice. Let us write P_{x→ȳ;t} for the law of a random walk bridge, i.e., the law of a (lazy) simple random walk on (2Z)^2 conditioned to go from x to ȳ in time t. Let q̃_t(x, y) denote the transition probability for the (lazy) simple random walk on (2Z)^2. Then note that

p̃^•_t(x, ȳ) = q̃_t(x, ȳ) P_{x→ȳ;t}( σ(X_t) = • ),

where σ(X_t) is the colour of the process, which changes with probability p every time the process touches the real line. Now, let N denote the number of visits to R and observe that, by conditioning on N,

P_{x→ȳ;t}( σ(X_t) = • | N = n ) = 1/2 ± (1/2) λ^n,

where λ = 1 − 2p is the second eigenvalue of the 2-state Markov chain which switches state with probability p at each step, and the ± sign depends on the initial colour σ(X_0). Therefore,

∇_x p̃^•_t(x, ȳ) = (1/2) ∇_x q̃_t(x, ȳ) ± (1/2) ∇_x ( q̃_t(x, ȳ) E_{x→ȳ;t}(λ^N) ).
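The conditional probability (1 ± λ^n)/2 used above is simply the diagonalisation of the 2-state colour chain: its transition matrix has eigenvalues 1 and λ = 1 − 2p. A short numerical check (not from the paper; the value p = 0.3 is illustrative):

```python
import numpy as np

p = 0.3                        # switching probability at each visit to R
lam = 1 - 2 * p                # second eigenvalue of the 2-state chain
P = np.array([[1 - p, p],
              [p, 1 - p]])     # colour-switching transition matrix

for n in range(10):
    Pn = np.linalg.matrix_power(P, n)
    # probability of seeing the same colour after n switching opportunities
    assert abs(Pn[0, 0] - (1 + lam ** n) / 2) < 1e-12
    # probability of the opposite colour
    assert abs(Pn[0, 1] - (1 - lam ** n) / 2) < 1e-12
```

Conditioning on N = n visits to R thus reduces the colour marginal to a power of this 2 × 2 matrix, which is exactly how λ^N enters the expression for ∇_x p̃^•_t.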
Since Σ_{t=0}^{∞} (1/2) ∇_x q̃_t(x, ȳ) is by definition the gradient of the potential kernel of the (lazy) simple random walk, ∇_x b̃(x, ȳ), to prove Proposition 4.4, as we already observed before, it suffices to show that there exists ε′ > 0 such that

| q̃_t(x, ȳ) E_{x→ȳ;t}[λ^N] − q̃_t(x′, ȳ) E_{x′→ȳ;t}[λ^N] | ≲ t^{−1−ε′}, (4.7)

for t ∈ [δ^{−2}/(log 1/δ)^c, δ^{−2−ε}]. We recall first that if 0 ≤ u ≤ t and E ∈ F_u = σ(X_0, ..., X_u), then by the Markov property:

P_{x→ȳ;t}(E) = E_x ( 1_E q̃_{t−u}(X_u, ȳ) / q̃_t(x, ȳ) ). (4.8)

Let T_L denote the hitting time of the reflection line bisecting x and x′, and let T_R denote the hitting time of R. We introduce the following bad events:
• B_1 = {T_R > t − s}, where s = [t/(log t)^c] ∧ [δ^{−2}/(log 1/δ)^c];
• B_2 = {T_R ≤ t − s} ∩ {T_L > T_R} ∩ {N_{t−s/2} ≤ (log t)^2}, where N_u is the number of visits to R by time u.
We will first show that both events are highly unlikely. In words, B_1 is unlikely because it requires going to ȳ in the remaining s units of time starting from above R, which means ȳ is too far away compared to the time remaining. B_2 is unlikely because it requires avoiding the reflection line for a long time (until touching R) and thereafter making very few visits to R. Lemma 4.10.
For t ∈ [δ^{−2}/(log 1/δ)^c, δ^{−2−ε}], we have P_{x→ȳ;t}(B_1) ≲ exp(−(log t)^2) q̃_t(x, ȳ)^{−1}. Proof.
Note that by (4.8),

P_{x→ȳ;t}(B_1) ≤ E_x ( 1_{T_R > t−s} q̃_s(X_{t−s}, ȳ) / q̃_t(x, ȳ) ).

Now, q̃_t(x, ȳ) satisfies the Gaussian behaviour q̃_t(x, ȳ) ≍ (1/t) exp(−|x − ȳ|^2 / t) in the range t ≥ δ^{−2}/(log 1/δ)^c (see Theorem 2.3.11 in [28]). Since |X_{t−s} − ȳ| ≳ δ^{−1} when T_R > t − s, and since |x − ȳ| ≲ δ^{−1}, we deduce that for some constant c > 0,

q̃_s(X_{t−s}, ȳ) ≤ exp(−c δ^{−2}/s) ≤ exp(−c (log 1/δ)^c)

on the event T_R > t − s, where we used that s ≤ δ^{−2}/(log 1/δ)^c. The desired inequality follows since t ≤ δ^{−2−ε}. Lemma 4.11.
For t ∈ [δ^{−2}/(log 1/δ)^c, δ^{−2−ε}], we have P_{x→ȳ;t}(B_2) ≲ t^{−1/2−ε′} (log t)^c q̃_t(x, ȳ)^{−1}, for some ε′ > 0 depending only on ε.

Proof. Using (4.8),

P_{x→ȳ;t}(B_2) ≤ E_x ( 1_{{T_R ≤ t−s} ∩ {T_L > T_R}} · · · )

Figure 5: Different types of vertices. The black vertices are drawn in red, and the white vertices are drawn in black.

From now on we work in the upper half-plane H with the local (infinite volume) limit μ (depending on z) of the free boundary dimer model from Theorem 1.2. We will write μ to denote both the probability and the expectation with respect to μ. Let C be the coupling function, as defined in Corollary 3.9, i.e., the pointwise limit, as n → ∞, of the inverse Kasteleyn matrix on G_n, given in matrix notation by

C = −AK*, (5.1)

where A(x, y) = a(x, y)/D(y, y) is the normalised potential kernel of the infinite volume bulk effective (nonlazy) walk. We write A_even and A_odd for the restriction of A to the even and odd rows respectively. When we unpack (5.1) we find that its meaning is different depending on the respective types of the pair of vertices. We denote the black and white vertices in the even and odd rows by the symbols ◦, ◦, ×, × respectively, as illustrated in Figure 5. Fix v_1, v_2 two vertices in H ∩ Z^2. Suppose for instance that v_1 ∈ ◦ and v_2 ∈ ◦. Then (5.1) says

C(v_1, v_2) = (1/4) ( a(v_1, v_2 + 1) − a(v_1, v_2 − 1) ) = (δ/δx) A_even(v_1, v_2).

In the following, δ/δx (resp. δ/δy) will denote the discrete derivative in the x (resp. y) direction acting on the second coordinate of the Green's function. Note that δ/δx ∼ δ ∂/∂x as δ → 0. Likewise, if instead we have v_1 ∈ ◦ and v_2 ∈ ×, then

C(v_1, v_2) = i (δ/δy) A_even(v_1, v_2).
We summarise these computations in a table:

v_1 \ v_2 | v_2 ∈ × | v_2 ∈ ◦ | v_2 ∈ ◦ | v_2 ∈ ×
v_1 ∈ × | (δ/δx) A_odd | i (δ/δy) A_odd | i (δ/δy) A_odd | (δ/δx) A_odd
v_1 ∈ ◦ | i (δ/δy) A_even | (δ/δx) A_even | (δ/δx) A_even | i (δ/δy) A_even

Furthermore, when v_1 is in the black lattice (v_1 ∈ ◦ or v_1 ∈ ×) we obtain the corresponding table simply by translation invariance:

v_1 \ v_2 | v_2 ∈ × | v_2 ∈ ◦ | v_2 ∈ ◦ | v_2 ∈ ×
v_1 ∈ × | (δ/δx) A_odd | i (δ/δy) A_odd | i (δ/δy) A_odd | (δ/δx) A_odd
v_1 ∈ ◦ | i (δ/δy) A_even | (δ/δx) A_even | (δ/δx) A_even | i (δ/δy) A_even

Remark 5.1. It is useful to point out that the terms involving mixed colours and those involving matching colours behave very differently: indeed, if both v_1 and v_2 are of the same colour, then the arguments of the corresponding potential kernel are of different colours. This corresponds to only considering walks that go through the boundary in the definition of the potential kernel (see Corollary 4.5).

There is a convenient algebraic rewriting of these different values. Suppose v_1 and v_2 are two arbitrary vertices (of any colour), and let s(v) = (−1)^{row(v)} be the signed parity of the row of v. Then we have

C(v_1, v_2) = (1/4) [ (1 + s(v_1)s(v_2)) δ/δx + (1 − s(v_1)s(v_2)) i δ/δy ] × [ (1 − s(v_1)) A_odd(v_1, v_2) + (1 + s(v_1)) A_even(v_1, v_2) ]. (5.2)

We will now combine (5.2) with Corollary 4.5 to obtain the scaling limit of the inverse Kasteleyn matrix in the upper half-plane.

Theorem 5.2. Let z and w be two vertices of δZ^2 ∩ H, and fix ρ > 0. Then there exists ε > 0 such that, uniformly over z ≠ w with min(ℑ(z), ℑ(w)) ≥ ρ, as the mesh size δ → 0,

C(z, w) = −(δ/2π) ( s(z)s(w)/(z − w) + 1/(z̄ − w̄) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2) if z, w are of different class,
C(z, w) = (δ/2π) ( s(z)/(z − w̄) + s(w)/(z̄ − w) ) + o(δ^{1+ε}) if z, w are of the same class.

Proof. We could use the master formula (5.2), but in order to avoid making mistakes it is perhaps easier to consider all the possible cases for the types of vertices z and w using the tables above.
We start with the case when z and w are of different colour. We will use the symmetry of the potential kernel a(z, w) = a(w, z) and Corollary 4.5 (applied to the case when the arguments of the potential kernel are of the same colour).

• For z, w with s(z) = s(w) = −1, we have

C(z, w) = (δ/δx) A_odd(z, w) = (1/4) ( a(w + δ, z) − a(w − δ, z) ) = (1/4) × (4/π) ℜ( δ/(w − z) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2) = −(δ/π) ℜ( 1/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2).

The factor 1/4 comes from the fact that A_odd is normalised by the degree of w, which is equal to 4 (see (2.25)).

• For z, w with s(z) = −s(w) = −1, we have

C(z, w) = i (δ/δy) A_odd(z, w) = −i (δ/π) ℜ( i/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2) = i (δ/π) ℑ( 1/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2).

• For z, w with s(z) = s(w) = 1, since z, w are of different colours, z and w ± δ are of the same colour. Note that G_even is a signed function such that G_even(z, w) < 0 for z, w of different colours, and G_even(z, w) > 0 for z, w of the same colour. Therefore we have

C(z, w) = (δ/δx) A_even(z, w) = −(δ/π) ℜ( 1/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2).

• For z, w with s(z) = 1 and s(w) = −1, z and w ± δi are again of the same colour. We have

C(z, w) = i (δ/δy) A_even(z, w) = −i (δ/π) ℜ( i/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2) = i (δ/π) ℑ( 1/(z − w) ) + o(δ^{1+ε}) + O(δ^2/|z − w|^2).

Let us now consider the case where z and w are of the same colour, applying Corollary 4.5 (when the arguments of the potential kernel are of different colours).

• For z, w with s(z) = s(w) = −1, we have

C(z, w) = (δ/δx) A_odd(z, w) = −(δ/π) ℜ( 1/(z − w̄) ) + o(δ^{1+ε}).

• For z, w with s(z) = −s(w) = −1, we have

C(z, w) = i (δ/δy) A_odd(z, w) = −i (δ/π) ℑ( 1/(z − w̄) ) + o(δ^{1+ε}).
• For z, w with s(z) = s(w) = 1, since z, w are of the same colour, z and w ± δ are of different colours. We have

C(z, w) = (δ/δx) A_even(z, w) = (δ/π) ℜ( 1/(z − w̄) ) + o(δ^{1+ε}).

• For z, w with s(z) = 1 and s(w) = −1, z and w ± δi are again of different colours. We have

C(z, w) = i (δ/δy) A_even(z, w) = i (δ/π) ℑ( 1/(z − w̄) ) + o(δ^{1+ε}).

Combined, we have proved the theorem.

In this section we recall the basics of Kasteleyn theory. In particular we will express local statistics of μ in terms of the coupling function C.

Let A be a 2k × 2k antisymmetric matrix indexed by the vertices w_1, b_1, ..., w_k, b_k of k edges (w_1, b_1), ..., (w_k, b_k). Then its Pfaffian can be expressed as a sum over perfect matchings of these 2k vertices, in a similar way as the determinant can be expressed as a sum over permutations. Let M be such a perfect matching. We can write it as (i_1, j_1), ..., (i_k, j_k) where:
• in each pair (i, j) the i-vertex comes from an edge (w_i, b_i) that is listed before the edge (w_j, b_j) which contains the j-vertex. If the two vertices belong to the same edge, then the white vertex comes before the black vertex;
• we require that i_1 < ... < i_k.
This defines a permutation

π_M = ( 1 2 ... 2k−1 2k ; i_1 j_1 ... i_k j_k ).

Then we have

Pf(A) = Σ_{M matching} sgn(π_M) a_{i_1, j_1} · · · a_{i_k, j_k}. (5.3)

Based on Kasteleyn's theorem, Kenyon derived the following description of local statistics for the dimer model [20]. Recall that μ denotes the probability measure of the infinite volume free boundary dimer model on H ∩ Z^2.

Theorem 5.3. Let E be a set of pairwise distinct edges e_1 = (w_1, b_1), ..., e_k = (w_k, b_k), with the convention that the white vertex comes first. Then

μ( e_1, ..., e_k ∈ M ) = a_E Pf(C),

where M is the random monomer-dimer configuration under μ, and where C = C(v_1, v_2) is the coupling function restricted to the vertices v_1, v_2 ∈ {w_1, ..., w_k} ∪ {b_1, ..., b_k} (implicit here is the fact that the vertices are ordered from black to white, and from 1 to k), and where

a_E = ∏_{i=1}^{k} K(w_i, b_i)

is the product of the Kasteleyn weights of the edges, oriented from white to black.

To compute the scaling limit of the height function, we will need to study the centered dimer-dimer correlations. When expanding the Pfaffian into matchings, this leads to a simplification which is the analogue of Lemma 21 in [21]:

Lemma 5.4. In the setting as above, we have

μ[ (1_{e_1 ∈ M} − μ(e_1 ∈ M)) · · · (1_{e_k ∈ M} − μ(e_k ∈ M)) ] = a_E Σ_{M restricted matching} sgn(π_M) ∏_{{u,v} ∈ M, u < v} C(u, v),

where the sum is over restricted matchings M, i.e., matchings of the 2k vertices containing none of the edges e_i themselves.

A restricted matching compatible with a permutation σ ∈ S*_k can be encoded by signs ν ∈ {−1, 1}^k as follows: if ν_i = +1 (resp. ν_i = −1), then the outgoing edge of e_i emanates from the black (resp. white) vertex of e_i. This choice implies that the directed M-edge corresponding to the pair (i, j) such that σ(i) = j points to the white (resp. black) vertex of e_j if ν_j = +1 (resp. ν_j = −1). The map

{−1, 1}^k → DM_σ (5.5)

is clearly a bijection. We can now rewrite the truncated correlation function from Lemma 5.4 as follows:

μ[ (1_{e_1 ∈ M} − μ(e_1 ∈ M)) · · · (1_{e_k ∈ M} − μ(e_k ∈ M)) ] = a_E Σ_{σ ∈ S*_k} Σ_{M ∈ DM_σ} sgn(π_M) (1/2^n) ∏_{(u,v) ∈ M} C(u, v) (−1)^{1_{u>v}}, (5.6)

where n = n(σ) is, as above, the number of cycles of σ. To explain (5.6), we simply recall that each undirected matching that can be oriented so as to be compatible with σ corresponds to 2^n directed matchings, by choosing the orientation of each cycle arbitrarily. The factor (−1)^{1_{u>v}} comes from the fact that C(u, v) is antisymmetric and that we always have i_l < j_l in the expansion of the Pfaffian as a sum over matchings (5.3). Here u > v means that u comes later than v in the order defined by w_1, b_1, ..., w_k, b_k.

We will later need the following lemma.
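The expansion (5.3) can be sanity-checked numerically against the classical identity Pf(A)^2 = det(A). The sketch below (not from the paper; the matrix entries are illustrative) computes the Pfaffian by the standard recursive expansion along the first row, which is equivalent to the signed sum over matchings:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an antisymmetric matrix of even size, by expansion along
    the first row: Pf(A) = sum_{j>0} (-1)^{j+1} a_{0j} Pf(A with rows/cols 0, j removed)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [i for i in range(n) if i not in (0, j)]
        minor = A[np.ix_(keep, keep)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(minor)
    return total

# Antisymmetric 4x4 matrix with a01=1, a02=2, a03=3, a12=4, a13=5, a23=6.
A = np.array([[0, 1, 2, 3],
              [-1, 0, 4, 5],
              [-2, -4, 0, 6],
              [-3, -5, -6, 0]], dtype=float)

# Matching expansion for 2k = 4: Pf = a01*a23 - a02*a13 + a03*a12 = 6 - 10 + 12 = 8.
assert pfaffian(A) == 8.0
assert abs(pfaffian(A) ** 2 - np.linalg.det(A)) < 1e-9
```

The three signs +, −, + in the 4 × 4 case are exactly sgn(π_M) for the three matchings {(1,2),(3,4)}, {(1,3),(2,4)}, {(1,4),(2,3)} of the vertex sequence.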
What is specifically interesting to us in the expression below is that the right hand side depends very little on the permutation σ, given the signs ν.

Figure 6: Graphical representation of the directed matching M = {(b_1, w_2), (b_2, w_3), (b_3, w_4), (b_4, w_1)} corresponding to the cyclic permutation σ = 1 → 2 → 3 → 4 → 1 and signs ν = (1, 1, 1, 1). We have sgn(π_M) = −1 since the number of arc crossings is odd.

Figure 7: The induction step from the proof of Lemma 5.5.

Lemma 5.5. Let m be the restricted directed matching compatible with σ ∈ S*_k and encoded by ν ∈ {−1, 1}^k by the map (5.5). We have

∏_{(u,v) ∈ m} (−1)^{1_{u>v}} sgn(π_M) = (−1)^n ∏_{i=1}^{k} ν_i, (5.7)

where n = n(σ) is the (total) number of cycles in σ.

Proof. Mark the vertices w_1, b_1, ..., w_k, b_k in this order from left to right on the real line as in Lemma 5.4, and recall formula (5.4). Note that if we flip exactly one sign ν_i, then both sides of (5.7) change sign, since the parity of the number of crossings between arcs changes (we either cross or uncross the arcs ending at e_i, and we do not change the number of crossings for other pairs of arcs), and since the number of decreasing edges (u, v) of m, i.e., satisfying u > v, does not change. We can hence assume that ν_i = +1 for all i.

Equipped with the graphical representation as in Lemma 5.4, we proceed by induction on k. One can check that the statement is true for k = 2. We therefore assume that k > 2. Let m be a directed restricted matching on e_1, ..., e_k, and let σ ∈ S*_k be the permutation associated with m. Let i, j be such that σ(i) = k and σ(k) = j. Consider a graphical representation of m. Imagine infinitesimally deforming the path composed of the arcs connecting e_i to e_k and e_k to e_j, together with the line segment representing e_k, in such a way that the path is fully contained in H.
This path hence becomes an arc (modulo a possible self-crossing) representing an m′-edge (b_i, w_j), where m′ is a directed restricted matching on e_1, ..., e_{k−1}. Let σ′ ∈ S*_{k−1} be the permutation associated to m′. Note that σ′ has the same number of cycles as σ. In this transformation we replaced an increasing edge (b_i, w_k) and a decreasing edge (b_k, w_j) by the edge (b_i, w_j). For topological reasons, the deformed path representing the m′-edge (b_i, w_j) has a self-crossing if and only if (b_i, w_j) is an increasing edge (see Figure 7). To finish the proof we use (5.4) to evaluate and compare (5.7) for m and m′, and we use the induction assumption.

In this section we compute the scaling limit of the pointwise moments of the height function on δZ^2 ∩ H, which is the penultimate step in establishing its convergence as a random distribution. We fix k ≥ 1, and 2k faces a_1, b_1, ..., a_k, b_k of δZ^2 ∩ H. We consider disjoint paths γ_i in the dual lattice (δZ^2 ∩ H)* connecting a_i to b_i for 1 ≤ i ≤ k. The following is the analogue of Proposition 20 in [21]. Let D denote the minimal distance in the complex plane between any pair of points within {a_i, b_i}_{1≤i≤k}.

Proposition 5.6. Let k ≥ 1. Let ρ > 0 be fixed and let β > 0 be sufficiently small (possibly depending on k). As δ → 0,

| μ[ (h_δ(a_1) − h_δ(b_1)) · · · (h_δ(a_k) − h_δ(b_k)) ] − (5.8)
Σ_{m ∈ M(1,...,k)} ∏_{(i,j) ∈ m} −(1/2π^2) ℜ log [ (a_i − a_j)(b_i − b_j)(ā_i − a_j)(b̄_i − b_j) / ( (a_i − b_j)(b_i − a_j)(ā_i − b_j)(b̄_i − a_j) ) ] | → 0, (5.9)

uniformly over the choice of a_1, b_1, ..., a_k, b_k such that D ≥ δ^β and min_{1≤i≤k}(ℑ(a_i), ℑ(b_i)) ≥ ρ.

Proof.
As in Kenyon [21] we can assume without loss of generality that the paths γ_i are piecewise parallel to the axes and that each straight portion is of even length. In this way, we can pair the edges of a straight portion of the path in groups of two consecutive edges. In order to distinguish between the two edges in a given pair it will be useful to have a notation which emphasises this difference, and following the notation of Kenyon we will call a generic pair of edges α and β respectively; an α-edge will have a black vertex on its right, while a β-edge will have a black vertex on its left. The point is that considering their contributions together will lead to cancellations that are crucial in the computation. Also, in this way the contribution from a pair of edges does not depend anymore on the microscopic types of its vertices and has a scaling limit which depends only on the macroscopic position.

Let α_{it} (resp. β_{it}) be the indicator that the t-th α-edge (resp. β-edge) in the path γ_i is present in the dimer cover, minus its expectation. In this way, due to the definition of the height function and the choice of reference flow,

h(a_i) − h(b_i) = Σ_t ( α_{it} − β_{it} ).

(Note that we do not have a factor 4 as in Kenyon because our choice of reference flow is slightly different, in order to deal directly with a centered height function: more precisely, the total flow out of a vertex is one instead of four in Kenyon's work [21].) We are ignoring here possibly one term on the boundary if the faces a_i and b_i do not have the correct parity; but in any case it is clear that the contribution of a single term in such a sum is of order O(δ) and so can be ignored in what follows. We therefore have

μ[ (h_δ(a_1) − h_δ(b_1)) · · · (h_δ(a_k) − h_δ(b_k)) ] = Σ_{t_1,...,t_k} μ[ (α_{1t_1} − β_{1t_1}) · · · (α_{kt_k} − β_{kt_k}) ]. (5.10)

We fix a choice of the t_i's and analyse this product.
We first expand this product into a sum of 2^k terms containing, for each i, a term which is either α_{it_i} or −β_{it_i}. Consider for simplicity the term containing all of the α_{it_i}. Write w_i, b_i for the white and black vertices of the edge corresponding to α_{it_i}. Let E be the set of edges (w_1, b_1), ..., (w_k, b_k) and let a_E = ∏_{e ∈ E} K(e). Then by (5.6) we have

μ( α_{1t_1} · · · α_{kt_k} ) = a_E Σ_{σ ∈ S*_k} Σ_{m ∈ DM_σ} sgn(π_M) (1/2^n) ∏_{(u,v) ∈ m} C(u, v) (−1)^{1_{u>v}}.

We rewrite the sum over directed matchings m ∈ DM_σ as a sum over (ν_i)_{1≤i≤k} using (5.5), and get (writing m for the unique directed matching determined by σ and ν = (ν_i)_{1≤i≤k} ∈ {−1, 1}^k),

a_E Σ_ν Σ_{σ ∈ S*_k} sgn(π_M) (1/2^n) ∏_{(u,v) ∈ m} C(u, v) (−1)^{1_{u>v}} = a_E Σ_ν ( ∏_{i=1}^{k} ν_i ) Σ_{σ ∈ S*_k} (−1)^n (1/2^n) ∏_{(u,v) ∈ m} C(u, v),

using Lemma 5.5. Fix ν and σ (i.e., we fix a directed matching m) and use Theorem 5.2 to approximate C(u, v). Let (u, v) ∈ m and let ν and ν′ be the respective values of the variables ν_i associated with the two edges containing the vertices u and v. Note that if νν′ = 1 then u and v must be of different colours and so we fall in the first case of the approximation given by Theorem 5.2, while if νν′ = −1, u and v are of the same colour and so we fall in the second case of this approximation. Hence we get

C(u, v) = −(δ/2π) [ s(u)s(v)/(u − v) + 1/(ū − v̄) ] + o(δ^{1+ε}) + O(δ^2/|u − v|^2) if νν′ = 1,
C(u, v) = (δ/2π) [ s(u)/(u − v̄) + s(v)/(ū − v) ] + o(δ^{1+ε}) if νν′ = −1.

The terms o(δ^{1+ε}) here are uniform in u, v (subject only to the imaginary parts being ≥ ρ). Since |u − v| ≥ D ≥ δ^β, we see that O(δ^2/|u − v|^2) ≤ O(δ^{2−2β}) = o(δ^{1+ε}) if β is sufficiently small.
We can thus absorb the term O(δ^2/|u − v|^2) into the term o(δ^{1+ε}) under our assumptions on D. We expand ∏_{(u,v) ∈ m} C(u, v) using the above formula. This gives us another sum of 2^k terms, which we view as a polynomial in the variables s(w_i), s(b_i). We group the terms by their monomials; and since s(z)^2 = 1 for any z, these monomials can only be of degree at most one in each variable. We now claim that any monomial such that s(b_i) appears but not s(w_i) for some 1 ≤ i ≤ k, or vice-versa, will contribute o(δ^k) when we take into account the equivalent term coming from the same expansion where α_{it_i} has been replaced by −β_{it_i}. Indeed, since σ and ν have been fixed, consider what happens when α_{it_i} is replaced by −β_{it_i}:

• There is a −1 sign coming from the change α_{it_i} → −β_{it_i}.
• The sign of a_E changes by −1.
• Crucially, both s(b_i) and s(w_i) change sign.
• Yet the coefficients accompanying s(b_i) and s(w_i) (both of which are terms of the form 1/(z − w) + o(δ^ε), ..., or 1/(z − w̄) + o(δ^ε)) do not change in the scaling limit, since this term is determined only by the choice of ν, which is fixed.

As a consequence, as we sum over all choices of α and β in the 2^k terms of (5.10), and we expand in terms of monomials as described above, we only keep terms that contain, for each 1 ≤ i ≤ k, either simultaneously s(b_i) and s(w_i), or neither of them. As it turns out, given σ and ν, only very few terms do not cancel out. In fact, for each cycle of σ there will be only two terms. For example, consider the case k = 4, σ = (1234) a four-cycle, and ν = (+1, −1, +1, +1). This means we are expanding

C(b_1, b_2) C(w_2, w_3) C(b_3, w_4) C(b_4, w_1).
Letting z_i be the point in the middle of the edge (b_i, w_i), the expansion looks like

(δ/2π) [ s(b_1)/(z_1 − z̄_2) + s(b_2)/(z̄_1 − z_2) ] × (δ/2π) [ s(w_2)/(z_2 − z̄_3) + s(w_3)/(z̄_2 − z_3) ] × (−δ/2π) [ s(b_3)s(w_4)/(z_3 − z_4) + 1/(z̄_3 − z̄_4) ] × (−δ/2π) [ s(b_4)s(w_1)/(z_4 − z_1) + 1/(z̄_4 − z̄_1) ] + o(δ^{4+ε}/D^4).

The only terms that survive this expansion with the above requirements are the monomials corresponding to s(b_1)s(w_1)s(b_3)s(w_3)s(b_4)s(w_4) and s(b_2)s(w_2): indeed, choosing or not the term containing s(b_1) in the first line imposes a choice on every other line, which is why just two terms survive this expansion.

Furthermore, crucially, in the corresponding coefficients of the surviving monomials, each variable z_i or z̄_i occurs exactly twice, either twice in the type z_i or twice in the type z̄_i (but never in a mixed fashion). For instance, in the above example, the coefficient will involve either z_1, z̄_2, z_3, z_4 or the other way around: z̄_1, z_2, z̄_3, z̄_4. Note that the dependence on z_i or z̄_i is consistent with the choice of signs coming from ν: more precisely, for z ∈ C and ε = ±1, define z^ε to be z if ε = +1 and z̄ if ε = −1.
Then, for a cyclic permutation σ, the two monomials which survive the expansion have coefficients proportional to

∏_{i=1}^{k} 1/( z_i^{ν_i} − z_{σ(i)}^{ν_{σ(i)}} ) and ∏_{i=1}^{k} 1/( z_i^{−ν_i} − z_{σ(i)}^{−ν_{σ(i)}} ),

and a similar property holds for a general permutation σ by considering each of its cycles separately. Note furthermore that each such coefficient comes with a factor ±(δ/2π)^k × 2^k: indeed, when a monomial survives, it arises exactly once in each of the 2^k terms from the α−β expansion (5.10). The sign itself is determined purely by the parity of the cycles of the permutation: indeed, for an even length cycle the number of times the colour changes as we follow the directed matching must be even, while it must be odd for an odd length cycle.

Suppose C = {c_1, ..., c_n} is the cycle structure of σ. We will use variables (ε_c)_{c ∈ C} ∈ {−1, 1}^n to denote which type of monomial we consider. Thus the right hand side of (5.10) (still for a fixed choice of t_i's) becomes

= a_E δ^k Σ_ν ( ∏_{i=1}^{k} ν_i ) Σ_{σ ∈ S*_k} (−1)^n 2^{−n} Σ_ε ∏_{i=1}^{k} [ 1 / ( 2π ( z_i^{ν_i ε_{c(i)}} − z_{σ(i)}^{ν_{σ(i)} ε_{c(σ(i))}} ) ) ] [ s(b_i)s(w_i) ]^{(1 + ε_{c(i)} ν_i)/2} + o(δ^{k+ε}/D^k), (5.11)

where c(i) is the cycle containing i.

We now claim that if σ ∈ S*_k has any cycle c of odd length |c|, then its contribution vanishes. Suppose first that k is odd and σ is a cyclic permutation of length k. Then apply the bijection ν → −ν and ε → −ε to find that all the terms are unchanged except for a negative sign coming from ∏_{i=1}^{k} ν_i. Hence this contribution must be equal to zero, and a similar argument can easily be made when σ contains a cycle of odd length. In particular, k itself must be even for the contribution to be nonzero. To get rid of permutations containing cycles of even length > 2, we will rely on the following lemma.

Lemma 5.7. Let k > 2 be even and let (x_i)_{1≤i≤k} be pairwise distinct complex numbers. Let C_k be the set of cyclic permutations of length k.
Then

Σ_{σ ∈ C_k} ∏_{i=1}^{k} 1/( x_i − x_{σ(i)} ) = 0.

Note in particular that it follows from Lemma 5.7 that if A is the matrix A_ij = 1_{i≠j}/(x_i − x_j), then det(A) can be written as a sum over matchings (which can be thought of as permutations with no fixed points and where each cycle has length 2):

det(A) = Σ_m ∏_{(u,v) ∈ m} 1/( x_u − x_v )^2, (5.12)

which is Lemma 3.1 of Kenyon [22].

Proof of Lemma 5.7. First of all, the case k = 4 must be true because of (5.12) (note that the odd cycles clearly give a zero contribution to the determinant by an argument similar to the above). Using again (5.12) but for k = 6 gives the desired identity for k = 6, since the terms corresponding to C_4 in the expansion of the determinant into permutations contribute zero by the case k = 4. Proceeding by induction, we deduce the result for every even k ≥ 4.

Coming back to the moment computation, the number of cycles n is necessarily k/2. Note also that in a two-cycle we get a term of the form C(z, w) and another one of the form C(w, z) = −C(z, w), which results in a term of the form −C(z, w)^2. Hence the moment (5.11) becomes

a_E δ^k Σ_ν ( ∏ ν_i ) Σ_{m ∈ M(1,...,k)} (−1)^{k/2} 2^{−k/2} × ∏_{(i,j) ∈ m} −[ (s(b_i)s(w_i))^{(1+ν_i)/2} (s(b_j)s(w_j))^{(1+ν_j)/2} / ( 4π^2 ( z_i^{ν_i} − z_j^{ν_j} )^2 ) + (s(b_i)s(w_i))^{(1−ν_i)/2} (s(b_j)s(w_j))^{(1−ν_j)/2} / ( 4π^2 ( z̄_i^{ν_i} − z̄_j^{ν_j} )^2 ) ] + o(δ^{k+ε}/D^k), (5.13)

where M(1, ..., k) is the set of matchings of 1, ..., k.

Recall that in the above expression s(w_i) and s(b_i) refer to the sign (parity) of the white and black vertex respectively of the α-edge in position t_i of the path γ_i (we have already accounted for the corresponding β-edge). We will now sum over the t_i and interpret the corresponding sums as discrete Riemann sums converging to integrals. For a horizontal edge (w_i, b_i) we have s(b_i)s(w_i) = 1, whereas it is −1 for a vertical edge, since s measures the parity of the row.
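Both Lemma 5.7 and the identity (5.12) are easy to confirm numerically for k = 4; the following check (not from the paper; the points x_i are arbitrary distinct complex numbers) sums over all six 4-cycles and over the three matchings of {1, 2, 3, 4}:

```python
from itertools import permutations
import numpy as np

x = [0.3, 1.1, 2.4 + 0.5j, 4.0 - 1.2j]   # pairwise distinct complex numbers
k = 4

# Lemma 5.7: the sum over the (k-1)! = 6 cyclic permutations of length 4 vanishes.
total = 0
for rest in permutations(range(1, k)):
    cycle = (0,) + rest                    # the cycle 0 -> rest[0] -> ... -> 0
    sigma = {cycle[i]: cycle[(i + 1) % k] for i in range(k)}
    prod = 1
    for i in range(k):
        prod *= 1 / (x[i] - x[sigma[i]])
    total += prod
assert abs(total) < 1e-12

# Identity (5.12): det(1_{i != j}/(x_i - x_j)) equals the sum over matchings
# of prod 1/(x_u - x_v)^2 (no signs, no cross terms).
A = np.array([[0 if i == j else 1 / (x[i] - x[j]) for j in range(k)]
              for i in range(k)])
matchings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
rhs = sum(np.prod([1 / (x[u] - x[v]) ** 2 for (u, v) in m]) for m in matchings)
assert abs(np.linalg.det(A) - rhs) < 1e-12
```

The second assertion is exactly the statement that, in det(A) = Pf(A)^2, the cross terms between different matchings cancel, which is how Lemma 5.7 enters the moment computation.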
We claim that (as in Kenyon's proof of Proposition 20 in [21], see the equation between (20) and (21)),

2δ ( s(b_i)s(w_i) )^{(1+ν_i)/2} ν_i K(w_i, b_i) = −i δz_i^{ν_i}. (5.14)

Indeed, suppose for instance that γ_i moves horizontally from left to right in step t_i. Then the corresponding α-edge is vertical and has a black vertex at the bottom, so K(w_i, b_i) = +i. Furthermore, δz_i = δz̄_i = 2δ (since one step of the path corresponds to two faces of side length δ each). The vertical cases can be checked similarly (keeping in mind the corresponding values of δz_i and δz̄_i).

From (5.14), we can multiply both sides of the equation by ν_i and take the product over i. Then, recalling that a_E = ∏_i K(w_i, b_i), observing that the second term in each bracket of the right hand side of (5.13) is the same as the first term but with ν_i replaced by −ν_i and ν_j replaced by −ν_j, and taking into account the factor 2^k coming from the α−β expansion (5.10), (5.13) becomes

Σ_ν Σ_{m ∈ M(1,...,k)} (−1)^k 2^{−k/2} ∏_{(i,j) ∈ m} −[ δz_i^{ν_i} δz_j^{ν_j} / ( 4π^2 ( z_i^{ν_i} − z_j^{ν_j} )^2 ) + δz_i^{−ν_i} δz_j^{−ν_j} / ( 4π^2 ( z_i^{−ν_i} − z_j^{−ν_j} )^2 ) ] + o(δ^{k+ε}/D^k). (5.15)

(We have kept a term (−1)^k even though k is even, to indicate that this comes from the (−1)^{k/2} at the top of (5.13) and a factor (−i)^2 = −1 for each of the k/2 pairs in the right hand side of (5.14).)
Fixing the matching and summing over ν (so exchanging the order of summation) we get

(−1)^{k/2} Σ_{m ∈ M(1,...,k)} ∏_{(i,j) ∈ m} [ δz_i δz_j / ( 4π^2 (z_i − z_j)^2 ) + δz̄_i δz̄_j / ( 4π^2 (z̄_i − z̄_j)^2 ) + δz̄_i δz_j / ( 4π^2 (z̄_i − z_j)^2 ) + δz_i δz̄_j / ( 4π^2 (z_i − z̄_j)^2 ) ] + o(δ^{k+ε}/D^k). (5.16)

(The term (−1)^{k/2} in front comes from the previous (−1)^k in (5.15) and another factor (−1) in each of the k/2 brackets.) Summing over the t_i in (5.10), and since k is even, we obtain

μ[ (h_δ(a_1) − h_δ(b_1)) · · · (h_δ(a_k) − h_δ(b_k)) ] = (−1)^{k/2} Σ_{m ∈ M(1,...,k)} ∏_{(i,j) ∈ m} ∫_{γ_i} ∫_{γ_j} [ dz_i dz_j / ( 4π^2 (z_i − z_j)^2 ) + dz̄_i dz̄_j / ( 4π^2 (z̄_i − z̄_j)^2 ) + dz̄_i dz_j / ( 4π^2 (z̄_i − z_j)^2 ) + dz_i dz̄_j / ( 4π^2 (z_i − z̄_j)^2 ) + O(δ/D^3) ] + o(δ^ε/D^k). (5.17)

To understand the bound on the error above: the term outside of the brackets corresponds to summing the error in (5.15) over the k paths (each of length at most O(δ^{−1})); the term inside corresponds to approximating a Riemann sum by an integral. When we do so, for each sum/integral we make an error of size at most O(δ) sup|f′| per unit of length, where f is the function being integrated, so that here sup|f′| = O(D^{−3}).
Furthermore, as these are double integrals, we need to multiply this error for a single integral by the overall value of the other integral, which we bound crudely by $O(1/D)$.

Now observe that
$$\int_{\gamma_i}\int_{\gamma_j}\frac{dz_i\,dz_j}{(z_i-z_j)^2}=\log\frac{(a_i-a_j)(b_i-b_j)}{(a_i-b_j)(b_i-a_j)}.$$
Noting that the four integrals give two pairs of conjugate complex numbers, and recalling that $x+\bar x = 2\Re(x)$, we obtain
$$\mu[(h_\delta(a_1)-h_\delta(b_1))\cdots(h_\delta(a_k)-h_\delta(b_k))] = \sum_{m\in\mathcal M(1,\dots,k)}\prod_{(i,j)\in m} -\frac{1}{2\pi}\,\Re\log\frac{(a_i-a_j)(b_i-b_j)(\bar a_i-a_j)(\bar b_i-b_j)}{(a_i-b_j)(b_i-a_j)(\bar a_i-b_j)(\bar b_i-a_j)} + \text{err.}, \qquad (5.18)$$
where
$$\text{err.} = o\Big(\frac{\delta^\varepsilon}{D^k}\Big)+O\Big(\frac{\delta}{D^4}\Big)\,O(\log D)^{k/2-1} = o(\delta^{\varepsilon-k\beta})$$
for $\beta$ sufficiently small, since $D\ge\delta^\beta$. In particular, if $\beta$ is sufficiently small (depending on $k$ but not on anything else), then this error goes to zero as $\delta\to0$. This concludes the proof of Proposition 5.6.

In this section we finish the proof of the scaling limit result from Theorem 1.3. One can think of the height function on $\delta\mathbb Z^2\cap\mathbb H$ (which is defined up to a constant) as a random distribution (generalised function) acting on bounded test functions $f$ with compact support and mean zero. We follow [5] and write the action as
$$(h_\delta,f)=\int_{\mathbb H}\int_{\mathbb H}\big(h_\delta(a)-h_\delta(b)\big)\,\frac{f^+(a)\,f^-(b)}{Z_f}\,da\,db, \qquad (5.19)$$
where $f^\pm=\max\{\pm f,0\}$ and $Z_f=\int_{\mathbb H}f^+(a)\,da=\int_{\mathbb H}f^-(a)\,da$. Note that this is well defined, as the additive indeterminate constant in $h_\delta$ cancels out in this expression. One can also check that this gives the same result as just integrating the height function against $f$. Note that by Fubini's theorem $(h_\delta,f)$ is centered, as $\mu(h_\delta(a)-h_\delta(b))=0$ for all $a,b\in\mathbb H$ by our choice of the reference flow from Section 1.1. The result in Proposition 5.6 is the key step to prove the main result of the paper, which we rephrase below for convenience.
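As an aside, the double contour integral identity used in the derivation of (5.18) can be checked numerically. The following sketch (illustrative only; the endpoints are chosen so that $z_i-z_j$ stays in a single quadrant, keeping the principal branch of the logarithm a valid antiderivative) compares a midpoint-rule approximation of the integral over two straight contours with the closed form:

```python
import numpy as np

# endpoints of the two straight contours; here z - w stays in the third
# quadrant, so the principal log is a continuous antiderivative throughout
a1, b1 = -1 + 2j, -2 + 1j
a2, b2 = 1 + 5j, 3 + 4j

N = 500
s = (np.arange(N) + 0.5) / N            # midpoint rule on [0, 1]
z = b1 + s * (a1 - b1)                  # gamma_i, from b1 to a1
w = b2 + s * (a2 - b2)                  # gamma_j, from b2 to a2
dz, dw = (a1 - b1) / N, (a2 - b2) / N

# double contour integral of dz dw / (z - w)^2 as a nested Riemann sum
Z, W = np.meshgrid(z, w, indexing="ij")
integral = np.sum(dz * dw / (Z - W) ** 2)

# log of the cross-ratio, written as a sum of principal logs
closed = (np.log(a1 - a2) + np.log(b1 - b2)
          - np.log(a1 - b2) - np.log(b1 - a2))
print(abs(integral - closed))           # small (quadrature error only)
```

The identity holds because $\partial_s\partial_t \log(z(s)-w(t)) = z'(s)w'(t)/(z(s)-w(t))^2$, so the double integral telescopes to the four corner values of the logarithm.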
In fact, in [21], no further justification beyond the analogue of Proposition 5.6 is provided (this is also the case in [34]). The fact that an argument is missing was already pointed out by de Tilière in [9] (see Lemma 20 in that paper). Here, we follow an approach similar to the one used in [27] and in Toninelli's lecture notes [39] (see in particular Theorem 5.4 and the following discussion), but tailored to our setup, since our a priori error estimates are somewhat different.

Theorem 5.8. Let $\Phi^{\mathrm{Neu}}_{\mathbb H}$ be the Neumann Gaussian free field in $\mathbb H$, and let $f_1,\dots,f_k\in\mathcal D(\mathbb H)$ (smooth test functions of compact support and mean zero). Then for $l_1,\dots,l_k\in\mathbb N$,
$$\mu\Big[\prod_{i=1}^k (h_\delta,f_i)^{l_i}\Big] \to \mathbb E\Big[\prod_{i=1}^k \Big(\tfrac{1}{\sqrt{2\pi}}\,\Phi^{\mathrm{Neu}}_{\mathbb H},f_i\Big)^{l_i}\Big], \quad\text{as }\delta\to0,$$
where $\mathbb E$ is the expectation associated with $\Phi^{\mathrm{Neu}}_{\mathbb H}$.

Proof. For a function $g$, we write $g(a;b) = g(a) - g(b)$. To simplify the exposition, we only treat the case of the second moment; the other cases are similar but with heavier notation. To start with, note that
$$\mu[(h_\delta,f_1)(h_\delta,f_2)]=\int_{\mathbb H^4}\mu[h_\delta(a_1;b_1)\,h_\delta(a_2;b_2)]\,\frac{f_1^+(a_1)f_1^-(b_1)}{Z_{f_1}}\,\frac{f_2^+(a_2)f_2^-(b_2)}{Z_{f_2}}\,da_1\,db_1\,da_2\,db_2. \qquad (5.20)$$
Let $\rho>0$ be such that $\Im(z)\ge\rho$ whenever $z\in\operatorname{Supp}(f_1)\cup\operatorname{Supp}(f_2)$, and let
$$H(a_1,b_1,a_2,b_2)=-\frac{1}{2\pi}\,\Re\log\frac{(a_1-a_2)(b_1-b_2)(\bar a_1-a_2)(\bar b_1-b_2)}{(a_1-b_2)(b_1-a_2)(\bar a_1-b_2)(\bar b_1-a_2)}.$$
By Proposition 5.6, since all relevant points have imaginary parts greater than $\rho$, we have
$$\mu[h_\delta(a_1;b_1)\,h_\delta(a_2;b_2)]=H(a_1,b_1,a_2,b_2)+o(1), \qquad (5.21)$$
where the error $o(1)$ is uniform over
$$D_\delta:=\{(a_1,b_1,a_2,b_2)\in\mathbb H^4 : D\ge\delta^\beta\},$$
where, as before, $D = D(a_1,b_1,a_2,b_2)$ denotes the minimal distance in the complex plane between any pair of points within $\{a_1,b_1,a_2,b_2\}$. We now split the integral in (5.20) into the integral over $D_\delta$ and over $D_\delta^c$.
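The kernel $H$ is, up to the factor $1/(2\pi)$, the covariance of differences of a Neumann field. This can be confirmed numerically; in the sketch below the Neumann-type Green function is taken to be $G(x,y) = -\log|x-y| - \log|x-\bar y|$ up to an additive constant, which is an assumption of the illustration (one of several common normalizations), not a formula from the paper:

```python
import numpy as np

# Neumann-type Green function on the upper half-plane, up to an additive
# constant; the normalization G(x,y) = -log|x-y| - log|x-conj(y)| is an
# assumption of this sketch
def G(x, y):
    return -np.log(abs(x - y)) - np.log(abs(x - np.conj(y)))

def H(a1, b1, a2, b2):
    cr = ((a1 - a2) * (b1 - b2) * (np.conj(a1) - a2) * (np.conj(b1) - b2)
          / ((a1 - b2) * (b1 - a2) * (np.conj(a1) - b2) * (np.conj(b1) - a2)))
    return -np.log(abs(cr)) / (2 * np.pi)   # Re log = log | . |

a1, b1, a2, b2 = 1 + 2j, -1 + 1j, 3 + 1j, 2 + 3j
lhs = H(a1, b1, a2, b2)
rhs = (G(a1, a2) + G(b1, b2) - G(a1, b2) - G(b1, a2)) / (2 * np.pi)
print(abs(lhs - rhs))   # ~0: the additive constants in G cancel
```

The agreement is exact (up to floating point) because $|\bar x - y| = |x - \bar y|$, so the eight factors of the cross-ratio match the eight logarithms in the Green function combination term by term.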
Now the important observation is that since the error is uniform, the limit of the integral over $D_\delta$ is given by
$$\int_{\mathbb H^4} H(a_1,\dots,b_2)\,\frac{f_1^+(a_1)f_1^-(b_1)}{Z_{f_1}}\,\frac{f_2^+(a_2)f_2^-(b_2)}{Z_{f_2}}\,da_1\,db_1\,da_2\,db_2 = \frac{1}{2\pi}\,\mathbb E[(\Phi,f_1)(\Phi,f_2)].$$
Therefore, we are left with proving that the contribution to the integral (5.20) coming from $D_\delta^c$ is negligible. To do that, we proceed somewhat crudely, noting that by Cauchy–Schwarz,
$$\mu[h_\delta(a_1;b_1)\,h_\delta(a_2;b_2)]\le\big(\operatorname{Var}_\mu(h_\delta(a_1;b_1))\,\operatorname{Var}_\mu(h_\delta(a_2;b_2))\big)^{1/2}. \qquad (5.22)$$
Since the volume of $D_\delta^c$ is polynomially small in $\delta$, it will therefore suffice to show that
$$\operatorname{Var}_\mu(h_\delta(a;b))=O(|\log\delta|^C) \qquad (5.23)$$
for some $C>0$, uniformly over $a,b$ within some fixed compact subset of $\mathbb H$. To prove this, we will go back to the definition of the height function as a sum of increments over a path, and we will use our a priori bound on $K^{-1}$ coming from Proposition 3.4, which gives
$$K^{-1}(u,v)=O\Big(\frac{(\log\operatorname{dist}(u,v))^C}{\operatorname{dist}(u,v)}\Big).$$
Fix two paths $\gamma_1,\gamma_2$ from $a$ to $b$. It will be advantageous to take these paths at positive macroscopic distance from one another except near the endpoints, where they must necessarily come together. We will explain more precisely below how we construct them. By the triangle inequality and Theorem 5.3, we have
$$\operatorname{Var}_\mu(h_\delta(a;b))\le\sum_{e_1\in\gamma_1,\,e_2\in\gamma_2}\big|\operatorname{Cov}_\mu\big(1_{\{e_1\in\mathcal M\}},1_{\{e_2\in\mathcal M\}}\big)\big| \lesssim \sum_{e_1\in\gamma_1,\,e_2\in\gamma_2}\frac{(\log\operatorname{dist}(e_1,e_2))^C}{\operatorname{dist}(e_1,e_2)^2} \lesssim |\log\delta|^C\sum_{e_1\in\gamma_1,\,e_2\in\gamma_2}\frac{1}{\operatorname{dist}(e_1,e_2)^2}.$$
We may assume that near $a$ and $b$, the paths $\gamma_1$ and $\gamma_2$ form straight segments with different directions (say opposite directions), until they reach a fixed positive distance $\alpha$, taken to be small enough that these paths remain at positive distance from the real line. The segments near $a$ and $b$ are then joined by portions of paths staying at distance (in the plane) at least $\alpha/2$ from one another, to form $\gamma_1$ and $\gamma_2$.
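With paths that leave each endpoint in different directions, the sum of inverse square distances behaves like a harmonic sum in the mesh size. A small numerical illustration (a sketch with two opposite straight rays of $n$ lattice steps, not the actual paths of the proof):

```python
import numpy as np

def transversal_sum(n):
    # two straight paths leaving a common endpoint in opposite directions,
    # n lattice steps each; dist(e1, e2) = i + j for steps i and j, so the
    # number of pairs at distance exactly r is min(r - 1, 2n + 1 - r)
    r = np.arange(2, 2 * n + 1)
    pairs = np.minimum(r - 1, 2 * n + 1 - r)
    return np.sum(pairs / r ** 2)

for n in (100, 1000, 10000):
    print(n, transversal_sum(n) / np.log(n))   # ratio stays bounded
```

The ratio to $\log n$ stabilizes near a constant, which is exactly the logarithmic growth in the mesh size that the variance bound requires.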
Then
$$\sum_{e_1\in\gamma_1,\,e_2\in\gamma_2}\frac{1}{\operatorname{dist}(e_1,e_2)^2}\;\le\;\sum_{r=1}^{O(1/\delta)}\frac{\#\{(e_1,e_2):\operatorname{dist}(e_1,e_2)=r\}}{r^2}\;\lesssim\;\sum_{r=1}^{\alpha/(2\delta)}\frac{r}{r^2}+O(1)\;\lesssim\;\log(\delta^{-1}). \qquad (5.24)$$
We provide a brief explanation for the crucial point above, which is the bound on $\#\{(e_1,e_2):\operatorname{dist}(e_1,e_2)=r\}$, and which comes from the choice of the paths $\gamma_1$ and $\gamma_2$. Indeed, if $1\le r\le\alpha/(2\delta)$, each point $e_1$ in the segment of $\gamma_1$ at distance at most $r$ from $a$ will give at most one corresponding point $e_2$ on $\gamma_2$ such that $\operatorname{dist}(e_1,e_2)=r$. On the other hand, for $r\ge\alpha/(2\delta)$, there are at most $O(\delta^{-2})$ pairs of edges on the whole paths, so $\#\{(e_1,e_2):\operatorname{dist}(e_1,e_2)\ge\alpha/(2\delta)\}=O(\delta^{-2})$, and the contribution of such edges to the sum is indeed $O(1)$ as claimed above.

This proves (5.23) and therefore completes the proof of Theorem 5.8 in the case of second moments. In the general case of a moment of order $k\ge2$, the same proof works, where we replace the use of Cauchy–Schwarz in (5.22) by a Hölder inequality, so that it suffices to show that $\mu[(h_\delta(a;b))^{2k}]\lesssim(\log(1/\delta))^{C_k}$ (we wrote here a moment of order $2k$ rather than $k$ to account for the possibility that $k$ is odd). We therefore need to choose $2k$ paths leading from $a$ to $b$. As above, these paths may be chosen as straight line segments up to a small distance $\alpha$ (in the plane) away from $a$ or $b$, with distinct directions; we simply choose the angles between these segments to be $\pi/k$, and otherwise require that these paths stay at positive distance from one another. It is easy to check that the analogue of (5.24) holds also in this case.

A standard argument says that since all moments of $h_\delta$ converge to the corresponding moments of $\frac{1}{\sqrt{2\pi}}\,\Phi^{\mathrm{Neu}}_{\mathbb H}$, and since $\Phi^{\mathrm{Neu}}_{\mathbb H}$ is a Gaussian process, we can conclude that $h_\delta\to\frac{1}{\sqrt{2\pi}}\,\Phi^{\mathrm{Neu}}_{\mathbb H}$ in distribution as $\delta\to0$.

References

[1] J. Aru, A. Sepúlveda, and W. Werner. On bounded-type thin local sets of the two-dimensional Gaussian free field.
Journal of the Institute of Mathematics of Jussieu, 18(3):591–618, 2019.
[2] M. T. Barlow. Random walks and heat kernels on graphs, volume 438. Cambridge University Press, 2017.
[3] M. Basok and D. Chelkak. Tau-functions à la Dubédat and probabilities of cylindrical events for double-dimers and CLE(4). arXiv preprint arXiv:1809.00690, 2018.
[4] N. Berestycki, B. Laslier, and G. Ray. The dimer model on Riemann surfaces, I. arXiv, 2020.
[5] N. Berestycki, B. Laslier, and G. Ray. Dimers and imaginary geometry. The Annals of Probability, 48(1):1–52, 2020.
[6] N. Berestycki, B. Laslier, and M. Russkikh. An imaginary geometry perspective on piecewise Temperleyan dimers. In preparation, 2016+.
[7] N. Berestycki and E. Powell. Gaussian free field, multiplicative chaos and Liouville quantum gravity. To appear, 2020.
[8] C. Boutillier and B. de Tilière. Height representation of XOR-Ising loops via bipartite dimers. Electronic Journal of Probability, 19, 2014.
[9] B. de Tilière. Scaling limit of isoradial dimer models and the case of triangular quadri-tilings. Annales de l'IHP Probabilités et statistiques, 43(6):729–750, 2007.
[10] J. Dubédat. Exact bosonization of the Ising model. arXiv preprint arXiv:1112.4399, 2011.
[11] J. Dubédat. Double dimers, conformal loop ensembles and isomonodromic deformations. Journal of the European Mathematical Society, 21(1):1–54, 2018.
[12] H. Duminil-Copin and M. Lis. On the double random current nesting field. Probability Theory and Related Fields, 175(3-4):937–955, 2019.
[13] M. Folz. Gaussian upper bounds for heat kernels of continuous time simple random walks. Electronic Journal of Probability, 16:1693–1722, 2011.
[14] D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, pages 100–118, 1975.
[15] A. Giuliani, I. Jauslin, and E. H. Lieb. A Pfaffian formula for monomer-dimer partition functions. Journal of Statistical Physics, 163(2):211–238, Apr 2016.
[16] A. Grigor'yan.
Gaussian upper bounds for the heat kernel on arbitrary manifolds. J. Diff. Geom., 45:33–52, 1997.
[17] O. J. Heilmann and E. H. Lieb. Monomers and dimers. Physical Review Letters, 24(25):1412, 1970.
[18] O. J. Heilmann and E. H. Lieb. Theory of monomer-dimer systems. Communications in Mathematical Physics, 25(3):190–232, 1972.
[19] M. Jerrum. Two-dimensional monomer-dimer systems are computationally intractable. Journal of Statistical Physics, 48(1-2):121–134, 1987.
[20] R. Kenyon. Local statistics of lattice dimers. Annales de l'Institut Henri Poincaré (B) Probability and Statistics, 33(5):591–618, 1997.
[21] R. Kenyon. Conformal invariance of domino tiling. The Annals of Probability, pages 759–795, 2000.
[22] R. Kenyon. Dominos and the Gaussian free field. The Annals of Probability, 29(3):1128–1137, 2001.
[23] R. Kenyon. Conformal invariance of loops in the double-dimer model. Communications in Mathematical Physics, 326(2):477–497, 2014.
[24] R. Kenyon and D. Wilson. Boundary partitions in trees and dimers. Transactions of the American Mathematical Society, 363(3):1325–1364, 2011.
[25] R. W. Kenyon, J. G. Propp, and D. B. Wilson. Trees and matchings. Electron. J. Combin., 7, 2000.
[26] R. W. Kenyon and S. Sheffield. Dimers, tilings and trees. J. Combin. Theory Ser. B, 92(2):295–317, 2004.
[27] B. Laslier and F. L. Toninelli. Lozenge tilings, Glauber dynamics and macroscopic shape. Communications in Mathematical Physics, 338(3):1287–1326, 2015.
[28] G. F. Lawler and V. Limic. Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010.
[29] R. Lyons and Y. Peres. Probability on trees and networks, volume 42. Cambridge University Press, 2017.
[30] J. Miller, S. Sheffield, and W. Werner. CLE percolations. Forum Math. Pi, 5:e4, 102, 2017.
[31] J. K. Percus. One more technique for the dimer problem. Journal of Mathematical Physics, 10(10):1881–1884, 1969.
[32] S. Popov.
Two-Dimensional Random Walk: From Path Counting to Random Interlacements. Cambridge University Press.
[33] W. Qian and W. Werner. Coupling the Gaussian free fields with free and with zero boundary conditions via common level lines. Communications in Mathematical Physics, 361(1):53–80, 2018.
[34] M. Russkikh. Dimers in piecewise Temperleyan domains. Communications in Mathematical Physics, 359(1):189–222, 2018.
[35] S. Sheffield. Exploration trees and conformal loop ensembles. Duke Mathematical Journal, 147(1):79–129, 2009.
[36] S. Sheffield and W. Werner. Conformal loop ensembles: the Markovian characterization and the loop-soup construction. Annals of Mathematics, pages 1827–1917, 2012.
[37] S. Smirnov. Conformal invariance in random cluster models. I. Holomorphic fermions in the Ising model. Ann. of Math. (2), 172(2):1435–1467, 2010.
[38] W. P. Thurston. Conway's tiling groups.