Heat kernel for Liouville Brownian motion and Liouville graph distance
HHeat kernel for Liouville Brownian motion andLiouville graph distance
Jian Ding ∗ University of Pennsylvania Ofer Zeitouni † Weizmann InstituteCourant Institute Fuxi Zhang ‡ Peking UniversityJune 28, 2018
Abstract
We show the existence of the scaling exponent χ ∈ (0 , γ / − (cid:112) γ / /γ ] ofthe graph distance associated with subcritical two-dimensional Liouville quantum gravity ofparamater γ < V = [0 , . We also show that the Liouville heat kernel satisfies, for anyfixed u, v ∈ V o , the short time estimateslim t → log | log p γt ( u, v ) || log t | = χ − χ , a.s. Let V = [0 , ⊆ R and let V o denote its interior. Let h be an instance of the Gaussian freefield (GFF) on V with Dirichlet boundary condition. For an introduction to the theory of the GFFincluding various formal constructions, see, e.g., [36, 5]. Fix γ ∈ (0 ,
2) and let M γ denote the γ Liouville quantum gravity (LQG) given by formally exponentiating the GFF h [17] . One can thenintroduce the positive continuous additive functional (PCAF) with respect to M γ as F ( t ) := (cid:90) t e γh ( X s ) − γ E h ( X s ) ds, (1)where { X t } denotes a standard Brownian motion (SBM) on V killed upon exiting V , independentof h . The Liouville Brownian motion (LBM) is then defined formally as Y t := X F − ( t ) , and theLiouville heat kernel (LHK) p γt ( x, y ) is the density of the Liouville semigroup with respect to M γ ,i.e. E x f ( Y t ) = (cid:90) p γt ( x, y ) f ( y ) M γ ( dy ) , (2) ∗ Partially supported by an NSF grant DMS-1757479, an Alfred Sloan fellowship, and NSF of China 11628101. † Partially supported by the ERC advanced grant LogCorrelatedFields and by the Herman P. Taubman professorialchair at the Weizmann Institute. ‡ Partially supported by NSF of China 11771027. Thus, in our terminology, the LQG is the Gaussian Multiplicative Chaos (GMC) built from the Gaussianfree field. As pointed out to us by Remi Rhodes, in the physics literature the LQG is often meant to represent amodification of this measure, e.g. by normalizition with respect to the total mass of the GMC. In this paper wefollow the terminology established in [17], and only note that global, absolutely continuous modifications such as anormalization by the area would not change the value of the exponents in Theorem 1.1 below. a r X i v : . [ m a t h . P R ] J u l here the superscript x is to recall that Y = X = x . We refer to Section 2 for pointers to the(non-trivial) precise construction and properties of these objects.For δ > u, v ∈ V o , we define the Liouville graph distance D γ,δ ( u, v )to be the minimal number of Euclidean balls with rational centers and LQG measure at most δ ,whose union contains a path from u to v . Theorem 1.1.
Fix γ ∈ (0 , . There exists χ = χ ( γ ) ∈ (cid:0) , γ / − √ γ / γ (cid:3) such that thefollowing holds. For any ι > and any fixed points u (cid:54) = v ∈ V o , there exists a random variable C = C ( ι, u, v ) measurable with respect to h such that for all δ, t ∈ (0 , , C − δ − χ + ι ≤ D γ,δ ( u, v ) ≤ Cδ − χ − ι , (3) C − exp (cid:110) − t − χ − χ − ι (cid:111) ≤ p γt ( u, v ) ≤ C exp (cid:110) − t − χ − χ + ι (cid:111) . (4)As we now discuss, Theorem 1.1 is an amalgamation of several results, proved in differentsections of the paper. • The Liouville graph distance exponent χ is well defined (see Proposition 5.1) and the (log ofthe) distance concentrates around its mean (see Proposition 3.17). • The distance exponent χ does not depend on the particular choice of u and v as long as theyare fixed and away from the boundary (see Proposition 5.1). • Both lower bounds and upper bounds on the Liouville heat kernel can be obtained from thedistance exponent (see (80) and (106)): such bounds are sharp in terms of the power on t inthe exponential as in (4). • The lower bound that χ > χ is a reading from the KPZ relation established in [17], which is applied to bound theminimal number of Euclidean balls of LQG measure at most δ required in order to coverthe line segment joining u and v . Evaluating χ is a major open problem and is not the focusof the present article. We record the bounds here only to show that χ is nontrivial (i.e.,0 < χ < • For γ small, non-trivial upper bounds on χ appear in [13]. In particular, combining Theorem1.1, [13, Theorem 1.2] and [25], one obtains that there exist constants c ∗ , c (cid:48) > χ ∈ (1 − c (cid:48) γ, − c ∗ γ / / | log γ | ) for small γ . In particular, as discussed in [13], this is incompatiblewith Watabiki’s conjecture. For some work toward bounding exponents for a related distance,see [21]. • It is a consequence of [16] and [13] that the Liouville graph distance is not universal acrossdifferent log-correlated fields. Because of Theorem 1.1 and [14], the same holds for theLiouville heat kernel exponent. so that D γ,δ ( u, v ) is a measurable random variable .1 Background and related results Making a rigorous sense of the metric associated with the LQG is a well known major open problem,see [33] for an up-to-date review. In a recent series of works of Miller and Sheffield, the specialcase γ = (cid:112) / u, v ) is assigned a resistanceexponential in the sum of the GFF values at u and v — the return probability for this randomwalk was computed via a computation of the effective resistance of this random network. Before describing our proof strategy, we discuss some of the basic objects that we work with. Thefirst object is the Gaussian free field. There are many approaches for its construction, which wequickly review in Section 2.2. Of importance to us is its construction in terms of integral overspace-time white noise, where the ‘time’ coordinate denotes scale. This allows naturally the splitof the GFF into an independent sum of a ‘coarse’ field, consisting of contributions down to a cutoffscale, and a ‘fine’ field, consisting of the rest.Next, the Gaussian multiplicative chaos built from the Gaussian free field, which we refer to asthe Liouville quantum gravity, can be constructed as a martingale limit of the exponential of thecoarse field associated with the GFF, see e.g. (27). In particular, it can be described as a productof a function depending only on the coarse field of the GFF, by an independent measure determined3y the fine field; this yields a natural separation of scales, which however as we explain below is notquite sufficient for our analysis. For this reason, we often work with appropriate approximations ofthe LQG, see for example (13). In this sketch, we only mention such details when they are crucialto the argument.We can now begin to discuss our proof strategies, starting with the Liouville graph distance.As is often the case, the proof of a scaling statement as in (3) is based on sub-additivity, which inthis case will be with respect to the scale parameter. However, the Liouville graph distance fromthe introduction is not convenient to work with, because of the lack of scale-separation propertiesthat are crucial for sub-additivity. Therefore, our first step is to relate the Liouville graph distanceto an approximate Liouville graph distance, obtained through a specific partitioning procedureof the square according to the LQG content of dyadic squares, see Section 3.1 for details of theconstruction. Since the approximation involves a sequence of refinements, sub-additivity for theapproximate Liouville graph distance is almost built in. However, we need to show that the ap-proximate distance is indeed a good proxy for the distance. This is done in Proposition 3.2. Mostof Section 3 is devoted to its proof, which employs appropriate approximations of the LQG anda-priori estimates of fluctuations of the coarse field of the GFF. A particularly annoying fact is thatthe coarse field fluctuations, which typically are well behaved, cannot be well controlled uniformly,and at places one needs to replace the actual minimizing sequence by a proxy, bypassing some badregions of large fluctuations. This is done in Lemmas 3.12 and 3.13, which employ percolationarguments.The approximate graph distance thus constructed also has better continuity properties in termsof the underlying GFF, and is instrumental in proving that the (logarithm of the) graph distanceconcentrates around its mean, see Proposition 3.17.Once these preliminary tasks are complete, we turn in Section 4 to the study of off-diagonalshort time Liouville heat kernel estimates. (We study the LHK before showing the convergence ofthe distance exponent in order to emphasize that the study of the LHK is independent of the latter.)Recall that the Liouville Brownian motion is constructed from simple Brownian motion by a timechange that depends on the Liouville quantum gravity. In Section 4.1, we prove a lower bound onthe LHK, by a technique introduced in [14]. We construct boxes according to the partition yieldingthe approximate Liouville graph distance. (In reality, we construct smaller sub-boxes in order tohandle differing sizes of blocks in the partition, and bypass some bad regions in the geodesic, usingLemmas 3.12 and 3.13). In order to control the behavior of the LBM, we introduce the notion of‘fast boxes’, which are boxes in which, from many starting points, the LBM does not accumulatemore time change than typical. Boxes are fast with high probability, and using a Peierls argument,we show that they percolate; the lower bound on the LHK is obtained by forcing the LBM to followsuch a path. For the upper bound, we introduce a parallel notion of ‘slow boxes’, which are cellsin which, for enough starting points, the LBM typically accumulates at least a small fraction ofthe typical time-change. Most cells in the partition determining the approximate Liouville graphdistance are slow, and by tracking the accumulated time change, we obtain a lower bound on thetotal accumulated time-change, which translates to a LHK upper bound. We emphasize that theupper bound is obtained in terms of a liminf of the Liouville graph distance exponent, while thelower bound is obtained in terms of a limsup.Finally, in Section 5, we return to the Liouville graph distance. Using concentration inequalities,it is enough to prove convergence for the rescaled expectation of (the logarithm of) the approximateLiouville graph distance. Separation of scales is built into the definition, however translation4nvariance is not (due to boundary effects). Further, even though the approximate Liouville graphdistance uses refinements in its construction and thus separation of scales, it still suffers from lackof independence across scales. These two factors prevent the direct use of sub-additivity. To obtainthe latter, we introduce yet another version of the Liouville graph distance, which does possess therequired invariance property and, while at a given scale, does not depend on the fine field in slightlysmaller scales. A coupling argument allows us to couple the two distances, and sub-additivity canthen be employed to give a point-to-point convergence of the rescaled log-distance (see Lemma 5.3),for points near the center of the box. This is already enough to give an upper bound for arbitrarypoints. To give a lower bound, it is not enough to control point-to-point distances, and we need tocontrol point to boundary distances for small enough sub-boxes. The latter estimate involves thepoint-to-point estimate and a percolation argument, see Lemma 5.4.Various preliminaries are collected for the convenience of the reader in Section 2. We alsoinclude, in Section 2.5, a derivation of rough estimates on the distance exponent. These estimatesare not expected to be sharp.
We say that the events E = E δ occur with high probability (with respect to δ ) if there exists aconstant c >
0, depending on γ, { E δ } only, so that P ( E δ ) ≥ − δ c for all small δ >
0. For α > E = E δ occur with α -high probability, if P ( E δ ) ≥ − δ α for all small δ > F ( · ) and G ( · ) we write F = O ( G ) (alternatively, Ω( G )) if thereexists an absolute constant C > F ≤ CG (respectively ≥ CG ) everywhere in theirdomain. We write F = Θ( G ) if F is both O ( G ) and Ω( G ). If the constant depends on variables x , x , . . . , x n , we change these notations to O x ,x ,...,x n ( G ) and Ω x ,x ,...,x n ( G ) respectively. Wedenote by C, c, C (cid:48) , c i etc positive universal constants. For parameters or variables p i , we write C = C ( p , . . . , p k ) if C is a positive constant that depends only on p , . . . , p k . For example, C ( γ )is a positive constant that may depend on γ .For v ∈ R and r >
0, we denote by B r ( v ) the (open) Euclidean ball centered at v of radius r . For i ≥ C i the collection of centers for all dyadic squares of side length 2 − i contained in V . That is, with o LB = (0 , C i = { o LB + (2 − i − , − i − ) + ( j · − i , k · − i ) : 0 ≤ j, k ≤ i − } . (5)Note that | C i | = 2 i .A box B is a square in R . We denote by s B the side of B and by c B its center. We say that abox B is a dyadic box if, for some i ∈ N , s B = 2 − i and c B ∈ C i . We say that a Euclidean ball B isa dyadic ball if, for some i ∈ N , the radius of B is 2 − i and the center of B is in C i . Finally, we use | · | to denote the Euclidean distance and | · | ∞ to denote the (cid:96) ∞ norm. The next lemma is a consequence of the the Borell–Sudakov-Tsirelson Gaussian isoperimetric in-equality ([8, 37]). 5 emma 2.1.
For any constant c > there exists C > such that the following holds. Let X = ( X , . . . , X n ) be a centered Gaussian process with max ≤ i ≤ n Var X i = σ . Let B ⊆ R n suchthat P ( X ∈ B ) ≥ c . Then for λ ≥ Cσ , P (min x ∈ B | X − x | ∞ ≥ λ ) ≤ Ce − ( λ − Cσ )22 σ . Proof.
Let X = A Z where Z is a Gaussian vector whose components are i.i.d. standard Gaussianvariables. Set ˜ B = { ˜ x : A ˜ x ∈ B } . By the Cauchy-Schwarz inequality and the fact that the (cid:96) -normfor any row vector in A is at most σ , we obtain that | Az − B | ∞ ≥ λ implies | z − ˜ B | ≥ λ/σ for all z ∈ R n . Therefore, P (min x ∈ B | X − x | ∞ ≥ λ ) ≤ P (min ˜ x ∈ ˜ B | Z − ˜ x | ≥ λ/σ ) . (6)On the other hand, by assumption, P ( Z ∈ ˜ B ) ≥ c . Combining this with (6) and the standardBorell–Sudakov-Tsirelson inequality [37, 8], see also [24, (2.9)], yields the lemma.The next lemma is a consequence of Lemma 2.1. See, e.g., [24, (7.4), (2.26)] as well as discussionsin [24, Page 61]. Lemma 2.2.
Let { G z : z ∈ B } be a Gaussian field on a (countable) index set B . Set σ =sup z ∈ B Var( G z ) . Then, for all a > , P ( | sup z ∈ B G z − E sup z ∈ B G z | ≥ a ) ≤ e − a σ . We will often need to control the expectation of the maximum of a Gaussian field in terms ofits covariance structure. This is achieved by Fernique’s criterion [18]. We quote a version suited toour needs, which follows straightforwardly from the version in [1, Theorem 4.1].
Lemma 2.3.
There exists a universal constant C F > with the following property. Let B ⊂ V denote a box of side length b and assume { G v } v ∈ B is a mean zero Gaussian field satisfying E ( G v − G u ) ≤ | u − v | /b , for all u, v ∈ B .
Then there exists a version of { G v } which is spatially continuous such that E max v ∈ B G v ≤ C F . Remark 2.4.
When the condition of Lemma 2.3 holds, we always in the sequel consider thecontinuous version of the underlying Gaussian process. This allows us to consider the maximumof the process over various subsets, with the maximum being a bona fide random variable. We usebelow this convention without further comment.
The GFF h is not defined pointwise, however as a distribution it is regular enough so that its circleaverages are bona fide Gaussian variables. In particular, if | v − ∂ V | > δ let h δ ( v ) denote the average6f h along a circle of radius δ around v . Then, the circle average process { h δ ( v ) : v ∈ V , | v − ∂ V | > δ } is a centered Gaussian process with covarianceCov( h δ ( v ) , h δ (cid:48) ( v (cid:48) )) = π (cid:90) ∂B δ ( v ) × ∂B δ (cid:48) ( v (cid:48) ) G V ( z, z (cid:48) ) µ vδ ( dz ) µ v (cid:48) δ (cid:48) ( dz (cid:48) ) , (7)where the normalization factor of π is chosen to conform with the literature and ensure that theGFF is log-correlated. Here µ vr is the uniform probability measure on ∂B r ( v ), the boundary of B r ( v ), and G V ( z, z (cid:48) ) is the Green function for V , which is defined by G V ( z, z (cid:48) ) = (cid:90) (0 , ∞ ) p V ( s ; z, z (cid:48) ) ds . (8)Here and henceforth, for any A ⊂ R , p A ( s ; z, z (cid:48) ) is the transition probability density of Brownianmotion killed upon exiting A . More precisely, p A ( s ; z, · ) is the unique (up to sets of Lebesguemeasure 0) nonnegative measurable function satisfying (cid:90) B p A ( s ; z, z (cid:48) ) dz (cid:48) = P z ( B s ∈ B, τ A > s ) , (9)for all Borel measurable subsets B of R where P z ( · ) is the law of the two-dimensional standardBrownian motion { B t } t ≥ starting from z and τ A is the exit time of { B t } t ≥ from A . It was shownin [17] that there exists a version of the circle average process which is jointly H¨older continuousin v and δ of order ϑ < / { ( v, δ ) : v ∈ V , | v − ∂ V | > δ } . In particular,the LQG measure can be defined as the limit of M ◦ γ,δ ( dv ) = e γh δ ( v ) − γ log(1 /δ ) L ( dv ) , (10)where L denotes the two-dimensional Lebesgue measure (restricted to V ), and the superscript ◦ indicates a circle average approximation is taken. Similarly, the functional in (1) can be defined byreplacing there h with h δ and then taking the limit as δ → W distributed on R × R + refers to a centered Gaussian process { ( W, f ) : f ∈ L ( R × R + ) } whose covariance kernelis given by E ( W, f )( W, g ) = (cid:82) R × R + f gdzds . An alternative and suggestive notation for ( W, f ),which we will use in the sequel, is (cid:82) R × R + f W ( dz, ds ). For any B ∈ B ( R ) and I ∈ B ( R + ), we let (cid:82) B × I f W ( dz, ds ) denote the variable (cid:82) R × R + f B × I W ( dz, ds ), where f B × I is the restriction of f to B × I . Now define the Gaussian process { ˜ h ˜ δδ ( v ) : v ∈ V , ˜ δ > δ > } by˜ h ˜ δδ ( v ) = √ π (cid:90) V × ( δ , ˜ δ ) p V ( s/ v, w ) W ( dw, ds ) (11)(for notation convenience, we will drop the superscript ˜ δ when ˜ δ = ∞ ). Then ˜ h δ is anotherapproximation of the GFF as δ →
0, known as the white noise decomposition. The LQG measureas well as the functional in (1) can also be approximated by taking a limit with the white noisedecomposition, and it has been shown in [31, Theorem 5.5] and [35] that the limiting law is thesame as with the circle average approximation. For future reference we note that for u, v ∈ V , theChapman-Kolmogorov equations give that E (˜ h ˜ δδ ( u )˜ h ˜ δδ ( v )) = π (cid:90) ˜ δ δ p V ( t ; u, v ) dt. (12)7e will, in fact, consider an approximation of the white noise decomposition. To this end, wedefine for 0 < δ < ˜ δ ≤ ∞ η ˜ δδ ( v ) = √ π (cid:90) V × ( δ , ˜ δ ) p V ∩ B − s / | log s − |∧ − ( v ) ( s/ v, w ) W ( dw, ds ) , (13)where we recall that B r ( v ) is the Euclidean ball of radius r centered at v . Here we truncatethe transition density upon exiting B − s / | log s − |∧ − ( v ) (or exiting V ) so that each scale in thehierarchical structure of the process η ˜ δδ (that is, the process { η δ (cid:48) δ (cid:48) ( v ) : v ∈ V } for some δ ≤ δ (cid:48) ≤ ˜ δ/ ∧ − ” in the definition is to ensure (117) in Section 5 and isotherwise not important. Again, for notation convenience, we will drop the superscript ˜ δ when˜ δ = ∞ . Lemma 2.5.
With notation as above, we have that
Var(˜ h δ ( u ) − ˜ h δ ( v )) + Var( η δ ( u ) − η δ ( v )) = O ( | u − v | δ ) , uniformly in δ > , u, v ∈ V . Proof.
We will give a proof for the bound on Var(˜ h δ ( u ) − ˜ h δ ( v )). The bound on Var( η δ ( u ) − η δ ( v ))follows from a similar argument. Our proof follows [31, Appendix A], where a version of Lemma 2.5is proved, with | u − v | = O ( δ ) and where both u, v are away from ∂ V . We will adapt their argumentsand show that these restrictions are not needed. Because of (12), estimates on p V ( t ; u, v ) will playan important role. Note that p V ( t ; u, v ) = e − | u − v | t πt q ( t ; u, v ) where q ( t ; u, v ) = P ( B s − st B t + u + st ( v − u ) ∈ V for all s ≤ t ) . Therefore, we get that π (cid:90) ∞ δ ( p V ( t ; u, u ) − p V ( t ; u, v )) dt ≤ (cid:90) ∞ δ t ( q ( t ; u, u ) − q ( t ; u, v )) dt + (cid:90) ∞ δ t q ( t ; u, v )(1 − e − | u − v | t ) dt . Using the fact that 1 − e − x ≤ √ x for x >
0, we get that (cid:90) ∞ δ q ( t ; u, v ) 12 t (1 − e − | u − v | t ) dt ≤ (cid:90) ∞ δ | u − v | t / dt ≤ | u − v | δ . (14)Let τ = min { s ≤ t : B s − st B t + u (cid:54)∈ V } and τ (cid:48) = min { s ≤ t : B s − st B t + u + st ( v − u ) (cid:54)∈ V } wherewe use the convention that min ∅ = ∞ . Then we see that | q ( t ; u, u ) − q ( t ; u, v ) | ≤ P ( τ ≤ t, τ (cid:48) > t ) + P ( τ (cid:48) ≤ t, τ > t ) . (15)The two terms on the right hand side of (15) can be bounded in a similar way. As a result, we justbound P ( τ ≤ t, τ (cid:48) > t ). To this end, we denote by L , . . . , L the four boundary segments of V ,and let τ i = min { s ≤ t : B s − st B t + u ∈ L i } for i = 1 , . . . ,
4. It is clear that P ( τ ≤ t, τ (cid:48) > t ) ≤ (cid:80) i =1 P ( τ i ≤ t, τ (cid:48) > t ) . Assume that L is the left boundary of V . The event τ ≤ t implies that min s ∈ [0 ,t ] ( B s − st B t ) ≤ − u ,while the event τ (cid:48) > t implies that min s ∈ [0 ,t ] ( B s − st B t ) ≥ − (1 − st ) u − st v for some 0 < s ≤ t .8ere we use the notation w for the x -coordinate of some w ∈ R . Thus, the intersection is possibleonly if v > u , and in that case we obtain that P ( τ ≤ t, τ (cid:48) > t ) ≤ P ( min s ∈ [0 ,t ] ( B s − st B t ) ∈ [ − v , − u ]) = P ( max s ∈ [0 ,t ] ( B s − st B t ) ∈ [ u , v ]) . By the reflection principle, for v > u we have that P ( max s ∈ [0 ,t ] ( B s − st B t ) ∈ [ u , v ]) = (cid:90) v u − ddx (cid:18) p ( t ; 0 , x ) p ( t ; 0 , (cid:19) dx = e − u /t − e − v /t ≤ C | u − v |√ t . Repeating this argument for i = 1 , . . . ,
4, we conclude that P ( τ ≤ t, τ (cid:48) > t ) ≤ C | u − v |√ t , which gives, using (15), that q ( t ; u, u ) − q ( t ; u, v ) = O ( | u − v | / √ t ). Therefore, (cid:90) ∞ δ t [ q ( t ; u, u ) − q ( t ; u, v )] dt = O ( | u − v | δ ) . Combined with (14) we get that π (cid:90) ∞ δ [ p V ( t ; u, u ) − p V ( t ; u, v )] dt = O ( | u − v | δ ) . (16)Interchanging the roles of u and v , we obtain the same estimate for π (cid:82) ∞ δ [ p V ( t ; v, v ) − p V ( t ; u, v )] dt .Recalling (12), we haveVar(˜ h δ ( u ) − ˜ h δ ( v )) = π (cid:90) ∞ δ [ p V ( t ; u, u ) − p V ( t ; u, v )] dt + π (cid:90) ∞ δ [ p V ( t ; v, v ) − p V ( t ; u, v )] dt , and substituting (16), we complete the proof of the lemma. Lemma 2.6.
Uniformly in δ > , a > and k ≥ , we have sup u ∈ V P (cid:18) max v : | v − u |≤ kδ | η δ ( v ) − η δ ( u ) | ≥ a log( k + 1) (cid:19) = O (1) e − Ω( a ) . E max u,v ∈ V , | u − v |≤ δ (cid:16) | ˜ h δ ( u ) − ˜ h δ ( v ) | + | η δ ( v ) − η δ ( u ) | (cid:17) = O ( (cid:112) log δ − ) . Proof.
By Lemma 2.5, we can apply Lemma 2.3 and deduce that for all u ∈ VE max v ∈ V : | u − v |≤ δ (cid:16) | ˜ h δ ( u ) − ˜ h δ ( v ) | + | η δ ( v ) − η δ ( u ) | (cid:17) = O (1) . Combined with Lemma 2.2, this yields the second inequality by considering a union bound over u ∈ C (cid:100) log δ − (cid:101) +1 (recall the definition of C i in (5)). In addition, by a similar argument, we get thatuniformly in a, k, δ ,sup u ∈ V P (cid:32) max v : v ∈ C (cid:100) log2 δ − (cid:101) +1 , | v − u |≤ kδ max x : | x − v |≤ δ | η δ ( v ) − η δ ( x ) | ≥ a log( k + 1) / (cid:33) ≤ e − Ω( a ) . (17)9ince Var( η δ ( v ) − η δ ( u )) = O (log( k + 1)) for all | v − u | ≤ kδ , a union bound yields that uniformlyin the same parameters,sup u ∈ V P (cid:32) max v : v ∈ C (cid:100) log2 δ − (cid:101) +1 , | v − u |≤ kδ | η δ ( v ) − η δ ( u ) | ≥ a log( k + 1) / (cid:33) ≤ O (1) e − Ω( a ) . Combined with (17) and the fact thatmax v : | v − u |≤ kδ | η δ ( v ) − η δ ( u ) | ≤ max v : v ∈ C (cid:100) log2 δ − (cid:101) +1 , | v − u |≤ kδ max x : | x − v |≤ δ | η δ ( v ) − η δ ( x ) | + max v : v ∈ C (cid:100) log2 δ − (cid:101) +1 , | v − u |≤ kδ | η δ ( v ) − η δ ( u ) | , this yields the first inequality of the lemma.Recall the definition of C i in (5). By a simple union bound, we get that E max v ∈ C (cid:98) log2 δ − (cid:99) ˜ h δ ( v ) ≤ δ − + O (1) for all δ > . Combined with Lemma 2.2 and Lemma 2.6, we obtain that uniformly in λ > δ > P (max v ∈ V ˜ h δ ( v ) ≥ δ − + λ ) ≤ O (1) e − λ
22 log δ − O (1) . (18) Lemma 2.7.
We have P (max v ∈ V max j ≥ | ˜ h − j ( v ) − η − j ( v ) | ≥ λ ) ≤ O (1) e − Ω( λ ) .Proof. We may and will assume that λ > C for some constant C large enough. For i ≥ i ( v ) = ˜ h − i +1 − i ( v ) − η − i +1 − i ( v ) and write ∆ ( v ) = ˜ h ( v ) − η ( v ). Let τ i = min { t > | B t − t i B − i | ∞ ≥ i − i / } where { B t } is a standard Brownian motion. Uniformly in v ∈ V and i we have Var∆ i ( v ) = O (1) P ( τ i ≤ − i ) = O (1) e − Ω( i ) . (19)By Lemma 2.5 and (19), we get that uniformly in u, v ∈ V Var(∆ i ( v ) − ∆ i ( u )) ≤ O (1) min { e − Ω(( i +1) ) , i | u − v |} . (20)Combined with Lemmas 2.2 and 2.3, this gives that P ( max u ∈ C i + (cid:98) i (cid:99) max v : | v − u |≤ i − · − i | ∆ i ( u ) − ∆ i ( v ) | ≥ λ ( i + 1) − ) ≤ O (1) e − Ω(1) λ ( i +1) . (21)In addition, by (19) and a union bound, we get that P ( max u ∈ C i + (cid:98) i (cid:99) | ∆ i ( u ) | ≥ λ ( i + 1) − ) ≤ O (1) e − Ω(1) λ ( i +1) . (22)Note that for any j ≥ v ∈ V max j ≥ | ˜ h − j ( v ) − η − j ( v ) | ≤ (cid:88) i ≥ ( max u ∈ C i + (cid:98) i (cid:99) max v : | v − u |≤ i − · − i | ∆ i ( u ) − ∆ i ( v ) | + max u ∈ C i + (cid:98) i (cid:99) | ∆ i ( u ) | ) . Combined with (21) and (22), this completes the proof of the lemma.10efine ˆ h ˜ δδ ( v ) = √ π (cid:90) R × ( δ , ˜ δ ) p ( s/ v, w ) W ( dw, ds ) . (23)The process ˆ h ˜ δδ has better invariance properties than the process ˜ h ˜ δδ from (11). By a direct compu-tation we obtain that for all ˜ δ > δ > v, w ∈ V ,Var(ˆ h ˜ δδ ( v ) − ˆ h ˜ δδ ( w )) ≤ (cid:90) ∞ δ − e − | v − w | s s ds ≤ (cid:90) ∞ δ | v − w | s ds ≤ | v − w | δ . (24)For ξ >
0, write V ξ = { v ∈ V : | v − ∂ V | ≥ ξ } . Lemma 2.8.
For any ξ > , there exists a constant C = C ( ξ ) > so that for all λ > P (max v ∈ V ξ max j ≥ | ˆ h − j ( v ) − η − j ( v ) | ≥ λ ) ≤ Ce − C − λ . (25) Proof.
The proof is very similar to that of Lemma 2.7. Define ∆ i ( v ) = ˆ h − i +1 − i ( v ) − η − i +1 − i ( v ) for i ≥ ( v ) = η ( v ). Similarly to (20), we obtain that uniformly in i, u, v ∈ V ξ ,Var(∆ i ( v ) − ∆ i ( u )) ≤ O (1) min( e − Ω( i ) , i | u − v | ) , where the O (1) and the Ω terms depend on ξ only. Thus, following the derivation as in Lemma 2.7,we obtain an analogue of (20) and (21) in our setting, and then conclude the proof of the currentlemma. Lemma 2.9.
For < ξ, κ ≤ κ < , let V , V ⊆ V ξ be two boxes with side lengths κ and κ respectively. Let θ : V → V be such that θv = av + b for a = κ /κ and some b ∈ R so that θ maps V onto V . Then, there exists a coupling of ζ (1) = { ζ (1) δ ( v ) : v ∈ V , < δ ≤ } and ζ (2) = { ζ (2) aδ ( v ) : v ∈ V , < δ ≤ } such that the following hold.(1) The marginal laws of ζ (1) and ζ (2) are respectively the same as { η δ ( v ) : v ∈ V , < δ ≤ } and { η aδ ( v ) : v ∈ V , < δ ≤ } .(2) There exists C = C ( ξ, κ , κ ) > such that P (max v ∈ V max j ≥ | ζ (1)2 − j ( v ) − ζ (2) a − j ( θv ) | ≥ λ ) ≤ Ce − C − λ . Proof.
By (24) we see that Var(ˆ h a ( u ) − ˆ h a ( v )) = O ( | u − v | ) for all u, v ∈ V ξ where the O (1) dependsonly on ( ξ, a ). In addition, by a straightforward computation we get that Var(ˆ h a ( u )) = O (1).Therefore, Lemmas 2.2 and 2.3 imply that P (max u ∈ V ξ | ˆ h a ( u ) | ≥ λ ) ≤ Ce − C − λ , where again C is a positive constant depending on ( ξ, κ , κ ). Combined with (25), this gives that P (max v ∈ V max j ≥ | ˆ h aa − j ( θv ) − η a − j ( θv ) | ≥ λ ) ≤ Ce − C − λ . (26)By the translation invariance and scaling invariance property of the ˆ h -process we see that { ˆ h − j ( v ) : v ∈ V , j ≥ } has the same law as { ˆ h aa − j ( θv ) : v ∈ V , j ≥ } . Therefore, we can construct a coupling of ((ˆ h (1) , ζ (1) ) , (ˆ h (2) , ζ (2) )) such that11 (ˆ h (1) ) − j ( v ) = (ˆ h (2) ) aa − j ( θv ) for all v ∈ V , j ≥ • for i ∈ { , } the pair (ˆ h ( i ) , ζ ( i ) ) is identically distributed as the pair (ˆ h, η ).Combined with (25) (noting that V ⊆ V ξ ) and (26), this completes the proof of the lemma. For any γ < M γ is defined in [17] as the almost sure weak limit of the sequence of measures M ◦ γ,n given by M ◦ γ,n = e γh − n ( z ) − nγ / L ( dz ) , (27)where L is the Lebesgue measure on R . The LQG measure is by now well understood (see e.g.,[23, 17, 30, 31, 35, 4]), and in particular one has the existence of the limit in (27), the uniqueness inlaw for the limiting measure via different approximation schemes, as well as a KPZ correspondencethrough a uniformization of the random lattice seen as a Riemann surface. In particular, it followsfrom martingale convergence that the sequence e γ ˜ h − n ( z ) − nγ / L ( dz ) (28)almost surely weakly converges to a Gaussian Multiplicative Chaos, and then it follows e.g. from[17, 35] that the limit is precisely M γ . This approximation of the LQG measure via the white noisedecomposition will be particularly useful to us.Of particular relevance to the present article is the following boundedness result on the positiveand negative moments of the LQG measure, proved in [23, 34] (see also [31, Theorems 2.11, 2.12]). Lemma 2.10.
For any < p < /γ , we have E ( M γ ( V )) p < ∞ . For any non-empty Euclideanball A ⊆ V , we have E ( M γ ( A )) p < ∞ for all p < . We will need a slightly stronger version of Lemma 2.10. Let B ⊆ V be a square or a Euclideanball of diameter ξ >
0, and define˜ M γ,δ ( B ) = lim n →∞ (cid:90) B e γ ˜ h δ − n ( z ) e − γ Var(˜ h δ − n ( z )) L ( dz ) , ˜ M γ,δ,η ( B ) = lim n →∞ (cid:90) B e γ ˜ η δ − n ( z ) e − γ Var(˜ η δ − n ( z )) L ( dz ) , (29)where the existence of the almost sure limit follows from the fact that ˜ M γ,δ ( B ) (respectively˜ M γ,δ,η ( B )) forms a sequence of martingales (c.f. [31]). By a straightforward adaption of theproof of Lemma 2.10, we obtain that E ( ξ − ˜ M γ,δ ( B )) p ≤ C γ,p for all 0 < p < /γ and δ ≤ ξ , (30) E ( ξ − ˜ M γ,δ ( B )) p ≤ C γ,p for all p < δ ≤ ξ , (31)where C γ,p is a positive constant depending only on ( γ, p ). (Tail estimates for ˜ M γ,δ,η will be providedin the course of the proof of Proposition 3.2 below.)12 .4 Liouville Brownian motion To precisely define the Liouville Brownian Motion, we revisit (1). We define the positive continuousadditive functional (PCAF) with respect to M γ as F ( t ) := lim n →∞ (cid:90) t e γ ˜ h − n ( X s ) − γ Var(˜ h − n ( X s )) ds, (32)where the limit exists almost surely due to [20, 3]. It is not hard to check, using the a.s. convergencediscussed in Section 2.3, that the limit in (32) does not depend on whether circle averages or whitenoise approximations are used. With F ( t ) well-defined, the LBM is defined as Y t := X F − ( t ) , andthe LHK p γt ( x, y ) is then constructed in [19] as the density of the Liouville semigroup with respectto M γ as in (2). The LBM and its heat kernel capture geometric information encoded in M γ ; forexample, the KPZ formula was derived from the Liouville heat kernel in [10, 6].We will need the following lemma, which is essentially proved in [25]. We remark that in [25]the authors work with GFF on a torus but their proofs adapt to our case with minimal change andwe omit further details on such adaption. See also [2] for related estimates. Lemma 2.11.
For any constants α , α > there exists a constant α = α ( α , α , γ ) > andrandom variables c , c , c > measurable with respect to the GFF, so that for all t > , p γt ( u, v ) ≤ c ( t − α + 1) P u ( | Y t − t α − v | < t α ) + c t α +2 e − c t α for all | u − v | ≤ t − α . Proof.
With quantifiers as in the statement of the lemma, we have from [25, Theorem 4.2] that p γt α ( x, y ) ≤ c t α +2 e − c t α for all | x − y | ≥ t α . (33)In addition, by [25, Lemma 4.3], sup x,y ∈ V p γt ( x, y ) ≤ c ( t − + 1) . The lemma follows from the last two displays and the decomposition p γt ( u, v ) = (cid:90) B ( v,t α ) p γt − t α ( u, x ) p γt α ( x, v ) M γ ( dx ) + (cid:90) V \ B ( v,t α ) p γt − t α ( u, x ) p γt α ( x, v ) M γ ( dx ) . The following are non-optimal bounds on the Liouville graph distance. Our main goal in recordingthe following lemma is to illustrate that the distance exponent is non-trivial (i.e., strictly between0 and 2).
Lemma 2.12.
For < γ < there exists c > depending only on γ such that for all fixed u, v ∈ V we have c − o (1) < E log D γ,δ ( u,v )log δ − ≤ γ / − √ γ / γ + o (1) where the o (1) term tends to 0 as δ .In addition, D γ,δ ( u, v ) ≥ δ − c with high probability. roof. The upper bound on D γ,δ ( u, v ) follows from the KPZ relation derived in [17, Proposition1.6], which is used to bound the number of Euclidean balls of LQG measure at most δ required inorder to cover the line segment joining u and v (that is, set X as the line segment joining u and v in [17, Equation (5)], and adjust δ to δ ).To prove the lower bound, it suffices to show that, for some constant c = c ( γ ) > D γ,δ ( u, v ) ≥ δ − c with high probability. To this end, fix c = c ( γ ). Let k δ be the smallest integer so that 2 − k δ ≤ δ c and let C k δ be defined as in (5). By (18), we have that with high probability,max v ∈ V ˜ h − kδ ( v ) ≤ k δ . (34)From (31) and a union bound we have that with high probability,˜ M γ, − kδ ( B ( v, − k δ )) ≥ − . k δ , for all v ∈ C k δ , where ˜ M γ, − kδ is as in (29). Combined with (34), we see that if we choose c small enough we havethat M γ ( B ( v, − k δ )) ≥ δ for all v ∈ C k δ . This implies that any Euclidean ball with LQG measureat most δ has radius at most 2 − k δ +2 . This implies the claimed lower bound on the Liouville graphdistance. In this section, we introduce an approximation for the Liouville graph distance, which will play akey role throughout the paper. The key technical advantage of the approximate Liouville graphdistance is on a version of “separation of randomness”, as codified in Lemma 3.13.
For each box B of side length s B = (cid:15) > c B = v , we define the approximate LQG to be M γ,(cid:15) ( B ) = (cid:15) e γη (cid:15) ( v ) − γ Var( η (cid:15) ( v )) , (35)compare with (27) and (28); the main point in (35) is that one only considers the value of η (cid:15) at thecenter of B . Note also that M γ,(cid:15) does not define a measure, due to the lack of additivity. Fixing δ >
0, we introduce a random δ -partition of V as in the following iterative procedure. Call a box(which may be closed, open, or neither closed or open) that has not been partitioned yet a cell.Whenever M γ,s B ( B ) ≥ δ for a cell B , diadically partition B into four sub-boxes. The iterativeprocedure halts when all cells B satisfy M γ,s B ( B ) < δ . We denote by V δ the final collection of cellsobtained in this procedure. Note that closures of cells may intersect only along their boundary. Weview V δ as a graph, with vertices consisting of the cells in V δ and edges between cells such that theirclosures have intersection with non-empty relative interior (i.e., a nontrivial line segment). For each v ∈ V , we denote by C v,δ the unique cell in V δ which contains v . For two distinct u, v ∈ V definethe approximate Liouville graph distance D (cid:48) γ,δ to be the graph distance between C v,δ and C u,δ in V δ . In addition, we denote by s v,δ the side length of C v,δ . Finally, recall the definitions of events ofhigh probability and of ι -high probability, see Section 1.3. The following proposition justifies ourterminology of approximate LGD. For a fixed ξ >
0, denote V ξ = { v ∈ V : | v − ∂ V | ≥ ξ } . We saythat ( A δ , B δ ) ⊆ V ξ × V ξ is a sequence of ξ -admissible pairs if14 A δ (respectively B δ ) is a single point, or a connected set of diameter at least δ ξ . • The distance between A δ and B δ is at least ξ for all δ .The following lemma, whose proof is postponed, gives an a-priori, coarse bound on the cells in V δ . Lemma 3.1.
For any γ ∈ (0 , , there exist constants C mc , C Mc > (depending only on γ ) suchthat with high probability, each cell C v,δ ∈ V δ has side length δ C mc ≤ s v,δ ≤ δ C Mc . The subscript mc in C mc stands for “minimal cell”, and Mc stands for “maximal cell”. Thevalues of C mc and C Mc are kept fixed throughout the paper. A first approximation step for theLGD is contained in the next proposition. Proposition 3.2.
Fix < ξ < C Mc / . Then, there exists a constant c = c ( γ, ξ ) so that for anysequence of ξ -admissible pairs ( A δ , B δ ) , we have with c -high probability min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ ( x, y ) · e − (log δ − ) . ≤ min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ ( x, y ) · e (log δ − ) . . The proof of Proposition 3.2 follows roughly the following outline.1. In order to get an upper bound on the LGD, we take the geodesic in D (cid:48) γ,δ and construct anefficient covering of this geodesic by Euclidean balls with bounded LQG measure.2. In order to get a lower bound on LGD, we show that any path achieving the LGD will haveto place at least one Euclidean ball in each cell of a path which is candidate for D (cid:48) γ,δ .Item 2 is easier to achieve, since we can apply a more or less straightforward union bound (essentiallydue to the fact that all negative moments exist for LQG measure). In order to prove (the morechallenging) Item 1 (as well as later showing the lower bound on the Liouville heat kernel), it wouldbe ideal if in each cell of V δ , the “fine field” within that cell (roughly speaking the integration overwhite noise within that cell) were almost independent of V δ . While this property holds for a typicalcell, it unfortunately cannot hold uniformly for all cells, for the reason that occasionally some cellwill be neighboring to cells that are of much smaller side lengths (this, roughly speaking, is due tothe fact that LQG measure only has finite positive moment up to a fixed, γ dependent, order). Inorder to address this issue, we employ a technique influenced by percolation theory.Some remark is in order concerning the definition of ξ -admissible pairs. The somewhat strangecondition there is that if A δ (or B δ ) is not a single vertex, then it has to be a connected set thatis moderately large. This assumption is related to the regularity of the random partition V δ — itis possible (though typically the case) that in some places, the random partition is highly irregularbut yet these locations serve as endpoints for the geodesic between A δ and B δ in D (cid:48) γ,δ . The highirregularity will prevent us from building efficient path in D γ,δ . Under our admissibility assumption,it becomes tractable (via a percolation-type argument) since • If A δ is a single vertex, then with high probability it has to be somewhat regular around A δ ; • If A δ is a connected set of moderately large diameter, then when it is irregular around u ∈ A δ ,there exists a regular u (cid:48) ∈ A δ which is close to u .Before providing the proof of Proposition 3.2, we prove a few preparatory lemmas. We beginwith the proof of Lemma 3.1. 15 roof of Lemma 3.1. For (cid:15) > (cid:15) − ∈ Z , we have | C log (cid:15) − | = (cid:15) − (recall (5)). Fix β ∈ ( γ, γ / γ ∈ (0 , P ( max v ∈ C log2 (cid:15) − η (cid:15) ( v ) ≥ βγ log (cid:15) − ) ≤ C(cid:15) β γ − ≤ (cid:15) c , (36)for some c = c ( β ) >
0. On the complement of the event in (36), we have, using Lemma 2.6, that,with high probability, for any box B with side (cid:15) centered at C log (cid:15) − , we have that M γ,(cid:15) ( B ) ≤ (cid:15) γ / − β ) ≤ (cid:15) c (cid:48) for some c (cid:48) = c (cid:48) ( β ) >
0. The bound on the side length for the maximal cellfollows from a similar (simple) computation, and we omit further details.We note that an argument similar to that employed in the proof of Lemma 3.1 shows that thetail of the distribution of log( S δ ) / log δ decays at least exponentially, where S δ is the side length ofthe minimal cell in V δ . This implies that for any u, v ∈ V , E ( log D (cid:48) γ,δ ( u, v )log δ − ) = O γ (1) . (37)In addition, a simple adaption of the argument in [17, Proposition 1.6] (see also [13, Proposition6.2]) gives that E (cid:16) log D γ,δ ( u, v )log δ − (cid:17) = O γ (1) (38)(we remark that these are extremely crude bounds). Thus, combined with (the yet unproven)Proposition 3.2, we obtain the following corollary. Corollary 3.3.
For any u, v ∈ V , we have that (cid:12)(cid:12)(cid:12) E log D γ,δ ( u,v )log δ − − E log D (cid:48) γ,δ ( u,v )log δ − (cid:12)(cid:12)(cid:12) ≤ e − (log δ − ) . . For α >
0, we define E δ,α := (39) { δ C mc ≤ s C ≤ δ C Mc for all cells in V δ } ∩ ∩ m,j ; x,y {| η − m ( x ) − η − m − j ( y ) | ≤ α (cid:112) log δ − log log δ − } , where the last intersection is taken over m, j, x, y such that 1 ≤ m ≤ δ − C mc , 1 ≤ j ≤ ( α log δ − ) ,and | x − y | ≤ − m +3 . Lemma 3.4.
There exists α > such that for all α > α , E δ,α occurs with high probability.Proof. Denote by m = (cid:98) C mc log δ − (cid:99) , j = (cid:98) ( α log δ − ) (cid:99) . Denote by ˜ x the center of thedyadic box of side length 2 − m containing x , and ˜ y the center of the dyadic box of side length 2 − m − j containing y . By the triangle inequality, | η − m ( x ) − η − m − j ( y ) | ≤ | η − m ( x ) − η − m (˜ x ) | + | η − m (˜ x ) − η − m (˜ y ) | + | η − m − j (˜ y ) − η − m − j ( y ) | + | η − m (˜ y ) − η − m − j (˜ y ) | . Next, we will bound the four terms on the right hand side above.For the first three terms, by Lemma 2.6 and a union bound, there exists α > ∩ m i =1 ∩ x ∈ C i { max y : | x − y |≤ × − i | η − i ( x ) − η − i ( y ) | ≤ α (cid:112) log δ − } , i is set as m for the first two terms (note | ˜ x − ˜ y | ≤ × − m if | x − y | ≤ − m +3 ) and is setas m + j for the third term (note m + j ≤ m ).For the fourth term, adjusting the value of α if needed, we obtain from a union bound over thechoice of j and y ∈ C m + j that with high probability ∩ m m =1 ∩ j j =0 ∩ y ∈ C m + j {| η − m − j ( y ) − η − m ( y ) | ≤ α (cid:112) log δ − log log δ − } , Collecting the above results, we conclude that (39) holds with high probability. This, combinedwith Lemma 3.1, completes the proof.The next lemma, whose proof is deferred, compares the approximate LGD with two differentparameters.
Lemma 3.5.
Fix < ξ < C Mc / where C Mc is specified in Lemma 3.1. For any sequence of ξ -admissible pairs ( A δ , B δ ) and any function δ (cid:48) = δ (cid:48) ( δ ) < δ , it holds with high probability that min u ∈ A δ ,v ∈ B δ D (cid:48) γ,δ (cid:48) ( u, v ) ≤ min u ∈ A δ ,v ∈ B δ D (cid:48) γ,δ ( u, v )( δ/δ (cid:48) ) e (log δ − ) . . (40)We remark that from the definition, we have the following converse to (40): D (cid:48) γ,δ (cid:48) ( u, v ) ≥ D (cid:48) γ,δ ( u, v ) . (41)In the next definition we formulate ingredients that will be useful in the proofs of Lemma 3.5 andProposition 3.2. Recall that s B denotes the side length of a box B , see Section 1.3. Definition 3.6.
Let B be a box with side length s B . Let B large be a box concentric with B and withside length s B .For a dyadic (cid:15) > , denote by B ( B, (cid:15) ) (respectively, B ∂ ( B, (cid:15) ) ) the collection of dyadic boxes in V with side lengths (cid:15)s B , which lie in B large (respectively, whose closures intersect ∂B ).For δ > , let Ψ B,δ be the number of cells in V δ that are contained in B and touch the boundaryof B (if B is contained in a cell then we set Ψ B,δ = 1 ). Let Φ B,δ be the minimal number ofEuclidean balls with LQG measure at most δ that covers ∂B .For λ > , define the event E δ,B,(cid:15),λ (respectively, E (cid:48) δ,B,(cid:15),λ ) to be the following: there exists asequence of neighboring boxes B (cid:48) , . . . B (cid:48) d ⊆ B large \ B which encloses B such that • B (cid:48) i ∈ B ( B, (cid:15) ) for each ≤ i ≤ d . • Ψ B (cid:48) i ,δ ≤ λ for each ≤ i ≤ d (respectively Φ B (cid:48) i ,δ ≤ λ for each ≤ i ≤ d ). (In Definition 3.6, by two boxes neighboring each other we mean that the intersection of theirclosures contains a non-trivial line segment. By a sequence enclosing B we mean that it separates B from V ∩ ∂B large in V .)As we have announced earlier, the proofs for Lemma 3.5 and Proposition 3.2 employ percolation-type arguments. More precisely, for a dyadic box B , we consider B (cid:48) ∈ B ( B, (cid:15) ) and ˜ B ∈ B ∂ ( B (cid:48) , t/(cid:15) ).If the LQG measures (or respectively approximate LQG) of all ˜ B ’s are less than some value µ , wecall B (cid:48) an open (in the percolation sense) box. When B is a cell, we will show that each B (cid:48) ∈B ( B, (cid:15) ) is open with large probability by setting (cid:15) and t appropriately, and that the openness of all B (cid:48) ∈ B ( B, (cid:15) ) are essentially independent events. Therefore, by standard arguments in percolation17heory (in our case a straightforward union bound suffices), one can find an open path enclosing B . The union of these enclosures along all cells in the geodesic of D (cid:48) γ,δ then gives an approximatelyminimizing path. To compare D (cid:48) γ,δ with D (cid:48) γ,δ (cid:48) and D γ,δ , we respectively set µ to be ( δ (cid:48) ) and δ / α and C mc , see Lemma 3.4 and Lemma 3.1. Lemma 3.7.
Let α > max { α , C mc } . For < δ (cid:48) = δ (cid:48) ( δ ) ≤ δ , let (cid:15) = min { − n : 2 n ≤ C mc log δ − } and λ = ( δ/δ (cid:48) ) e (log δ − ) . . For each dyadic box B with side length s = s B = 2 − m , ≤ m ≤ C mc log δ − , we have P ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ E cδ (cid:48) ,B,(cid:15),λ ) ≤ δ C mc +10 . (42) Furthermore, for any fixed x ∈ B large and any fixed ι > P ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ { D (cid:48) γ,δ (cid:48) ( x, ∂B large ) > δ − ι ( δ/δ (cid:48) ) } ) ≤ δ ι/ . (43) Proof.
Let t be a dyadic such that log t − ≥ (log δ − ) . , to be determined below. Write K = 1 /(cid:15) .Suppose B ( B, (cid:15) ) = { B (cid:48) i } . Write B (cid:48) i = B ∂ ( B (cid:48) i , t/(cid:15) ), then each box in B (cid:48) i has side length ts . By(39), we see that on the event { M γ,s ( B ) ≤ δ } ∩ E δ,α , for all ˜ B ∈ B ( B, t ) ∪ B ∂ ( B large , t ) we have M γ,ts ( ˜ B ) ≤ δ e γα √ log δ − log log δ − t e γη (cid:15) sts ( c ˜ B ) − γ Var( η (cid:15) sts ( c ˜ B )) . (Recall that c ˜ B denotes the center of ˜ B .) By a union bound and the fact that B (cid:48) i ⊂ B ( B, t ) ∪B ∂ ( B large , t ), we have that P (max ˜ B ∈B (cid:48) i η (cid:15) sts ( c ˜ B ) ≤ . t − ) ≥ − t . , (44)where we have used that |B (cid:48) i | ≤ (cid:15)/t ≤ /t . On the event in (44), we have M γ,ts ( ˜ B ) ≤ δ t . . (45)To prove (42), we take t = (cid:15) −(cid:100) . λ (cid:101) (this implies that t − . ≥ λ . ≥ δ/δ (cid:48) and therefore δ t . ≤ δ (cid:48) ). Fix p = t . and κ = 2. Combined with (45), we see that there exist events E B (cid:48) i , open measurable with respect to { η (cid:15) sts ( c ˜ B ) : ˜ B ∈ B (cid:48) i } such that (cid:40) P ( E B (cid:48) i , open ) ≥ − p, {E B (cid:48) i , open , i ∈ I } and {E B (cid:48) i (cid:48) , open , i (cid:48) ∈ I (cid:48) } are independent if | B (cid:48) i − B (cid:48) i (cid:48) | ≥ κ(cid:15)s for all i, i (cid:48) , (46)and ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ E B (cid:48) i , open ) ⊆ { Ψ B i ,δ (cid:48) ≤ λ } . (Note that the parameter 4 C mc in the choice of (cid:15) ensures that (cid:15) s log (cid:15) s ≤ (cid:15)s , and that 4 /t ≤ λ .)We are now ready to complete the proof of (42) by finding an open enclosure of B , i.e., a sequence18f neighboring boxes in B ( B, (cid:15) ) enclosing B such that E B (cid:48) , open occurs for each B (cid:48) in the sequence.To this end, we employ a standard percolation argument. Suppose that such an enclosing path doesnot exist. Then, by duality, there exists a sequence of boxes B (cid:48) i , . . . , B (cid:48) i (cid:96) joining ∂B and ∂B large such that the closures of consecutive boxes B (cid:48) i r and B (cid:48) i r +1 intersect (possibly at a single point) forall r , and none of E B (cid:48) ir , open ’s occurs. Since (cid:96) ≥ K , there are at most 4(2 K + 1) × (cid:96) such sequencesfor a fixed (cid:96) . For each such sequence, one can find at least (cid:96)/ (2 κ + 1) boxes B (cid:48) i r ’s with pairwisedistance at least κ(cid:15)s . Consequently, for each fixed such sequence, the events E B (cid:48) ir ’s are mutuallyindependent. It follows that P (no open enclosure) ≤ ∞ (cid:88) (cid:96) = K K + 1)8 (cid:96) p (cid:96)/ (2 κ +1) ≤ K (8 p κ +1 ) K , (47)provided that κ is fixed and p = o (1). Substituting p = t . and κ = 2, we see that 9 K (8 p κ +1 ) K ≤ δ C mc +10 . This completes the proof of (42), noting there is no open enclosure on the event { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ E cδ (cid:48) ,B,(cid:15),λ .In order to prove (43), we take t = max { − r : 2 − r ≤ δ . · ι ( δ (cid:48) /δ ) } . Denote by ˜ B i , 0 < i ≤ /t all the (closed) boxes in B ( B, t ) which intersect the horizontal line passing through x . By thedefinition of t , one has δ t . ≤ ( δ (cid:48) ) . We argue next in a similar way to the derivation of (45): onthe event { M γ,s ( B ) ≤ δ } ∩ E δ,α , we have η (cid:15) sts ( c ˜ B i ) ≤ . t − for all 1 ≤ i ≤ /t ⇒ M γ,ts ( ˜ B i ) ≤ δ t . for all 1 ≤ i ≤ /t ⇒ D (cid:48) γ,δ (cid:48) ( x, ∂B large ) ≤ t ≤ δ − ι ( δ/δ (cid:48) ) . Since η (cid:15) sts ( v ˜ B i ) is a centered Gaussian variable with variance log( (cid:15) /t ) ≤ | log t | , we have by a unionbound that P ( η (cid:15) sts ( v ˜ B i ) ≤ . t − for all i ) ≥ − t t . ≥ − δ ι/ . This completes the proof of(43). Proof of Lemma 3.5.
Let u ∈ A δ , v ∈ B δ be such that min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ ( x, y ) = D (cid:48) γ,δ ( u, v ) =: d ,and suppose C , · · · , C d is a sequence of neighboring cells in V δ joining u to v , with u ∈ C .In case A δ = { u } , let S u = { C ∈ V δ : u ∈ C large } , where we recall that C large is a box concentricwith C of side length 2 s C . We work on the event E δ,α , which by Lemma 3.4 is possible. Choose ι = C Mc /
3. Applying (43) of Lemma 3.7 to all dyadic boxes containing u with side length at least δ C mc (so in total we apply (43) O (log δ − ) times), we see that with high probability we have D (cid:48) γ,δ (cid:48) ( u, ∂ C large ) ≤ δ − ι ( δ/δ (cid:48) ) for all C ∈ S u . Let C start be the collection of all cells in geodesics of D (cid:48) γ,δ (cid:48) ( u, ∂ C large ) for all C ∈ S u . In the case A δ is a connected set of diameter at least δ ξ , let C start = ∅ . Similarly, we define C end .By (42) of Lemma 3.7 and a union bound, we see that with high probability E δ (cid:48) , C ,(cid:15),λ holds foreach C ∈ V δ , where (cid:15), λ are specified as in Lemma 3.7. In particular, in what follows we can assumethat E δ (cid:48) , C i ,(cid:15),λ holds for all i . Then, for each i , there exists a sequence, denoted C i , of neighboringcells in V δ (cid:48) such that | C i | ≤ λ/(cid:15) , C i encloses C i , and each cell in C i intersects with C i, large \ C i . Weclaim that ( ∪ di =1 C i ) ∪ C start ∪ C end contains a crossing between A δ and B δ . This is justified as follows:let i = max { i : C i encloses u } , and define recursively i r = max { i > i r − : C i intersects C i r − } till19 such that one can not define i r +1 , then A δ is connected to C start ∪ C i and respectively B δ to C end ∪ C i r . It follows that min u ∈ A δ ,v ∈ B δ D (cid:48) γ,δ (cid:48) ( u, v ) ≤ dλ/(cid:15) + 2 δ − ι ( δ/δ (cid:48) ) . (48)This completes the proof, noting that on E δ,α min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ ( x, y ) ≥ ξ/ (2 δ C Mc ) ≥ δ − ι . (49) Proof of Proposition 3.2.
We begin with the upper bound. The proof resembles that of Lemma 3.5,and the key technical ingredient is an analogue of Lemma 3.7. Let (cid:15) = max { − n : 2 n ≤ C mc log δ − } be as in Lemma 3.7, and let λ = e (log δ − ) . . We will show that there exists an event ˜ E δ,α which occurs with high probability such that for each dyadic box B with side length s = 2 − m ,1 ≤ m ≤ C mc log δ − , P ( { M γ,s ( B ) ≤ δ } ∩ ˜ E δ,α ∩ ( E (cid:48) δ,B,(cid:15),λ ) c ) ≤ δ C mc +10 , (50)where E (cid:48) δ,B,(cid:15),λ is as in Definition 3.6. Furthermore, we will show that for any fixed x ∈ B large andany fixed ι > P ( { M γ,s ( B ) ≤ δ } ∩ ˜ E δ,α ∩ { D γ,δ ( x, ∂B large ) > λδ − ι } ) ≤ δ ι/ . (51)Provided with (50) and (51), we can complete the proof for the upper bound following the sameargument as in the proof of Lemma 3.5. Note that in the case here,min u ∈ A δ ,v ∈ B δ D γ,δ ( u, v ) ≤ dλ/(cid:15) + 2 δ − ι λ ≤ de (log δ − ) . + δ − ι e (log δ − ) . ≤ de (log δ − ) . , where d = min u ∈ A δ ,v ∈ B δ D (cid:48) γ,δ ( u, v ) (compare with (42), (43), (48) and (49)). Thus, it remains toprove (50) and (51) (the proof resembles that of (42) and (43)).Let t = (cid:15) −(cid:100) (log δ − ) . (cid:101) , and with B ( B, (cid:15) ) = { B (cid:48) i } , set B (cid:48) i = B ∂ ( B (cid:48) i , t/(cid:15) ). Write K = 1 /(cid:15) . ByLemma 2.7, we have thatmax j ≥ max v ∈ V | ˜ h − j ( y ) − η − j ( y ) | = O ( (cid:112) log δ − ) , with high probability . (52)Let ˜ E δ,α := the intersection of E δ,α from (39) and the event described in (52). (53)Then, on { M γ,s ( B ) ≤ δ } ∩ ˜ E δ,α one has M γ ( ˜ B ) ≤ e αγ √ log δ − log log δ − × δ s − × ˜ M γ,(cid:15) s,η ( ˜ B ) (54)for any ˜ B ∈ B ( B, t ) ∪ B ∂ ( B large , t ). By Fubini’s Theorem, we have that E ˜ M γ,(cid:15) s,η ( ˜ B ) = ( ts ) . Thus, P ( M γ,(cid:15) s,η ( ˜ B ) > − s e − αγ √ log δ − log log δ − ) ≤ E ( ˜ M γ,(cid:15) s,η ( ˜ B ))4 − s e − αγ √ log δ − log log δ − ≤ t . . (55)Consequently, with E B (cid:48) i , open defined by E B (cid:48) i , open := { M γ,(cid:15) s,η ( ˜ B ) ≤ − s e − αγ √ log δ − log log δ − for all ˜ B ∈ B (cid:48) i } , |B (cid:48) i | ≤ t − , we have that P ( E cB (cid:48) i , open ) ≤ t . |B (cid:48) i | ≤ t . . On the one hand, (cid:40) P ( E B (cid:48) i , open ) ≥ − t . , {E B (cid:48) i , open , i ∈ I } and {E B (cid:48) i (cid:48) ,i (cid:48) ∈ I (cid:48) } are independent if | B (cid:48) i − B (cid:48) i (cid:48) | ≥ (cid:15)s for all i , i (cid:48) . On the other hand, consider the balls of radius radius ts centered at the corners of boxes in B (cid:48) i thatare on ∂B (cid:48) i . The collection of these 4 (cid:15)/t balls covers ∂B (cid:48) i . Note that each such ball can be covered byat most 4 boxes in B (cid:48) i . Thus, each one has LQG measure at most δ if { M γ,s ( B ) ≤ δ }∩ ˜ E δ,α ∩E B (cid:48) i , open occurs, by the definition of E B (cid:48) i , open together with (54). Therefore, we have that( { M γ,s ( B ) ≤ δ } ∩ ˜ E δ,α ∩ E B (cid:48) i , open ) ⊆ { Φ B (cid:48) i ,δ ≤ λ } , where we use that 4 (cid:15)/t ≤ e (log δ − ) . = λ . We can now apply the percolation argument as in theproof of Lemma 3.7, with parameters in (46) and (47) being set as p = t . and κ = 2 here. Then,we obtain that P ( { M γ,s ( B ) ≤ δ } ∩ ˜ E δ,α ∩ ( E (cid:48) δ,B,(cid:15),λ ) c ) ≤ K (8 p κ +1 ) K ≤ δ C mc +10 , completing the proof of (50).To prove (51), we take t = 2 −(cid:100) log ( δ − ι/ λ ) (cid:101) . Denote by ˜ B i , 0 < i ≤ /t the (closed) boxesin B ( B, t ) that intersect the horizontal line passing through x . Denote by { ˆ B j , j = 1 , . . . , (cid:96) } thecollection of ˜ B i ’s together with their neighboring boxes in B ( B, t ) ∪ B ∂ ( B large , t ), where (cid:96) ≤ /t + 6.Consider the balls centered at corners of some ˜ B i with radius ts . The collection of these 4 /t + 2balls covers a line segment from x to ∂B large , and each ball is covered by at most 4 boxes in { ˆ B j } .Consequently, on the event { M γ,s ( B ) ≤ δ } ∩ E δ,α , we have that D γ,δ ( x, ∂B large ) > δ − ι λ (note that δ − ι λ ≥ (4 /t + 2)) implies that M γ,(cid:15) s,η ( ˆ B j ) > − s e − αγ √ log δ − log log δ − for some j , recalling (54).This occurs with probability at most (6 /t + 6) t . ≤ δ ι/ , see (55). This completes the proof of(51).Next, we turn to the lower bound in Proposition 3.2. Let δ (cid:48) = δe (log δ − ) . . With this choice,events of high probability with respect to δ are also of high probability with respect to ˜ δ , and viceversa. Therefore, we do not distinguish between those notions. The key to the proof is the claimthat with high probability,every Euclidean ball with LQG-measure ≤ δ can be covered by 4 cells in V δ (cid:48) . (56)Provided with (56), it is clear that with high probability we have that D (cid:48) γ,δ (cid:48) ( u, v ) ≤ D γ,δ ( u, v ) for all u, v ∈ V . Combined with Lemma 3.5, it then yields the desired lower bound in the proposition.It remains to prove (56). Note that any Euclidean ball R of radius r can be covered by four closeddyadic boxes (which have non-empty pairwise intersection) of side length s = 2 min { − n : 2 − n ≥ r } .Suppose that R cannot be covered by four cells in V δ (cid:48) , which means at least one of these four dyadic21oxes B satisfies that M γ,s ( B ) > δ (cid:48) . Further, partition the box concentric with B of side length4 s into (4 × ) squares of side length s (cid:48) = 2 − s . Denote the partition by S B . Then, R containsat least one square from S B . Therefore, (56) would follow provided that with high probability,there exists no dyadic box B with M γ,s ( B ) > δ (cid:48) and M γ ( S ) ≤ δ for some S ∈ S B . (57)Let (cid:15) and α be as in Lemma 3.7, and recall that (cid:15) = inf { − n : 2 n ≤ C mc log δ − } . We will showthat for a fixed box B and a fixed square S ∈ S B , P ( ˜ E δ,α , ˜ E δ (cid:48) ,α , M γ,s ( B ) > δ (cid:48) , M γ ( S ) ≤ δ ) ≤ e − (2 C mc log δ − ) . (58)Assuming this, one can check (57), noting that the event there is not empty only for s ≥ ( δ (cid:48) ) C mc .Next, we are going to show (58). We work on the high probability events ˜ E δ,α and ˜ E δ (cid:48) ,α (see(53) for the definition). We partition S ∈ S B into K squares ˜ S , . . . , ˜ S K of side length (cid:15)s (cid:48) (recallthat K = 1 /(cid:15) ). Similarly to (54), we have that for all i , M γ ( ˜ S i ) ≥ e − αγ √ log δ − log log δ − × ( δ (cid:48) ) s − × ˜ M γ,(cid:15) s (cid:48) ,η ( ˜ S i ) , where ˜ M γ,(cid:15) s (cid:48) ,η is defined as in (29). Since (cid:15) s (cid:48) log (cid:15) s (cid:48) < (cid:15)s (cid:48) , one can find K / S i , · · · , ˜ S i K / such that ˜ M γ,(cid:15) s (cid:48) ,η ( ˜ S i j )’s are mutually independent. Then, M γ ( S ) ≤ δ implies that K / (cid:88) j =1 ( (cid:15)s (cid:48) ) − ˜ M γ,(cid:15) s (cid:48) ,η ( ˜ S i j ) ≤ e αγ √ log δ − log log δ − × ( δ/δ (cid:48) ) × s / ( (cid:15)s (cid:48) ) ≤ β K / , where β = e − (log δ − ) . . Let A j = { ( (cid:15)s (cid:48) ) − ˜ M γ,(cid:15) s (cid:48) ,η ( ˜ S i j ) > β } , which occurs with probability atleast 1 − βC γ, − ≥ / C γ, − ). Then P ( K / (cid:88) j =1 ˜ M γ,(cid:15) s (cid:48) ,η ( ˜ S i j )( (cid:15)s (cid:48) ) − ≤ β K ≤ P ( K / (cid:88) j =1 A j ≤ βK ≤ ( 1 + e − e β ) K / ≤ e − K , completing the proof of (58).It will be useful below to consider the Liouville graph distance when the “LQG” measure iscomputed using a perturbation of the GFF (such as the η -field). Explicitly, for any Borel set A define M ζγ ( A ) = lim n →∞ (cid:90) z e γζ − n ( z ) − γ E ( ζ − n ( z )) L ( dz ) , (59)where we will only work with fields ζ such that the above limit exists almost surely, including thewhite noise process ˜ h as in (11), and the η -process introduced in (13). For u, v ∈ V , we then definethe ζ -Liouville graph distance D γ,δ,ζ ( u, v ) to be the size of the smallest collection of Euclidean ballswith rational centers, each of M ζγ -measure at most δ , so that the collection contains a path from u to v . Lemma 3.8.
Suppose that two fields ζ (1) · ( · ) and ζ (2) · ( · ) are such that (59) is well defined for bothprocesses. In addition, assume that max v ∈ V max n ≥ | Var ζ (1)2 − n ( v ) − Var ζ (2) a − n ( v ) | ≤ b for some a, b > . (60)22 uppose there exists an instance of the two fields satisfying that max v ∈ V max n ≥ | ζ (1)2 − n ( v ) − ζ (2) a − n ( v ) | ≤ b for some b > . (61) Then, on this instance we have for all u, v ∈ V D γ,δe γ b / γb / ,ζ (2) ( u, v ) ≤ D γ,δ,ζ (1) ( u, v ) ≤ D γ,δe − γ b / − γb / ,ζ (2) ( u, v ) for all δ > . Proof.
We see from (60) and (61) that e − γ b / − γb M γ,ζ (2) ( A ) ≤ M γ,ζ (1) ( A ) ≤ e γ b / γb M γ,ζ (2) ( A )for any Borel set A ⊆ V . This implies that D γ,δ,ζ (1) ( u, v ) ≤ D γ,δe − γ b / − γb / ,ζ (2) ( u, v ) for thereason that any Euclidean ball with M γ,ζ (2) -LQG measure at most [ δe − γ b / − γb / ] has M γ,ζ (1) -LQG measure at most δ . The other inequality follows from the same reasoning.Recall the definition of C Mc in Lemma 3.1. Corollary 3.9.
For any fixed < ξ < C Mc / , any function δ (cid:48) = δ (cid:48) ( δ ) ∈ (0 , δ ) and any sequence of ξ -admissible pairs ( A δ , B δ ) , we have with high probability that min x ∈ A δ ,y ∈ B δ D γ,δ (cid:48) ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) · e (log δ − ) . ( δ/δ (cid:48) ) . Furthermore, the statement holds with D γ,δ replaced by D γ,δ,η .Proof. The statement on D γ,δ follows immediately from Proposition 3.2 and Lemma 3.5. Thestatement on D γ,δ,η then follows additionally from Lemma 2.7 and Lemma 3.8. Lemma 3.10.
For any fixed < ξ < C Mc / , any δ > and any sequence of ξ -admissible pairs ( A δ , B δ ) , we have with high probability e − δ − ) . min u ∈ A δ ,v ∈ B δ D γ,δ,η ( u, v ) ≤ min u ∈ A δ ,v ∈ B δ D γ,δ ( u, v ) ≤ e δ − ) . min u ∈ A δ ,v ∈ B δ D γ,δ,η ( u, v )(62) Furthermore, | E min u ∈ A δ ,v ∈ B δ log D γ,δ ( u, v ) − E min u ∈ A δ ,v ∈ B δ log D γ,δ,η ( u, v ) | = O ((log δ − ) . ) . (63) Proof.
The estimate (62) follows from Lemma 2.7, Lemma 3.8 and Corollary 3.9. The estimate (63)follows from (62) combined with (38) (and an analogous version for D γ,δ,η which can be derived inthe same manner), and an application of the Cauchy-Schwarz inequality. The goal of this section is to prove a version of regularity for the random partition, as incorporatedin Lemma 3.12. As a consequence, we obtain Lemma 3.13, which will play a crucial role in provingthe lower bound on the Liouville heat kernel and Lemma 5.3. For α ∗ , δ >
0, set (cid:15) ∗ = (cid:15) ∗ δ = max { − n : 2 − n ≤ exp {− α ∗ (cid:112) log δ − log log δ − }} . (64)23 efinition 3.11. Let B = ( B , . . . , B d ) denote a sequence of neighboring boxes, and write B := B and B d +1 := B d . For i = 1 , . . . , d , we say that B i is good (in B ) if s B i − , s B i +1 ∈ [ s B i (cid:15) ∗ , s B i /(cid:15) ∗ ] .We say that B is a good sequence if all B i ’s are good in B . We say that a point x is good if for anycell C ∈ V δ such that x ∈ C large , one has that for any w ∈ C large , the side length of C w,δ satisfies s w,δ ≥ (cid:15) ∗ s C . Let E δ,α ∗ ,u,v := E δ,α ∗ ∩ { u and v are good, and there exists a good sequence of cells C , . . . , C d joining u and v with d ≤ D (cid:48) γ,δ ( u, v ) e (log δ − ) . } . (65)Note that E δ,α ∗ ,u,v is measurable with respect to V δ . Recall the constant α from Lemma 3.4. Lemma 3.12.
There exists α ∗ = α ∗ ( γ ) ≥ α so that, for any fixed u, v ∈ V , we have P ( E δ,α ∗ ,u,v ) ≥ − e − (log δ − ) / . In the rest of the paper, we will stick to the choice of α ∗ so that the conclusion of Lemma 3.12holds. The following lemma clarifies the notion of goodness encoded in the definition of E δ,α ∗ ,u,v .In the statement, we do not distinguish between V δ and the filtration generated by it. Lemma 3.13.
On the event E δ,α ∗ ,u,v , there exists a sequence, measurable with respect to V δ , ofneighboring dyadic boxes B , . . . , B d joining u, v with d ≤ D (cid:48) γ,δ ( u, v ) e δ − ) . , such that each B i is contained in some cell C with s B i = s C ( (cid:15) ∗ ) . Furthermore, the law of { η s Bi δ (cid:48) ( x ) : δ (cid:48) < s B i , x ∈ ( B i ) large , i = 1 , . . . , d } conditioned on V δ coincides with its unconditional version. Explicitly, forany measurable function F , E u,v,δ,α ∗ E ( F ( { η s Bi δ (cid:48) ( x ) : δ (cid:48) < s x,δ (cid:15) ∗ , x ∈ ∪ di =1 B i } ) | V δ ) = E u,v,δ,α ∗ ϕ F ( f ) , where ϕ F ( g ) := E ( F ( { η g ( i ) δ (cid:48) ( x ) : δ (cid:48) < s i , x ∈ ( B i ) large , i = 1 , . . . , d )) , and f ( i ) = s B i . (Recall that the collection of random variables { s v,δ } is measurable with respect to V δ .) Proof. On E δ,α ∗ ,u,v , one can find a good sequence C = ( C , . . . , C d ) joining u and v with d ≤ D (cid:48) γ,δ ( u, v ) e (log δ − ) . . Denote Λ j = ∂ C j ∩ ∂ C j +1 , let x j denote the middle of Λ j , and let C j denote thepartition of C j into boxes of side length ( (cid:15) ∗ ) s C j . Since C is good, one can find for each 2 ≤ j ≤ d −
1a sequence of boxes { B j,i } ’s in C j joining x j − and x j such that each B j,i has distance at least (cid:15) ∗ s C j from ∂ C j \ (Λ j − ∪ Λ j ). Let { B ,i } be an arbitrary sequence of boxes in C joining u and x , anddefine similarly { B d ,i } . To ensure connectivity, the boxes in C j whose closures contain x j − or x j are all collected in B j,i ’s. Now it suffices to check the requirement of conditional law for the sequence ∪ j { B j,i } . Note on E u,v,δ,α ∗ , one has s C j ≥ δ C mc thus s B j,i log s Bj,i = ( (cid:15) ∗ ) s C j log (cid:15) ∗ ) s C j < (cid:15) ∗ s C j .Combined with the fact that C is good, this implies thatthe construction of V δ does not explore the white noise (66)appearing in { η s Bj,i δ (cid:48) ( x ) : δ (cid:48) < s B j,i , x ∈ ( B j,i ) large } ,completing the proof. 24he main task for the rest of the section is to prove Lemma 3.12. We will employ a percolation-type analysis of the same flavor as in the proof of Lemma 3.5 and Proposition 3.2. However, thepercolation argument employed here is substantially more involved as we are required to controlthe ratios for the sizes of neighboring cells in the short path we find (by deforming the geodesic). Definition 3.14 ( E δ,B ) . Let B denote a dyadic box and fix δ > . We define the event E δ,B to bethe following: there exists a sequence of neighboring boxes B , . . . B d ⊆ B large \ B enclosing B suchthat • B i ∈ B ( B, (cid:15) ∗ ) for each ≤ i ≤ d , where B ( B, (cid:15) ∗ ) is as in Definition 3.6. • M γ,(cid:15) ∗ s B ( B i ) ≤ δ for each ≤ i ≤ d . Remark 3.15.
We note that the sequence B , . . . , B d does not necessarily consists of cells in V δ .However, each of the B i s must be contained in a (possibly larger) cell, which intersects B large \ B . Lemma 3.16.
The following holds for large enough < α < α ∗ = α ∗ ( γ ) : for each dyadic box B with side length s = s B = 2 − m , ≤ m ≤ C mc log δ − , we have P ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ E cδ,B ) ≤ δ C mc +10 . (67) Furthermore, P ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ { M γ,(cid:15) ∗ s ( B (cid:48) ) > δ for some B (cid:48) ∈ B ( B, (cid:15) ∗ ) } ) ≤ e − √ log δ − . (68)(Note that α ∗ enters in the statement of Lemma 3.16 through the definition of (cid:15) ∗ , see (64).) Proof.
The proof resembles that for Lemma 3.7. Let (cid:15) = ( α log δ − ) − be dyadic with α ∈ [1 , α > max(2 , α , C mc), see Lemma 3.7. Let t = (cid:15) ∗ . Recallthat B ( B, (cid:15) ) denotes the partition of B large into small boxes B (cid:48) i ’s of side length (cid:15)s , and that B (cid:48) i = B ∂ ( B i , t/(cid:15) ) denotes the boxes of side length (cid:15) ∗ s whose closures intersect ∂B i .Replacing η (cid:15) sts in the proof of Lemma 3.7 with η (cid:15)s/ log s − ts , we obtain M γ,(cid:15) ∗ s ( ˜ B ) ≤ δ e γα √ log δ − log log δ − ( (cid:15) ∗ ) e γη (cid:15)s/ (log s − (cid:15) ∗ s ( c ˜ B ) − γ Var( η (cid:15)s/ (log s − (cid:15) ∗ s ( c ˜ B )) , (69)where c ˜ B denotes the center of ˜ B ∈ B ( B, (cid:15) ∗ ) ∪ B ∂ ( B large , (cid:15) ∗ ). Analogously, for appropriate choicesof α, α ∗ , we have that E B (cid:48) i , open := { max ˜ B ∈B (cid:48) i η (cid:15)s/ (log s − ) ts ( c ˜ B ) ≤ . t − } satisfies (46) with p = t . and κ = 2, and ( { M γ,s ( B ) ≤ δ } ∩ E δ,α ∩ E B (cid:48) i , open ) ⊆ { M γ,s(cid:15) ∗ ( ˜ B ) ≤ δ : ˜ B ∈ B (cid:48) i } , (recall (45)). Then, the percolation argument in Lemma 3.7 yields (67).It remains to prove (68). By a union bound, we see that P ( max ˜ B ∈B ( B,(cid:15) ∗ ) η (cid:15)s/ (log s − ) (cid:15) ∗ s ( c ˜ B ) ≤ (1 + 1 γ + γ /(cid:15) ∗ )) ≥ − ( (cid:15) ∗ ) Ω( γ ) , (70)where the choice of 1 + γ + γ is so that 1 + γ + γ > γ (1 + γ + γ ) < γ . Combining thiswith (69) completes the proof of (68). 25 roof of Lemma 3.12. We work on the event E δ,α , since by Lemma 3.4 it occurs with high prob-ability. Applying (68) to all dyadic boxes B with B large containing u or v , and with side length s B ≥ δ C mc (so in total we apply (68) O (log δ − ) times), we see that u and v are good with probabilityat least 1 − O (log δ − ) e − √ log δ − . Also, by (67) and a union bound, we see that with high probability E δ, C holds for each C ∈ V δ . Recalling Remark 3.15, we then get that with high probabilitythere exists a sequence of neighboring cells with side length at least (cid:15) ∗ s C which encloses C and which has all cells intersecting with C large \ C for each C ∈ V δ . (71)Let C = ( C , . . . , C d ) be the geodesics in D (cid:48) γ,δ joining u and v . We will show that (71) and theassumption that u and v are good imply that E δ,α ∗ ,u,v holds, that is thatone can find a good sequence of cells joining u and v with length at most d e (log δ − ) . . (72)Since E δ,α ∗ ,u,v is increasing in α ∗ , one can adjust α ∗ such that it is larger than α , completing theproof of the lemma.It remains to prove (72), assuming that E δ,α holds, u and v are good, and (71). For a sequenceof neighboring cells C , we let ψ ( C ) be the collection of cells C ∈ C which have a neighboring cell in C with side length less than (cid:15) ∗ s C (that is to say, C is a not a good cell in C as in Definition 3.11, whichwe refer to as a bad cell). Let q ( C ) be the side length of the largest cell in ψ ( C ). For C , C (cid:48) ∈ C , wedenote by [ C , C (cid:48) ] C the path in C connecting C and C (cid:48) , and by ( C , C (cid:48) ) C the interior of [ C , C (cid:48) ] C (i.e.,excluding C and C (cid:48) ). Similarly, we have [ C , C (cid:48) ) C and ( C , C (cid:48) ] C .For i ≥
0, we will employ the following iterative construction, constructing C i +1 from C i . If C i isnot good, we pick the largest C ∈ ψ ( C i ). Since u and v are good, we see that u, v / ∈ C large and thus C i will have to enter from outside and also exit from ∂ C large — here naturally C should implicitlydepend on i , but we have suppressed it in the notation for simplicity. Let C enter be the last cell in C i before C which intersects ∂ C large and let C exit be the next cell in C i after C that intersects ∂ C large .We claim that there always exists a sequence of neighboring cells C i, replace (which is a segmentof (71)) joining C i, ∈ [ C enter , C ) and C i, ∈ ( C , C exit ] such that if we construct C i +1 by replacing[ C i, , C i, ] C i in C i with C i, replace , then either of the following occurs:(i) | ψ ( C i +1 ) | ≤ | ψ ( C i ) | − | ψ ( C i +1 ) | ≤ | ψ ( C i ) | and q ( C i +1 ) ≥ q ( C i ).Provided with this claim, we can then construct iteratively C i +1 , and we see that in every C mc log δ − steps the number of bad cells has to decrease by at least 1 (this is because the second scenario can-not occur continuously for more than C mc log δ − steps due to the fact that all cells have sizebetween δ C mc and 1). Thus, the iterative procedure will stop after at most d × C mc log δ − stepsand end up with a good sequence. Also, in every step, the number of cells increases by at most4( (cid:15) ∗ ) − . Therefore, in the end, we obtain a good sequence of neighboring cells with length at most d × ( (cid:15) ∗ ) − C mc log δ − ≤ d e (log δ − ) . , as required. That is, (72) holds.It remains to justify the above claim. We first prove it in the harder case when C large ⊆ V o . Asshown in (a) of Figure 1, let B lt , B rt , B lb , B rb be the four dyadic boxes with side length 2 s C whoseclosures have non-empty intersection with the closure of C ; here, the subscript lt means “left-top’and rb means “right-bottom”, etc. (Note that B lt , B rt , B lb , B rb are not necessarily cells in V δ .) Wesuppose without loss of generality that C ⊂ B r b (so that all cells in B rb have side length at most s C ).26 Case 2 Case 1 Case 3 ( a ) ( b ) Figure 1: (a) The smallest (red) box is C , the intermediate (black) boxes are B lt , etc, and the largest(blue) box is C lb . (b) lack of connectivity by C lb and C rt , where the small (black) solid boxes are C enter and C exit . In Case 1, the small (red) solid boxes are C i, and C i, , the thin (red) curve standsfor [ C i, , C i, ] and the thick (blue) curve stands for C i, replace . In Case 2, the small bottom (purple)solid box stands for the neighbor of C i, = C lb in C i, replace , which may have side length less than (cid:15) ∗ s C i, .If B lt is not partitioned, then denote by C lt the cell containing B lt (otherwise we define C lt = ∅ ) —similarly for lb , rt , rb. Let C parents = { C lt , C rt , C lb } , noting C rb = ∅ . Note that it is possible that C parents = {∅} . By (71) there exists a sequence C i, cross of neighboring cells with side length at least (cid:15) ∗ s C , which encloses C and has all cells intersecting with C large \ C . Suppose that C i, cross intersects[ C enter , C exit ] C i at C i, and C i, . Then, C i, cross can be split two segments, with respective ending cells C i, and C i, .We first show that the interior of one of the segments does not intersect C parents . Suppose thisdoes not hold. If C lt lies in the interior of a segment, neither C lb nor C rt lie in the interior of theother segment, because they are neighbors of C lt . Then, C lb and C rt respectively lie in the interiorof different segments, as shown in (b) of Figure 1. By connectivity, this implies that one of themis contained in ( C i, , C i, ) C i ⊆ ( C enter , C exit ) C i , arriving at a contradiction to the definitions of C enter and C exit .Next, we prove our claim in the following separate cases, as shown in Figure 1. Case 1: C i, , C i, / ∈ C parents . In this case we can just let C i, replace be the segment which does notcontain any cell in C parents . By our assumption, we see that all cells in C i, replace have side lengthsin [ (cid:15) ∗ s, s ]. Therefore, ψ ( C i, replace ) = ∅ . In addition, C / ∈ C i, replace . Thus, we have justified (i) of theclaim. Case 2: |{ C i, , C i, } ∩ C parents | = 1 . In this case, we repeat the procedure as in Case 1. However,(supposing C i, ∈ C parents ) it is now possible that ψ ( C i, replace ) = ∅ or ψ ( C i, replace ) = { C i, } . Theformer case shows (i); in the latter case, we have (ii), where q ( C i +1 ) = s C i, ≥ s C = 2 q ( C i ). Case 3: { C i, , C i, } ⊂ C parents . In this case, we also have C i, = C enter and C i, = C exit (or with theordering switched), and thus both C i, and C i, are neighboring to (in the sequence C i ) cells of sidelength at most s C . By maximality of C in ψ ( C i ), we see that C i, and C i, have side lengths at most s C /(cid:15) ∗ (and at least 2 s C since they are in C parents ). If C i, and C i, are diagonal to each other (thenthey must be both neighboring C ), we let C i, replace be the sequence C i, , C , C i, ; if C i, and C i, areneighboring to each other, then we let C i, replace be the sequence C i, , C i, . In both cases, we have ψ ( C i, replace ) = ∅ , justifying (i). 27e next consider the easier case that C intersects ∂ V . In this case, C parents contains at most onecell in V and we are either in Case 1 or Case 2. Following similar (and slightly simpler) analysisto the one above then yields the proof of the claim in this case. Altogether, this completes theverification of the claim, and thus completes the proof of the lemma. In this section we show the following concentration result on the Liouville graph distance. Recallthe constant C Mc specified in Lemma 3.1. Proposition 3.17.
For any fixed < ξ < C Mc / there exists a constant c = c ( γ, ξ ) so that for anysequence of ξ -admissible pairs ( A δ , B δ ) we have that for any ι ∈ (0 , , | log min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) − E log min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) | ≤ ι log δ − with c · ι -high probability . (73) In addition, with probability at least − e − (log δ − ) . , we have that | log min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) − E log min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) | ≤ (log δ − ) . . (74) Furthermore, (73) and (74) hold with D γ,δ replaced with D γ,δ,η .Proof. We first give a detailed proof of (73) and then sketch the necessary minor adaptations neededin order to obtain (74). For both (73) and (74), we will only provide a proof in the case of A = { u } and B = { v } , as the general case follows by the same proof with minimal change — the assumptionof admissible pairs is required only in order to be able to apply Proposition 3.2 and Lemma 3.5.Also, provided with (73) and (74), the fact that (73) and (74) hold with D γ,δ replaced with D γ,δ,η follows from Lemma 2.7, Corollary 3.9 and Lemma 3.8. Proof of (73). It is obvious from Proposition 3.2 and Corollary 3.3 that (73) is equivalent to thestatement that with c · ι -high probability | log D (cid:48) γ,δ ( u, v )log δ − − E log D (cid:48) γ,δ ( u, v )log δ − | ≤ ι . (75)Thus, it suffices to prove the concentration for either of the two distances. The natural attempt toprove Proposition 3.17 is to verify the Lipschitz condition for the Liouville graph distance (viewedas a function on a Gaussian process) and then apply a Gaussian concentration inequality. However,while the Lipschitz condition for the Liouville graph distance can be verified, the maximal individ-ual variance for the Gaussian variables involved in the definition of the Liouville graph distance isinfinite. On the other hand, while the maximal individual variance for the Gaussian variables in-volved in the definition of the approximate Liouville graph distance can be controlled, the Lipschitzcondition does not hold in an obvious way. In order to see the failing of the Lipschitz condition,note that one can perturb the Gaussian process such that in constructing V δ , a cell that was notfurther partitioned in the original environment would now be further partitioned. Once this extrapartitioning occurs, it is possible (but unlikely) that these sub-cells would be further partitionedinto arbitrarily small Euclidean squares. (Indeed, the decision concerning further partitioning de-pends on random variables which are independent from those determining the original partition.)28n order to address this issue, we will employ the Lipschitz condition for the Liouville graph dis-tance and the control on the maximal individual variance for the Gaussian variables involved inthe approximate Liouville graph distance, and use Proposition 3.2 to make a connection betweenthese two distances.We consider the Gaussian space generated by the collection { ( η δ ( v ) , ˜ h δ ( v )) } v ∈ V ,δ> , see (11) and(13). For δ >
0, let X δ denote the subspace spanned by { ( η (cid:15) ( v ) , ˜ h (cid:15) ( v )) : v ∈ V , (cid:15) ≥ δ C mc } . Let Y δ denote the subspace orthogonal to X δ , and note that it is generated by the white noise W ( dw, ds )for s < δ C mc ). For δ (cid:48) < δ C mc we write the orthogonal decomposition( η δ (cid:48) ( · ) , ˜ h δ (cid:48) ( · )) = ( η δ C mc ( · ) , ˜ h δ C mc ( · )) + ( η ⊥ δ,δ (cid:48) ( · ) , ˜ h ⊥ δ,δ (cid:48) ( · )) =: X δ + Y δ,δ (cid:48) , where Y δ,δ (cid:48) ( · ) is measurable on Y δ . (Possible configurations of X and Y will be denoted by x and y .We use x δ and y δ,δ (cid:48) as convenient shorthand notation, and we further use y δ to denote the collection y δ,δ (cid:48) for δ (cid:48) < δ C mc .) Denote by M γ, x δ the LQG measure of the GFF on the realization ( x δ , Y δ ). Weapply a similar convention for M γ, x (cid:48) δ , D γ,δ, x δ ( u, v ), etc. We note that, by definition, D (cid:48) γ,δ ( u, v ) = D (cid:48) γ,δ, X δ ( u, v ). Furthermore, D (cid:48) γ,δ, x δ ( u, v ) is a real number if each cell has side length larger than δ C mc , since then D (cid:48) γ,δ does not depend on Y δ . Next, we are going to show that log D (cid:48) γ,δ, x δ ( u, v ) − log D (cid:48) γ,δ, x (cid:48) δ ( u, v ) is bounded by O (1) (cid:107) x δ − x (cid:48) δ (cid:107) ∞ , see (76) below.Let A δ be such that {X δ ∈ A δ } = { each cell in V δ has side length at least δ C mc } . Let ι be anarbitrarily small positive number and α >
0, and let E ∗ δ,ι,α = { ( X δ , Y δ ) ∈ ˜ E ∗ δ,ι,α } be the event suchthat | log D γ,δ (cid:48) ( u, v )log δ − − log D (cid:48) γ,δ (cid:48)(cid:48) ( u, v )log δ − | ≤ ι/ δ ι/α ≤ δ (cid:48) , δ (cid:48)(cid:48) ≤ δ − ι/α . It will be convenient in what follows to write E ∗ δ,ι,α, x δ = { y δ : ( x δ , y δ ) ∈ ˜ E ∗ δ,ι,α } . By Proposition 3.2 and Lemmas 3.1 and 3.5, we can choose an α > γ , suchthat for any arbitrarily small ι > E ∗ δ,ι,α occurs with c · ι -high probability. As a result, we see thatthere exists a set A ⊆ A δ such that P ( E ∗ δ,ι,α | X δ ) ≥ . X δ ∈ A , which occurs with c · ι -high probability . In particular, for x δ , x (cid:48) δ ∈ A , E ∗ δ,ι,α, x δ ∩ E ∗ δ,ι,α, x (cid:48) δ is non-empty.Let (cid:96) = (cid:107) x δ − x (cid:48) δ (cid:107) ∞ . We see from Lemma 3.8 that as long as (cid:96) ≤ (cid:96) δ = ι log δ − γα we have D γ,δ − ι/α , x δ ( u, v ) ≤ D γ,δ, x (cid:48) δ ( u, v ) ≤ D γ,δ ι/α , x δ ( u, v ) . (Note that the above is an inequality between random variables that depend on Y δ , which holdsfor almost all configurations y δ .) Consequently, on the event Y δ ∈ E ∗ δ,ι,α, x δ ∩ E ∗ δ,ι,α, x (cid:48) δ we have | log D γ,δ, x (cid:48) δ ( u, v ) − log D γ,δ, x δ ( u, v ) | ≤ ι log δ − and thus, | log D (cid:48) γ,δ, x (cid:48) δ ( u, v ) − log D (cid:48) γ,δ, x δ ( u, v ) | ≤ ι log δ − . (76)Recall that, for all x δ , D (cid:48) γ,δ, x δ does not depend on Y δ . Then, we have deduced that (76) holds forall x δ , x (cid:48) δ ∈ A satisfying (cid:96) ≤ (cid:96) δ . 29t this point, we are ready to deduce our concentration result. Let d (cid:48) u,v be the minimal numbersuch that P ( X δ ∈ A (cid:48) ) ≥ / , where A (cid:48) = { x δ ∈ A : D (cid:48) γ,δ, x δ ( u, v ) ≤ d (cid:48) u,v } . Note that the above is well defined since when x δ ∈ A , we have that D (cid:48) γ,δ, x δ ( u, v ) is a measurablefunction of x δ . Recalling (76), we see that for c = c ( γ ) > P (log D (cid:48) γ,δ ( u, v ) ≥ log d (cid:48) u,v + ι log δ − ) ≤ P ( X δ (cid:54)∈ A ) + P ( min x (cid:48) δ ∈A (cid:48) (cid:107)X δ − x (cid:48) δ (cid:107) ∞ ≥ (cid:96) δ ) ≤ δ cι , (77)where in the last step we have used Lemma 2.1, as well as the fact that maximal individual varianceof the random variables in X δ is O C mc (log δ − ).By a similar reasoning, we can also get that P (log D (cid:48) γ,δ ( u, v ) ≤ log d (cid:48) u,v − ι log δ − ) ≤ δ cι . (78)Due to the uniform square integrability of log D (cid:48) γ,δ ( u, v ) / log(1 /δ ), which follows from | D (cid:48) γ,δ | ≤|V δ | and the reasoning in Lemma 3.1, we conclude from (77) and (78) that | E log D (cid:48) γ,δ ( u, v ) − log d (cid:48) u,v | ≤ ι log δ − . Combined with (77) and (78), this completes the proof of (75) (we adjust thevalue of ι appropriately). Proof of (74). We now sketch the necessary modifications in order to prove (74). For simplicityof exposition, in what follows we will repeatedly use higher powers of log δ − to absorb error termswith lower powers of log δ − . It is obvious from Proposition 3.2 and Corollary 3.3 that (74) can bededuced from the statement that with probability at least 1 − e (log δ − ) . , | log D (cid:48) γ,δ ( u, v ) − E log D (cid:48) γ,δ ( u, v ) | ≤ (log δ − ) . . (79)To prove (79), we follow the proof of (73), but in place of E ∗ δ,ι,α we define E ∗ δ,α to be the event that | log D γ,δ (cid:48) ( u, v )log δ − − log D (cid:48) γ,δ (cid:48)(cid:48) ( u, v )log δ − | ≤ (log δ − ) − . for all δe − α − (log δ − ) . ≤ δ (cid:48) , δ (cid:48)(cid:48) ≤ δe α − (log δ − ) . . By Proposition 3.2 and Lemmas 3.1 and 3.5, we can choose an α > γ , suchthat P ( E ∗ δ,α ) ≥ − δ /α . As a result, we see that there exists a set A ⊆ A δ such that P ( E ∗ δ,α | X δ ) ≥ . X δ ∈ A , and P ( X δ ∈ A ) ≥ − δ α . At this point, we can repeat the analysis as for (73) and deduce that for (cid:96) δ = (log δ − ) . P (log D (cid:48) γ,δ ( u, v ) ≥ log d (cid:48) u,v +(log δ − ) . ) ≤ P ( X δ (cid:54)∈ A )+ P ( min x (cid:48) δ ∈A (cid:48) (cid:107)X δ − x (cid:48) δ (cid:107) ∞ ≥ (cid:96) δ ) ≤ e − Ω((log δ − ) . ) , where in the last step we again have used Lemma 2.1, as well as the fact that the maximal individualvariance of the random variables in X δ is O C mc (log δ − ). The proof of the lower deviation in (79)is similar, leading to (79) and thus completing the proof of (74). In this section, we relate the Liouville heat kernel to the Liouville graph distance.30 .1 Lower bound
In this section, we provide a lower bound on the Liouville heat kernel in terms of the Liouvillegraph distance. For u, v ∈ V , we denote χ + u,v = lim sup δ → E log D γ,δ ( u, v )log δ − . Recalling Lemma 2.12, we see that 0 < χ + u,v <
1. We will show that there exists a finite randomvariable
C > u, v ) such that for all t ∈ (0 , p γt ( u, v ) ≥ C exp (cid:110) − t − χ + u,v − χ + u,v + o (1) (cid:111) . (80)In order to prove (80), it suffices to show that there exists a t > ι >
0, there exists a small positive random variable c = c γ,u,v,ι > t ∈ (0 , t ], the following holds: with probability at least1 − e − (log t − ) . , p γs ( u, v ) ≥ c exp (cid:110) − t − χ + u,v − χ + u,v − ι (cid:111) for all t ≤ s ≤ t . (81)Indeed, (81) yields (80) for t ≤ t by an application of the Borel-Cantelli lemma for times t i = 2 − i .On the other hand, (80) holds for t > t by the Markov property and multiple applications of [25,Corollary 5.20].To show (81), fix an arbitrarily small ι > δ = t / (2 − χ + u,v )+ ι . Also, throughout thesection, we use ˆ C to denote a cell in V δ , while C will stand for the boxes { B i } in Lemma 3.13.A natural approach to proving (81) is to show that with not too small probability, the LiouvilleBrownian motion can cross each cell in V δ without accumulating too much “Liouville time” (i.e.,the PCAF as defined in (1)), provided with which one can then force the SBM to travel along thegeodesic between u and v in V δ . However, there is a substantial obstacle due to the the possibilitythat two neighboring cells along the geodesic may have side lengths differing by a factor as large asa power in δ . This is further complicated by a technical challenge: for a cell ˆ C ∈ V δ , the Liouvilletime accumulated during traveling through ˆ C depends on the starting and ending points, and wedo not expect uniform bounds on that.We now discuss how to address these challenges; a crucial role is played by Lemma 3.13. Wework on the event E defined as E = E δ,α ∗ ∩ E δ,α ∗ ,u,v ∩ { (52) holds } , (82)where E δ,α ∗ and E δ,α ∗ ,u,v are defined in (39) and (65), respectively. Note that P ( E ) ≥ − e − (log δ − ) . , by Lemmas 3.4, 3.12 and the discussion above (52). We next will extract a sequenceof neighboring boxes using Lemma 3.13. To ensure more desirable properties of this sequence ofboxes, we will work on a more restricted event than E . By Propositions 3.2 and 3.17, we see thatwith c · ι -high probability, D (cid:48) δ,γ ( u, v ) ≤ δ − χ + u,v − (2 − χ + u,v ) · ι . Setting E = E ∩ { D (cid:48) δ,γ ( u, v ) ≤ δ − χ + u,v − (2 − χ + u,v ) · ι } ∩ { the event in (84) } , (83)31e deduce, using arguments similar to those employed in the proof of Lemma 3.4, that with highprobability we have max ˆ C ∈V δ max x ∈ V ∩ ˆ C large | η s ˆ C ( (cid:15) ∗ ) s ˆ C ( x ) | ≤ (log δ − ) . . (84)and therefore, P ( E ) ≥ − e − (log δ − ) . .We work on E in what follows. Denote the sequence { B i } provided in Lemma 3.13 by C =( C , . . . , C d ); recall that this sequence is measurable with respect to V δ , and joins u to v . Then, d ≤ δ − χ + u,v − (2 − χ + u,v ) · ι , and each C i satisfies M γ,s C i ( C i ) ≤ δ e O ((log δ − ) . ) (recall (84) and that s C i = ( (cid:15) ∗ ) s ˆ C i , where ˆ C i is the cell containing C i , see Lemma 3.13). Furthermore, the law of { η s C i δ (cid:48) ( x ) : δ (cid:48) < s C i , x ∈ ( C i ) large for some C i ∈ C} conditioned on V δ coincides with its unconditionalversion. (Here, we abuse notation by using C i to denote a dyadic box which is not necessarily acell. The abuse of notation is justified by the fact that M γ,s C i ( C i ) ≤ δ e O ((log δ − ) . ) and thusthe C i ’s will essentially play the role of cells.) For i = 1 , . . . , d −
1, denote for brevity s i := s C i and write Λ i = ∂ C i ∩ ∂ C i +1 . We emphasize that the Λ i ’s are measurable with respect to V δ . Asdiscussed above, we will force the SBM to travel through C , . . . , C d sequentially, and will showthat this occurs with high enough probability. To this end, we will crucially use the fact C is agood sequence, and thus L (Λ i ) , L (Λ ( i − ∨ ) ≥ (cid:15) ∗ s i for 1 ≤ i < d . (85)Here L is the 1-dimensional Lebesgue measure, and (cid:15) ∗ is defined in (64).Consider 2 ≤ i ≤ d −
1. For C i ∈ C and z ∈ C i , we say that z is a fast point (with respect to C i ) if for any Λ ⊆ Λ i such that L (Λ) ≥ . L (Λ i ) one has P z ( F ( σ Λ ) ≤ δ C δ ) ≥ exp {− exp { (log δ − ) / }} =: p fast (= exp {− (1 /δ ) o (1) } ) , (86)where σ Λ is the first time when the SBM hits Λ and C δ = exp { (log δ − ) . } (= (1 /δ ) o (1) ) . (87)Note that we allow z ∈ ∂ C i , however the fact that being fast involves considering all possible Λmakes the notion non-trivial even for a point z ∈ ∂ C i , since we need to consider sets Λ with z (cid:54)∈ Λ.We say that C i is fast if L (Λ i − , fast ) ≥ . L (Λ i − ) where Λ i − , fast = { z ∈ Λ i − : z is fast with respect to C i } . (88)A crucial ingredient for the proof of (81) is the proof that with high probability all the C i ’s are fastsimultaneously. To this end, we now estimate the probability that a particular C i is fast. (We willlater apply a union bound.) Lemma 4.1.
There exists a δ > such that for all δ < δ there exists an event E of highprobability such that the following holds. For each ≤ i ≤ d − there exists an event E C i such that P ( E C i | V δ ) ≥ − exp {− √ log δ − } and ( E C i ∩ E ) ⊂ { C i is fast } . (The event E is defined in (92) below.)We begin our preparation for the proof of Lemma 4.1. Since our goal is to show that with veryhigh probability C i ∈ C is fast, a first (or second) moment computation will not be enough. Instead,32e will use a simple multi-scale analysis and employ a percolation argument. We first introducesome definitions. Set k = (cid:98) (log δ − ) . (cid:99) and K = 2 k . Take C ∈ C and partition it into K manydyadic squares with side length s C /K . Denote the collection of these boxes by B C , and denote by B large (respectively, B Large ) the boxes concentric with B but with double (respectively, triple) sidelength. For a fixed B ∈ B C and z ∈ B , we say that z is a pre-fast point with respect to the box B if for any Λ ⊆ ∂ B with L (Λ) ≥ − s C /K one has P z ( F ( σ Λ ) ≤ δ C δ K − ; σ Λ ≤ σ ∂ B Large ) ≥ exp {− (log δ − ) / } . (89)Note that the notions of pre-fast and fast are related, but one does not necessarily imply the other.We say that B is pre-fast if the subset of pre-fast points with respect to B on ∂ B has 1-dimensionalLebesgue measure at least (1 − − ) L ( ∂ B ). By definition, the property of boxes being pre-fasthas long range correlation, though we expect that the correlation decays quickly. Figure 2: In the left picture, the (black) boxes stand for (a piece of) the sequence of good cellsjoining u and v . The (red) line stands for the good sequence of boxes { B i } in Lemma 3.13, whichare denoted by { C i } now. The right picture is a zoom in, where the big (red) box is C = C i , andthe small (blue) boxes form B C .In order to control the correlation, we define a field ˜ η B := { ˜ η B ,s C (cid:15) (cid:48) ( z ) : (cid:15) (cid:48) , z } by˜ η B ,s C (cid:15) (cid:48) ( z ) := (cid:40) √ π (cid:82) V × (( (cid:15) (cid:48) ) ,s C ) p B Large ( s/ z, w ) W ( dw, ds ) , if z ∈ B large and (cid:15) (cid:48) < s C , , otherwise, (90)where p B Large ( s/ z, w ) is the transition density for SBM truncated upon exiting the box B Large . Aderivation similar to (52) yields that with high probability we havemax C ∈C max B ∈B C ,z ∈ B large max (cid:15) (cid:48)
For B ∈ B i , there exists an event E B , prefast which is measurable with respect to thefield ˜ η B , such that P ( E B , prefast | V δ ) ≥ − O ( K − ) and ( E B , prefast ∩ E ) ⊆ { B is pre-fast } . Proof.
Let B small denote the box concentric with B , of half the side length. Let σ ∂ B small (respectively σ ∂ B large ) be the hitting time of ∂ B small (respectively, ∂ B large ) by the SBM. Let τ be the first hittingtime of ∂ B after σ ∂ B small . Define E = { σ ∂ B small ≤ σ ∂ B large , τ ≤ s i K − } . From standard properties ofthe SBM we have that that P ( E ) ≥ − and that P z ( X τ ∈ Λ | E ) ≥ − , (95)for any z ∈ ∂ B and Λ ⊆ ∂ B with 1-dimensional Lebesgue measure L (Λ) ≥ − s i /K . Write ˜ F for˜ F B . A straightforward computation yields that E ( E z ( ˜ F ( τ ) | E ) | V δ ) ≤ E z E ( ˜ F ( s i K − ) | V δ ) ≤ s i K − , where we used Lemma 3.13 for ˜ η B in the second inequality. Therefore, by Markov’s inequality, wesee that P ( P z ( ˜ F ( τ ) ≥ s i | E ) ≥ − | V δ ) ≤ O ( K − ) . Combining the preceding inequality with (95) and using the fact that P z ( ˜ F ( τ ) ≤ s i , X τ ∈ Λ , E ) ≥ P z ( E )( P z ( X τ ∈ Λ | E ) − P z ( ˜ F ( τ ) ≥ s i | E )) , we get that for any Λ ⊆ ∂ B with L (Λ) ≥ − s i /K P ( P z ( ˜ F ( τ ) ≤ s i , X τ ∈ Λ , E ) ≥ − | V δ ) ≥ − O ( K − ) . Combined with (94), this yields that P ( E z, fast | V δ ) ≥ − O ( K − ) and ( E z, fast ∩ E ) ⊆ { z is pre-fast } , (96)where E z, fast := { P z ( ˜ F ( τ ) ≤ s i , X τ ∈ Λ , E ) ≥ − } is measurable with respect to the field ˜ η B .Another application of Markov’s inequality concludes the proof of the lemma.34 roof of Lemma 4.1. In what follows, we work conditionally on V δ . Fix i . Recall that B i denotesthe partition of C i into K boxes of side length s C i /K , where K = 2 (cid:98) (log δ − ) . (cid:99) . Correspondingly, ∂ C i is partitioned into 4 K segments, whose collection is denoted by BS . For L ∈ BS , let B L denotethe unique box in B i containing L . Set BS i,A = { L ∈ BS : L ⊂ A } for all A ⊂ ∂ C i .For any Λ ⊆ Λ i with L (Λ) ≥ . L (Λ i ), we define L = L Λ := { L ∈ BS i, Λ i : L ( L ∩ Λ) ≥ − s i /K } , (97)and set L (cid:48) = L (cid:48) Λ = { L ∈ BS i, Λ i − : L is connected to (some segment in) L by a path of neighboring pre-fast boxes } . Let Λ (cid:48) = ∪ L ∈ L (cid:48) L , and introduce the event A = {L (Λ (cid:48) ) ≥ . L (Λ i − ) for any Λ ⊆ Λ i with L (Λ) ≥ . L (Λ i ) } . The event A ensures that any not-so-small subset Λ of Λ i is connected with a not-so-small subset(i.e. Λ (cid:48) ) of Λ i − by pre-fast boxes. The heart of the proof of the lemma consists of showing thefollowing statement: P ( A|V δ ) ≥ − e − √ log δ − . (98)We postpone the proof of (98) and complete the proof of the lemma, assuming its validity. TakeΛ (cid:48) prefast = ∪ L ∈ L (cid:48) { z ∈ L : z is pre-fast with respect to B L } . Note that on A , L (Λ (cid:48) prefast ) ≥ L (Λ (cid:48) ) − L ( ∪ L ∈ L (cid:48) { z ∈ L : z is not pre-fast with respect to B L } ) ≥ . L (Λ i − ) − (cid:88) L ∈ L (cid:48) − L ( ∂ B L ) ≥ . L (Λ i − ) , (99)where we have used the fact that B L is pre-fast for all L ∈ L (cid:48) . In addition, for each L ∈ L (cid:48) we denoteby B , . . . , B (cid:96) with (cid:96) ≤ K the sequence of pre-fast boxes in B i with from L to L . For all 1 ≤ j ≤ (cid:96) − i,j denote the collection of all pre-fast points with respect to B j +1 lying on the commonboundary of B j and B j +1 . We also set Λ i,(cid:96) = B (cid:96) ∩ Λ, which has 1-dimensional Lebesgue measurelarger than 10 − s i /K by (97). Note that L (Λ i,j ) ≥ s i / (2 K ) for each 1 ≤ j ≤ (cid:96) −
1. Consequently, L (Λ i,j ) ≥ − s i /K for all j , by the definition of pre-fast boxes and the construction of Λ i,j ’s.Define σ = 0 and recursively for 1 ≤ j ≤ (cid:96) , σ j = min { r ≥ σ j − : X r ∈ Λ i,j } . Applying (89) repeatedly and using the strong Markov property of SBM together with the definitionof K , we obtain that (86) holds for z ∈ Λ (cid:48) prefast , that is, Λ (cid:48) prefast ⊆ Λ (cid:48) i − , fast . Since L (Λ (cid:48) prefast ) ≥ . L (Λ i − ), this completes the proof of the lemma, except for the proof of (98), to which we turnnext. Indeed, we will check that P ( A c |V δ ) ≤ e − √ log δ − .Suppose that A does not occur. Then there exists a Λ such that L (Λ) ≥ . L (Λ i ) and moreover L ( ∪ L ∈ ˜ L L ) ≥ . L (Λ i − ), where ˜ L = BS i, Λ i − \ L (cid:48) . By the definition of L , L (Λ \ ∪ L ∈ L L ) ≤ − L (Λ i ), thus L ( ∪ L ∈ L L ) ≥ L (Λ) − − L (Λ i ) ≥ . L (Λ i ). Recalling (85), it follows that | L | , | ˜ L | ≥ . (cid:15) ∗ s i × K/s i ≥ (cid:98) K(cid:15) ∗ (cid:99) =: (cid:96) . Note that L is not connected with ˜ L by pre-fast boxes,by the defintion of L (cid:48) . It follows that on A c ,there exist B i, ⊆ BS i, Λ i and B i, ⊆ BS i, Λ i − with |B i, | , |B i, | = (cid:96) , thatare not connected by a sequence of neighboring pre-fast boxes in C i . (100)Provided with Lemma 4.2, the desired upper bound on P ( A c |V δ ) follows from a Peierls argumentconcerning very subcritical percolation with local dependencies. For completeness, we provide aproof. By planar duality there exist ( B ji, , B ji, ) ⊆ B ∂ C i for all 1 ≤ j ≤ r and some r ≤ (cid:96) such that(here B ∂ C i is the collection of boxes in B i which intersects with ∂ C i ) • For each j , there exists a sequence of ∗ -connected boxes B i,j, separate ⊆ B i which starts at B ji, and ends at B ji, (two boxes are ∗ -connected as long as their intersection is non-empty); • The union of B i,j, separate ’s separates B i, from B i, . • Each box in B i,j, separate for 1 ≤ j ≤ r is not pre-fast. • Each box in B i,j, separate for 1 ≤ j ≤ r is of (cid:96) ∞ -distance at most 4 |B i,j, separate | s i /K away fromsome B i,j, separate ∈ B i, Λ i ∪ B i, Λ i − , where B i,j, separate ’s are distinct from each other — this isbecause each ∗ -connected path (together with B C i ) is supposed to separate at least one boxin B i, Λ i ∪ B i, Λ i − which are not separated otherwise. • L := (cid:80) rj =1 L j ≥ (cid:96) , where L j = |B i,j, separate | .Therefore, when the total number of boxes is L , the number of valid choices for B i,j, separate ’s is atmost N L = (cid:96) (cid:88) r =1 (cid:88) (cid:80) rj =1 L j = L (cid:18) (cid:96)r (cid:19) r (cid:89) j =1 (4 L j ) L j (101)where (cid:0) (cid:96)r (cid:1) bounds the number of choices for B i,j, separate ’s, (4 L j ) bounds the number of choices for B ji, and B ji, , and 8 L j bounds the number of choices for the rest of B i,j, separate . A straightforwardcomputation then gives that N L ≤ C L for some constant C >
0. In addition, the number of choicesfor B i, and B i, is at most (cid:0) K(cid:96) (cid:1) . Furthermore, since we can choose at least L/
25 many boxesfrom ∪ j B i,j, separate whose (cid:96) ∞ -distance are at least 2 s i /K . Note that the construction of ˜ η B doesnot explore the white noise outside the spatial box B Large . By Lemmas 3.13 and 4.2, we see thatfor each such choice the probability for all these boxes in ∪ j B i,j, separate to be not prefast is at most( C (cid:48) K − ) L/ for some absolute constant C (cid:48) >
0. Summing over L ≥ (cid:96) , we see that the probabilityfor the existence of such B i, and B i, is bounded by (cid:18) K(cid:96) (cid:19) (cid:88) L ≥ (cid:96) N L ( C (cid:48) K − ) L/ ≤ (10 /(cid:15) ∗ ) (cid:96) (cid:88) L ≥ (cid:96) C L ( C (cid:48) K − ) L/ ≤ − (cid:96) for δ < δ where δ > K (cid:15) ∗ ≥ e √ log δ − inthe last inequality). Thus, P ( A c |V δ ) ≤ − (cid:96) . Since (cid:96) = (cid:98) K(cid:15) ∗ (cid:99) (cid:29) (cid:112) log δ − , this yields (98) andcompletes the proof of the lemma. 36he next lemma controls the behavior of the Liouville Brownian motion near v and u . Recallthe event E from (92). Recall the notation in the paragraph below (84), and the definitions of p fast and C δ , see (86) and (87), and recall that δ = t / (2 − χ + u,v )+ ι . Lemma 4.3.
Assume δ < δ . For any ι > small enough and β > fixed large enough, there existevents E u, fast (measurable with respect to { η s d δ (cid:48) ( x ) : δ (cid:48) < s d , x ∈ ( C d ) large } ) and E d, fast having ι -highprobability with respect to P ( ·|V δ ) , such that the following holds.(i) On E d, fast ∩ E , there exists Λ d, fast ⊆ Λ d − with L (Λ d, fast ) ≥ . L (Λ d − ) such that P z ( F ( σ v,β ) ≤ δ − ι C δ ) ≥ t β +1 for all z ∈ Λ d, fast , (102) where σ v,β is the hitting time of B ( v, ( t/ β ) .(ii) On E u, fast ∩ E , for any (possibly random, but measurable with respect to { η s i δ (cid:48) ( x ) : δ (cid:48) < s i , x ∈ ( C i ) large , i = 1 , . . . , d } ) Λ ,u ⊆ ∂ C with L (Λ ,u ) ≥ . L (Λ ) , we have P u ( F ( σ Λ ,u ) ≤ δ − ι C δ ) ≥ p fast , (103) where σ Λ ,u is the hitting time of Λ ,u .Proof. Let σ ∂ C d, large be the hitting time of ∂ C d, large by SBM, where C d, large is a box concentricwith C d but of doubled side length. Consider the field ˜ η C d , which equals ˜ η s d (cid:15) (cid:48) ( z ) for (cid:15) (cid:48) < s d and z ∈ C d, large , and vanishes for z (cid:54)∈ C d, large . Note that s z,δ ≥ s ˆ C d /(cid:15) ∗ ≥ s d for all z ∈ C d, large (recallDefinition 3.11 and (65)), where ˆ C d is the cell containing C d . Let ˜ F be the PCAF with respect to˜ η C d . We call z a good point if P z (cid:16) ˜ F ( s d ) ≤ s d δ − ι √ C δ | E (cid:17) ≥ , where E := { σ v,β ≤ s d ≤ σ ∂ C d, large } .Let E d, fast be the event that L (good points in Λ d − ) ≥ . L (Λ d − ), which has P ( ·|V δ )-probabilitylarger than 1 − δ ι by Markov’s inequality. On E d, fast ∩ E , any good point z ∈ Λ d − satisfies that P z ( F ( σ v,β ) ≤ δ − ι C δ ) ≥ P z ( E ) ≥ t β +1 (compare with (94)), thereby establishing (102).The proof of (103) follows a similar argument, noting that P u ( σ Λ ,u ≤ s ≤ σ ∂ C , large ) = Ω( (cid:15) ∗ ) ≥ p fast . We omit further details. Proof of (81) . It is enough to prove the claim for δ < δ , as this will determined t through therelation δ = t / (2 − χ + u,v )+ ι . Using Lemma 4.1, P ( C i is not fast for some i ) ≤ P ( E c ) + EP ( ∪ i E c C i | V δ ) . From the lemma, we conclude that the event that all C , . . . , C d are fast occurs with high probabilityon the event E . Similarly, by Lemma 4.3, (102) and (103) hold with ι -high probability on the event E . We work on the intersections of these events with E , which occurs with probability at least1 − e − (log δ − ) . , see (92). Note that with our choice of parameters we have for sufficiently small δ > δ · C δ · δ − χ + u,v − (2 − χ + u,v ) · ι ≤ t/ . (104)In addition, note that δ − ι C δ ≤ t/ ι small enough. (105)37efine σ = 0 and recursively σ i = min { r ≥ σ i − : X r ∈ Λ r, fast } for i = 1 , . . . , d . By our assumptionthat the C i ’s are fast (see (86) and (88)), (103) and the strong Markov property of the SBM, wesee, recalling (104) and (105), that P u (cid:16) F ( (cid:80) di =1 σ i ) ≤ t/ (cid:17) ≥ ( p fast ) d . Combined with (102), we obtain that P u ( | Y s − v | ≤ ( t/ β for some s ≤ t/ ≥ ( p fast ) d · t β +1 ≥ exp {− t − χ + u,v − χ + u,v − ι } , where in the last estimate we used that χ + u,v <
1. Combined with [25, Corollary 5.20] (with anappropriately chosen large β in part (i) of Lemma 4.3), this completes the proof of (81). In this section, we will provide an upper bound on the Liouville heat kernel based on the Liouvillegraph distance. For u, v ∈ V , we denote χ − u,v = lim inf δ → E log D γ,δ ( u, v )log δ − . Recalling Lemma 2.12, we see that 0 < χ − u,v <
1. We will show that there exists a finite randomvariable
C > t ∈ (0 , p γt ( u, v ) ≤ C exp (cid:110) − t − χ − u,v − χ − u,v + o (1) (cid:111) . (106)(As we discuss below, the restriction to t ∈ (0 ,
1] is possible because of Lemma 2.11.) In order toprove (106), the key is to show that there exists a small positive constant c = c γ,u,v > ι >
0, it holds with probability at least 1 − t c · ι that P u ( Y r ∈ B ( v, δ C mc ) for some r ≤ t ) ≤ exp (cid:110) − t − χ − u,v − χ − u,v + ι (cid:111) . (107)In analogy with the proof of (81), in order to show (107) we will show that for any cell in V δ , with not too small probability the Liouville Brownian motion will accumulate not too smallLiouville time when crossing it (here, we will choose δ ≈ t − χ − u,v ). Throughout, we continue to workon E , see (82), and recall the notation (cid:15) ∗ from (64) and C δ from (87). For a cell C ∈ V δ and z ∈ C ,we say that z is a slow point if P z ( F ( σ ∂ C ) ≥ δ /C δ ) ≥ α slow , (108)where α slow > γ , which is determined in Lemma 4.4 below. Wenote that a point can be both fast and slow according to our definition. We say that a cell C isslow if the (two-dimensional) Lebesgue measure of slow points in C is at least α slow s C . Lemma 4.4.
There exists a constant α slow > depending only on γ such that the following holds.For each C ∈ V δ , we have that P ( C is slow | V δ ) ≥ − e − α slow √ log δ − . roof. We set k = (cid:98) (cid:112) log δ − (cid:99) and K = 2 k . We remark that k, K are different from those used inthe course of the proof of the lower bound.Partition C into K many dyadic squares with side length s C /K , and denote by B C the collectionof these boxes. For B ∈ B C and z ∈ B , we say z is a very-slow point (with respect to the box B ) if P z ( F ( σ ∂ B large ) ≥ δ /C δ ) ≥ α slow , (109)where we recall that B large is defined to be the box concentric with B with doubled side length.Note that a point x away from ∂ C (more precisely, if (cid:107) x − ∂ C (cid:107) ∞ ≥ s C /K ) is slow if it is very slow.We will work with the field ˆ η B := { ˜ η B ,(cid:15) ∗ s C (cid:15) (cid:48) ( z ) : (cid:15) (cid:48) , z } , defined by replacing s C with (cid:15) ∗ s C in (90),i.e. ˜ η B ,(cid:15) ∗ s C (cid:15) (cid:48) ( z ) := (cid:40) √ π (cid:82) V × (( (cid:15) (cid:48) ) , ( (cid:15) ∗ s C ) ) p B large ( s/ z, w ) W ( dw, ds ) , if z ∈ B large and (cid:15) (cid:48) < (cid:15) ∗ s C , , otherwise.Analogously to E , we have that with high probability,max C ∈V δ , B ∈B C max z ∈ B large max (cid:15) (cid:48) <(cid:15) ∗ s C , log (cid:15) (cid:48) ∈ Z | ˆ η B ,(cid:15) ∗ s C (cid:15) (cid:48) ( z ) − η (cid:15) ∗ s C (cid:15) (cid:48) ( z ) | = O ( (cid:112) log δ − ) . (110)Set now E (cid:48) = E δ,α ∗ ∩ { (52) holds } ∩ { the event in (110) holds } , (111)and note that E (cid:48) occurs with high probability. Let ˜ F = ˜ F B be defined as in (93) with ˜ η B replacedby ˆ η B . We have on E (cid:48) the following estimate for the SBM X · started at z ∈ B and any stoppingtime τ so that X r ∈ B large for all 0 ≤ r ≤ τ : F ( τ ) ≥ ˜ F ( τ ) δ s − C exp { ( − log δ − ) . } . (112)We will restrict our discussion to B at least at distance 4 s C /K away from ∂ C , for the reason thatfor such B , the white noise that determines ˆ η B has not explored in constructing V δ . (113)For z ∈ B , we claim that there is an event E z, slow , measurable with respect to the field ˆ η B , suchthat P ( E z, slow | V δ ) ≥ α (cid:63) and ( E z, slow ∩ E (cid:48) ) ⊆ { z is very-slow } , (114)where α (cid:63) > γ . We will first complete the proof of the lemmaassuming (114). We take a sub-collection of boxes B ∗ ⊆ B C such that • All boxes in B ∗ are at least 4 s C /K distance away from ∂ C ; • The pairwise distance of two boxes in B ∗ is at least 4 s C /K ; • |B ∗ | ≥ − K .For each B ∈ B ∗ , let L slow ( B ) be the Lebesgue measure of very-slow points in B . Then by (114) andour assumption on B ∗ , we see that, on E (cid:48) , we have {L slow ( B ) : B ∈ B ∗ } dominates a sequence of39.i.d. random variables {L (cid:48) slow ( B ) : B ∈ B ∗ } such that E L (cid:48) slow ( B ) ≥ α (cid:63) s C K − and L (cid:48) slow ( B ) ≤ s C K − .Therefore, P ( L (cid:48) slow ( B ) ≥ α (cid:63) s C K − / ≥ α (cid:63) /
2. We deduce that P ( (cid:88) B ∈B ∗ {L (cid:48) slow ( B ) ≥ s C K − α (cid:63) / } ≥ − α (cid:63) K ) ≥ − e − − α (cid:63) K , completing the proof of the lemma, except for the proof of (114), to which we turn next.Let t C = s C K − . We will show below that for all z ∈ B ∈ B C , P ( P z ( ˜ F ( t C ) ≥ s C K − ) ≥ α | V δ ) ≥ α , where α > . (115)Since σ ∂ B large ≥ t C occurs with probability tending to 1 as δ →
0, we can deduce from (115) thatfor sufficiently small δ , P ( P z ( σ ∂ B large ≥ t C , ˜ F ( t C ) ≥ s C K − ) ≥ α / | V δ ) ≥ α . Combined with (112), this implies (114) with an appropriate choice of the absolute constant α slow >
0. We finally turn to the proof of (115). Fix 1 < p < /γ . We follow the arguments in [20,Appendix B] to show that E E z ( ˜ F ( t C )) p ≤ O ( t p C ). (The proof in [20] applies to any log-correlatedGaussian field, and thus carries over to the field ˆ η B with no essential change.) With the momentestimate at hand, we can apply H¨older’s inequality and get that for any κ > E E z ( ˜ F ( t C )) ≤ κt C + E E z ( ˜ F ( t C )1 { ˜ F ( t C ) ≥ κt C } ) ≤ κt C + O (cid:16) t C (cid:16) E (cid:0) P z ( ˜ F ( t C ) ≥ κt C ) (cid:1)(cid:17) − /p (cid:17) . Combined with the fact that E E z ( ˜ F ( t C )) = t C and an appropriate choice of κ > γ ), we deduce that E (cid:0) P z ( ˜ F ( t C ) ≥ κt C ) (cid:1) is lower bounded by a positive constantdepending only on γ . Combined with (113), this then implies (115), as desired. Proof of (107) . Fix an arbitrarily small ι >
0. Let δ = t − χ − u,v − ι . By Propositions 3.2 and 3.17, wesee that with ( c · ι )-high probability for some d ≥ δ − χ − u,v + β · ι/ every sequence of neighboring cellsin V δ connecting u to v contains at least d cells, where β = (2 − χ − u,v ) . On E (cid:48) from (111), all thecells have side length at least δ C mc , and therefore the number of neighboring cells connecting u to B ( v, δ C mc ) is at least d −
2. Define σ = 0 and for i ≥ σ i = { r ≥ σ i − : X r ∈ ∂ C X σi − , large } , where we recall that C z, large denotes a box concentric with C z,δ , the cell containing z , with doubledside length. On E (cid:48) , the event E δ,α ∗ from (39) holds, and therefore in order to hit B ( v, δ C mc ), theLiouville Brownian motion has to go through d − C large from C (forsome C ∈ V δ ) it crosses at most δ − β · ι/ many cells. Thus, { Y r ∈ B ( v, δ C mc ) for some r ≤ t } ⊆ { dδ β · ι (cid:88) i =1 ( F ( σ i ) − F ( σ i − )) ≤ t } . (116)By Lemma 4.4, the event that all cells are slow has high probability. On this event, P X σi − ( F ( σ i ) − F ( σ i − ) ≥ δ /C δ ) ≥ P X σi − ( X · hits a slow point in C X σi − before σ i ) α slow , α (cid:48) slow > γ . By the strong Markovproperty of the SBM, we conclude that ( F ( σ i ) − F ( σ i − )) (cid:48) s dominates a sequence of i.i.d. non-negative random variables which take value δ /C δ with probability α (cid:48) slow >
0. At this point, asimple large deviation estimates yields that for sufficiently small t , P u ( dδ β · ι (cid:88) i =1 (cid:0) F ( σ i ) − F ( σ i − ) (cid:1) ≤ t ) ≤ e − Ω(1) dδ β · ι ≤ e − dδ · ι ≤ exp {− t − χ − u,v − χ − u,v +4 · ι } , where the three inequallities hold respectively because the exponent of tδ /C δ (with respect to 1 /t )is strictly less than that of dδ β · ι , because β ≤
2, and because χ − u,v <
1. Combined with (116) andthe fact that we considered a high probability event, this completes the proof of (107).
Proof of (106) . Since the event { Y r ∈ B ( v, δ C mc ) for some r ≤ t } is increasing in t , we can applya union bound over all t of the form t = 2 − j , use (107) and the Borel-Cantelli lemma to concludethat for any ι >
0, there exists a random variable
C > t > P u ( Y r ∈ B ( v, δ C mc ) for some r ≤ t ) ≤ C exp (cid:110) − C − t − χ − u,v − χ − u,v + c · ι (cid:111) . Applying Lemma 2.11 with α = C mc − χ − u,v , α = − χ − u,v − χ − u,v + c · ι and a corresponding choice of α ,this completes the proof of (106). In this section, we will show that the exponent for the Liouville graph distance exists, and that theexponent does not depend on the choice of starting or ending points. Recall that V ξ = { v ∈ V : | v − ∂ V | ≥ ξ } . Proposition 5.1.
For any γ ∈ (0 , , there exists χ = χ ( γ ) such that for any u, v ∈ V \ ∂ V , lim δ → E log D γ,δ ( u, v )log δ − = χ . Furthermore, the χ ( γ ) here is the same as that in Lemma 5.3. Our proof of Proposition 5.1 is based on subadditivity; however, some preparations are neededbefore subadditivity can be invoked. We begin by setting a few notations. Let ¯ V (respectively, ˜ V )be a box concentric with V and of side length 1 /
20 (respectively, 1 / u, v ∈ V and λ > V u,λ denote the box centered at u and of side length λ , let ˜ V u,v denote the translated androtated box centered at u + v , of side length 2 | u − v | , and with two sides parallel to the line segmentjoining u and v . In particular, for all u, v ∈ ¯ V we have ˜ V u,v ⊆ ˜ V . Furthermore, for all v ∈ ˜ V inthe definition for η ˜ δδ ( v ) as in (13), the truncation for the transition kernel upon exiting V becomesredundant since B ( v, − s | log s − | ∧ − ) ⊆ V for all s >
0. Therefore, for u, v, u (cid:48) , v (cid:48) ∈ ¯ V with | u − v | = | u (cid:48) − v (cid:48) | , denoting by θ an isometry which maps from ˜ V u,v to ˜ V u (cid:48) ,v (cid:48) , we have that { η ˜ δδ ( x ) : x ∈ ˜ V u,v } law = { η ˜ δδ ( θx ) : x ∈ ˜ V u,v } for all 0 < δ < ˜ δ ≤ ∞ . (117)41or u, v ∈ V and δ , we define D Aγ,δ ( u, v ) to be the minimal number of Euclidean balls with rationalcenter and radius, contained in A with LQG measure at most δ , whose union contains a pathfrom u to v . Denote ˜ D γ,δ ( u, v ) = D ˜ V u,v γ,δ ( u, v ) and ¯ D x,λγ,δ ( u, v ) = D V x,λ γ,δ ( u, v ) for brevity. We alsodefine the tilde-approximate Liouville graph distance, similar to the approximate Liouville graphdistance. That is, we repeatedly and dyadically partition ˜ V u,v until all cells have approximateLiouville quantum gravity measure (as defined in (35)) at most δ , and we denote by V δ,u,v theresulting partition. Let ˜ D (cid:48) γ,δ ( u, v ) be the graph distance between the two cells containing u and v in V δ,u,v (note that, of course, all cells are contained in ˜ V u,v ).By (117), we see that for u, v ∈ ¯ V ,the law of ˜ D (cid:48) γ,δ ( u, v ) or ˜ D γ,δ,η ( u, v ) depends on u, v only through | u − v | . (118)The translation invariance property in (118) will be useful below when setting up the sub-additiveargument. Remark 5.2.
One can verify that our proofs for Propositions 3.2, 3.17, Lemmas 3.5, 3.8, 3.10 andCorollary 3.9 extend automatically to the tilde-Liouville graph distance and the approximate tilde-Liouville graph distance. As a result, in this section we often apply these results to the tilde-versionof these statements (formally, replacing D by ˜ D and replacing D (cid:48) by ˜ D (cid:48) ).The next two lemmas are the key ingredients for the proof of Proposition 5.1 . Lemma 5.3.
For any γ ∈ (0 , , there exists χ = χ ( γ ) such that for any u, v ∈ ¯ V , lim δ → E log ˜ D γ,δ,η ( u, v )log δ − = lim δ → E log ˜ D (cid:48) γ,δ ( u, v )log δ − = χ . Lemma 5.4.
Let χ be as in Lemma 5.3. For any u ∈ ¯ V , λ = , lim δ → E log(min x ∈ ∂ V u,λ ¯ D u, λγ,δ,η ( u, x ))log δ − = χ . Proof of Proposition 5.1 (assuming Lemmas 5.3 and 5.4).
We first prove that for an arbitrarilysmall ι > E log D γ,δ,η ( u, v ) ≤ ( χ + ι ) log δ − as δ → . (119)To this end, let y i = u + il ( v − u ), i = 0 , . . . , l with l = min { (cid:96) ∈ Z : | u − v | (cid:96) ≤ min { √ ξ, }} ,where ξ = min {| u − ∂ V | ∞ , | v − ∂ V | ∞ } . Pick ¯ u, ¯ v ∈ ¯ V with | ¯ u − ¯ v | ∞ = 1 /
20. Applying Lemma 2.9to each pair ( ˜ V ¯ u, ¯ v , ˜ V y i ,y i +1 ) so that ζ (1) has the same law as the η -process on ˜ V ¯ u, ¯ v and ζ (2) has thesame law as the η -process on ˜ V y i ,y i +1 , as well as using Lemma 3.8 (note that we can choose someconstant b = b ( u, v ) as in the assumption of Lemma 3.8), we see that with high probability˜ D γ,δ,ζ (2) ( y i , y i +1 ) ≤ ˜ D γ,δe − √ log δ − ,ζ (1) (¯ u, ¯ v ) . (120)Combined with Lemmas 5.3, 3.10, Corollary 3.9 and Proposition 3.17, we see that with highprobability, ˜ D γ,δ,η ( y i , y i +1 ) ≤ δ − χ − ι for i = 1 , . . . , l , (121)42mplying that D γ,δ,η ( u, v ) ≤ l × δ − χ − ι by triangle inequality. This yields (119) (recall Proposi-tion 3.17).Next, we prove the lower bound, i.e., we prove that for arbitrarily small ι > E log D γ,δ,η ( u, v ) ≥ ( χ − ι ) log δ − as δ → . (122)To this end, let λ = min { √ ξ, √ | u − v | , } , and we see that v / ∈ V u,λ ⊆ V ξ . Similarly to thederivation of (120), we apply Lemma 2.9 to the pair ( V ¯ u, , V u,λ ), combine with Lemma 5.4, andget that with high probability,min x ∈ ∂ V u,λ ¯ D u, λγ,δ,ζ (2) ( u, x ) ≥ min x ∈ ∂ V ¯ u, ¯ D ¯ u, γ,δe √ log δ − ,ζ (1) (¯ u, x ) ≥ ( χ − ι ) log δ − , where ζ (1) has the same law as the η -process on V ¯ u, , and ζ (2) has the same law as the η -processon V u,λ . With high probability, balls intersecting both ∂ V u,λ and ∂ V u, λ have LQG measure largerthan 2 δ , implying min x ∈ ∂ V u,λ D γ,δ,η ( u, x ) = min x ∈ ∂ V u,λ ¯ D u, λγ,δ,η ( u, x ) . (123)It follows that E min x ∈ ∂ ¯ V u,λ D γ,δ,η ( u, x ) ≥ ( χ − ι ) log δ − . (124)Since D γ,δ,η ( u, v ) ≥ min x ∈ ∂ V u,λ D γ,δ,η ( u, x ) for v / ∈ V u,λ , we get (122) as required.Combining (119), (122) and Lemma 3.10 we complete the proof of the proposition.Next, we prove Lemma 5.3, employing a sub-additive argument. As in the proof of (81),Lemma 3.13 plays a crucial role. Proof of Lemma 5.3.
For u, v ∈ ¯ V , let w i = u + i | u − v | so that ˜ V x,y ⊆ ˜ V u,v for all x, y ∈ ˜ V w i − ,w i , i = 1 , . . . , w i and w i +1 will be allcontained in ˜ V u,v ). Fix δ > Definition 5.5 ( E (cid:63)δ,α ∗ ,u,v ) . Let E (cid:63)δ,α ∗ ,u,v denote the following event: there exists a good sequenceas in Definition 3.11 of neighboring dyadic boxes C = C , . . . , C d , contained in ∪ i =1 ˜ V w i − ,w i andmeasurable with respect to F ∗ = σ ( ∪ i =1 V δ,w i − ,w i ) , joining u to v , such that • d ≤ e (log δ − ) . (cid:80) i =1 d i with d i = ˜ D (cid:48) γ,δ ( w i − , w i ) ; • Each C i satisfies M γ,s C i ( C i ) ≤ δ e O ((log δ − ) . ) ; • The law of { η s C i δ (cid:48) ( x ) : δ (cid:48) < s C i , x ∈ ( C i ) large , C i ∈ C} conditioned on F ∗ coincides with itsunconditional version. Note that here as in Section 4.1 we have abused the notation by denoting by C i a dyadic boxwhich is not necessarily a cell. The abuse of notation is justified by the fact that M γ,s C i ( C i ) ≤ δ e O ((log δ − ) . ) and thus the C i ’s will essentially play the role of cells.43ollowing the discussions after (83) (with a crucial application of Lemma 3.13), we see that P ( E (cid:63)δ,α ∗ ,u,v ) ≥ − e − (log δ − ) . . By Proposition 3.2, Lemmas 2.9, 3.8, 3.10, Corollary 3.9 and (120),with high probability, d i ≤ e (log δ − ) . ˜ D ( i ) γ,δ,η ( u, v ) ≤ e (log δ − ) . exp { E log ˜ D γ,δ,η ( u, v ) } , where ˜ D ( i ) γ,δ,η ( u, v ) is a copy of ˜ D γ,δ,η ( u, v ) and is coupled with d i . Thus, P ( D ) ≥ − e − (log δ − ) . , where D := { log d ≤ (log δ − ) . + E log ˜ D γ,δ,η ( u, v ) } . (125)In order to set a sub-additivity argument, we need to further relate d to ˜ D γ,δ ˜ δ,η ( u, v ) for ˜ δ > δ .To this end, we let x i ∈ Λ i = ∂ C i ∩ ∂ C i +1 for each i = 1 , . . . , d −
1, to be chosen later depending onthe GFF (for convenience we write x = u and x d = v ). By the triangle inequality, we see that˜ D γ,δ ˜ δ,η ( u, v ) ≤ d − (cid:88) i =0 D ˜ V u,v γ,δ ˜ δ,η ( x i , x i +1 ) . (126)We claim that with probability at least 1 − e − c (log 1 /δ ) . , there exists a choice of x , . . . , x d − suchthat for all 0 ≤ i ≤ d − D ˜ V u,v γ,δ ˜ δ,η ( x i , x i +1 ) ≤ E log ˜ D γ, ˜ δ,η ( u, v ) + 4(log δ − ) . . (127)Assuming (127), we can complete the proof of the lemma, as follows. Denote the event in (127) by D and let D = D ∩ D . We obtain from (125), (126) and (127) that E ( D log ˜ D γ,δ ˜ δ,η ( u, v )) ≤ E log ˜ D γ,δ,η ( u, v ) + E log ˜ D γ, ˜ δ,η ( u, v ) + 5 × (log δ − ) . . (128)On the other hand, using an analogue of (38), we have by an application of Jensen’s inequality that E ( D c log ˜ D γ,δ ˜ δ,η ( u, v )) ≤ (log δ − ) e − (log δ − ) . . Setting χ δ = E log ˜ D γ,δ,η ( u,v )log δ − and combining the last display with (128), we obtain χ δ ˜ δ ≤ log δ − log δ − + log ˜ δ − χ δ + log ˜ δ − log δ − + log ˜ δ − χ ˜ δ + (log δ − ) − . . Applying [22] (see also [11, Lemma 6.4.10]), this yields that χ δ converges to some constant χ as δ → δ k = 2 − k , and then by continuity the convergence extends to arbitrary δ →
0. By Proposition 3.2, Lemmas 2.9, 3.8, 3.10 and Corollary 3.9, χ does not depend on u, v .Combined with Corollary 3.3 and Lemma 3.10, this yields Lemma 5.3.It remains to prove (127). The proof follows the proof strategy for (81). Set k = (cid:98) (log δ − ) . (cid:99) and K = 2 k . Partition C i into K many dyadic squares with side length s i /K , and we denote thecollection of such squares as B i , where s i = s C i . For each B ∈ B i , we say B is open if for any Λ ⊆ ∂ B with L (Λ) ≥ − s i /K there exists Λ (cid:48) ⊆ ∂ B with L (Λ (cid:48) ) ≥ (1 − − ) s i /K such thatmin z ∈ Λ log ˜ D γ,δ ˜ δ,η ( z, z (cid:48) ) ≤ E log ˜ D γ, ˜ δ,η ( u, v ) + (log δ − ) . , for each z (cid:48) ∈ Λ (cid:48) . (cid:15) ∗ from (64). Let ˇ η B be defined as in (90) with B large and B Large respectivelyreplaced by B ∗ = { x : (cid:107) x − ∂B (cid:107) ∞ ≤ s i /K } and B ∗∗ = { x : (cid:107) x − ∂B (cid:107) ∞ ≤ s i /K } , i.e.ˇ η B ,s i (cid:15) (cid:48) ( z ) := (cid:40) √ π (cid:82) V × (( (cid:15) (cid:48) ) ,s i ) p B ∗∗ ( s/ , z, w ) W ( dw, ds ) , if z ∈ B ∗ and (cid:15) (cid:48) < s i , , otherwise.Similarly to (91), we havemax C i ∈C max B ∈ B i ,z ∈ B ∗ max (cid:15) (cid:48) θ (cid:48) such that θ maps ˜ V z,z (cid:48) to ˜ V u,v (we see that a is of the same order as s − i K and so a − ≤ s i ).While such desired identity in law does not hold precisely, we claim that there exists a coupling of { ˇ η B ,s i (cid:15) (cid:48) ( x ) : (cid:15) (cid:48) < s i , x ∈ ˜ V z,z (cid:48) } and { η a(cid:15) (cid:48) ( θx ) : (cid:15) (cid:48) < s i , x ∈ ˜ V z,z (cid:48) } such that with high probability withrespect to P ( · | F ∗ ) max n ≥ , − n ≤ a − max x ∈ ˜ V z,z (cid:48) | ˇ η B ,s i − n ( x ) − η a − n ( θx ) | ≤ (log δ − ) . . (131)We postpone the proof of (131) and proceed with the proof of (129). Since (by a straightforwardcomputation) | Var(ˇ η B ,s i − n ( x )) − Var( η a − n ( θx )) | = O (1)(log δ − ) . for all x ∈ ˜ V z,z (cid:48) and 2 − n ≤ a − ,we see that on the event that (130) and (131) hold we have that M ˇ η B γ ( A ) ≤ exp { (log δ − ) − . } a − M ηγ ( θA ) ≤ exp { (log δ − ) − . } s i M ηγ ( θA ) , recalling a − ≤ s i . Combined with (130), it follows that˜ D γ,δ ˜ δ,η ( z, z (cid:48) ) ≤ ˜ D γ, ˜ δ exp {− (log δ − ) . } ,η ( u, v ) . We now combine the preceding inequality with Corollary 3.9 and Proposition 3.17, and deduce that P (log ˜ D γ,δ ˜ δ,η ( z, z (cid:48) ) ≥ E log ˜ D γ, ˜ δ,η ( u, v ) + (log δ − ) . | F ∗ ) ≤ O ( K − ) . z, far = { z (cid:48) ∈ ∂ B : log ˜ D γ,δ ˜ δ,η ( z, z (cid:48) ) ≥ E log ˜ D γ, ˜ δ,η ( u, v ) + (log δ − ) . } . The precedinginequality implies that P (cid:0) L (Λ z, far ) ≥ K − L ( ∂ B ) | F ∗ (cid:1) = O ( K − ) for each z ∈ ∂ B . Therefore, we get that P (cid:0) L ( { z ∈ ∂ B : L (Λ z, far ) ≥ K − L ( ∂ B ) } ) ≥ K − L ( ∂ B ) (cid:12)(cid:12) F ∗ (cid:1) = O ( K − ) . This implies that (129) holds (up to the proof of (131), which is still postponed).Having established (129), we proceed with the percolation argument. We say C i is desirable iffor any Λ i, end ⊆ Λ i with L (Λ i, end ) ≥ . L (Λ i ) (here it is useful to recall (85)), there existsΛ i, start = Λ i, start (Λ i, end ) ⊆ Λ i − with L (Λ i, start ) ≥ . L (Λ i − ) (132)such that the following holds for each x ∈ Λ i, start :min x (cid:48) ∈ Λ i, end log ˜ D ˜ V u,v γ,δ ˜ δ,η ( x, x (cid:48) ) ≤ E log ˜ D γ, ˜ δ,η ( u, v ) + 2(log δ − ) . . (133)In words, C i is desirable if any not-so-small subset of Λ i is connected with a not-so-small subsetof Λ i − by open boxes. Similar to (98), we obtain that each cell C i is desirable with probability1 − e − Ω(2 √ log δ − ) and thus a union bound verifies that all cells C , . . . , C d − are desirable with highprobability.We also need to consider the cells containing u and v . Consider C = C δ,u . Using a similarbut simpler argument, we can show that with probability tending to 1 there exists Λ u ⊆ Λ with L (Λ u ) ≥ . L (Λ ) such that for x ∈ Λ u we have log ˜ D ˜ V u,v γ,δ ˜ δ,η ( u, x ) ≤ log E ˜ D γ, ˜ δ,η ( u, v ) +(log δ − ) . . When this occurs, we say that u is desirable . As before, with high probability, wehave that v is desirable, i.e., there exists Λ v ⊆ Λ d − with L (Λ v ) ≥ . L (Λ d − ) such that for each x ∈ Λ v we have log ˜ D ˜ V u,v γ,δ ˜ δ,η ( v, x ) ≤ log E ˜ D γ, ˜ δ,η ( u, v ) + (log δ − ) . .We now work on the event that u, v are desirable and that C , . . . , C d − are desirable, and wedescribe in what follows how to choose x i ∈ Λ i so that (127) holds. We let Λ ∗ d − = Λ v and for i = d − , . . . ∗ i = Λ i +1 , start (Λ ∗ i +1 ) (where the set Λ i +1 , start ( · ) is defined asin (132)). Therefore, we see that Λ ∗ i ⊆ Λ i and L (Λ ∗ i ) ≥ . L (Λ i ). Next, we set x = u andsequentially set for i = 1 , . . . , d − x i = arg min x (cid:48) ∈ Λ ∗ i ˜ D ˜ V u,v γ,δ ˜ δ,η ( x i − , x (cid:48) ) . It remains to verify (127) for our choices of x i ’s. Since Λ u ∩ Λ (cid:54) = ∅ (this comes from the lowerbounds on their Lebesgue measures), we see (127) holds for i = 0. By (133), (127) holds for1 ≤ i ≤ d −
2. Finally, (127) holds for i = d − ∗ d − = Λ v .We finally return to the proof of (131), which is similar to that of Lemma 2.9. Recall that θ ( x ) = aθ (cid:48) ( x ) for appropriate a > θ (cid:48) is a bijective mapping from ˜ V z,z (cid:48) to ˜ V u,v .Thus, a is of the same order as s − i K and so a − ≤ s i . Recall the definition of ˆ h -process as in (23).By an argument similar to that in the proof of (25) and (26), we have that with high probability,max x ∈ ˜ V z,z (cid:48) max n ≥ , − n ≤ (cid:15) ∗ s ∗ i | ˆ h s i − n ( x ) − ˇ η B ,s i − n ( x ) | + max x ∈ ˜ V z,z (cid:48) max n ≥ ,a − n ≤ | ˆ h a − n ( θx ) − η a − n ( θx ) | ≤ (log δ − ) . . (134)46ext we need to control ˆ h s i − n ( x ) − ˆ h a − − n ( x ) = ˆ h s i a − ( x ). Let C be a maximal collection of points in˜ V z,z (cid:48) such that the pairwise distance is at least a − . Then, | C | ≤ O ( a ). By (24) and Lemma 2.3,we have that E max y : | y − x |≤ a − | ˆ h s i a − ( x ) − ˆ h s i a − ( y ) | = O (1), for all x ∈ C . Thus, by Lemma 2.2, wehave that with high probability,max x ∈ C max y : | y − x |≤ a − | ˆ h s i a − ( x ) − ˆ h s i a − ( y ) | ≤ (log δ − ) . . In addition, since Var(ˆ h s i a − ( x )) ≤ O (1) log δ − for all x ∈ C , a union bound gives that with highprobability max x ∈ C | ˆ h s i a − ( x ) | ≤ (log δ − ) . . Altogether, this gives that with high probabilitymax x ∈ ˜ V z,z (cid:48) | ˆ h s i a − ( x ) | ≤ δ − ) . . Combined with (134), we have that with high probabilitymax x ∈ ˜ V z,z (cid:48) max n ≥ , − n ≤ a − | ˆ h a − − n ( x ) − ˇ η B ,s i − n ( x ) | + max x ∈ ˜ V z,z (cid:48) max n ≥ ,a − n ≤ | ˆ h a − n ( θx ) − η a − n ( θx ) | ≤ (log δ − ) . . Combined with the translation invariance and scaling invariance property of ˆ h -process, we finallyconclude the proof of (131).Finally, we prove Lemma 5.4, where we will crucially used Proposition 3.17 and Lemma 5.3. 𝑢 𝑣 𝐿𝕍% 𝕍% & 𝕍% ( A zooming in picture of the rectangleThe crossing through a rectangle is formed by a constant (the constant depends on the aspect ratio of the rectangle, which in turn can be chosen as say 4) number of point to point geodesics between the red points in the above picture.
Figure 3: On the left, the big box is V and the inside is an illustration of how we join u and v usinggeodesics from u , v to L as well as an annulus enclosing L . On the right is an illustration for thecrossing in the small rectangle. Proof of Lemma 5.4.
Fix an arbitrarily small 0 < ι < C Mc /
6. Let u, v be the left bottom and rightbottom corners of ¯ V , respectively (such choice of u, v is somewhat arbitrary). By Lemma 5.3 thereexists δ depending on ( γ, ι ) such that for all δ ≤ δ ( χ − ι/
10) log δ − ≤ E log ˜ D γ,δ,η ( u, v ) ≤ ( χ + ι/
10) log δ − . (135)47ecall λ = . We denote ¯ V u = V u,λ and ¯ D u, λγ,δ,η by ¯ D γ,δ,η for brevity. We claim that for any linesegment L δ ⊆ ∂ ¯ V u with length in [ δ ι / , δ ι ], we have E log min x ∈ L δ ¯ D γ,δ,η ( u, x ) ≥ ( χ − ι ) log δ − . (136)Suppose (136) does not hold. We assume without loss (by symmetry) that there exists an L δ onthe right vertical boundary of ¯ V u so that (136) fails. Then, we give an upper bound on the distancebetween u and v by gluing the geodesics from u to L δ , v to L δ as well as four short crossings throughfour rectangles (with dimension 10 | L δ | × | L δ | ) which altogether form a contour enclosing L δ (seeFigure 3 for an geometric illustration) — we remark that each of the four rectangle crossings canbe formed by a constant number of point to point geodesics thanks to the restriction to ˜ V x,y inthe definition of ˜ D γ,δ,η ( x, y ). With high probability, the balls intersecting both ∂ V u,λ and ∂ V u, λ (respectively, ∂ V v,λ and ∂ V v, λ ) have LQG measure larger than 2 δ (and thus similar equalities to(123) hold). On this event, one has˜ D γ,δ,η ( u, v ) ≤ min x ∈ L δ ¯ D γ,δ,η ( u, x ) + min x ∈ L δ ¯ D v, λγ,δ,η ( v, x ) + (cid:88) ( x,y ) ˜ D γ,δ,η ( x, y ) , where in the third term on the right hand side, the sum is over all pairs of neighboring red pointson the right hand side of Figure 3 (for each such pair ( x, y ) we have | x − y | ≤ | L δ | ). Thus by(135) and a similar scaling argument as in the proof of (129) we have that with probability tendingto 1 as δ → D γ,δ,η ( x, y ) ≤ δ − χ + ι for all such ( x, y ) . Combined with our assumption that (136) fails for L δ , we then deduce that ˜ D γ,δ,η ( u, v ) ≤ δ − χ + ι/ with probability tending to 1 as δ →
0, contradicting with (135) and Proposition 3.17. Thus, wehave shown that (136) holds.Next, note that min x ∈ ∂ ¯ V u ¯ D γ,δ,η ( u, x ) = min L δ min x ∈ L δ ¯ D γ,δ,η ( u, x ) , where the minimization is over 4 δ − ι many disjoint segments L δ of length δ ι . Combined withProposition 3.17 ( note that { ( u, L δ ) } forms a sequence of admissible pairs as required for applyingProposition 3.17), this implies that E log( min x ∈ ∂ ¯ V u ¯ D γ,δ,η ( u, x )) ≥ ( χ − ι − Cι / ) log δ − for some constant C >
0. Since we can choose ι >
In this appendix, we record, for use in subsequent work, a few lemmas that can be readily deducedfrom the techniques employed in this paper; these lemmas are not used in the paper. Let λ = as in Lemma 5.4. Denote ¯ V u = V u,λ and ¯ V u,α = V u,αλ for α ∈ (0 , Lemma 6.1.
Fix α ∈ (0 , . Let χ be as in Lemma 5.3. Then, for any u ∈ ¯ V , lim δ → E log(min x ∈ ∂ ¯ V u,α ,y ∈ ∂ ¯ V u D γ,δ ( x, y ))log δ − = lim δ → E log(min x ∈ ∂ ¯ V u,α ,y ∈ ∂ ¯ V u D γ,δ,η ( x, y ))log δ − = χ . (137)48 roof. The first equality holds due to Lemma 3.10 and the main task is to prove the second equality.By Lemma 5.4 and a similar derivation to (124), we get that for any κ > v ∈ VE log( min y ∈ ∂ V v,κ D γ,δ,η ( v, y )) = ( χ + o (1)) log δ − . (138)Thus it suffices to prove a lower bound in (137). The proof is similar to that of Lemma 5.4.By Proposition 3.17, it suffices to show that for any fixed ι > L δ ⊆ ∂ ¯ V u,α with length in [ δ ι / , δ ι ] we have E log( min x ∈ L δ ,y ∈ ∂ ¯ V u D γ,δ,η ( x, y )) ≥ ( χ − ι ) log δ − . Suppose the preceding statement fails for some L δ . Let v L δ be an arbitrary point on L δ . As shownin Figure 3 employed in the proof of Lemma 5.4, we can construct four short crossings throughfour rectangles (with dimension 10 | L δ | × | L δ | ) which altogether form a contour enclosing L δ .Consequently, the union of these short crossings, the geodesic between L δ and ∂ ¯ V u , as well as thegeodesic between v L δ and ∂ ¯ V u contains a path between v L δ and ∂ ¯ V u . Therefore, by the sameargument as in Lemma 5.4, we get that E log( min y ∈ ∂ ¯ V u D γ,δ,η ( v L δ , y )) ≤ ( χ − ι ) log δ − . This contradicts with (138). Thus, we complete the proof of the lemma by contradiction.Fix ξ > B , we denote by 2 B a Euclideanball concentric with B , whose radius is double that of B . For δ > u, v ∈ V ξ , we define a variation of Liouville graph distance D ( ) γ,δ,ξ ( u, v ) to be the minimal d such thatthere exist Euclidean balls B , . . . , B d ⊆ V ξ with rational centers and M γ (2 B i ) ≤ δ for 1 ≤ i ≤ d ,whose union contains a path from u to v .For an Euclidean ball B with radius r centered at z , we define its circle-average-approximate-LQG measure by M ◦ γ ( B ) = r γ / e γh r ( z ) , compare with (27). For δ > u, v ∈ V ξ , we define another variation of Liouville graph distance D ◦ γ,δ,ξ ( u, v ) to be the minimal d such that there exist Euclidean balls B , . . . , B d ⊆ V ξ with rational centers and M ◦ γ ( B i ) ≤ δ for1 ≤ i ≤ d , whose union contains a path from u to v .We define D (cid:48) γ,δ,ξ ( x, y ) to be a version of the approximate Liouville graph distance where werestrict to cells in V ξ . One can verify that our proofs for Lemmas 3.5, 3.8, 3.10 and Corollary 3.9as well as Proposition 3.17 extend automatically to D (cid:48) γ,δ,ξ . Recall C Mc as specified in Lemma 3.1. Proposition 6.2.
For any fixed < ξ < C Mc / there exists a constant c = c ( γ, ξ ) so that for anyfixed ι > and any sequence of ξ -admissible pairs ( A δ , B δ ) , min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) · δ ι ≤ min x ∈ A δ ,y ∈ B δ D ( ) γ,δ,ξ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D γ,δ ( x, y ) · δ − ι , with ( c · ι ) -high probability. The preceding statement remains true if we replace D ( ) γ,δ,ξ by D ◦ γ,δ,ξ .Proof. By Lemma 6.1 and Proposition 3.17, we have that with ( c · ι )-high probabilitymin x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ ( x, y ) · δ ι ≤ min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ,ξ ( x, y ) ≤ min x ∈ A δ , y ∈ B δ D (cid:48) γ,δ ( x, y ) · δ − ι . c · ι )-highprobability min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ,ξ ( x, y ) · δ ι ≤ min x ∈ A δ ,y ∈ B δ D ( ) γ,δ,ξ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ,ξ ( x, y ) · δ − ι , min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ,ξ ( x, y ) · δ ι ≤ min x ∈ A δ ,y ∈ B δ D ◦ γ,δ,ξ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δ,ξ ( x, y ) · δ − ι . (139)The proof of (139) is similar to that of Proposition 3.2. Thus, we only briefly discuss how to adaptthe proof of Proposition 3.2.For D ( ) γ,δ,ξ , since D ( ) γ,δ,ξ ≥ D γ,δ,ξ , it remains to bound D ( ) γ,δ,ξ by D (cid:48) γ,δ,ξ from above. We repeat theproof of Proposition 3.2, but with the following change: we will now define a new version of Φ B,δ (similar to that in Definition 3.6) to be the minimal number of Euclidean balls B with M γ (2 B ) ≤ δ that covers ∂B . (The only difference is that we used M γ (2 B ) in the preceding definition as opposedto M γ ( B ) as in Definition 3.6.) One can then just repeat the arguments with this version of Φ B,δ to conclude the proof on the upper bound — the only place that needs to be changed is in theproof of (50) and (51), where the required change is noting but enlarging a few constants whichhave been absorbed by much larger terms in the earlier proof.Next, we consider D ◦ γ,δ,ξ . By [13, Proposition 3.2] (which states that the circle average processand our ˆ h -process are close to each other) and Lemma 2.8, we get that with high probabilitymax j :2 − j ≥ δ C mc+10 max x ∈ V ξ | η − j ( x ) − h − j ( x ) | = O ( (cid:112) log δ − ) . This, together with Lemma 3.4, implies that with high probabilitymin x ∈ A δ ,y ∈ B δ D (cid:48) γ,δe (log δ − . ,ξ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D ◦ γ,δ,ξ ( x, y ) ≤ min x ∈ A δ ,y ∈ B δ D (cid:48) γ,δe − (log δ − . ,ξ ( x, y ) . Combining Lemma 3.5, we complete the proof of (139), and thus the proof of the proposition.
References [1] R. J. Adler.
An Introduction to Continuity, Extrema and Related Topics for General GaussianProcesses . Lecture Notes - Monograph Series. Institute of Mathematical Statistics, Hayward,CA, 1990.[2] S. Andres and N. Kajino. Continuity and estimates of the Liouville heat kernel with applica-tions to spectral dimensions.
Prob. Th. Rel. Fields , 166:713–752, 2016.[3] N. Berestycki. Diffusion in planar Liouville quantum gravity.
Ann. Inst. Henri Poincar´eProbab. Stat. , 51(3):947–964, 2015.[4] N. Berestycki. An elementary approach to Gaussian multiplicative chaos.
Electron. Commun.Probab. , 22:Paper No. 27, 12, 2017.[5] N. Berestycki. Introduction to the Gaussian free field and Liouville quantum gravity. Availableat , 2017.[6] N. Berestycki, C. Garban, R. Rhodes, and V. Vargas. KPZ formula derived from Liouvilleheat kernel.
J. Lond. Math. Soc. (2) , 94(1):186–208, 2016.507] M. Biskup, J. Ding, and S. Goswami. Random walk in two-dimensional exponenti-ated gaussian free field: recurrence and return probability. 2016. Preprint, available at https://arxiv.org/abs/1611.03901 .[8] C. Borell. The Brunn-Minkowski inequality in Gauss space.
Invent. Math. , 30(2):207–216,1975.[9] A. Cortines, J. Gold, and O. Louidor. Dynamical freezing in a spin glass system with loga-rithmic correlations. 2016. Preprint, available at https://arxiv.org/abs/1605.08392 .[10] F. David and M. Bauer. Another derivation of the geometrical KPZ relations.
J. Stat. Mech.Theory Exp. , (3):P03004, 9, 2009.[11] A. Dembo and O. Zeitouni.
Large deviations techniques and applications , volume 38 of
Stochas-tic Modelling and Applied Probability . Springer-Verlag, Berlin, 2010. Corrected reprint of thesecond (1998) edition.[12] J. Ding and A. Dunlap. Liouville first passage percolation: subsequential scaling limits at hightemperatures. 2016. To appear, Annals of Probability.[13] J. Ding and S. Goswami. Upper bounds on Liouville first passage percolation and Watabiki’sprediction. 2016. Preprint, available at https://arxiv.org/abs/1610.09998 .[14] J. Ding, O. Zeitouni, and F. Zhang. On the Liouville heat kernel for k-coarse MBRW andnonuniversality.
Elec. J. Probab. , 23:paper 62, 1–20, 2018.[15] J. Ding and F. Zhang. Non-universality for first passage percolation on the exponential oflog-correlated Gaussian fields.
Probability Theory and Related Fields , 2015. to appear.[16] J. Ding and F. Zhang. Liouville first passage percolation: geodesic dimensionis strictly larger than 1 at high temperatures. 2017. Preprint, available at https://arxiv.org/abs/1711.01360 .[17] B. Duplantier and S. Sheffield. Liouville quantum gravity and KPZ.
Invent. Math. , 185(2):333–393, 2011.[18] X. Fernique. Regularit´e des trajectoires des fonctions al´eatoires gaussiennes. In ´Ecole d’ ´Et´e deProbabilit´es de Saint-Flour, IV-1974 , pages 1–96. Lecture Notes in Math., Vol. 480. Springer,Berlin, 1975.[19] C. Garban, R. Rhodes, and V. Vargas. On the heat kernel and the Dirichlet form of LiouvilleBrownian motion.
Electron. J. Probab. , 19:no. 96, 25, 2014.[20] C. Garban, R. Rhodes, and V. Vargas. Liouville Brownian motion.
Ann. Probab. , 44(4):3076–3110, 2016.[21] E. Gwynne, N. Holden, and X. Sun. A distance exponent for Liouville quantum gravity. 2016.Preprint, available at http://arxiv.org/abs/1606.01214 .[22] J. M. Hammersley. Generalization of the fundamental theorem on sub-additive functions.
Proc.Cambridge Philos. Soc. , 58:235–238, 1962. 5123] J.-P. Kahane. Sur le chaos multiplicatif.
Ann. Sci. Math. Qu´ebec , 9(2):105–150, 1985.[24] M. Ledoux.
The Concentration of Measure Phenomenon . American Mathematical Society,Providence, RI, 2001.[25] P. Maillard, R. Rhodes, V. Vargas, and O. Zeitouni. Liouville heat kernel: regularity andbounds.
Ann. Inst. Henri Poincar´e Probab. Stat. , 52(3):1281–1320, 2016.[26] J. Miller and S. Sheffield. Liouville quantum gravity and the Brownian map I: The QLE(8/3,0)metric. 2015. Preprint, available at http://arxiv.org/abs/1507.00719 .[27] J. Miller and S. Sheffield. Liouville quantum gravity and the Brownian map II: The QLE(8/3,0)metric. 2016. Preprint, available at https://arxiv.org/abs/1605.03563 .[28] J. Miller and S. Sheffield. Liouville quantum gravity and the Brownian map III: the conformalstructure is determined. 2016. Preprint, available at https://arxiv.org/abs/1608.05391 .[29] J. Miller and S. Sheffield. Quantum Loewner evolution.
Duke Math. J. , 165(17):3241–3378,2016.[30] R. Rhodes and V. Vargas. KPZ formula for log-infinitely divisible multifractal random mea-sures.
ESAIM Probab. Stat. , 15:358–371, 2011.[31] R. Rhodes and V. Vargas. Gaussian multiplicative chaos and applications: a review.
Probab.Surv. , 11:315–392, 2014.[32] R. Rhodes and V. Vargas. Spectral dimension of Liouville quantum gravity.
Ann. HenriPoincar´e , 15(12):2281–2298, 2014.[33] R. Rhodes and V. Vargas. Lecture notes on gaussian multiplicative chaos and Liouville quan-tum gravity. 2016. Preprint, available at http://arxiv.org/abs/1602.07323 .[34] R. Robert and V. Vargas. Gaussian multiplicative chaos revisited.
Ann. Probab. , 38(2):605–631, 2010.[35] A. Shamov. On Gaussian multiplicative chaos.
J. Funct. Anal. , 270(9):3224–3261, 2016.[36] S. Sheffield. Gaussian free fields for mathematicians.
Probab. Theory Related Fields , 139(3-4):521–541, 2007.[37] V. N. Sudakov and B. S. Cirel (cid:48) son. Extremal properties of half-spaces for spherically invariantmeasures.