Upper bounds on the one-arm exponent for dependent percolation models
UUPPER BOUNDS ON THE ONE-ARM EXPONENT FORDEPENDENT PERCOLATION MODELS
VIVEK DEWAN AND STEPHEN MUIRHEAD Abstract.
We prove upper bounds on the one-arm exponent η for dependent percolationmodels; while our main interest is level set percolation of smooth Gaussian fields, the argumentsapply to other models in the Bernoulli percolation universality class, including Poisson-Voronoiand Poisson-Boolean percolation. More precisely, in dimension d = 2 we prove η ≤ / η ≤ d/ η ≤ d − Introduction
The critical phase of percolation models is believed (see, e.g., [23, Chapter 9]) to be describedby critical exponents which govern the power-law behaviour of macroscopic observables at, ornear, criticality. In this paper we consider the one-arm exponent ; we introduce this in theclassical setting of Bernoulli percolation, before generalising to a class of dependent percolationmodels induced by the excursion sets of smooth Gaussian fields (‘Gaussian percolation’).Fix a dimension d ≥
2, consider the lattice Z d = ( V , E ), and declare each edge e ∈ E to be‘open’ independently with probability p ∈ [0 , P p of the open subset of E isknown as Bernoulli percolation on Z d with parameter p . Defining the connection event { A ←→ B } := { there exists a path of open edges that intersects A and B } where A, B ⊂ V , and denoting by Λ R := [ − R, R ] d ⊂ V the box of size R , it is well known [23]that there exists p c = p c ( d ) ∈ (0 , p c (2) = 1 / p c ( d ) < / d ≥
3, such that θ ( p ) := P p [0 ←→ ∞ ] := lim R →∞ P p [0 ←→ ∂ Λ R ] = (cid:40) p < p c ,> p > p c . Although it is still open to prove θ ( p c ) = 0 for d ≥
3, it has been shown that [11, 2](1.1) θ ( p ) ≥ c ( p − p c )for a constant c = c ( d ) and p > p c sufficiently close to p c ; this is known as the mean-field lowerbound and is expected to be tight for dimensions d ≥ d c = 6 in which critical exponents taketheir mean-field value.At criticality p = p c it is believed that connection probabilities between scales obey a powerlaw, in the sense that there exists η > R → ∞ and for r = o ( R ),(1.2) P p c [Λ r ←→ ∂ Λ R ] = ( r/R ) − η + o (1) . Institut Fourier, Universit´e Grenoble Alpes School of Mathematics and Statistics, University of Melbourne
E-mail addresses : [email protected], [email protected] . Date : February 25, 2021.2010
Mathematics Subject Classification.
Key words and phrases.
Percolation, critical exponents, Gaussian fields. We allow the path to be empty, so that { A ←→ B } occurs if A ∩ B (cid:54) = ∅ . a r X i v : . [ m a t h . P R ] F e b UPPER BOUNDS ON THE ONE-ARM EXPONENT
While the existence of the one-arm exponent η is not known rigorously, since we are interestedin upper bounds we define(1.3) η := lim inf R →∞ − log P p c [0 ←→ ∂ Λ R ]log R .
Clearly upper bounds on (1.3) imply upper bounds on the exponent in (1.2) assuming its ex-istence. Note however that the choice of lim inf, rather than lim sup, in the definition of η isdeliberate and yields a priori weaker upper bounds (see however Remark 1.3).The phenomenon of universality suggests that a wide class of dependent percolation modelsbehave similarly to Bernoulli percolation at, or near, criticality, and in particular η should beidentical inside this class. In this paper we consider the following class of dependent models.Let f be a continuous stationary-ergodic centred Gaussian field on R d , and for (cid:96) ∈ R write P (cid:96) [ · ]to denote P [ f + (cid:96) ∈ · ] (abbreviating P = P ). Then the excursion sets { f + (cid:96) ≥ } := { x ∈ R d : f ( x ) + (cid:96) ≥ } induce a stationary ergodic percolation model on R d via the connectivity relation { A ←→ B } := { there exists a path in { f ≥ } that intersects A and B } for closed sets A, B ⊂ R d . Recalling the box Λ R := [ − R, R ] d (now considered a subset of R d ),by monotonicity there exists (cid:96) c = (cid:96) c ( f ) ∈ [ −∞ , ∞ ] such that θ ( (cid:96) ) := P (cid:96) [Λ ←→ ∞ ] := lim R →∞ P (cid:96) [Λ ←→ ∂ Λ R ] = (cid:40) (cid:96) < (cid:96) c ,> (cid:96) > (cid:96) c , where the choice of { Λ ←→ ∂ Λ R } rather than { ←→ ∂ Λ R } is to avoid the possibility of localobstructions (relevant only in the case that the FKG inequality is not available; see the commentsafter (POS’)). Under general conditions it is known that (cid:96) c = 0 if d = 2 and (cid:96) c ∈ ( −∞ ,
0] if d ≥ η := lim inf R →∞ − log P (cid:96) c [Λ ←→ ∂ Λ R ]log R .
In this case the mean-field lower bound (1.1) has not yet been established; indeed in this paperwe prove it for Gaussian fields with finite-range dependence .1.1.
Upper bounds on the one-arm exponent.
We now present our main results, whichare upper bounds on η . We begin with Bernoulli percolation; although the results are not newin this case, they are illustrative for general models. Theorem 1.1.
For Bernoulli percolation on Z d , η ≤ (cid:40) / d = 2 ,d/ d ≥ . Remark . If d = 2, the bound η ≤ / which established that P p c [0 ←→ ∂ Λ R ] ≥ cR − / . Recently this was improved to ≥ cR − / [14], giving η ≤ /
6. It isbelieved that η = 5 /
48, but this is known rigorously only for very specific models [47].In general dimension the hyperscaling inequality η ≤ d/ (1 + δ ) has been established rigor-ously [9], where δ is the critical exponent governing the volume of critical clusters. In light ofthe mean-field bound δ ≥ η ≤ d/ In high dimension d ≥
11 it is knownthat η takes its mean-field value η = 2 [31, 20], see also Corollary 1.2 below. It is believed that η = (cid:40) . . . . d = 3 , . . . . d = 4 , . . . . d = 5 , d ≥ d c = 6 . In the paper the argument is attributed to van den Berg. Although [9] assumes the existence of the exponent δ , one can extract the unconditional bound η ≤ d/ PPER BOUNDS ON THE ONE-ARM EXPONENT 3
In particular, the bound η ≤ d/ d c = 6(indeed, it implies d c ≥ η = 2 cannot occur for d ≤ η , in d = 2 one can use RSW estimates to prove that η > ε (whichcan be quantified, but is small), however if d ∈ { , , , } it is still wide open to prove η > Remark . For the bound η ≤ / d = 2, we could replace the liminf in the definition of η in (1.3) with limsup, since the argument yields P p c [0 ←→ ∂ Λ R ] ≥ cR − / , see (2.9). In fact, theargument gives the stronger bound P p c [ A ( R )] ≥ cR − / , where A ( R ) is the (polychromatic)two-arm event ; see Section 2 for the definition and more details.Similarly, if d ≥ P p c [0 ←→ ∂ Λ R ] ≥ cR − d/ by workingunder an (unproven) assumption that critical ‘box-crossing’ probabilities do not converge to 1,which to our knowledge is a new inference; see Remark 2.6 for details. Note that this assumptionis expected to be true if d < d c = 6, but likely not if d ≥ δ ≥
2, and in the case of the stronger bound η ≤ / d = 2, on the ‘parafermionic observable’), and hence do not extend easily to dependentpercolation models. On the contrary, we give a new proof of Theorem 1.1 that extends naturallyto a wide class of dependent models; our next result illustrates this for Gaussian percolation.Let us begin by stating some assumptions. Recall that f is a continuous stationary-ergodiccentred Gaussian field. We will always assume that f has a spatial moving average representation f = q (cid:63) W , where q ∈ L ( R d ) (cid:54) = 0 is Hermitian (i.e. q ( x ) = q ( − x )), W is the white noiseon R d , and (cid:63) denotes convolution; a sufficient condition is that the covariance kernel K ( · ) := E [ f (0) f ( · )] = ( q (cid:63) q )( · ) is in L ( R d ), since then we may define q := F [ √ ρ ], where F dentotes theFourier transform and ρ = F [ K ] ∈ C ( R d ) is the spectral density of the field.For our main results we will further assume that q satisfies the following basic properties: Assumption 1.4 (Basic assumptions on the Gaussian field) . (a) (Regularity) q is three-times differentiable and each of these derivatives is in L ( R d ) .(b) (Decay of correlations, with parameter β > d ) There exists a c > such that, for all x ∈ R d , max {| q ( x ) | , |∇ q ( x ) |} ≤ c | x | − β . (c) (Symmetry) q is symmetric under negation and permutation of the coordinate axes. Let us explain some consequences of Assumption 1.4. The regularity condition implies that K = q (cid:63) q ∈ C ( R d ), and hence f is C -smooth almost surely (see [1, Theorem 1.4.1]). Thedecay condition implies that q ∈ L ( R d ) and so also K ∈ L ( R d ), which ensures that the spectraldensity is continuous and ( f, ∇ f, ∇ f ) is non-degenerate (i.e. its evaluation on a finite numberof distinct points is a non-degenerate Gaussian vector, see [6, Lemma A.2]). The symmetryassumption is crucial in d = 2 (for instance, to prove RSW estimates), but it also simplifiessome aspects of the proof in all dimensions. Finally, as we mentioned above, if d = 2 thenAssumption 1.4 is sufficient to prove that (cid:96) c = 0 (see [37, Theorem 1.3] and Remark 1.9 therein).For most of our results we also assume(POS) (cid:90) q := (cid:90) R d q ( x ) dx > . This is equivalent to the spectral density being positive at the origin, and is a natural assumptionwhen studying how properties of a Gaussian field change with the level; see e.g. [38, 5].For some of our results we further assume that f is positively correlated :(POS’) K ( x ) = ( q (cid:63) q )( x ) ≥ x ∈ R d .This is equivalent to the FKG inequality holding for the field (i.e. the field is positively associ-ated ), so that events increasing with respect to the field are positively correlated [41]. Note that Although in [41] this is proven only for finite Gaussian vectors , one can deduce positive associations for allincreasing events considered in this paper via standard approximation arguments, see [43, Lemma A.12].
UPPER BOUNDS ON THE ONE-ARM EXPONENT (POS’) is stronger than (POS) (the former implies that the spectral density is positive definite,and so strictly positive at the origin unless K = 0).Recall that the mean-field lower bound (1.1) is not known for Gaussian percolation. Weintroduce it as an assumption: There exists c > (cid:96) > (cid:96) c sufficiently close to (cid:96) c ,(MFB) θ ( (cid:96) ) := P (cid:96) [Λ ←→ ∞ ] ≥ c ( (cid:96) − (cid:96) c ).While we expect (MFB) to hold in great generality, in this paper we prove it only for finite-rangedependent fields, see Theorem 1.14 below.For Gaussian percolation we prove the following upper bounds on the one-arm exponent: Theorem 1.5.
Suppose f = q (cid:63) W satisfies Assumption 1.4 with parameter β and (POS) .(1) If d ≥ and β > d − , then η ≤ min { d − α (3 d − α , d − } where α = d − β − d +4 .(2) If d ≥ , β > d − , and (MFB) holds, then η ≤ min { max { d + α ( d − , α (2 d − } , d − } where α = d − β − d .(3) If d = 2 , β > , and (POS’) holds, then η ≤ min { + β − , } . To illustrate Theorem 1.5 consider the example of the
Bargmann-Fock field with covariancekernel K ( x ) = e −| x | / (see [4] for background and motivation), which is easily seen to satisfyAssumption 1.4 for every parameter β and also (POS’). According to the Harris criterion (see[51], or [7] for further discussion), it is expected that Gaussian percolation is in the Bernoullipercolation universality class if K ( x ) (cid:28) c | x | − /ν where ν = ν ( d ) is the correlation lengthexponent of Bernoulli percolation (see Section 1.2). In particular the Bargmann-Fock field isexpected to belong to this class, and hence possess the same exponents as Bernoulli percolation. Corollary 1.6.
Suppose f = q (cid:63) W satisfies Assumption 1.4 for every parameter β and (POS’) (e.g. the Bargmann-Fock field). Then η ≤ (cid:40) / d = 2 ,d − d ≥ . Further, if (MFB) holds then η ≤ d/ .Proof. Take β → ∞ in Theorem 1.5. (cid:3) One can also consider the example of finite-range dependent fields, i.e. for which(BOU) q has bounded support,noting that this supersedes the second condition in Assumption 1.4. Corollary 1.7.
Suppose f = q(cid:63)W satisfies Assumption 1.4 and (POS) – (BOU) . Then η ≤ d/ .Proof. Take β → ∞ in the second statement of Theorem 1.5 ((MFB) holds by Theorem 1.14). (cid:3) Remark . Previously for Gaussian percolation it was known only that η ≤ / d = 2, and η ≤ d − d ≥ β . Notably, as for Bernoulli percolation, the bound d/ d = 6. We emphasise that Corollary 1.7 does not assume positive correlations, andso applies to a class of models that lack positive associations. Remark . As in Remark 1.3, if d = 2 we could replace the liminf in the definition of η with limsup, since the proof yields polynomial lower bounds on P (cid:96) c [Λ ←→ ∂ Λ R ] (see (3.12)).Indeed the proof gives polynomial lower bounds on the two-arm event ; for example for theBargmann-Fock field we prove that, for every ε > c > P (cid:96) c [ { there exists a path in { f = 0 } that intersects Λ and ∂ Λ R } ] ≥ cR − / − ε . PPER BOUNDS ON THE ONE-ARM EXPONENT 5
Relations to other critical exponents.
The methods used to prove the above resultsalso give bounds on η in terms of other critical exponents. For simplicity we state these onlyfor Bernoulli percolation, but similar bounds can be proven for Gaussian percolation (which,under Assumption 1.4 and (POS’)–(BOU), would match those in Theorem 1.10 below).Let us introduce the relevant exponents. Recall the mean-field lower bound (1.1) on θ ( p ). Itis expected that θ ( p ) → p ↓ p c ; although this is not known rigorously (exceptin high dimension), we will assume that the corresponding exponent exists β = lim p ↓ p c log θ ( p )log | p c − p | ∈ (0 , . Below criticality p < p c , it is known that connection probabilities decay exponentially [34, 2, 18]and that the limit 1 ξ ( p ) := lim R →∞ − log P p [0 ←→ ∂ Λ R ] R ∈ (0 , ∞ )exists [23, Theorem 6.10]. The correlation length ξ ( p ) is expected to diverge as a power law as p ↑ p c , and we will again assume that the corresponding exponent exists ν = lim p ↑ p c − log ξ ( p )log | p c − p | ∈ (0 , ∞ ) . Similarly, as p ↑ p c the susceptibility χ ( p ) := (cid:80) v ∈ Z d P p [0 ←→ v ] < ∞ is expected to diverge asa power law, and we will assume the existence of γ = lim p ↑ p c − log χ ( p )log | p c − p | ∈ (0 , ∞ ) . Finally we also assume that the critical two-point function decays as a power law with exponent d − η := lim | v | ∞ →∞ − log P p c [0 ←→ v ]log | v | ∞ ∈ (0 , ∞ ) , where | · | ∞ denotes the sup-norm. It is well known that ν ≥ /d [10], γ ≥ η ≤ Theorem 1.10.
For Bernoulli percolation on Z d , assuming the existence of β, ν, γ and η , (1.5) 2 − γν ≤ η ≤ ¯ η ≤ min (cid:110) d − ν , − η /β − (cid:111) , where ¯ η is defined as in (1.3) with limsup replacing liminf. Moreover η ≤ d /β + 1 and ¯ η ≤ − ν , if d = 2 . Remark . To our knowledge the bounds in (1.5) are new even for Bernoulli percolation, and η ≥ − γν may be of particular interest as a lower bound on η . The bound η ≤ d /β +1 is impliedby the hyperscaling inequality in [9], and for ¯ η ≤ − ν if d = 2 see [29, 50].For Bernoulli percolation in sufficiently high dimension it is known that the exponents ν , γ and η exist and take their mean-field values ν = 1 / γ = 1 [3], and η = 0 [27]. HenceTheorem 1.10 gives a new proof of the result of Kozma and Nachmias that η = 2 in highdimension: Corollary 1.12 ([31]) . For Bernoulli percolation on Z d , there exists d > such that, if d ≥ d , lim R →∞ − log P p c [0 ←→ ∂ Λ R ]log R = 2 . Remark . Our argument is significantly simpler than the one in [31], however it yields only c R − ≤ P p c [0 ←→ ∂ Λ R ] ≤ c R − (log R ) whereas [31] proved that P p c [0 ←→ ∂ Λ R ] (cid:16) R − in the sense of bounded ratios (see Remark 2.7).Another difference is that we deduce η = 2 in any dimension from the bounds ν ≤ / η ≥ γ/ν ≤ − η ), whereas the argument in [31]uses as input d > η = 0 (or more precisely P p c [0 ←→ v ] (cid:16) | v | − d +2 ∞ ). UPPER BOUNDS ON THE ONE-ARM EXPONENT
Sharpness of the phase transition for smooth Gaussian fields.
As well as boundson the one-arm exponent, a second aim of this paper is to establish the sharpness of the phasetransition for smooth finite-range dependent Gaussian fields, and in addition verify the mean-field lower bound (MFB) for such fields. For this we adapt the celebrated argument of Duminil-Copin, Raoufi and Tassion [16] by exploiting a new ‘Russo-type inequality’ for smooth Gaussianfields (see Proposition 4.1); we expect this inequality will have further applications.
Theorem 1.14 (Sharpness of the phase transition and mean-field lower bound) . Suppose f = q (cid:63) W is C -smooth and satisfies (BOU) . Then for every (cid:96) < (cid:96) c there exist c , c > such that,for R ≥ , P (cid:96) [Λ ←→ ∂ Λ R ] ≤ c e − c R . Moreover, the mean-field lower bound (MFB) holds.Remark . For Gaussian percolation the sharpness of the phase transition was only knownso far in two cases: (i) in d = 2 assuming Assumption 1.4 and (POS’) [38], and (ii) for certain discrete Gaussian fields on Z d satisfying (POS’) [13]. The mean-field lower bound (MFB) wasnot known for any smooth Gaussian fields. We emphasise that in Theorem 1.14 we do notassume (POS’), so this theorem holds for a class of fields lacking positive associations. Remark . Clearly if (BOU) holds then f is finite-range dependent, but we do not knowwhether every finite-range dependent f can be represented as q (cid:63) W for q with bounded support(although this seems very natural, and it is true if d = 1, see [19]). If we demand in additionthat q be supported on half of the support of K then, rather surprisingly, this is false [19]. Onthe other hand, under (POS’) it is true [19, Corollary 3.2]. Moreover, it is known [45] that if f is finite-range dependent and isotropic (i.e. K is rotationally symmetric) it can be representedas a countable sum of independent f i = q i (cid:63) W i for q i with uniformly bounded support. Since itis straightforward to extend our proof of Theorem 1.14 to handle such fields, the conclusions ofTheorem 1.14 (and Corollary 1.7) also hold if f is smooth, finite-range dependent, and isotropic .1.4. Other models.
Other than Bernoulli percolation and level set percolation of Gaussianfields, the arguments adapt naturally to many other models in the Bernoulli percolation univer-sality class. For instance, both Poisson-Voronoi and Poisson-Boolean percolation can be treatedin a very similar way (although in the latter case the obtained bounds may depend on thedecay of the radius distribution, and also some of our arguments in d = 2 do not apply sincethe model lacks self-duality). Indeed the necessary tools to apply the OSSS inequality in thesesettings, analogous to the arguments in Section 4, have already been developed in [15] and [17]respectively. For brevity we do not discuss details here.While this work was begin finalised we learned that similar arguments to those we use to prove η ≤ / d = 2 were previously used in the general setting of increasing Boolean functions [8];see Section 2.4 for a statement of the relevant result from [8] and a comparison to what we prove.1.5. Outline of the paper.
In Section 2 we study Bernoulli percolation and give the proofof Theorems 1.1 and 1.10. In Section 3 we adapt the arguments to the Gaussian setting, andgive the proof of Theorem 1.5 subject to an auxiliary result (Proposition 3.9). In Section 4 weestablish the Russo-type inequality for smooth Gaussian fields mentioned above, and apply it toprove Proposition 3.9 and Theorem 1.14. The appendix contains a technical result on orthogonaldecompositions of Gaussian fields.1.6.
Acknowledgements.
The second author was partially supported by the Australian Re-search Council (ARC) Discovery Early Career Researcher Award DE200101467. The authorsthank Damien Gayet, Tom Hutchcroft, Ioan Manolescu and Hugo Vanneuville for helpful dis-cussions, comments on an earlier draft, and for pointing out references [14] (Ioan) and [8, 49](Hugo). This work was initiated while the first author was visiting Queen Mary University ofLondon and we thank the University for its hospitality.
PPER BOUNDS ON THE ONE-ARM EXPONENT 7 Bernoulli percolation
In this section we focus on Bernoulli percolation, which serves as a template for the extensionof the arguments to dependent percolation models.Let us begin by introducing notation for connection events. For k, R >
0, define the box B k ( R ) := [ − R, R ] × [ − kR, kR ] d − ⊂ E , and the ‘box-crossing event’Cross k ( R ) := (cid:110) {− R } × [ − kR, kR ] d − B k ( R ) ←→ { R } × [ − kR, kR ] d − (cid:111) , where { A E ←→ B } := { there exists a path of open edges in E ⊂ E that intersects A and B } .For R ≥
0, define the one-arm event A ( R ) := { ←→ ∂ Λ R } . Restricting for a moment to d = 2, we also introduce the (polychromatic) two-arm event A ( R )that was mentioned in Remark 1.3. Consider the dual lattice ( Z ) ∗ ; in this graph an edge isconsidered open if and only if the unique edge e ∈ E that it crosses is closed (i.e. not open).Note that each vertex v ∈ V has four neighbouring dual vertices, and for A ⊂ V let A (cid:63) be theunion of these neighbours over v ∈ A . For A, B ⊆ V define { A E ⇐⇒ B } = { A E ←→ B } ∩ { there exists a dual path in E that intersects A (cid:63) and B (cid:63) } , where a dual path in E is a path of dual edges that cross closed edges in E , and abbreviate { A ⇐⇒ B } = { A Z ⇐⇒ B } . For R ≥
0, define A ( R ) := { ⇐⇒ ∂ Λ R } and η := lim inf R →∞ − log P p c [ A ( R )]log R .
We make the elementary observation that(2.1) η ≥ η where η is defined in (1.3). To see this, note that by the FKG inequality P p [ A ( R )] ≤ P p [ A ( R )] P p [ { there exists a dual path that intersects 0 ∗ and Λ (cid:63)R } ] . Since Bernoulli percolation on Z is self-dual at p c = 1 /
2, and by translation invariance, P p c [ { there exists a dual path that intersects 0 ∗ and Λ (cid:63)R } ] ≤ P p c [ A ( R − . Hence P p c [ A ( R )] ≤ P p c [ A ( R − , and (2.1) follows immediately.Let us return to the general setting of Bernoulli percolation on Z d . The case d ≥ Proposition 2.1.
For < p ≤ q < and R ≥ , P q [ A ( R )] − P p [ A ( R )] ≤ max (cid:110) √ (cid:112) q (1 − q ) , √ (cid:112) p (1 − p ) (cid:111) ( q − p ) (cid:115) P q [ A ( R )] (cid:88) v ∈ Λ R P p [0 ←→ v ] . For the case d = 2 of Theorem 1.1 we rely instead on the following inequalities: Proposition 2.2.
Let k ≥ . Then there exists c > such that, for p ∈ (0 , and R ≥ , ddp P p [ Cross k ( R )] ≤ cR d/ (cid:112) p (1 − p ) × (cid:40)(cid:112) P p [ A ( R )] d = 2 , (cid:112) P p [ A ( R )] d ≥ . Proposition 2.3.
Let d = 2 and k ≥ . Then there exists c > such that, for p ∈ (0 , and R ≥ , (2.2) ddp P p [ Cross k ( R )] ≥ cp (1 − p ) P p [ Cross / (8 k ) ( kR )] (cid:0) − P p [ Cross k ( R/ (cid:1) P p [ A ( R )] . We prove Propositions 2.1–2.3 later in the section; for now we complete the proof of our mainresults (Theorems 1.1 and 1.10). First we recall some standard facts:
UPPER BOUNDS ON THE ONE-ARM EXPONENT
Lemma 2.4. (1) There exists δ > and p (cid:48) = p (cid:48) ( R ) ≤ p c such that, for R ≥ , P p (cid:48) [ Cross ( R )] = δ. (2) (RSW) Let d = 2 and k > . Then there exists δ > such that, for R ≥ , P p c [ Cross k ( R )] ∈ ( δ, − δ ) . Proof.
For the first statement, a classical bootstrapping argument (see, e.g., [28, Section 5.1])shows that P p c [Cross ( R )] > δ , and the result follows by continuity in p . The second statementamounts to the classical RSW estimates. (cid:3) Proof of Theorem 1.1.
In the proof c > d ≥
3. We may assume that P p c [ A ( R )] → R → ∞ since otherwise η = 0. Define q = q ( R ) > p c such that P q [ A ( R )] = min { P p c [ A ( R )] , } , which exists since p (cid:55)→ P p [ A ( R )] is continuous and strictly increasing. Note that q ( R ) → p c as R → ∞ since otherwise lim sup R →∞ P p c [ A ( R )] ≥ lim sup R →∞ θ ( q ( R )) / > . By the mean-field lower bound (1.1), for sufficiently large R (2.3) P p c [ A ( R )] = P q [ A ( R )] / ≥ θ ( q ) / ≥ c ( q − p c ) . Now apply Proposition 2.1 to p = p c and q = q ( R ); this yields P p c [ A ( R )] = P q [ A ( R )] − P p c [ A ( R )] ≤ c ( q − p c ) (cid:115) P p c [ A ( R )] (cid:88) v ∈ Λ R P p c [0 ←→ v ]for large R . Combining with (2.3), we deduce that(2.4) P p c [ A ( R )] (cid:88) v ∈ Λ R P p c [0 ←→ v ] ≥ c for all R ≥ η ≤ d/ η = 0 there is nothing to prove, so assume η > η ∗ ∈ (0 , η ). Then by the definition of η (2.5) P p c [ A ( R )] ≤ R − η ∗ for large R ; in particular, via an integral comparison,(2.6) (cid:88) v ∈ Λ R P p c [ A ( (cid:98)| v | ∞ / (cid:99) )] ≤ max { R d − η ∗ , } (log R )for large R . Next observe that { ←→ v } implies the occurrence of { ←→ Λ (cid:98)| v | ∞ / (cid:99) } and { v ←→ v + Λ (cid:98)| v | ∞ / (cid:99) } which depend on disjoint subsets of edges. Hence by translation invariance and (2.6)(2.7) (cid:88) v ∈ Λ R P p c [0 ←→ v ] ≤ (cid:88) v ∈ Λ R P p c [ A ( (cid:98)| v | ∞ / (cid:99) )] ≤ max { R d − η ∗ , } (log R )for large R , and so c ≤ P p c [ A ( R )] (cid:88) v ∈ Λ R P p c [0 ←→ v ] ≤ R − η ∗ max { R d − η ∗ , } (log R ) . This implies η ∗ ≤ d/
3, and since η ∗ < η was arbitrary, we deduce η ≤ d/ PPER BOUNDS ON THE ONE-ARM EXPONENT 9
We now turn to the case d = 2. By Propositions 2.2–2.3 and the RSW estimates (the secondstatement of Lemma 2.4), for large Rc P p c [ A ( R )] ≤ ddp P p [Cross ( R )] (cid:12)(cid:12)(cid:12) p = p c ≤ cR (cid:113) P p c [ A ( R )]which yields, for large R ,(2.8) P p c [ A ( R )] ≥ cR − / . By the discussion after (2.1), this implies(2.9) P p c [ A ( R )] ≥ cR − / for large R , and hence η ≤ / (cid:3) Remark . One could replace the right-hand side of (2.2) with the (perhaps simpler) expression cp (1 − p ) P p [Cross k ( R )] (cid:0) − P p [Cross k ( R )] (cid:1) R (cid:80) Ri,j =0 P p c [ A (min { i, j } )] . While this suffices to prove η ≤ /
3, it does not yield the stronger bounds (2.8)–(2.9).
Remark . In the case d ≥ P p c [ A ( R )] ≥ cR − d/ . However, asmentioned in Remark 1.3, one can obtain this by working under a ‘box-crossing’ assumption.First, by modifying the proof of Proposition 2.3 one can prove that, for every k ≥ c > p ∈ (0 ,
1) and R ≥ ddp P p [Cross k ( R )] ≥ cp (1 − p ) P p [Cross k ( R )] (cid:0) − P p [Cross k ( R/ (cid:1) P p [ A ( R )] . Next assume the following box-crossing property: For every k ≥ δ ∈ (0 ,
1) there are δ ∈ (0 ,
1) and R > R ≥ R and p ≤ p c ,(BOX) P p [Cross k ( R )] < − δ = ⇒ P p [Cross k ( R/ < − δ . Then by working on the sequence p (cid:48) = p (cid:48) ( R ) ≤ p c at which P p (cid:48) [Cross ( R )] = δ , guaranteed by thefirst statement of Lemma 2.4, and comparing upper and lower bounds on ddp P p [Cross ( R )] | p = p (cid:48) ,one deduces the result.Note that (BOX) states roughly that if box-crossings do not occur with high probability forone aspect ratio, then they do not occur with high probability for other aspect ratios. This isknown in d = 2 by the RSW estimates in Lemma 2.4, and is strongly believed to hold if d < lower bounds on η . Proof of Theorem 1.10.
In the proof c > o (1) denotes a quantity that decays to zero as R → ∞ .We begin with the bounds η ≤ d/ (2 /β + 1) and ¯ η ≤ (2 − η ) / (2 /β −
1) which require only aslight change to the argument used to prove η ≤ d/ q ( R ) → p c bedefined as in (2.3). By the definition of the exponent β , one can replace (2.3) with P p c [ A ( R )] ≥ θ ( q ) / ≥ c ( q − p c ) β + o (1) , which gives, in place of (2.4),(2.10) P p c [ A ( R )] β − o (1) (cid:88) v ∈ Λ R P p c [0 ←→ v ] ≥ c, for large R . Then using (2.7), for any η ∗ ∈ (0 , η ) and large R we have R − η ∗ ( β − o (1) max { R d − η ∗ , } (log R ) ≥ c, which implies η ≤ d/ (2 /β + 1). On the other hand, by the definition of the exponent η , (cid:88) v ∈ Λ R P p c [0 ←→ v ] = R − η + o (1) , which by (2.10) implies P p c [ A ( R )] ≥ R − (2 − η ) / (2 /β − o (1) and hence ¯ η ≤ (2 − η ) / (2 /β − c > P p [0 ←→ v ] ≤ e − c | v | ∞ /ξ ( p ) for all p < p c and v ∈ Z d . We also recall the standard facts [23, Theorem 6.14] that ξ ( p ) iscontinuous, strictly increasing, and ξ ( p ) → ∞ as p ↑ p c .Let C > R sufficiently large, let p (cid:48) = p (cid:48) ( R ) ↑ p c be such that R = Cξ ( p (cid:48) ) log ξ ( p (cid:48) ). Since we have the a priori bound P p c [ A ( R )] ≥ cR − ( d − / [24, 48], we can take C > P p (cid:48) [ A ( R )] ≤ cR d − e − c C log ξ ( p (cid:48) ) ≤ cR d − R − c C + o (1) ≤ P p c [ A ( R )] / R . Then applying Proposition 2.1 to p = p (cid:48) and q = p c gives, for large R , P p c [ A ( R )] / ≤ P p c [ A ( R )] − P p (cid:48) [ A ( R )] ≤ c ( p c − p (cid:48) ) (cid:115) P p c [ A ( R )] (cid:88) v ∈ Λ R P p (cid:48) [0 ←→ v ]or, equivalently,(2.12) P p c [ A ( R )] ≤ c ( p c − p (cid:48) ) (cid:88) v ∈ Λ R P p (cid:48) [0 ←→ v ] . Since (cid:80) v ∈ Λ R P p (cid:48) [0 ←→ v ] ≤ χ ( p (cid:48) ), and by the definition of the exponents ν and γ , this implies P p c [ A ( R )] ≤ c ( p c − p (cid:48) ) χ ( p (cid:48) ) ≤ cξ ( p (cid:48) ) − /ν + o (1) ξ ( p (cid:48) ) γ/ν + o (1) = R − (2 − γ ) /ν + o (1) for large R , which implies η ≥ (2 − γ ) /ν .Finally, let δ > P p c [Cross ( R )] ≥ δ for large R (possible by the first statementof Lemma 2.4), and again let p (cid:48) = p (cid:48) ( R ) ↑ p c be such that R = Cξ ( p (cid:48) ) log ξ ( p (cid:48) ). Then P p (cid:48) [Cross ( R )] ≤ δ/ R , and we deduce that there exists p (cid:48)(cid:48) ∈ ( p (cid:48) , p c ) such that ddp P p [Cross ( R )] (cid:12)(cid:12)(cid:12) p = p (cid:48)(cid:48) ≥ δ/ p c − p (cid:48) . On the other hand, by Proposition 2.2 and monotonicity in p , ddp P p [Cross ( R )] (cid:12)(cid:12)(cid:12) p = p (cid:48)(cid:48) ≤ cR d/ (cid:113) P p c [ A ( R )]and hence ( p c − p (cid:48) ) R d P p c [ A ( R )] ≥ c for large R . By the definition of the exponent ν , this implies ξ ( p (cid:48) ) − /ν + o (1) R d P p c [ A ( R )] = R d − /ν + o (1) P p c [ A ( R )] ≥ c for large R , which implies that ¯ η ≤ d − /ν . The bound ¯ η ≤ − /ν in d = 2 is similar, exceptwe use two-arm events as in the proof of Theorem 1.1. (cid:3) Remark . As mentioned in Remark 1.13, by combining the high-dimensional bounds [26, 27] ξ ( p ) ≤ c ( p c − p ) − / and P p c [0 ←→ v ] ≤ c | v | − d +2 ∞ with (2.4) and (2.12), one arrives at a quantitative version of Corollary 1.12, namely the bounds c R − ≤ P p c [ A ( R )] ≤ c R − (log R ) . PPER BOUNDS ON THE ONE-ARM EXPONENT 11
Exploration algorithms.
To prove Propositions 2.1–2.3 we make use of exploration al-gorithms , which we introduce in a general setting.
Definition . Let X = ( X i ) be a countable set of random variablestaking values in arbitrary probability spaces. A (randomised) algorithm A on X is a randomadapted procedure that sequentially reveals a subset of the coordinates X i and returns a value.We say that A determines an event A if it returns the value A almost surely. The revealment Rev( i ) of a given coordinate X i is the probability that A reveals this coordinate.For Bernoulli percolation we consider algorithms on X = ( X e ) e ∈E for X e = e open . A usefulproperty of the events A ( R ) and Cross k ( R ) is the existence of determining algorithms whoserevealments are controlled by connection probabilities. Recall the box B k ( R ) ⊂ E , and defineits right half B + k ( R ) := [0 , R ] × [ − kR, kR ] d − ⊂ E . If d = 2, define also its top-right quarter B † k ( R ) := [0 , R ] × [0 , kR ] ⊂ E . Lemma 2.9.
For every p ∈ (0 , and R ≥ there is an algorithm determining A ( R ) suchthat, under P p , (cid:88) e ∈E Rev ( e ) ≤ (cid:88) v ∈ Λ R P p [0 ←→ v ] . Moreover for every k ≥ , p ∈ (0 , and R ≥ there are algorithms determining Cross k ( R ) such that, under P p , max e ∈ B + k ( R ) Rev ( e ) ≤ P p [ A ( R )] , and, if d = 2 , max e ∈ B † k ( R ) Rev ( e ) ≤ P p [ A ( R )] . We only give a sketch of proof; for more details see the proof of Lemma 3.6 which givesanalogous statements in the Gaussian setting.
Proof (sketch).
Recall the definition of { A E ←→ B } , and for each edge e ∈ E let { e E ←→ B } bethe union of { v E ←→ B } over the endpoints v of e , and { e E ⇐⇒ B } similarly.For the first statement, let W be the random subset of B ( R ) defined by W := (cid:110) e ∈ B ( R ) (cid:12)(cid:12)(cid:12)(cid:8) B ( R ) ←→ e (cid:9)(cid:111) . Then consider the algorithm that sequentially reveals W starting from the origin. This deter-mines A ( R ) and satisfies (cid:88) e ∈E Rev( e ) = (cid:88) e ∈ B ( R ) P p (cid:104)(cid:110) B ( R ) ←→ e (cid:111)(cid:105) ≤ (cid:88) v ∈ Λ R P p [ { ←→ v } ] . For the second statement define instead W := (cid:110) e ∈ B k ( R ) (cid:12)(cid:12)(cid:12)(cid:8) e B k ( R ) ←→ {− R } × [ − kR, kR ] d − (cid:9)(cid:111) . Then consider the algorithm that sequentially reveals W starting from the vertical hyperplane {− R } × [ − kR, kR ] d − . This determines Cross k ( R ) since any crossing of B k ( R ) intersects thehyperplane {− R } × [ − kR, kR ] d − , and the revealments for edges in B + k ( R ) are bounded bymax e ∈ B + k ( R ) P p (cid:104)(cid:110) e B k ( R ) ←→ {− R } × [ − kR, kR ] d − (cid:111)(cid:105) ≤ P p [ A ( R )] . For the third statement define instead W := (cid:110) e ∈ B k ( R ) (cid:12)(cid:12)(cid:12) (cid:8) e B k ( R ) ⇐⇒ (cid:0) {− R } × [ − kR, kR ] (cid:1) ∪ (cid:0) [ − R, R ] × {− kR } (cid:1)(cid:9)(cid:111) and consider the algorithm that sequentially reveals W starting from the union of the verticaland horizontal lines {− R } × [ − kR, kR ] and [ − R, R ] × {− kR } . This determines Cross k ( R ), since if we reveal all interfaces that intersect these vertical and horizontal lines then we also determineCross k ( R ). Moreover the revealments for edges in B † k ( R ) are bounded bymax e ∈ B † k ( R ) P p (cid:104)(cid:110) e B k ( R ) ⇐⇒ (cid:0) {− R } × [ − kR, kR ] (cid:1) ∪ (cid:0) [ − R, R ] × {− kR } (cid:1)(cid:111)(cid:105) ≤ P p [ A ( R )] . (cid:3) Proof of Propositions 2.1 and 2.2.
We prove a general bound valid for arbitrary events,which extends a result from [40] (see also [46, Appendix B] and [50] for similar arguments):
Proposition 2.10.
Let p, q ∈ (0 , , let A be an event depending on a finite number of edges,let A be an algorithm determining A , and let E (cid:48) ⊆ E be a subset of edges. Then (2.13) | P E (cid:48) p ; q [ A ] − P p [ A ] | ≤ max (cid:110) (cid:112) p (1 − p ) , (cid:112) q (1 − q ) (cid:111) | p − q | (cid:113) max { P p [ A ] , P E (cid:48) p ; q [ A ] } E p |W E (cid:48) | where P E (cid:48) p ; q denotes the modification of P p in which the parameter on E (cid:48) is set to q (remaining at p on other edges), and W E (cid:48) is the set of edges in E (cid:48) that are revealed by A . In particular, (2.14) (cid:12)(cid:12)(cid:12) (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] (cid:12)(cid:12)(cid:12) ≤ (cid:112) p (1 − p ) (cid:113) P p [ A ] E p |W E (cid:48) | , where ∂∂p e denotes the derivative with respect to the parameter on e . Our proof of Proposition 2.10 is different to previous approaches in the literature (see Re-mark 2.14), and relies on properties of the relative entropy. For P and Q probability measureson a common measurable space, the relative entropy (or Kullback-Leibler divergence) from P to Q is defined as D KL ( P || Q ) := (cid:90) log (cid:16) dPdQ (cid:17) dP if P is absolutely continuous with respect to Q , and D KL ( P || Q ) := ∞ otherwise; D KL ( P || Q ) isnon-negative by Jensen’s inequality. If X and Y are random variables taking values in a commonmeasurable space, with respective laws P and Q , we also write D KL ( X || Y ) for D KL ( P || Q ). Weshall need two basic properties of the relative entropy (see [32, Theorem 2.2 and Corollary 3.2]):(1) (Chain rule) Let X = ( X , X ) and Y = ( Y , Y ) be random variables taking values in acommon product measurable space. Then(2.15) D KL ( X || Y ) = D KL ( X || Y ) + E x ∼ X (cid:2) D KL (( X | X = x ) || ( Y | Y = x )) (cid:3) . (2) (Contraction) Let X and Y be random variables taking values in a common measurablespace and let F be a measurable map from that space. Then(2.16) D KL ( X || Y ) ≥ D KL ( F ( X ) || F ( Y )) . We first state a simple lemma on the relative entropy of stopped sequences of i.i.d. randomvariables. A stopping time for a real-valued sequence X = ( X i ) i ≥ is a positive integer τ = τ ( X )such that { τ ≥ n + 1 } is determined by ( X i ) i ≤ n . We define the corresponding stopped sequence X τ = ( X τi ) i ≥ as X τi = X i for i ≤ τ , and X τi = † for i > τ , where † is an arbitrary symbol. Lemma 2.11.
Let X = ( X i ) ni ≥ and Y = ( Y i ) ni ≥ be finite real-valued sequences of i.i.d. randomvariables with respective univariate laws µ and ν , let τ ≤ n be a stopping time, and let X τ and Y τ be the corresponding stopped sequences. Then D KL (cid:0) X τ (cid:13)(cid:13) Y τ (cid:1) = E [ τ ( X )] D KL ( µ (cid:107) ν ) . Proof.
Define X k ∧ τ = ( X τi ) i ≤ k and analogously for Y . By the chain rule (2.15), for 1 ≤ k ≤ n − D KL (cid:0) X ( k +1) ∧ τ (cid:13)(cid:13) Y ( k +1) ∧ τ (cid:1) = D KL (cid:0) X k ∧ τ (cid:13)(cid:13) Y k ∧ τ (cid:1) + E x ∼ ( X τi ) i ≤ k (cid:2) D KL (cid:0) X τk +1 (cid:12)(cid:12) ( X τi ) i ≤ k = x (cid:13)(cid:13) Y τk +1 (cid:12)(cid:12) ( Y τi ) i ≤ k = x (cid:1)(cid:3) = D KL (cid:0) X k ∧ τ (cid:13)(cid:13) Y k ∧ τ (cid:1) + E x ∼ ( X τi ) i ≤ k (cid:2) τ ( X ) ≥ k +1 D KL (cid:0) X k +1 (cid:12)(cid:12) ( X τi ) i ≤ k = x (cid:13)(cid:13) Y k +1 (cid:12)(cid:12) ( Y τi ) i ≤ k = x (cid:1)(cid:3) = D KL (cid:0) X k ∧ τ (cid:13)(cid:13) Y k ∧ τ (cid:1) + P [ τ ( X ) ≥ k + 1] D KL ( µ (cid:107) ν ) PPER BOUNDS ON THE ONE-ARM EXPONENT 13 where in the last step we used that τ is a stopping time. Hence, by induction, D KL (cid:0) X τ (cid:13)(cid:13) Y τ (cid:1) = (cid:88) ≤ k ≤ n − P [ τ ( X ) ≥ k + 1] D KL ( µ (cid:107) ν ) = E [ τ ( X )] D KL ( µ (cid:107) ν ) . (cid:3) We also need a variant of Pinsker’s inequality:
Lemma 2.12.
Let P and Q be probability measures on a common measurable space and let A be an event. Then | P ( A ) − Q ( A ) | ≤ (cid:112) { P ( A ) , Q ( A ) } D KL ( P (cid:107) Q ) . Proof.
We use a standard reduction to the binary case. Let Ber( x ) and Ber( y ) be Bernoullirandom variables with respective parameters x := P ( A ) and y := Q ( A ). By the contractionproperty (2.16) D KL ( P (cid:107) Q ) ≥ D KL (Ber( x ) (cid:107) Ber( y )), so it suffices to prove that(2.17) ( x − y ) ≤ { x, y } D KL (Ber( x ) (cid:107) Ber( y )) . If x ∈ { , } or y ∈ { , } then (2.17) is trivial since either the right-hand side is infinite (if x (cid:54) = y ) or both sides are zero (if x = y ). On the other hand, if x, y ∈ (0 ,
1) then D KL (Ber( x ) (cid:107) Ber( y )) := x log xy + (1 − x ) log 1 − x − y = (cid:90) xy x − ss (1 − s ) ds ≥ { x, y } (cid:90) xy ( x − s ) ds = 12 max { x, y } ( x − y ) where we used that sup s ∈ [ a,b ] s (1 − s ) ≤ max { a, b } for 0 ≤ a ≤ b ≤ (cid:3) Remark . In the proof we could replace max { x, y } with min { max { x, y } , / } , which recoversthe classical Pinsker’s inequality d T V ( P, Q ) := sup A | P ( A ) − Q ( A ) | ≤ (cid:112) D KL ( P (cid:107) Q ) / Proof of Proposition 2.10.
Recall that W E (cid:48) denotes the edges in E (cid:48) that are revealed by thealgorithm, and let W = ( W i ) i ≤|W E(cid:48) | denote the configuration on W E (cid:48) listed in the order ofrevealment. Moreover let W (cid:48) denote the configuration on edges in E \ E (cid:48)
First suppose that the algorithm A depends only on the configuration (i.e. there is no auxiliaryrandomness). Then the event A is measurable with respect to ( W, W (cid:48) ), and so by Lemma 2.12 | P E (cid:48) p ; q [ A ] − P p [ A ] | ≤ (cid:113) { P p [ A ] , P E (cid:48) p ; q [ A ] } D KL (( X, Z ) || ( Y, Z ))]where (
X, Z ) (resp. (
Y, Z )) is a random variable with the law of (
W, W (cid:48) ) under P p (resp. P E (cid:48) p ; q ).Moreover, conditionally on W (cid:48) , W has the law, under P p (resp. P E (cid:48) p ; q ), of a sequence of i.i.d.Bernoulli random variables with parameter p (resp. q ) stopped at the stopping time |W E (cid:48) | .Hence by the chain rule for the Kullback-Liebler divergence and Lemma 2.11, D KL (( X, Z ) || ( Y, Z )) = E (cid:2) D KL (( X |F Z ) || ( Y |F Z )) (cid:3) = E p |W E (cid:48) | D KL (Ber( p ) (cid:107) Ber( q ))where F Z denotes the σ -algebra generated by Z . Combining we have(2.18) | P E (cid:48) p ; q [ A ] − P p [ A ] | ≤ (cid:113) { P p [ A ] , P E (cid:48) p ; q [ A ] } E p |W E (cid:48) | D KL (Ber( p ) (cid:107) Ber( q )) . Finally since D KL (Ber( p ) (cid:107) Ber( q )) := p log pq + (1 − p ) log 1 − p − q = (cid:90) pq p − ss (1 − s ) ds ≤ max (cid:110) p (1 − p ) , q (1 − q ) (cid:111) (cid:90) pq ( p − s ) ds = max (cid:110) p (1 − p ) , q (1 − q ) (cid:111) ( p − q ) the proof is complete.The general case follows by averaging over any auxiliary randomness in the algorithm, sinceby Jensen’s inequality E [ (cid:112) E [ |W E (cid:48) |G ]] ≤ E [ (cid:112) |W E (cid:48) | ] for any sub- σ -algebra G . (cid:3) Remark . For comparison we sketch an alternative approach which is closer to that appearingin previous works (e.g. [40]); this leads to the bound(2.19) (cid:12)(cid:12)(cid:12) (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] (cid:12)(cid:12)(cid:12) ≤ p (1 − p ) (cid:113) P p [ A ] E p |W E (cid:48) | which is comparable to (2.14), although we believe it to be less general than the non-differentialstatement (2.13) (in particular, it does not seem straightforward to obtain (2.4) from (2.19)).Consider Russo’s formula (cid:12)(cid:12)(cid:12) (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] (cid:12)(cid:12)(cid:12) = 1 p (1 − p ) (cid:12)(cid:12)(cid:12) (cid:88) e ∈E (cid:48) Cov p ( A , e open ) (cid:12)(cid:12)(cid:12) (2.20)and decompose the sum as (cid:88) e ∈E (cid:48) Cov p (cid:0) A Rev( e ) , e open (cid:1) + (cid:88) e ∈E (cid:48) Cov p (cid:0) A Rev( e ) c , e open (cid:1) where Rev( e ) denotes the event that e is revealed by the algorithm. One can check that A Rev( e ) c is independent of e open and so the second sum vanishes. Hence (2.20) is at most1 p (1 − p ) (cid:12)(cid:12)(cid:12) (cid:88) e ∈E (cid:48) Cov p (cid:0) A Rev( e ) , e open (cid:1) (cid:12)(cid:12)(cid:12) ≤ p (1 − p ) (cid:115) P p [ A ] E p (cid:104)(cid:16) (cid:88) e ∈E (cid:48) Rev( e ) ( e open − p ) (cid:17) (cid:105) where we used the the Cauchy-Schwartz inequality. For edges e and f introduce the eventRev( e, f ) := Rev( e ) ∩ Rev( f ) ∩ { e is revealed before f } . Again one checks that, for e (cid:54) = f , Rev( e,f ) ( e open − p ) is independent of f open . Hence E p (cid:104)(cid:16) (cid:88) e ∈E (cid:48) Rev( e ) ( e open − p ) (cid:17) (cid:105) = (cid:88) e ∈E (cid:48) E p (cid:2) Rev( e ) ( e open − p ) (cid:3) ≤ (cid:88) e ∈E (cid:48) E p [ Rev( e ) ] = E p |W E (cid:48) | , since off-diagonal terms are zero by independence, and we used that ( e open − p ) ≤
1. Com-bining the above gives (2.19).We can now complete the proof of Propositions 2.1 and 2.2:
Proof of Proposition 2.1.
This follows directly from (2.13) (with E (cid:48) = E ) by considering thealgorithm in Lemma 2.9 that determines A ( R ) such that E p |W E (cid:48) | = (cid:88) e ∈E Rev( e ) ≤ (cid:88) v ∈ Λ R P p [0 ←→ v ] . (cid:3) Proof of Proposition 2.2.
For d ≥
2, recall the box B k ( R ) and its right half B + k ( R ) := [0 , R ] × [ − kR, kR ] d − . Consider the algorithm in Lemma 2.9 that determines Cross k ( R ) such that (cid:88) e ∈ B + k ( R ) Rev( e ) ≤ cR d max e ∈ B + k ( R ) Rev( e ) ≤ cR d P p [ A ( R )] . for c = c ( k ) >
0. By reflective symmetry in the vertical axis, ddp P p [Cross k ( R )] = (cid:88) e ∈ B k ( R ) ∂∂p e P p [Cross k ( R )] ≤ (cid:88) e ∈ B + k ( R ) ∂∂p e P p [Cross k ( R )]and hence, applying (2.14) (with E (cid:48) = B + k ( R )) ddp P p [Cross k ( R )] ≤ (cid:112) p (1 − p ) (cid:113) E p |W E (cid:48) | ≤ √ c R d/ (cid:112) p (1 − p ) (cid:113) P p [ A ( R )] . PPER BOUNDS ON THE ONE-ARM EXPONENT 15
For d = 2, recall the top-right quarter B † k ( R ) := [0 , R ] × [0 , kR ] of the box B k ( R ). Considerthe algorithm in Lemma 2.9 that determines Cross k ( R ) such that (cid:88) e ∈ B † k ( R ) Rev( e ) ≤ cR max e ∈ B † k ( R ) Rev( e ) ≤ cR P p [ A ( R )]for c = c ( k ) >
0. Again by reflective symmetry (this time in both axes) ddp P p [Cross k ( R )] = (cid:88) e ∈ B k ( R ) ∂∂p e P p [Cross k ( R )] ≤ (cid:88) e ∈ B † k ( R ) ∂∂p e P p [Cross k ( R )] , and the result follows from (2.14) (with E (cid:48) = B † k ( R )) as in the previous case. (cid:3) Proof of Proposition 2.3.
We begin by introducing the OSSS inequality. Let X =( X i ) ni =1 be a finite sequence of independent random variables taking values in arbitrary proba-bility spaces, and let A be an event. Then the resampling influence of X i on A is(2.21) Infl( i ) := P [ X ∈ A (cid:54) = X ( i ) ∈ A ]where X ( i ) denotes X with the coordinate X i resampled. Theorem 2.15 (OSSS inequality [39]) . For every algorithm A determining A ,Var ( A ) ≤ n (cid:88) i =1 Rev ( i ) Infl ( i ) where Rev ( i ) is the revealment of X i under A . Returning to the setting of Bernoulli percolation, combining the OSSS inequality with Russo’sformula yields the following:
Proposition 2.16.
Let p ∈ (0 , , let A be an increasing event depending on a finite number ofedges, let A be an algorithm determining A , and let E (cid:48) ⊆ E be a subset of edges. Then (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] ≥ p (1 − p ) Var p (cid:2) P p [ A | F E (cid:48) ] (cid:3) max e ∈E (cid:48) Rev ( e ) , where F E (cid:48) is the σ -algebra generated by the edges in E (cid:48) , and the revealements Rev ( e ) are under P p .Remark . If p = 1 /
2, the quantity Var p [ P p [ A |F E (cid:48) ]] has an interpretation as the square-sumof the Fourier coefficients of A supported on non-empty subsets of E (cid:48) (see, e.g., [21]). Proof.
Let X denote the vector of configurations on edges e / ∈ E (cid:48) , and ( X e ) e ∈E (cid:48) be the con-figuration on the remaining edges. Then by the OSSS inequality (Theorem 2.15) applied to X = ( X , ( X e ) e ∈E (cid:48) ), and bounding the revealment of X by 1,Var p ( A ) ≤ (cid:16) Infl(0) + (cid:88) e ∈E (cid:48) Rev( e )Infl( e ) (cid:17) where Infl(0) and Infl( e ) are defined as in (2.21) under P p . Next observe that12 Infl(0) = 12 E p (cid:2) P p [the outcome of A changes when the edges e / ∈ E (cid:48) are resampled | F E (cid:48) ] (cid:3) = E p (cid:2) P p [ A | F E (cid:48) ](1 − P p [ A | F E (cid:48) ]) (cid:3) = E p (cid:2) Var p [ A | F E (cid:48) ] (cid:3) , and hence, by the law of total variance,Var p ( A ) − Infl(0) / p ( A ) − E p (cid:2) Var p [ A | F E (cid:48) ] (cid:3) = Var p (cid:2) P p [ A | F E (cid:48) ] (cid:3) . This yields the following extension of the OSSS inequality(2.22) Var p (cid:2) P p [ A | F E (cid:48) ] (cid:3) ≤ (cid:88) e ∈E (cid:48) Rev( e )Infl( e ) ≤ max e ∈E (cid:48) Rev( e )2 (cid:88) e ∈E (cid:48) Infl( e ) . We deduce the result by combining with Russo’s formula for increasing events, namely (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] = 2 p (1 − p ) (cid:88) e ∈E (cid:48) Infl( e )(which coincides with (2.20) since Cov p ( A , e open ) = 2Infl( e ) for increasing A ). (cid:3) Proof of Proposition 2.3.
Recall the top-right quarter B † k ( R ) and set E (cid:48) = B † k ( R ). We claim(2.23) Var p (cid:2) P p [Cross k ( R ) | F E (cid:48) ]] ≥ P p [Cross / (8 k ) ( kR )] (1 − P p [Cross k ( R/ . Assuming (2.23), the statement follows by applying Proposition 2.16 to the algorithm in Lemma 2.9that determines Cross k ( R ) whose revealments on B † k ( R ) are bounded by 2 P p [ A ( R )].To prove (2.23), remark first that, for any event A and sub- σ -algebra G ,Var[ P [ A |G ]] = E [( P [ A |G ] − P [ A ]) ] ≥ sup A (cid:48) ∈G E [( P [ A |G ] − P [ A ]) A (cid:48) ](2.24) ≥ sup A (cid:48) ∈G P [ A (cid:48) ]( P [ A | A (cid:48) ] − P [ A ]) where the second inequality is Jensen’s. Hence it is enough to construct an event A (cid:48) , measur-able with respect to the configuration on the top-right quarter, such that Cross k ( R ) becomessubstantially more likely if A (cid:48) occurs (see Figure 1 for an illustration).Define A (cid:48) := (cid:110) { R/ } × [ R/ , R/ [ R/ ,R ] × [ R/ ,R/ ←→ { R } × [ R/ , R/ (cid:111) ∩ (cid:110) [ R/ , R/ × { R/ } [ R/ ,R/ × [ R/ ,kR ] ←→ [ R/ , R/ × { kR } (cid:111) , which is measurable with respect to F E (cid:48) . By the FKG inequality and symmetry (and an obviousevent inclusion), P p [ A (cid:48) ] ≥ P p [Cross / (8 k ) ( kR )] . Define also the events B := (cid:110) {− R } × [ R/ , R/ [ − R,R/ × [ R/ , R/ ←→ { R/ } × [ R/ , R/ (cid:111) B := (cid:110) { R/ } × [ − kR, kR ] [3 R/ ,R ] × [ − kR,kR ] ←→ { R } × [ − kR, kR ] (cid:111) which are defined on disjoint domains and are translated copies of, respectively, Cross / (3 R/ k ( R/ C := (cid:110) {− R } × [ − kR, kR ] B k ( R ) ←→ (cid:0) { R/ } × [ R/ , kR ] (cid:1) ∪ (cid:0) [ R/ , R ] × { R/ } (cid:1) ∪ (cid:0) { R } × [ − kR, R/ (cid:1)(cid:111) and observe (i) Cross k ( R ) ⊆ C , (ii) on A (cid:48) , Cross k ( R ) = C , and (iii) B ∩ B c ⊆ C \ Cross k ( R ).Hence P p [Cross k ( R ) | A (cid:48) ] − P p [Cross k ( R )] = P p [ C | A (cid:48) ] − P p [Cross k ( R )] ≥ P p [ C ] − P p [Cross k ( R )]= P p [ C \ Cross k ( R )] ≥ P p [ B ∩ B c ] = P p [ B ](1 − P p [ B ]) ≥ P p [Cross / (8 k ) ( kR )] (cid:0) − P p [Cross k ( R/ (cid:1) , where the second step is by the FKG inequality, the penultimate step uses disjoint domains, andthe final step is an obvious event inclusion. Applying (2.24) (with A = Cross k ( R ) and G = F E (cid:48) )gives (2.23). (cid:3) A general bound for revealments of increasing events.
Combining Propositions 2.10and 2.16 yields a general lower bound on the revealments of increasing events:
Proposition 2.18.
In the setting of Proposition 2.16 (in particular the event A is increasing), max e ∈E (cid:48) Rev ( e ) ≥ (cid:0) Var p (cid:2) P p [ A | F E (cid:48) ] (cid:3)(cid:1) / ( p (1 − p ) P p [ A ] |E (cid:48) | ) / . PPER BOUNDS ON THE ONE-ARM EXPONENT 17
Figure 1.
An illustration of the proof of (2.23). The first panel shows theevent A (cid:48) . The second illustrates how, on A (cid:48) , the event C is equivalent toCross k ( R ). The third shows how the crossing given by B , combined with thedual crossing given by B c , realises C but not Cross k ( R ). Proof.
By (2.14) and Proposition 2.16 we have4 p (1 − p ) Var p (cid:2) P p [ A | F E (cid:48) ] (cid:3) max e ∈E (cid:48) Rev( e ) ≤ (cid:88) e ∈E (cid:48) ∂∂p e P p [ A ] ≤ (cid:112) p (1 − p ) (cid:113) P p [ A ] E p |W E (cid:48) | ≤ (cid:112) p (1 − p ) (cid:113) P p [ A ] |E (cid:48) | max e ∈E (cid:48) Rev( e )and rearranging gives the result. (cid:3) Proposition 2.18 generalises a result from [8] which considered the case p = 1 / E (cid:48) is theset of edges on which A depends; denoting by n the cardinality of this set of edges, this gives(2.25) max e Rev( e ) ≥ (cid:0) / [ A ] (cid:1) / ( P / [ A ] n ) / which is comparable to [8, Theorem 2 (part 2)], although (2.25) has a stronger constant.3. Level set percolation of Gaussian fields
We now establish our main results in the case of Gaussian percolation; the proof will closelyfollow the approach for Bernoulli percolation in Section 2. For k, R >
0, recall the box B k ( R ) :=[ − R, R ] × [ − kR, kR ] d − , which we now view as a subset of R d . Then defineCross k ( R ) := (cid:110) {− R } × [ − kR, kR ] d − B k ( R ) ←→ { R } × [ kR, kR ] d − (cid:111) where { A E ←→ B } := { there exists a path in { f ≥ } ∩ E that intersects A and B } . For 0 ≤ r ≤ R define A ( r, R ) := { Λ r ←→ ∂ Λ R } and A ( r, R ) := { Λ r ⇐⇒ ∂ Λ R } where { A E ⇐⇒ B } = { A E ←→ B } ∩ { there exists a path in { f ≤ } ∩ E that intersects A and B } , and { A ⇐⇒ B } = { A R d ⇐⇒ B } . By continuity of f , if d = 2 then A ( r, R ) could equivalently bedefined as A ( r, R ) = { there exists a path in { f = 0 } that intersects Λ r and ∂ Λ R } . We make the elementary observation that P (cid:96) c [ A ( r, R )] ≤ P (cid:96) c [ A ( r, R )], and moreover if d = 2and under (POS’) (so the FKG inequality is available; c.f. (2.1)) then(3.1) P (cid:96) c [ A ( r, R )] ≤ P (cid:96) c [ A ( r, R )] . We now state the analogues of Propositions 2.1–2.3, which concern Gaussian fields f = q (cid:63) W with finite-range dependence . Recall the Dini derivatives , defined for f : R → R as d + dx f ( x ) = lim sup ε ↓ f ( x + ε ) − f ( x ) ε and d − dx f ( x ) = lim inf ε ↓ f ( x + ε ) − f ( x ) ε . Proposition 3.1.
Suppose f = q (cid:63) W satisfies Assumption 1.4 and (POS) – (BOU) , and let r > be such that q is supported on Λ r . Then for (cid:96) ≤ (cid:96) (cid:48) and R ≥ r ≥ , P (cid:96) (cid:48) [ A (1 , R )] − P (cid:96) [ A (1 , R )] ≤ r d/ ( (cid:96) (cid:48) − (cid:96) ) (cid:82) q (cid:115) P (cid:96) (cid:48) [ A (1 , R )] (cid:88) v ∈ r Z d ∩ Λ R +2 r P (cid:96) [Λ ←→ v + Λ r ] . Proposition 3.2.
Suppose f = q (cid:63) W satisfies Assumption 1.4 and (POS) – (BOU) , and let r > be such that q is supported on Λ r . Then for k ≥ there exists c = c ( k ) > such that, for (cid:96) ∈ R and R ≥ r > , d + d(cid:96) P (cid:96) [ Cross k ( R )] ≤ cR d/ (cid:82) q (cid:40)(cid:112) P (cid:96) [ A (2 r, R − r )] d = 2 , (cid:112) P (cid:96) [ A (2 r, R − r )] d ≥ . Proposition 3.3.
Suppose f = q (cid:63) W satisfies Assumption 1.4 and (POS) – (BOU) , and let r > be such that q is supported on Λ r . Then for k ≥ there exists c = c ( k ) > such that, for (cid:96) ∈ R and R ≥ r > , (3.2) d − d(cid:96) P (cid:96) [ Cross k ( R )] ≥ c (cid:107) q (cid:107) P (cid:96) [ Cross k ( R )] (cid:0) − P (cid:96) [ Cross k ( R )] (cid:1) rR (cid:80) R/ri =2 P (cid:96) [ A (2 r, ir )] , and, if d = 2 and (POS’) holds, (3.3) d − d(cid:96) P (cid:96) [ Cross k ( R )] ≥ c (cid:107) q (cid:107) P (cid:96) [ Cross / (8 k ) ( kR )] (cid:0) − P (cid:96) [ Cross k ( R/ (cid:1) P (cid:96) [ A (2 r, R − r )] . We prove Propositions 3.1–3.3 later in the section; for now we establish our main resultTheorems 1.5. For this we need two auxiliary results; these are rather standard, but we givedetails on their proof at the end of the section. The first is the analogue of Lemma 2.4:
Lemma 3.4.
Suppose f = q (cid:63) W satisfies Assumption 1.4 with parameter β > d .(1) There exists δ > and (cid:96) (cid:48) = (cid:96) (cid:48) ( R ) ≤ (cid:96) c such that, for R ≥ , P (cid:96) (cid:48) [ Cross ( R )] = δ. (2) (RSW) Let d = 2 and k > and suppose that (POS’) holds. Then there exists δ > such that, for R ≥ , P (cid:96) c [ Cross k ( R )] ∈ ( δ, − δ ) . The second allows us to compare a Gaussian field with an approximation that satisfies (BOU).Fix a smooth symmetric cutoff function ϕ : R → [0 ,
1] such that ϕ ( x ) = 1 for (cid:107) x (cid:107) ∞ ≤ / ϕ ( x ) = 0 for (cid:107) x (cid:107) ∞ ≥
1. For r > f r := q r (cid:63) W where q r ( x ) := q ( x ) ϕ ( | x | /r ). Note that q r is supported on Λ r , and also, since q ∈ L ( R d ) ∩ L ( R d ), as r → ∞ ,(3.5) (cid:107) q r (cid:107) → (cid:107) q (cid:107) and (cid:90) q r → (cid:90) q. Remark that if either Assumption 1.4 or (POS) holds for f then it holds for f r (on the otherhand, deducing this for (POS’) seems difficult but we do not need it). In particular, as discussedin Section 1.1, if d = 2 and Assumption 1.4 holds for f then (cid:96) c ( f ) = (cid:96) c ( f r ) = 0. PPER BOUNDS ON THE ONE-ARM EXPONENT 19
The following lemma, essentially taken from [38], allows us to compare f and f r : Lemma 3.5.
Suppose f = q (cid:63) W satisfies Assumption 1.4 with parameter β > d and (POS) .Then there exist c , c > such that, for r, R ≥ , increasing event A measurable with respect to f | B ( R ) , and (cid:96) ∈ R , | P (cid:96) [ f ∈ A ] − P (cid:96) [ f r ∈ A ] | ≤ c (cid:0) R d/ (log R ) r − ( β − d/ + e − c (log R ) (cid:1) . The same conclusion holds if A is the intersection of one increasing and one decreasing eventwhich are both measurable with respect to f | B ( R ) . We are now ready to prove Theorem 1.5:
Proof of Theorem 1.5.
In the proof c > f (and the choiceof the cutoff function ϕ in (3.4)) and may change from line to line. The bound η ≤ d −
1, andalso η ≤ / d = 2 and (POS’) holds, are rather classical; in fact they are true for any β > d .For the former, combining P (cid:96) c [Cross ( R )] ≥ δ (the first statement of Lemma 3.4) with the unionbound applied along the hyperplane { } × [ − kR, kR ] gives P (cid:96) c [ A (1 , R )] ≥ cR − ( d − . For thelatter, by combining the RSW estimates (the second statement of Lemma 3.4) with Lemma 3.5one can deduce (see [4, 38] for similar arguments) P (cid:96) c (cid:2) {− R } × [ − R, R ] B ( R ) ⇐⇒ { R } × [ − R, R ] (cid:3) ≥ c (cid:0) − R − ( β − (log R ) (cid:1) ≥ c/ R . By the union bound applied along { }× [ − R, R ] this implies P (cid:96) c [ A (1 , R )] ≥ cR − , and given (3.1) we see that P (cid:96) c [ A (1 , R )] ≥ cR − / .We now prove the remaining bounds, beginning with the first statement. Fix 1 > α > d/ β − d/ and η > η ∗ > η = 0 there is nothing to prove). Then by monotonicity in (cid:96) , the unionbound, and the definition of η , P (cid:96) [ A ( r, R )] ≤ P (cid:96) c [ A ( r, R )] ≤ cr d − P (cid:96) c [ A (1 , R − r )] ≤ r d − R − η ∗ for all (cid:96) ≤ (cid:96) c , R sufficiently large, and r ∈ [1 , R/ r = R α . Then by an integral comparison, rR R/r (cid:88) i =2 P (cid:96) [ A (2 r, ir )] ≤ cr − η ∗ +( d − × rR R/r (cid:88) i =2 i − η ∗ ≤ cr − η ∗ +( d − ( R/r ) − min { η ∗ , } (log( R/r )) ≤ c (log R ) (cid:0) R α ( d − − η ∗ ) − (1 − α ) min { η ∗ , } (cid:1) for (cid:96) ≤ (cid:96) c and large R . Consider the field f r defined in (3.4). By Lemma 3.5, P (cid:96) [ f r ∈ A ( r (cid:48) , R )] ≤ P (cid:96) [ A ( r (cid:48) , R )] + cR d/ − α ( β − d/ (log R ) + ce − c (log R ) for (cid:96) ≤ (cid:96) c and 2 ≤ r (cid:48) ≤ R , and hence rR R/r (cid:88) i =2 P (cid:96) [ f r ∈ A (2 r, ir )] ≤ c (log R ) (cid:0) R α ( d − − η ∗ ) − (1 − α ) min { η ∗ , } + R d/ − α ( β − d/ (cid:1) for (cid:96) ≤ (cid:96) c and large R . Moreover, by Lemma 3.4 there are δ > (cid:96) (cid:48) = (cid:96) (cid:48) ( R ) ≤ (cid:96) c such that P (cid:96) (cid:48) [Cross ( R )] = δ . Hence, again by Lemma 3.5, P (cid:96) (cid:48) [ f r ∈ Cross ( R )] (cid:0) − P (cid:96) (cid:48) [ f r ∈ Cross ( R )] (cid:1) ≥ δ (1 − δ ) − cR d/ − α ( β − d/ (log R ) ≥ δ (1 − δ ) / R , where we used that α > d/ β − d/ .We now apply Propositions 3.2 and 3.3 to the field f r at the sequence of levels (cid:96) (cid:48) ( R ) ≤ (cid:96) c .First, by (3.2) (recalling (3.5)) dd(cid:96) P (cid:96) [ f r ∈ Cross ( R )] (cid:12)(cid:12)(cid:12) (cid:96) = (cid:96) (cid:48) ≥ cδ (1 − δ ) (cid:107) q r (cid:107) (cid:16) rR R/r (cid:88) i =2 P (cid:96) (cid:48) [ f r ∈ A (2 r, ir )] (cid:17) − (3.6) ≥ c (log R ) − (cid:0) R α ( d − − η ∗ ) − (1 − α ) min { η ∗ , } + R d/ − α ( β − d/ (cid:1) −
10 UPPER BOUNDS ON THE ONE-ARM EXPONENT for large R . Similarly, by Proposition 3.1, dd(cid:96) P (cid:96) [ f r ∈ Cross ( R )] (cid:12)(cid:12)(cid:12) (cid:96) = (cid:96) (cid:48) ≤ cR d/ (cid:16) P l (cid:48) [ f r ∈ A (2 r, R )] (cid:17) / (3.7) ≤ cR d/ (cid:0) R − η ∗ + α ( d − + R d/ − α ( β − d/ (log R ) (cid:1) / ≤ c (cid:112) log R (cid:0) R d/ − η ∗ / α ( d − / + R d/ − α ( β − d/ / (cid:1) for large R , where we used that √ a + b ≤ √ a + √ b for a, b >
0. Comparing (3.6) and (3.7) andexpanding the brackets we deduce that at least one of the exponents E := (cid:0) d/ − α ( β − d/ / (cid:1) + (cid:0) d/ − α ( β − d/ (cid:1) E := (cid:0) d/ − η ∗ / α ( d − / (cid:1) + (cid:0) d/ − α ( β − d/ (cid:1) E := (cid:0) d/ − η ∗ / α ( d − / (cid:1) + (cid:0) α ( d − − η ∗ ) − (1 − α ) min { η ∗ , } (cid:1) E := (cid:0) d/ − α ( β − d/ / (cid:1) + (cid:0) α ( d − − η ∗ ) − (1 − α ) min { η ∗ , } (cid:1) must be non-negative. The first is equivalent to α ≤ d β − d/ . The second implies that η ∗ ≤ d + α ( d − α > d β − d/ . The third is equivalent to either η ∗ ≤ d + α ( d −
1) (if η ∗ ≤
1) or η ∗ ≤ d − α (3 d − α (if η ∗ > η ∗ ≤ d + α ( d −
1) (if η ∗ ≤
1, assuming that α > d β − d/ ) or α ≤ d − β − d +4 (if η ∗ > d ≥ d β − d/ < d − β − d +4 and d + α ( d − < d − α (3 d − α . Hence we conclude that if α > d − β − d +4 then η ∗ ≤ d − α (3 d − α . Sending α → d − β − d +4 from above gives the result.The proof of the remaining statements are similar, and closer to the arguments in Section 2.For the second statement, fix 1 > α > d/ − β − d/ and η > η ∗ >
0. As in the proof of the firststatement,(3.8) P (cid:96) c [ A (2 r, R )] ≤ r d − R − η ∗ for large R and r ∈ [1 , R/ r = R α . Since we have the a priori bound P (cid:96) c [ A (1 , R )] ≥ cR − ( d − (from the start of the proof), by Lemma 3.5 | P (cid:96) c [ f r ∈ A ( r (cid:48) , R (cid:48) )] − P (cid:96) c [ A ( r (cid:48) , R (cid:48) )] | ≤ cR d/ − α ( β − d/ (log R ) + ce − c (log R ) (3.9) ≤ P (cid:96) c [ A ( r (cid:48) , R (cid:48) )] / R and 1 ≤ r (cid:48) ≤ R (cid:48) ≤ R , where we used that d/ − α ( β − d/ ≤ − ( d −
1) by thedefinition of α . Observe next that, for | x | ≥ r , the event Λ ←→ x +Λ r implies the occurrenceof the events { A (1 , | x | ∞ / } and { x + A (6 r, | x | ∞ / } , which are measurable with respect to disjoint domains separated by distance r . Since f r is r -dependent, for large R and 18 r ≤ | x | ≤ R + 2 r this implies P (cid:96) c [ f r ∈ Λ r ←→ x + Λ r ] ≤ P (cid:96) c [ f r ∈ A (1 , | x | ∞ / P (cid:96) c [ f r ∈ A (6 r, | x | ∞ / ≤ P (cid:96) c [ A (1 , | x | ∞ / P (cid:96) c [ A (6 r, | x | ∞ / ≤ cr d − | x | − η ∗ ∞ where we used (3.8) and then (3.9). Then by an integral comparison, for large R , (cid:88) v ∈ r Z d ∩ Λ R +2 r P (cid:96) [ f r ∈ Λ r ←→ v + Λ r ] ≤ c + cr d − (cid:88) v ∈ r Z d ∩ Λ R +2 r \{ } | v | − η ∗ ∞ ≤ c + cr d − max { r − η ∗ ( R/r ) d − η ∗ (log( R/r )) , }≤ cR max { α ( d − , − α + d − η ∗ } (log R ) . Next define, for large R , (cid:96) (cid:48) ( R ) = inf { (cid:96) > (cid:96) c : P (cid:96) [ A (1 , R )] = 2 P (cid:96) c [ A (1 , R )] } , PPER BOUNDS ON THE ONE-ARM EXPONENT 21 which exists by continuity in (cid:96) (see Lemma 3.13), and since P (cid:96) c [ A (1 , R )] > P (cid:96) [ A (1 , R )] ≥ P (cid:2) sup x ∈ Λ R f ( x ) ≤ (cid:96) (cid:3) → (cid:96) → ∞ . By the mean-field lower bound (1.1), for large R , P (cid:96) c [ A (1 , R )] / P (cid:96) (cid:48) [ A (1 , R )] / ≥ θ ( (cid:96) (cid:48) ) / ≥ c ( (cid:96) (cid:48) − (cid:96) c ) , (3.10)where we used that (cid:96) (cid:48) ( R ) → (cid:96) c as R → ∞ , since otherwiselim sup R →∞ P (cid:96) c [ A (1 , R )] ≥ lim sup R →∞ θ ( (cid:96) (cid:48) ( R )) / > | P (cid:96) (cid:48) [ f r ∈ A (1 , R )] − P (cid:96) (cid:48) [ A (1 , R )] | ≤ cR d/ − α ( β − d/ (log R ) + ce − c (log R ) ≤ P (cid:96) c [ A (1 , R )] = P (cid:96) (cid:48) [ A (1 , R )] / . Then applying Proposition 3.1 to the field f r , for large R , P (cid:96) c [ A (1 , R )] = P (cid:96) (cid:48) [ A (1 , R )] − P (cid:96) c [ A (1 , R )] ≤ P (cid:96) (cid:48) [ f r ∈ A (1 , R )] − P (cid:96) c [ f r ∈ A (1 , R )]) ≤ r d/ ( (cid:96) (cid:48) − (cid:96) c ) (cid:82) q (cid:115) P (cid:96) (cid:48) [ A (1 , R )] (cid:88) v ∈ r Z d ∩ Λ R +2 r P (cid:96) c [Λ r ←→ v + Λ r ] ≤ (cid:96) (cid:48) − (cid:96) c ) (cid:82) q R αd/ (cid:113) R − η ∗ R max { ,α ( d − , − α + d − η ∗ } (log R ) . Comparing with (3.10) implies that αd − η ∗ + max { α ( d − , − α + d − η ∗ } ≥
0, and so η ∗ ≤ max { d/ α ( d − / , α (2 d − } , and sending α → d/ − β − d/ from above gives the result.Finally, consider the third statement. Fix 1 > α > β − and r = R α . By the RSW estimates(the second statement of Lemma 3.4) and Lemma 3.5, P (cid:96) c [ f r ∈ Cross ( R )] (cid:0) − P (cid:96) c [ f r ∈ Cross ( R )] (cid:1) ≤ c − cR − α ( β − (log R ) < c/ R . Then by (3.3) and Proposition 3.2 we have, for large R , c P (cid:96) c [ f r ∈ A (2 r, R − r )] − ≤ dd(cid:96) P (cid:96) [ f r ∈ Cross ( R )] (cid:12)(cid:12)(cid:12) (cid:96) = (cid:96) c ≤ cR (cid:112) P (cid:96) c [ f r ∈ A (2 r, R − r )]which gives P (cid:96) c [ f r ∈ A (2 r, R − r )] ≥ cR − / for large R . Applying the union bound andLemma 3.5 (valid since A (3 √ r, R ) is the intersection of an increasing and a decreasing event)yields P (cid:96) c [ A (1 , R − r )] ≥ cr − P (cid:96) c [ A (2 r, R − r )] ≥ cR − α (cid:0) R − / − R − α ( β − (log R ) (cid:1) . Sending α → β − from above gives that, for every ε > P (cid:96) c [ A (1 , R )] ≥ c R − / − / (3( β − − ε . for c = c ( ε ) > R . Hence by the FKG inequality (see (3.1))(3.12) P (cid:96) c [ A (1 , R )] ≥ ( P (cid:96) c [ A (1 , R )]) / ≥ c R − / − / (6( β − − ε/ for c = c ( ε ) > R , which gives the result. (cid:3) Randomised algorithms.
Recall from Definition 2.8 that (randomised) algorithms areadapted procedures that sequentially reveal a subset of random variables X = ( X i ) and returna value. In the Bernoulli case we took X e = e open indexed by the edges of Z d . In the Gaussiansetting we will instead decompose the field f = (cid:80) f S into independent components indexed bya partition of R d into disjoint boxes S , and take X S = f S .Fix a constant s > R d into boxes S ∈ S s which are translations of [0 , s ) d bythe lattice s Z d . Then one can decompose f = (cid:80) S ∈S s f S where f S ( · ) = ( q (cid:63) W | S )( · ) = (cid:90) y ∈ R d q ( · − y ) dW | S ( y ) = (cid:90) y ∈ S q ( · − y ) dW ( y ) are independent centred almost surely continuous Gaussian fields, and W | S = W S is therestriction of the white noise W to S . We then introduce the collection A s of algorithms thatadaptively reveal a subset of ( f S ) S ∈S s . For brevity we say that a box S ∈ S s is revealed if f S (orequivalently W | S ) is revealed. As in Definition 2.8, Rev( S ) is the probability that S is revealed.In the case that f satisfies (BOU), we make the important distinction between the set of boxesthat are revealed by an algorithm, and the set V ⊂ R d on which the field f is determined by analgorithm. More precisely, for V ⊂ R d and a set of boxes P ⊂ S s , we say that f is determinedon V using P if f | V = ( (cid:80) S ∈P f S ) | V , or equivalently, if (( (cid:83) S ∈S s \P Supp( q (cid:63) S )) ∩ V = ∅ .We now state the analogue of Lemma 2.9. Recall the box B k ( R ) = [ − R, R ] × [ − kR, kR ] d − ,its right half B + k ( R ) = [0 , R ] × [ − kR, kR ] d − , and in the case d = 2, its top-right quarter B † k ( R ) = [0 , R ] × [0 , kR ], all considered as subsets of R d . Lemma 3.6.
Suppose f = q (cid:63)W satisfies Assumption 1.4 and (BOU) , and let r > be such that q is supported on Λ r . Then for every (cid:96) ∈ R and R ≥ r there is an algorithm in A r determining A (1 , R ) such that, under P (cid:96) , (cid:88) S ∈S r Rev ( S ) ≤ (cid:88) v ∈ r Z d ∩ Λ R +2 r P (cid:96) [Λ ←→ v + Λ r ] . Moreover for every k ≥ , (cid:96) ∈ R , and R ≥ r > , there are algorithms in A r determiningCross k ( R ) such that, under P (cid:96) , these algorithms satisfy respectively max S ∈S r Rev ( S ) ≤ rR R/r (cid:88) i =2 P (cid:96) [ A (2 r, ir )] , max S ∈S r : d ( S,B + k ( R )) We begin by introducing some notation. Distinct boxes S, S (cid:48) ∈ S r are adjacent if theirclosures have non-empty intersection. For a set of boxes P ⊂ S r define its outer boundary ∂ + P := { S ∈ S r \ P : S is adjacent to a box S (cid:48) ∈ P } , so in particular ∂ + { S } are the boxes adjacent to S . Define also the interior int( P ) := { S ∈P : ∂ + { S } ⊆ P} . Note that, since q is supported on Λ r , f is determined on int( P ) using P . A primal (resp. dual) path will designate a path in { f ≥ } (resp. { f ≤ } ) and a level line willdesignate a path in { f = 0 } ; these paths are contained in a set of boxes P ⊂ S r if they arecontained in ∪ S ∈P S . The left and right sides of B k ( R ) are respectively {− R } × [ − kR, kR ] d − and { R } × [ − kR, kR ] d − , and if d = 2 the top and bottom sides are defined similarly.For the first statement consider the following algorithm: • Reveal every box that intersects Λ as well as all adjacent boxes. • Iterate the following steps: – Let W ⊂ S r be the boxes that have been revealed. – Identify the set U ⊆ ∂ + (int( W )) such that, for each S ∈ U , there is a primal pathcontained in int( W ) ∩ Λ R between Λ and the boundary of S (measurable since f isdetermined on int( W )). In other words, U contains all boxes on which f is not yetdetermined but which are connected to Λ by a primal path in Λ R that has beendetermined. – If U is empty end the loop. Otherwise reveal the boxes in ∂ + U \ W . For fixed s > More precisely ( W | S ) S ∈S s are defined by setting, for g ∈ L ( R d ), (cid:82) y ∈ R d g ( y ) dW | S ( y ) to be jointly centredGaussian random variables with covariance E (cid:104) (cid:90) y ∈ R d g ( y ) dW | S ( y ) (cid:90) y ∈ R d g ( y ) dW | S ( y ) (cid:105) = (cid:40)(cid:82) y ∈ S g ( y ) g ( y ) dy if S = S , . PPER BOUNDS ON THE ONE-ARM EXPONENT 23 • If int( W ) ∩ Λ R contains a primal path between Λ and ∂ Λ R output 1, otherwise output 0.This algorithm determines A (1 , R ) since int( W ) eventually contains all the components of { f ≥ } ∩ Λ R that intersect Λ . To estimate the sum of revealments Rev( S ), a box S is revealed ifand only if either (i) it is adjacent to a box that intersects Λ , or (ii) there is a primal path inΛ R between Λ and a box adjacent to S . If S = v + [0 , r ) and Λ ∩ ( v + Λ r ) = ∅ then the latterimplies the occurrence of Λ ←→ v + Λ r . Summing over S gives (cid:88) S ∈S r Rev( S ) ≤ (cid:88) v ∈ r Z d ∩ Λ R +2 r P p [Λ ←→ v + Λ r ] . For the second statement, the first algorithm is: • Draw a random integer i uniformly in [ − R/r, L = { ir } × [ − kR, kR ] d − , andreveal every box that intersects L ∩ B k ( R ), as well as all adjacent boxes. • Iterate the following steps: – Let W ⊂ S r be the boxes that have been revealed. – Identify the set U ⊆ ∂ + (int( W )) such that, for each S ∈ U , there is a primal pathcontained in int( W ) ∩ B k ( R ) between L ∩ B k ( R ) and the boundary of S . – If U is empty end the loop. Otherwise reveal the boxes in ∂ + U \ W . • If int( W ) ∩ B k ( R ) contains a primal path between the left and right sides of B k ( R )output 1, otherwise output 0.This algorithm determines Cross k ( R ) since int( W ) eventually contains all the components of { f ≥ } ∩ B k ( R ) that intersect L ∩ B k ( R ), and any primal path in B k ( R ) between its left andright sides must intersect L ∩ B k ( R ). To estimate the revealments Rev( S ) of this algorithm, abox S is revealed if and only if either (i) it is adjacent to a box that intersects L ∩ B k ( R ), or (ii)there is a primal path in B k ( R ) between L and a box adjacent to S . If d (cid:48) denotes the distancefrom the centre of S to L , this implies the occurrence of (a translation of) the event A (2 r, d (cid:48) ).Averaging over i ∈ [ − R/r, 0] givesRev( S ) ≤ rR (cid:16) R/r (cid:88) i =3 P (cid:96) [ A (2 r, ir )] (cid:17) ≤ rR R/r (cid:88) i =2 P (cid:96) [ A (2 r, ir )] . For the second algorithm we modify the above by setting L as {− R } × [ − kR, kR ] d − , andrepeating all other steps. A box S ∈ S r such that d ( S, B + k ( R )) < r is only revealed if there is aprimal path in B k ( R ) between L and a box adjacent to S , which as before implies the occurrenceof (a translation of) the event A (2 r, d ), where d is the distance from the centre of S to L . Since d is at least R − r , Rev( S ) ≤ P (cid:96) [ A (2 r, R − r )] as required.The final algorithm (specific to d = 2) is: • Define L = [ − R, R ] × {− kR } and L = {− R } × [ − kR, kR ], and reveal every box thatintersects ( L ∪ L ) ∩ B k ( R ), as well as all adjacent boxes. • Iterate the following steps: – Let W ⊂ S r be the boxes that have been revealed. – Identify the set U ⊆ ∂ + (int( W )) such that, for each S ∈ U , there is a level linecontained in int( W ) ∩ B k ( R ) between ( L ∪ L ) ∩ B k ( R ) and the boundary of S . – If U is empty end the loop. Otherwise reveal the boxes in ∂ + U \ W . • If int( W ) ∩ B k ( R ) contains a primal (resp. dual) path between the left and right (resp.top and bottom) sides of B k ( R ) terminate with output 1 (resp. 0). • Since int( W ) contains all components of { f = 0 } ∩ B k ( R ) that intersect L ∩ L and thealgorithm has not yet terminated, exactly one of { f ≥ } ∩ B k ( R ) or { f ≤ } ∩ B k ( R )has a component that intersects all four sides of B k ( R ). Partition B k ( R ) into regions( P i ) using the components of { f = 0 } ∩ B k ( R ) that intersect L ∪ L . Let A to be theregion P i which contains the top-left corner of B k ( R ), and set C = 1 (resp. C = 0) if f ispositive (resp. negative) on P i . Then iterate the following: – If A contains a path in B k ( R ) between its left and right sides terminate with out-put C . Figure 2. The final loop of the algorithm in the proof of the third statementof Lemma 3.6; this loop occurs when there is no left-right or top-bottom pathsin { f = 0 } ∩ B k ( R ). In this example the loop expands the area A three times inorder to determine the sign C of the crossing. – Change the value of C (from 0 to 1 or 1 to 0), and add to A the region P i that isadjacent to it.The final loop is illustrated in Figure 2; it terminates almost surely since there are a finitenumber of connected components of { f = 0 }∩ B k ( R ) (recall that f is C -smooth). Note that thealgorithm does not necessarily reveal all components of { f = 0 } inside B k ( R ) – any componentswhich are closed loops or only touch the top and right sides of B k ( R ) are not revealed – butthese do not affect whether Cross k ( R ) occurs.To estimate the revealments of this algorithm, a box S ∈ S r such that d ( S, B † k ( R )) < r is onlyrevealed if there is a level line in B k ( R ) between L ∩ L and a box adjacent to S , which impliesthe occurrence of (a translation of) the event A (2 r, d (cid:48) ), where d (cid:48) is the distance from the centreof S to L ∩ L . Since d (cid:48) is at least R − r , Rev( S ) ≤ P (cid:96) [ A (2 r, R − r )] as required. (cid:3) Proof of Propositions 3.1–3.3. Before proving Propositions 3.1 and 3.2 we give theanalogue of Proposition 2.10, which applies to continuous stationary Gaussian fields f = q (cid:63) W (note that we do not need to assume Assumption 1.4): Proposition 3.7. Suppose f = q (cid:63) W is continuous. Then for every (cid:96) ∈ R , event A , s > ,algorithm A ∈ A s that determines A , set of boxes S (cid:48) ⊆ S s , and ε ≥ , (cid:12)(cid:12) P (cid:96) (cid:2) f + ε (cid:88) S ∈S (cid:48) q (cid:63) S ∈ A (cid:3) − P (cid:96) [ f ∈ A ] (cid:12)(cid:12) ≤ εs d/ (cid:115) max (cid:110) P (cid:96) [ A ] , P (cid:96) (cid:2) f + ε (cid:88) S ∈S (cid:48) q (cid:63) S ∈ A (cid:3)(cid:111) E (cid:96) |W S (cid:48) | , where W S (cid:48) is the set of boxes in S (cid:48) that are revealed by A . In particular, if (POS) holds, (3.13) (cid:12)(cid:12) P (cid:96) + ε (cid:2) A (cid:3) − P (cid:96) [ f ∈ A ] (cid:12)(cid:12) ≤ εs d/ (cid:82) q (cid:112) max { P (cid:96) [ A ] , P (cid:96) + ε [ A ] } E (cid:96) |W| , where W is the set of all boxes in S s that are revealed by A .Proof. Consider S ∈ S s . We use the decomposition (see Proposition A.1 in the appendix) f S ( · ) d = Z S ( q (cid:63) S )( · ) s d/ + g S ( · ) , where Z S is a standard normal random variable and g S is a continuous Gaussian field indepen-dent of Z S , which implies also that f S ( · ) + ε ( q (cid:63) S )( · ) d = ( Z S + εs d/ )( q (cid:63) S )( · ) s d/ + g S ( · ) . PPER BOUNDS ON THE ONE-ARM EXPONENT 25 The same argument that led to (2.18) (this time with W equal to ( Z S ) S ∈W S(cid:48) in the order ofrevealment, and W (cid:48) containing Z S on S / ∈ S (cid:48) as well as g S for all S ) yields in this case (cid:12)(cid:12) P (cid:96) (cid:2) f + ε (cid:88) S ∈S (cid:48) q (cid:63) S ∈ A (cid:3) − P (cid:96) [ f ∈ A ] (cid:12)(cid:12) ≤ (cid:115) (cid:110) P (cid:96) [ A ] , P (cid:96) (cid:2) f + ε (cid:88) S ∈S (cid:48) q (cid:63) S ∈ A (cid:3)(cid:111) E (cid:96) |W S (cid:48) | D KL ( Z (cid:107) Z + εs d/ )where Z is a standard normal random variable. Since D KL ( Z (cid:107) Z + εs d/ ) = ε s d / (cid:80) S ∈S s ( q (cid:63) S ) = ( q (cid:63) )( x ) = (cid:82) q . Then set S (cid:48) = S s and replace ε (cid:55)→ ε/ (cid:82) q in the first statement. (cid:3) Proof of Proposition 3.1. This follows directly from (3.13) by considering the algorithm in Lemma 3.6that determines A (1 , R ) such that E (cid:96) |W| = (cid:88) S ∈S r Rev( S ) ≤ (cid:88) v ∈ r Z d ∩ Λ R +2 r P (cid:96) [Λ ←→ v + Λ r ] . (cid:3) Proof of Proposition 3.2. We begin with the general case d ≥ 2. We first partition the set ofboxes { S ∈ S r : d ( S, B k ( R )) < r } that cover B k ( R ) into the disjoint sets S (cid:48) = { S ∈ S r : d ( S, B + k ( R )) < r } and S (cid:48) = { S ∈ S r : d ( S, B k ( R )) < r } \ S (cid:48) Note that S (cid:48) and S (cid:48) correspond roughly to boxes which cover, respectively, the right-half B + k ( R )and its complement B k ( R ) \ B + k ( R ), except that we enforce disjointness (see Remark 3.8 for anexplanation) so we do not have exact reflective symmetry. However the reflection of S (cid:48) in thehyperplane { } × R d − is contained in S (cid:48) .By disjointness and since q is supported on Λ r , for every x ∈ B k ( R ) we have (cid:88) i =1 , (cid:88) S ∈S (cid:48) i ( q (cid:63) S )( x ) = (cid:88) S ∈S (cid:48) ∪S (cid:48) ( q (cid:63) S )( x ) = ( q (cid:63) )( x ) = (cid:90) q. Then by the multivariate chain rule for Dini derivatives ∂ + ∂(cid:96) P (cid:96) (cid:2) Cross k ( R ) (cid:3) = 1 (cid:82) q ∂ + ∂ε P (cid:96) (cid:2) f + ε (cid:88) i =1 , (cid:88) S ∈S (cid:48) i q (cid:63) S ∈ Cross k ( R ) (cid:3)(cid:12)(cid:12)(cid:12) ε =0 ≤ (cid:82) q (cid:88) i =1 , ∂ + ∂ε P (cid:96) (cid:2) f + ε (cid:88) S ∈S (cid:48) i q (cid:63) S ∈ Cross k ( R ) (cid:3)(cid:12)(cid:12)(cid:12) ε =0 . Now consider the algorithm in Lemma 3.6 that determines Cross k ( R ) such that, under P (cid:96) ,max S ∈S (cid:48) Rev( S ) ≤ P (cid:96) [ A (2 r, R − r )] . By reflective symmetry, there is also an algorithm determining Cross k ( R ) such that, under P (cid:96) ,max S ∈S (cid:48) Rev( S ) ≤ P (cid:96) [ A (2 r, R − r )] . Since also max i =1 , |S (cid:48) i | ≤ c ( R/r ) d for some c > k and d , applyingProposition 3.7 gives ∂ + ∂(cid:96) P (cid:96) (cid:2) Cross k ( R ) (cid:3) ≤ r d/ (cid:82) q (cid:113) c ( R/r ) d P (cid:96) [ A (2 r, R − r )] = c R d/ (cid:82) q (cid:112) P (cid:96) [ A (2 r, R − r )]for some c = c ( k, d ) > 0, as required.For d = 2 we consider the top-right quadrant B † k ( R ) and the algorithm in Lemma 3.6 thatdetermines Cross k ( R ) such thatmax { S ∈S r : d ( S,B † k ( R )) 0, it is not necessarily true that ∂ + ∂ε P (cid:96) (cid:2) f + εq (cid:63) S ∈ Cross k ( R ) (cid:3) ≥ S ∈ S r . Hence in the proof of Proposition 3.2 it was crucial that we partitioned { S ∈ S r : d ( S, B k ( R )) < r } disjointly into ∪ i S (cid:48) i , since otherwise we could not deduce that ∂ + ∂(cid:96) P (cid:96) (cid:2) Cross k ( R ) (cid:3) ≤ (cid:82) q ∂ + ∂ε P (cid:96) (cid:2) f + ε (cid:88) i =1 , (cid:88) S ∈S (cid:48) i q (cid:63) S ∈ Cross k ( R ) (cid:3)(cid:12)(cid:12)(cid:12) ε =0 . To prove Proposition 3.3 we need the analogue of Proposition 2.16. We say that an event A is compactly supported if A is measurable with respect to f | D for a compact D ⊂ R d , and isa continuity event if (cid:96) (cid:55)→ P (cid:96) [ f + g ∈ A ] is continuous for every smooth function g : R d → R ;for example, Cross k ( R ) and A i ( r, R ), i = 1 , 2, are compactly supported continuity events byLemma 3.13 below. Proposition 3.9. Suppose f = q (cid:63) W satisfies Assumption 1.4 and (BOU) , and let r > be such that q is supported on Λ r . Then there exists a c > depending only on d such that,for every (cid:96) ∈ R , increasing compactly supported continuity event A , s > , algorithm A ∈ A s determining A , and set of boxes S (cid:48) ⊆ S s , (3.14) d − d(cid:96) P (cid:96) [ A ] ≥ c min { , ( s/r ) d }(cid:107) q (cid:107) Var (cid:96) [ P (cid:96) [ A |F S (cid:48) ]]max S ∈S (cid:48) Rev ( S ) , where F S (cid:48) denotes the σ -algebra generated by ( f S ) S ∈S (cid:48) , and the revealments Rev ( S ) are under P (cid:96) .Remark . The proof of (3.14) shows that it can be strengthened by replacing d − d(cid:96) P (cid:96) [ A ] = d − dε P (cid:96) [ f + ε ∈ A ] with d − dε P (cid:96) [ f + εg S (cid:48) ∈ A ] where g S (cid:48) ( · ) := d ( · , S (cid:48) ) ≤ r , but we do not need this. Remark . In [38] a similar result (in the case S (cid:48) = S s ) was proven for an approximation ofthe field f in which the white noise is replaced with its discretisation at scale ε (cid:28) 1. However,since one needed to take ε (cid:28) d = 2 one needs ε (cid:28) /R if theevent is supported on B ( R )), this approach results in a prefactor ε d/ that decays rapidly in thescale of the event. Although this prefactor is also present in (3.14) as s → 0, the difference isthat one can work with fixed s .We prove Proposition 3.9 in Section 4 below. Let us complete the proof of Proposition 3.3: Proof of Proposition 3.3. For the first statement we apply Proposition 3.9 (in the case s = r , S (cid:48) = S s , and A = Cross k ( R ), which is a continuity event by Lemma 3.13) to the algorithm inLemma 3.6 that determines Cross k ( R ) whose revealments are bounded by rR R/r (cid:80) i =2 P (cid:96) [ A (2 r, ir )].For the second statement we follow the proof of Proposition 2.3 in the Bernoulli case, exceptusing Proposition 3.9 (in the case s = r , S (cid:48) = { S ∈ S s : d ( S, B † k ( R )) < r } , and A = Cross k ( R ))and the algorithm in Lemma 3.6 that determines Cross k ( R ) whose revealments on S (cid:48) are boundedby P (cid:96) [ A (2 r, R − r )]. To control the conditional variances in (3.14) we use the same argument asin the proof of Proposition 2.3; in particular the FKG inequality is available and, since R ≥ r ,the events B and B are independent as in the Bernoulli case. (cid:3) Remark . Similarly to in Section 2.4, combining Propositions 3.7 and 3.9 yields a generallower bound on the revealments of increasing events. We omit the proof, but the result is thefollowing. Suppose f = q (cid:63) W satisfies Assumption 1.4 and (POS)–(BOU). Let r be such that q is supported on Λ r , let (cid:96) ∈ R , let R ≥ r , let A be an increasing continuity event supported PPER BOUNDS ON THE ONE-ARM EXPONENT 27 on B ( R ), let s > 0, and let A ∈ A s be an algorithm determining A . Then there exists a c > d such thatmax S ∈S s Rev( S ) ≥ c (cid:0) ( (cid:82) q ) / (cid:107) q (cid:107) min { , ( s/r ) d } Var (cid:96) [ A ] (cid:1) / P (cid:96) [ A ] / R d/ where the revealments Rev( S ) are under P (cid:96) .One can also prove a lower bound on max S ∈S (cid:48) Rev( S ) for general S (cid:48) ⊂ S s , analogous toProposition 2.18, however in that case we would need q ≥ Proof of auxiliary results. To finish the section we prove Lemmas 3.4 and 3.5: Proof of Lemma 3.5. We first observe that g := f − f r = ( q − q r ) (cid:63) W is a C -smooth stationaryGaussian field satisfying E [ g (0) ] = (cid:90) x ∈ R d ( q − q r ) ( x ) dx = (cid:90) | x | >r/ q ( x ) (1 − ϕ ( | x/r | )) dx ≤ (cid:90) | x | >r/ q ( x ) ≤ c r d − β , for some c > | q ( x ) | ≤ c | x | − β by Assumption 1.4. Similarly, for everydirection v ∈ S , E [( ∂ v g (0)) ] = (cid:90) | x | >r/ ( ∂ v ( q ( x )(1 − ϕ ( | x/r | ))) dx ≤ c r d − β , for some c > ϕ , and we used that |∇ q ( x ) | ≤ c | x | − β by Assumption 1.4. Then by a Borell-TIS argument (see [38, Proposition 3.11]for the case d = 2, and the proof is identical in all dimensions) there exist c , c > R, r ≥ P [ (cid:107) f − f r (cid:107) ∞ ,B ( R ) > c (log R ) r − ( β − d/ ] ≤ c e − c (log R ) We also note the following consequence of (POS) which can be proved with a Cameron-Martinargument (see [38, Proposition 3.6] for the case d = 2, and the proof is identical in all dimensions):there exists a c > R ≥ 1, increasing event A (cid:48) measurable with respect to f | B ( R ) , (cid:96) ∈ R and t > P (cid:96) [ { f + t ∈ A (cid:48) } \ { f ∈ A (cid:48) } ] = P (cid:96) [ f + t ∈ A (cid:48) ] − P (cid:96) [ f ∈ A (cid:48) ] ≤ c tR d/ . We now complete the proof, for which we may assume that (cid:96) = 0. Consider A = A ∩ A where A is increasing, A is decreasing, and both A and A are measurable with respect to f | B ( R ) . Abbreviate t = c (log R ) r − ( β − d/ and define E = {(cid:107) f − f r (cid:107) ∞ ,B ( R ) > t } . Then P [ f r ∈ A ∩ A ] ≤ P [ f r ∈ A ∩ A ∩ E c ] + P [ E ] ≤ P [ { f + t ∈ A } ∩ { f − t ∈ A } ] + P [ E ] ≤ P [ f ∈ A ∩ A ] + P [ { f + t ∈ A } \ { f ∈ A } ] + P [ { f − t ∈ A } \ { f ∈ A } ] + P [ E ] ≤ P [ f ∈ A ∩ A ] + 2 c tR d/ + c e − c (log R ) where in the second inequality we used that A (resp. A ) is increasing (resp. decreasing) andmeasurable with respect to f | B ( R ) , and the final inequality was by (3.15) and (3.16). Similarly P [ f r ∈ A ∩ A ] ≥ P [ { f − t ∈ A } ∩ { f + t ∈ A } ∩ E c ] ≥ P [ f ∈ A ∩ A ] − P [ { f ∈ A } \ { f − t ∈ A } ] − P [ { f ∈ A } \ { f + t ∈ A } ] − P [ E ] ≥ P [ f ∈ A ∩ A ] − c tR d/ − c e − c (log R ) which gives the result. (cid:3) Proof of Lemma 3.4. For the first statement, it is enough to prove that(3.17) lim inf R →∞ P (cid:96) c [Cross ( R )] > since then the result follows by the continuity of (cid:96) (cid:55)→ P (cid:96) [Cross ( R )] (by Lemma 3.13 below forinstance). By a classical bootstrapping argument [28, Section 5.1] and Lemma 3.5, there are c , ε > P (cid:96) [Cross (3 R )] ≤ c (cid:0) P (cid:96) [Cross ( R )] + R − ε (cid:1) for (cid:96) ∈ R and R sufficiently large. A consequence of (3.18) and the continuity of (cid:96) (cid:55)→ P (cid:96) [Cross ( R )] is thatlim inf R →∞ P (cid:96) [Cross ( R )] < /c = ⇒ lim inf R →∞ P (cid:96) (cid:48) [Cross ( R )] = 0 for some (cid:96) (cid:48) > (cid:96). Covering the annulus Λ R \ Λ R with 2 d symmetric copies of B ( R ), one can find a finite collectionof copies A i of Cross ( R ) such that { Λ ←→ ∞} ⊆ A (3 R, R ) ⊆ ∪ i A i . Hence we also havelim inf R →∞ P (cid:96) (cid:48) [Cross ( R )] = 0 = ⇒ P (cid:96) (cid:48) [Λ ←→ ∞ ] = 0 = ⇒ (cid:96) (cid:48) ≤ (cid:96) c , and so we deduce (3.17).For the second statement we refer to [38] where it is shown that the RSW estimates holdunder Assumption 1.4 and (POS’) (indeed the recent work [30] shows that the correlation decayin Assumption 1.4 is not even needed). (cid:3) We also state a continuity result that we used in the section: Lemma 3.13. Let f be a C -smooth Gaussian field on R d such that ( f ( x ) , ∇ f ( x ) , ∇ f ( x )) isnon-degenerate for every x ∈ R d . Then for every k ≥ and R ≥ r > , P (cid:96) [ Cross k ( R )] and P (cid:96) [ A i ( r, R )] , i = 1 , are continuous functions of (cid:96) ∈ R .Proof. Since f is C -smooth and ( f ( x ) , ∇ f ( x ) , ∇ f ( x )) is non-degenerate, by Bulinskaya’s lemma[1, Lemma 11.2.10] the critical points of f , as well as its restriction to a smooth hypersurface, arealmost surely locally finite and have distinct critical levels. Since the events { f + (cid:96) ∈ Cross k ( R ) } and { f + (cid:96) ∈ A i ( r, R ) } depend only on the (stratified) diffeomorphism class of the level set { f + (cid:96) = 0 } restricted to, respectively, B k ( R ) and Λ R \ Λ r , by the (stratified) Morse lemma [25,Theorem 7] almost surely there is a δ > { f + (cid:96) + s ∈ Cross k ( R ) } and { f + (cid:96) + s ∈ A i ( r,R ) } areconstant on s ∈ ( − δ, δ ), which is equivalent to the claimed continuity. (cid:3) The OSSS inequality for smooth Gaussian fields and applications In this section we establish a new Russo-type inequality for smooth Gaussian fields which weuse to prove Proposition 3.9, with Theorem 1.14 following as an application. We consider a field f = q (cid:63) W which is C -smooth and satisfies (BOU), and let r > q is supportedon Λ r . In particular this implies that ( f (0) , ∇ f (0) , ∇ f (0)) is non-degenerate. We emphasisethat in this section neither (POS) nor (POS’) play any role.As in Section 3.1, fix s > f = (cid:80) S ∈S s f S where f S ( · ) = ( q (cid:63) W | S )( · ) = (cid:90) y ∈ R d q ( · − y ) dW S ( y ) = (cid:90) y ∈ S q ( · − y ) dW ( y ) . The proof of Proposition 3.9 is based on an application of the OSSS inequality (Theorem 2.15)to the independent fields ( f S ) S ∈S s . In this context the resampling influences (c.f. (2.21)) aredefined, for each S ∈ S s , as Infl A ( S ) := P (cid:96) (cid:2) { f ∈ A } (cid:54) = { f ( S ) ∈ A } (cid:3) where f ( S ) denotes the field f = (cid:80) S ∈S s f S with f S resampled. Just as for other recent ap-plications of the OSSS inequality in percolation theory [16, 15, 17], the crucial mechanism isthat d − d(cid:96) P (cid:96) [ A ] is bounded below by the sum of the resampling influences. Recall the definition ofcompactly supported continuity events from the statement of Proposition 3.9. PPER BOUNDS ON THE ONE-ARM EXPONENT 29 Proposition 4.1 (Russo-type inequality) . There exists a constant c > depending only on d such that, for every (cid:96) ∈ R , s > , and increasing compactly supported continuity event A , d − d(cid:96) P (cid:96) [ A ] ≥ c min { , ( s/r ) d }(cid:107) q (cid:107) (cid:88) S ∈S s Infl A ( S ) where the resampling influences Infl A ( S ) are under P (cid:96) . Before proving Proposition 4.1, let us complete the proof of Proposition 3.9. Proof of Proposition 3.9. The OSSS inequality (Theorem 2.15), combined with the reasoningleading to (2.22), gives Var (cid:96) [ P (cid:96) [ A |F S (cid:48) ]] ≤ (cid:88) S ∈S (cid:48) Rev( S )Infl A ( S )and hence (true for arbitrary event A ) (cid:88) S ∈S s Infl A ( S ) ≥ (cid:88) S ∈S (cid:48) Infl A ( S ) ≥ (cid:96) [ P (cid:96) [ A |F S (cid:48) ]]max S ∈S (cid:48) Rev( S ) . Combining with Proposition 4.1 yields the result. (cid:3) The main idea in the proof of Proposition 4.1, which distinguishes it from the discretisationapproach in [38], is to use an orthonormal decomposition of each f S to interpret d − d(cid:96) P (cid:96) [ A ] andthe resampling influences Infl A ( S ) as measuring, respectively, the ‘boundary’ and ‘volume’ ofcertain sets in Gaussian space. Then we can apply Gaussian isoperimetry to deduce the result.For a set E ⊂ R n we denote E + ε := { x ∈ R n : there exists y ∈ E s.t. | x − y | ≤ ε } to be the ε -thickening of E . Proposition 4.2 (Gaussian isoperimetry) . There exists a constant c > such that, for everymeasurable E ⊂ R n and ε ≥ , P [ X ∈ E + ε \ E ] ≥ (cid:114) π P [ X ∈ E ](1 − P [ X ∈ E ]) ε − cε where X is an n -dimensional standard Gaussian vector.Proof. Let ϕ ( x ) and Φ( x ) denote the standard normal pdf and cdf respectively. The classicalGaussian isoperimetric inequality states thatlim inf ε ↓ ε − P [ X ∈ E + ε \ E ] ≥ ϕ (Φ − ( P [ X ∈ E ])) . A simple consequence (see, e.g., [33, Eq. (3)]) is that, for any ε ≥ P [ X ∈ E + ε ] ≥ Φ(Φ − ( P [ X ∈ E ]) + ε ) . By Taylor expanding Φ on the right-hand side of (4.1) we have P [ X ∈ E + ε \ E ] ≥ εϕ (Φ − ( P [ X ∈ E ])) − 12 sup x ∈ R | ϕ (cid:48) ( x ) | ε , and the result follows since, for all x ∈ R , ϕ ( x ) ≥ (cid:113) π Φ( x )(1 − Φ( x ))(as can be seen from thefact that the Mill’s ratio (1 − Φ( x )) /ϕ ( x ) is decreasing on x ≥ | ϕ (cid:48) ( x ) | is uniformlybounded on x ∈ R . (cid:3) We use the following orthogonal decomposition of f S (see Proposition A.1 in the appendix).Let Z = ( Z i ) i ≥ be a sequence of i.i.d. standard normal random variables and let ( ϕ i ) i ≥ be anorthonormal basis of L ( S ). Then(4.2) f nS := n (cid:88) i ≥ Z i ( q (cid:63) ϕ i ) ⇒ f S , in law with respect to the C -topology. Proof of Proposition 4.1. By linear rescaling, we may suppose without loss of generality that (cid:96) = 0, (cid:107) q (cid:107) = 1, and that q is supported on Λ (i.e. r = 1). For each S ∈ S s , let g S : R d → [0 , 1] bea smooth function such that g S ( x ) = 1 on { x : d ( x, S ) ≤ } and g S ( x ) = 0 on { x : d ( x, S ) ≥ } .Then (cid:80) S ∈S s g S ( x ) ≤ c max { , s − d } for some constant c > d . Therefore,since A is increasing, and by the multivariate chain rule for Dini derivatives,(4.3) d − dε P [ f + ε ∈ A ] (cid:12)(cid:12)(cid:12) ε =0 ≥ c max { , s − d } (cid:88) S ∈S s d − dε P [ f + εg S ∈ A ] (cid:12)(cid:12)(cid:12) ε =0 . For each S ∈ S s , let f (cid:48) S denote an independent copy of f S , define h S = f − f S , and let F h S bethe σ -algebra generated by h S . We next claim that, almost surely over F h S ,(4.4) d − dε P [ f S + h S + εg S ∈ A |F h S ] (cid:12)(cid:12)(cid:12) ε =0 ≥ c P [ { f S + h S ∈ A } (cid:54) = { f (cid:48) S + h S ∈ A } |F h S ]for some universal c > 0. Together with (4.3), this will complete the proof of Proposition 4.1since d − dε P [ f S + εg S ∈ A ] (cid:12)(cid:12)(cid:12) ε =0 ≥ E (cid:104) d − dε P [ f S + h S + εg S ∈ A |F h S ] (cid:12)(cid:12)(cid:12) ε =0 (cid:105) ≥ c E (cid:2) P [ { f S + h S ∈ A } (cid:54) = { f (cid:48) S + h S ∈ A } |F h S ] (cid:3) =: c Infl A ( S ) . where the first inequality is Fatou’s lemma, and the second inequality is by (4.4).It remains to prove (4.4). Henceforth we fix S ∈ S s , condition on h S , and drop F h S fromthe notation. Let ( ϕ i ) i ≥ be an orthonormal basis of L ( S ) and recall the decomposition (4.2).Fixing n ∈ N and viewing { f nS + h S ∈ A } as a Borel set E in the n -dimensional Gaussian spacegenerated by the standard Gaussian vector Z n = ( Z i ) ≤ i ≤ n , by Proposition 4.2(4.5) P [ Z n ∈ E + ε \ E ] ≥ c ε P [ Z n ∈ E ](1 − P [ Z n ∈ E ]) − c ε for some c , c > ε ≥ 0. Consider y = ( y i ) ∈ R n such that (cid:107) y (cid:107) = ε . By Young’sconvolution inequality, and since ϕ i are an orthonormal basis, (cid:13)(cid:13)(cid:13) (cid:88) i ≤ n y i ( q (cid:63) ϕ i ) (cid:13)(cid:13)(cid:13) ∞ ≤ (cid:107) q (cid:107) (cid:13)(cid:13)(cid:13) (cid:88) i ≤ n y i ϕ i (cid:13)(cid:13)(cid:13) = (cid:107) y (cid:107) = ε. Since q (cid:63) ϕ i is supported on { x : d ( x, S ) ≤ } , and recalling that g S ( · ) := d ( · ,S ) ≤ , this givessup y : | y | ≤ ε (cid:88) i ≤ n ( Z i + y i )( q (cid:63) ϕ i ) − f nS = sup y : | y | ≤ ε (cid:88) i ≤ n y i ( q (cid:63) ϕ i ) ≤ εg S . Therefore, since A is increasing, P [ f nS + h S + εg S ∈ A ] − P [ f nS + h S ∈ A ] ≥ P [ ∪ y : | y |≤ ε { Z n + y ∈ E } ] − P [ Z n ∈ E ]= P [ Z n ∈ E + ε \ E ] . Combining with (4.5),(4.6) P [ f nS + h S + εg S ∈ A ] − P [ f nS + h S ∈ A ] ≥ c ε P [ f nS + h S ∈ A ](1 − P [ f nS + h S ∈ A ]) − c ε . It remains to prove that almost surely (with respect to h S ), as n → ∞ ,(4.7) P [ f nS + h S ∈ A ] → P [ f S + h S ∈ A ] and P [ f nS + h S + εg S ∈ A ] → P [ f S + h S + εg S ∈ A ] , since then sending n → ∞ in (4.6) yields P [ f S + h S + εg S ∈ A ] − P [ f S + h S ∈ A ] ≥ c ε P [ f S + h S ∈ A ](1 − P [ f S + h S ∈ A ]) − c ε , which gives (4.4) after sending ε → A is an increasing continuity event; this means that foralmost every f = f S + h S there exists δ > { f S + h S + s ∈ A } and { f S + h S + εg S + s ∈ A } PPER BOUNDS ON THE ONE-ARM EXPONENT 31 are constant for s ∈ ( − δ, δ ). Then since f nS → f S in law with respect to the C -topology, wehave (4.7) (by the Portmanteau lemma for instance). (cid:3) Remark . Note that in the proof of Proposition 4.1 we did not require that the Borel set E inthe n -dimensional Gaussian space generated by Z n be increasing, since Gaussian isoperimetryis valid for arbitrary sets. This allows us to avoid any requirement that q (cid:63) ϕ i be a positivefunction, in contrast to the discretisation approach in [38].4.1. Application to the sharpness of the phase transition for finite-range Gaussianfields. We conclude the section by proving Theorem 1.14, following the approach in [16]. Forthis we only need the special case s = r and S (cid:48) = S s of Proposition 3.9. Proof of Theorem 1.14. By linear rescaling and adjusting constants, we may assume withoutloss of generality that q is supported on Λ , and prove the theorem for Λ replacing Λ .For R ≥ g R ( (cid:96) ) := P (cid:96) [ A (2 , R )] (recall that this means g R ( (cid:96) ) := 1 if R ∈ [0 , g R := lim R →∞ g R ( (cid:96) ) = P (cid:96) [Λ ←→ ∞ ]. We will first establish the differential inequality(4.8) d − d(cid:96) g R ( (cid:96) ) ≥ c g R ( (cid:96) )(1 − g R ( (cid:96) ) R (cid:80) R − i =0 g i ( (cid:96) )for some c > 0, every R sufficiently large, and every (cid:96) ∈ R . Recall the notation from thebeginning of the proof of Lemma 3.6 and for R ≥ A (essentially taken from [16]): • Draw a random integer i uniformly in [2 , R ], and reveal every box that intersects ∂ Λ i ,as well as all adjacent boxes. • Iterate the following steps: – Let W ⊂ S be the boxes that have been revealed. – Identify the set U ⊆ ∂ + (int( W )) such that, for each S ∈ U , there is a primal pathcontained in int( W ) ∩ Λ R between ∂ Λ i and the boundary of S . – If U is empty end the loop. Otherwise reveal the boxes in ∂ + U \ W . • If int( W ) contains a primal path between Λ and Λ R output 1, otherwise output 0.This algorithm determines A (2 , R ) since int( W ) eventually contains all the components of { f ≥ } ∩ Λ R that intersect ∂ Λ i , and any primal path between Λ and Λ R must intersect ∂ Λ i .To estimate the revealments Rev( S ) under P (cid:96) , note that a box S is revealed if and only if either(i) it intersects, or is adjacent to a box that intersects, ∂ Λ i , or (ii) there is a primal path in Λ R between ∂ Λ i and a box adjacent to S . If d (cid:48) denotes the distance from the centre of S to Λ i , thisimplies the occurrence of (a translation of) the event A (2 , d (cid:48) ). Averaging on i ∈ [2 , R ], we haveRev( S ) ≤ R − (cid:16) R − (cid:88) i =3 P (cid:96) [ A (2 , i )] (cid:17) ≤ R − R − (cid:88) i =0 g i ( (cid:96) ) ≤ R R − (cid:88) i =0 g i ( (cid:96) )for sufficiently large R . Applying Proposition 3.9 (with s = 1 and S (cid:48) = S , recalling that A (2 , R ) is a continuity event by Lemma 3.13) gives that d − d(cid:96) g R ( (cid:96) ) ≥ c g R ( (cid:96) )(1 − g R ( (cid:96) ))max S ∈S Rev( S ) ≥ c g R ( (cid:96) )(1 − g R ( (cid:96) ) R (cid:80) R − i =0 g i ( (cid:96) )for some c > R , which gives (4.8).We now argue that (4.8) implies the result. First assume that there exists a (cid:96) > (cid:96) c such that g ( (cid:96) ) < f satisfies (POS’), since then P [inf x ∈ Λ f ( x ) ≥ (cid:96) ] > (cid:96) ∈ R ,but not in general). Then by monotonicity 1 − g (cid:96) ( R ) > (1 − g ( (cid:96) ) / (cid:96) < (cid:96) and large R .Hence setting c = c (1 − g ( (cid:96) )) / > f R ( (cid:96) ) = g R ( (cid:96) ) /c we have d − d(cid:96) f R ( (cid:96) ) ≥ f R ( (cid:96) ) R (cid:80) R − i =0 f i ( (cid:96) ) . for all (cid:96) < (cid:96) and large R , and applying [16, Lemma 3.1] yields the result. On the other hand,if g ( (cid:96) ) = 1 for every (cid:96) > (cid:96) c then the second statement of the theorem is immediate. To provethe first statement, instead choose a (cid:96) < (cid:96) c and repeat the above argument. This implies thestatement for (cid:96) < (cid:96) , and taking (cid:96) ↑ (cid:96) c gives the claim. (cid:3) Appendix A. Orthogonal decomposition of f S For completeness we present a classical orthogonal decomposition of the Gaussian field f S ( · ) = ( q (cid:63) W | S )( · ) = (cid:90) y ∈ S q ( · − y ) dW ( y )where S ⊂ R d is a compact domain, q ∈ L ( R d ), and W is the white noise on R d . In this sectionwe shall assume only that f S is continuous, all other conditions on q being irrelevant. Proposition A.1 (Orthogonal decomposition of f S ) . Let ( Z i ) i ≥ be a sequence of i.i.d. standardnormal random variables and let ( ϕ i ) i ≥ be an orthonormal basis of L ( S ) . Then, as n → ∞ , f nS := n (cid:88) i ≥ Z i ( q (cid:63) ϕ i ) ⇒ f S in law with respect to the C -topology on compact sets. In particular, f S ( · ) d = Z ( q (cid:63) S )( · ) (cid:112) Vol ( S ) + g ( · ) where g is an continuous Gaussian field independent of Z .Proof. Remark that, for each x ∈ R d , f nS ( x ) ⇒ f S ( x ) in law since they are centred Gaussianrandom variables and E (cid:104)(cid:16) n (cid:88) i ≥ Z i ( q (cid:63) ϕ i )( x ) (cid:17) (cid:105) = n (cid:88) i ≥ (cid:16) (cid:90) S q ( x − s ) ϕ i ( s ) dx (cid:17) → (cid:90) S q ( x − s ) dx = E [ f S ( x ) ]by Parseval’s identity. Note also that the functions q (cid:63) ϕ i are continuous (as a convolution of L functions), and so each f nS is continuous. Hence the first statement of the proposition follows byan application of Lemma A.2 below. For the second statement, set ϕ to be constant on S . (cid:3) Lemma A.2. Let ( f i ) i ≥ be a sequence of independent continuous centred Gaussian fields on R d and define g n := (cid:80) ni ≥ f i . Suppose there exists a continuous Gaussian field g on R d such that,for each x ∈ R d , g n ( x ) ⇒ g ( x ) in law. Then g n ⇒ g in law with respect to the C -topology oncompact sets.Proof. We follow the proof of [1, Theorem 3.1.2]. Since g n ( x ) is a sum of independent randomvariables converging in law, by Levy’s equivalence theorem we may define g ( x ) as the almost surelimit of g n ( x ). Fix a compact set Ω ⊂ R d , and consider ( g n ) n ≥ as elements of the Banach space C (Ω) of continuous functions on Ω equipped with the C -topology. By the Itˆo-Nisio theorem [1,Theorem 3.1.3], it suffices to show that (cid:90) Ω g n dµ → (cid:90) Ω g dµ in mean (and so in probability) for every finite signed Borel measure µ on Ω. Define thecontinuous functions u n ( x ) := E [ g n ( x ) ] and u ( x ) := E [ g ( x ) ]. Then E (cid:104)(cid:12)(cid:12)(cid:12) (cid:90) Ω g dµ − (cid:90) Ω g n dµ (cid:12)(cid:12)(cid:12)(cid:105) ≤ (cid:90) Ω (cid:16) E (cid:2)(cid:0) g ( x ) − g n ( x ) (cid:1) (cid:3)(cid:17) / | µ | ( dx ) ≤ (cid:90) Ω (cid:16) u ( x ) − u n ( x ) (cid:17) / | µ | ( dx ) . Since u n → u monotonically, by Dini’s theorem the convergence is uniform on Ω, so we havethat E [ | (cid:82) Ω g dµ − (cid:82) Ω g n dµ | ] → (cid:3) Although this lemma is stated for differentiable functions, it is easy to check that the proof goes throughwithout differentiability since it only uses f ( b ) − f ( a ) ≥ (cid:82) ba d − dx f ( x ) dx . PPER BOUNDS ON THE ONE-ARM EXPONENT 33 References [1] R.J. Adler and J.E. Taylor. Random fields and geometry . Springer, 2007.[2] M. Aizenman and D.J. Barsky. Sharpness of the phase transition in percolation models. Comm. Math. Phys. ,108(3):489–526, 1987.[3] M. Aizenman and C.M. Newman. Tree graph inequalities and critical behavior in percolation models. J. Stat.Phys. , 36:107–143, 1984.[4] V. Beffara and D. Gayet. Percolation of random nodal lines. Publ. Math. IHES , 126(1):131–176, 2017.[5] D. Beliaev, M. McAuley, and S. Muirhead. Fluctuations in the number of excursion sets of planar gaussianfields. arXiv preprint arXiv:1908.10708 , 2019.[6] D. Beliaev, M. McAuley, and S. Muirhead. Smoothness and monotonicity of the excursion set density ofplanar gaussian fields. Electron. J. Probab. , 25(93):1–37, 2020.[7] D. Beliaev, S. Muirhead, and A. Rivera. A covariance formula for topological events of smooth Gaussianfields. Ann. Probab. (to appear) , 2020.[8] I. Benjamini, O. Schramm, and D.B. Wilson. Balanced Boolean functions that can be evaluated so that everyinput bit is unlikely to be read. In STOC’05: Proceedings of the 37th Annual ACM Symposium on Theoryof Computing , pages 244–250, 2005.[9] C. Borgs, J. T. Chayes, H. Kesten, and J. Spencer. Uniform boundedness of critical crossing probabilitiesimplies hyperscaling. Random Struct. Algo. , 15(3–4):368–413, 1999.[10] J. T. Chayes and L. Chayes. Finite-size scaling and correlation lengths for disordered systems. Phys. Rev.Lett. , 57(24):2999–3002, 1986.[11] J. T. Chayes and L. Chayes. Inequality for the infinite-cluster density in Bernoulli percolation. Phys. Rev.Lett. , 56(16):1619–1622, 1986.[12] V. Dewan and D. Gayet. Random pseudometrics and applications. arXiv preprint arXiv:2004.05057 , 2020.[13] H. Duminil-Copin, S. Goswami, P.-F. Rodriguez, and F. Severo. Equality of critical parameters for percolationof Gaussian free field level-sets. arXiv preprint arXiv:2002.07735 , 2020.[14] H. Duminil-Copin, I. Manolescu, and V. Tassion. Planar random-cluster model: fractal properties of thecritical phase. arXiv preprint arXiv:2007.14707 , 2020.[15] H. Duminil-Copin, A. Raoufi, and V. Tassion. Exponential decay of connection probabilities for subcriticalVoronoi percolation in R d . Probab. Theory Related Fields , 173(1–2):479–490, 2019.[16] H. Duminil-Copin, A. Raoufi, and V. Tassion. Sharp phase transition for the random-cluster and Potts modelsvia decision trees. Ann. Math. , 189(1):75–99, 2019.[17] H. Duminil-Copin, A. Raoufi, and V. Tassion. Subcritical phase of d -dimensional Poisson-Boolean percolationand its vacant set. Ann. H. Lebesgue , 3:677–700, 2020.[18] H. Duminil-Copin and V. Tassion. A new proof of the sharpness of the phase transition for Bernoulli perco-lation and the Ising model. Commm. Math. Phys , 343:725–745, 2016.[19] W. Ehm, T. Gneiting, and D. Richards. Convolution roots of radial positive definite function with compactsupport. Trans. Am. Math. Soc. , 356(11):4655–4685, 2004.[20] R. Fitzner and R. van der Hofstad. Mean-field behavior for nearest-neighbor percolation in d > Electron.J. Probab. , 22:65 pp., 2017.[21] C. Garban, G. Pete, and O. Schramm. The Fourier spectrum of critical percolation. Acta Math. , 205(1):19–104, 2010.[22] C. Garban and H. Vanneuville. Bargmann-Fock percolation is noise sensitive. arXiv preprintarXiv:1906.02666 , 2019.[23] G.R. Grimmett. Percolation . Springer, 1999.[24] J.M. Hammersley. Percolation processes: Lower bounds for the critical probability. Ann. Math. Statist. ,28:790–795, 1957.[25] D.G. Handron. Generalized billiard paths and Morse theory for manifolds with corners. Topology Appl. ,126(1-2):83–118, 2002.[26] T. Hara. Mean-field critical behaviour for correlation length for percolation in high dimensions. Probab.Theory Related Fields , 86:337–385, 1990.[27] T. Hara. Decay of correlations in nearest-neighbor self-avoiding walk, percolation, lattice trees and animals. Ann. Probab. , 36(2):530–593, 2008.[28] H. Kesten. Percolation theory for mathematicians . Progress in Probability and Statistics Vol. 2. Springer,1982.[29] H. Kesten. Scaling relations for 2D-percolation. Comm. Math. Phys , 109:109–156, 1987.[30] L. K¨ohler-Schindler and V. Tassion. Crossing probabilities for planar percolation. arXiv preprintarXiv:2011.04618 , 2020.[31] G. Kozma and A. Nachmias. Arm exponents in high dimensional percolation. J. Amer. Math. Soc. , 24(2):375–409, 2011.[32] S. Kullback. Information theory and statistics . Dover, 1978.[33] M. Ledoux. A short proof of the Gaussian isoperimetric inequality. In E. Eberlein, M. Hahn, and M. Ta-lagrand, editors, High Dimensional Probability. Progress in Probability, vol 43. , pages 229–232. Birkh¨auser,Basel, 1998. [34] M. Menshikov. Coincidence of critical points in percolation problems. Sov. Math. Dokl. , 33:856–859, 1986.[35] S.A. Molchanov and A.K. Stepanov. Percolation in random fields. I. Theor. Math. Phys. , 55(2):478–484, 1983.[36] S.A. Molchanov and A.K. Stepanov. Percolation in random fields. II. Theor. Math. Phys. , 55(3):592–599,1983.[37] S. Muirhead, A. Rivera, and H. Vanneuville (with an appendix by L. K¨ohler-Schindler). The phase transitionfor planar Gaussian percolation models without FKG. arXiv preprint arXiv:2010.11770 , 2020.[38] S. Muirhead and H. Vanneuville. The sharp phase transition for level set percolation of smooth planarGaussian fields. Ann. I. Henri Poincar´e Probab. Stat. (to appear) , 2020.[39] R. O’Donnell, M. Saks, O. Schramm, and R.A. Servedio. Every decision tree has an influential variable. In , pages 31–39, 2005.[40] R. O’Donnell and R.A. Servedio. Learning monotone decision trees in polynomial time. SIAM J. Comput. ,37(3):827–844, 2007.[41] L.D. Pitt. Positively correlated normal variables are associated. Ann. Probab. , 10(2):496–499, 1982.[42] A. Rivera. Talagrand’s inequality in planar Gaussian field percolation. arXiv preprint arXiv:1905.13317 ,2019.[43] A. Rivera and H. Vanneuville. Quasi-independence for nodal lines. Ann. Inst. H. Poincar´e Probab. Statist. ,55(3):1679–1711, 2019.[44] A. Rivera and H. Vanneuville. The critical threshold for Bargmann-Fock percolation. Ann. Henri Lebesgue ,3, 2020.[45] W. Rudin. An extension theorem for positive-definite functions. Duke Math. J. , 37(1):49–53, 1970.[46] O. Schramm and S. Smirnov (with an appendix by C. Garban). On the scaling limits of planar percolation. Ann. Probab. , 39(5):1768–1814, 2011.[47] S. Smirnov and W. Werner. Critical exponents for two-dimensional percolation. Math. Res. Lett. , 8(5):729–744, 2001.[48] H. Tasaki. Hyperscaling inequalities for percolation. Comm. Math. Phys , 113(1):49–65, 1987.[49] R. van den Berg and H. Don. A lower bound for point-to-point connection probabilities in critical percolation.