Comparison of quenched and annealed invariance principles for random conductance model: Part II
MARTIN BARLOW, KRZYSZTOF BURDZY AND ÁDÁM TIMÁR
Abstract.
We show that there exists an ergodic conductance environment such that the weak (annealed) invariance principle holds for the corresponding continuous time random walk but the quenched invariance principle does not hold. In the present paper we give a proof of the full scaling limit for the weak invariance principle, improving the result in an earlier paper where we obtained a subsequential limit.

1. Introduction
This article contains the completion of the project started in a previous paper [4], where we proved that there exists an ergodic conductance environment such that the weak (annealed) invariance principle holds for the corresponding continuous time random walk along a subsequence but the quenched invariance principle does not hold. In the present paper we give a proof of the full scaling limit for the weak invariance principle, improving the result in [4]. The improved result is, in a sense, a quantitative form of the invariance principle. The proof consists of several lemmas. Some of them are specific to our model but some have a more general character and may serve as technical elements for related projects.

Since this paper is a continuation of [4], we start by presenting basic notation and definitions from that paper. Let $d \geq 1$ and let $E_d$ be the set of all non-oriented edges in the $d$-dimensional integer lattice, that is, $E_d = \{ e = \{x,y\} : x, y \in \mathbb{Z}^d, |x-y| = 1 \}$. Let $\{\mu_e\}_{e \in E_d}$ be a random process with non-negative values, defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The process $\{\mu_e\}_{e \in E_d}$ represents random conductances. We write $\mu_{xy} = \mu_{yx} = \mu_{\{x,y\}}$ and set $\mu_{xy} = 0$ if $\{x,y\} \notin E_d$. Set
$$\mu_x = \sum_y \mu_{xy}, \qquad P(x,y) = \frac{\mu_{xy}}{\mu_x},$$
with the convention that $0/0 = 0$, so that $P(x,y) = 0$ if $\{x,y\} \notin E_d$. For a fixed $\omega \in \Omega$, let $X = \{X_t, t \geq 0, P^x_\omega, x \in \mathbb{Z}^d\}$ be the continuous time random walk on $\mathbb{Z}^d$, with transition probabilities $P(x,y) = P_\omega(x,y)$, and exponential waiting times with mean $1/\mu_x$. The corresponding expectation will be denoted $E^x_\omega$. For a fixed $\omega \in \Omega$, the generator $\mathcal{L}$ of $X$ is given by
$$\mathcal{L} f(x) = \sum_y \mu_{xy} \big(f(y) - f(x)\big). \tag{1.1}$$
In [3] this is called the variable speed random walk (VSRW) among the conductances $\mu_e$. This model, of a reversible (or symmetric) random walk in a random environment, is often called the Random Conductance Model.

Research supported in part by NSF Grant DMS-1206276, by NSERC, Canada, and Trinity College, Cambridge, and by MTA Rényi "Lendület" Groups and Graphs Research Group.
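As a concrete illustration of the dynamics generated by (1.1), the following minimal Python sketch simulates the VSRW on a finite torus: at a site $x$ the walk waits an exponential time with mean $1/\mu_x$ and then jumps to a neighbour $y$ with probability $\mu_{xy}/\mu_x$. The array layout, the i.i.d. toy conductances, the function names and all numeric parameters are illustrative assumptions, not part of the construction in this paper (whose environment is the hierarchical one of Section 2).

```python
import numpy as np

def vsrw_path(mu_h, mu_v, x0, t_max, rng):
    """Simulate the variable speed random walk (VSRW) on an L x L torus.

    mu_h[i, j] is the conductance of the horizontal edge {(i, j), (i+1, j)} and
    mu_v[i, j] of the vertical edge {(i, j), (i, j+1)} (indices mod L).
    The walk at x waits an exponential time with mean 1/mu_x (mu_x = sum of
    incident conductances) and then jumps to a neighbour y with probability
    mu_xy / mu_x, as in (1.1).
    """
    L = mu_h.shape[0]
    i, j = x0
    t, path = 0.0, [(0.0, (i, j))]
    while t < t_max:
        # conductances of the four incident edges: right, left, up, down
        rates = np.array([mu_h[i, j], mu_h[(i - 1) % L, j],
                          mu_v[i, j], mu_v[i, (j - 1) % L]])
        mu_x = rates.sum()
        t += rng.exponential(1.0 / mu_x)           # holding time, mean 1/mu_x
        k = rng.choice(4, p=rates / mu_x)          # pick an incident edge
        di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][k]
        i, j = (i + di) % L, (j + dj) % L
        path.append((t, (i, j)))
    return path

rng = np.random.default_rng(0)
L = 50
# i.i.d. toy conductances, only to exercise the mechanism above
mu_h = rng.uniform(0.5, 2.0, size=(L, L))
mu_v = rng.uniform(0.5, 2.0, size=(L, L))
print(vsrw_path(mu_h, mu_v, (L // 2, L // 2), t_max=10.0, rng=rng)[:5])
```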
We are interested in functional Central Limit Theorems (FCLTs) for the process $X$. Given any process $X$, for $\varepsilon > 0$, set $X^\varepsilon_t = \varepsilon X_{t/\varepsilon^2}$, $t \geq 0$. Let $D_T = D([0,T], \mathbb{R}^d)$ denote the Skorokhod space, and let $D_\infty = D([0,\infty), \mathbb{R}^d)$. Write $d_S$ for the Skorokhod metric and $\mathcal{B}(D_T)$ for the $\sigma$-field of Borel sets in the corresponding topology. Let $X$ be the canonical process on $D_\infty$ or $D_T$, let $P_{BM}$ be Wiener measure on $(D_\infty, \mathcal{B}(D_\infty))$ and let $E_{BM}$ be the corresponding expectation. We will write $W$ for a standard Brownian motion. It will be convenient to assume that $\{\mu_e\}_{e \in E_d}$ are defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and that $X$ is defined on $(\Omega, \mathcal{F}) \times (D_\infty, \mathcal{B}(D_\infty))$ or $(\Omega, \mathcal{F}) \times (D_T, \mathcal{B}(D_T))$. We also define the averaged or annealed measure $P$ on $(D_\infty, \mathcal{B}(D_\infty))$ or $(D_T, \mathcal{B}(D_T))$ by
$$P(G) = \mathbb{E}\, P_\omega(G). \tag{1.2}$$

Definition 1.1.
For a bounded function $F$ on $D_T$ and a constant matrix $\Sigma$, let $\Psi^F_\varepsilon = E_\omega F(X^\varepsilon)$ and $\Psi^F_\Sigma = E_{BM} F(\Sigma W)$. We will use $I$ to denote the identity matrix.
(i) We say that the Quenched Functional CLT (QFCLT) holds for $X$ with limit $\Sigma W$ if for every $T > 0$ and every bounded continuous function $F$ on $D_T$ we have $\Psi^F_\varepsilon \to \Psi^F_\Sigma$ as $\varepsilon \to 0$ with $\mathbb{P}$-probability 1.
(ii) We say that the Weak Functional CLT (WFCLT) holds for $X$ with limit $\Sigma W$ if for every $T > 0$ and every bounded continuous function $F$ on $D_T$ we have $\Psi^F_\varepsilon \to \Psi^F_\Sigma$ as $\varepsilon \to 0$, in $\mathbb{P}$-probability.
(iii) We say that the Averaged (or Annealed) Functional CLT (AFCLT) holds for $X$ with limit $\Sigma W$ if for every $T > 0$ and every bounded continuous function $F$ on $D_T$ we have $\mathbb{E}\Psi^F_\varepsilon \to \Psi^F_\Sigma$. This is the same as standard weak convergence with respect to the probability measure $P$.

If we take $\Sigma$ to be non-random then, since $F$ is bounded, it is immediate that QFCLT $\Rightarrow$ WFCLT. In general for the QFCLT the matrix $\Sigma$ might depend on the environment $\mu_\cdot(\omega)$. However, if the environment is stationary and ergodic, then $\Sigma$ is a shift invariant function of the environment, so must be $\mathbb{P}$-a.s. constant. In [9] it is proved that if $\mu_e$ is a stationary ergodic environment with $\mathbb{E}\mu_e < \infty$ then the WFCLT holds. In [4, Theorem 1.3] it is proved that for the random conductance model the AFCLT and WFCLT are equivalent.
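For the reader's convenience we note that the implication WFCLT $\Rightarrow$ AFCLT is elementary (the converse direction is part of [4, Theorem 1.3]): since $|\Psi^F_\varepsilon - \Psi^F_\Sigma| \leq 2\|F\|_\infty$, for every $\theta > 0$,
$$\big|\mathbb{E}\Psi^F_\varepsilon - \Psi^F_\Sigma\big| \;\leq\; \mathbb{E}\big|\Psi^F_\varepsilon - \Psi^F_\Sigma\big| \;\leq\; 2\|F\|_\infty\, \mathbb{P}\big(|\Psi^F_\varepsilon - \Psi^F_\Sigma| > \theta\big) + \theta,$$
and the right-hand side tends to $\theta$ as $\varepsilon \to 0$ if the WFCLT holds; since $\theta > 0$ is arbitrary, the WFCLT implies the AFCLT.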
Definition 1.2. We say an environment $(\mu_e)$ on $\mathbb{Z}^d$ is symmetric if the law of $(\mu_e)$ is invariant under symmetries of $\mathbb{Z}^d$.

If $(\mu_e)$ is stationary, ergodic and symmetric, and the WFCLT holds with limit $\Sigma W$, then the limiting covariance matrix $\Sigma^T \Sigma$ must also be invariant under symmetries of $\mathbb{Z}^d$, so must be a constant times the identity.

In a previous paper [4] we proved the following theorem:

Theorem 1.3. Let $d = 2$ and $p < 1$. There exists a symmetric stationary ergodic environment $\{\mu_e\}_{e \in E_2}$ with $\mathbb{E}(\mu_e^p \vee \mu_e^{-p}) < \infty$ and a sequence $\varepsilon_n \to 0$ such that
(a) the WFCLT holds for $X^{\varepsilon_n}$ with limit $W$, i.e., for every $T > 0$ and every bounded continuous function $F$ on $D_T$ we have $\Psi^F_{\varepsilon_n} \to \Psi^F_I$ as $n \to \infty$, in $\mathbb{P}$-probability,
but
(b) the QFCLT does not hold for $X^{\varepsilon_n}$ with limit $\Sigma W$ for any $\Sigma$.

In this paper we prove that for an environment similar to that in Theorem 1.3 the WFCLT holds for $X^\varepsilon$ as $\varepsilon \to 0$, and not just along a subsequence.
Theorem 1.4.
Let $d = 2$ and $p < 1$. There exists a symmetric stationary ergodic environment $\{\mu_e\}_{e \in E_2}$ with $\mathbb{E}(\mu_e^p \vee \mu_e^{-p}) < \infty$ such that
(a) the WFCLT holds for $X^\varepsilon$ with limit $W$, i.e., for every $T > 0$ and every bounded continuous function $F$ on $D_T$ we have $\Psi^F_\varepsilon \to \Psi^F_I$ as $\varepsilon \to 0$, in $\mathbb{P}$-probability,
but
(b) the QFCLT does not hold for $X^\varepsilon$ with limit $\Sigma W$ for any $\Sigma$.

For more remarks on this problem see [4].
Acknowledgment.
We are grateful to Emmanuel Rio, Pierre Mathieu, Jean-Dominique Deuschel and Marek Biskup for some very useful discussions.

2. Description of the environment
Here we recall the environment given in [4]. We refer the reader to that paper for proofsof some basic properties.Let Ω = (0 , ∞ ) E , and F be the Borel σ -algebra defined using the usual product topology.Then every t ∈ Z defines a transformation T t ( ω ) = ω + t of Ω. Stationarity and ergodicityof the measures defined below will be understood with respect to these transformations.All constants (often denoted c , c , etc.) are assumed to be strictly positive and finite. Fora set A ⊂ Z let E ( A ) ⊂ E be the set of all edges with both endpoints in A . Let E h ( A ) and E v ( A ) respectively be the set of horizontal and vertical edges in E ( A ). Write x ∼ y if { x, y } is an edge in Z . Define the exterior boundary of A by ∂A = { y ∈ Z − A : y ∼ x for some x ∈ A } . Let also ∂ i A = ∂ ( Z − A ) . Define balls in the (cid:96) ∞ norm by B ( x, r ) = { y : || x − y || ∞ ≤ r } ; of course this is just the squarewith center x and side 2 r .Let { a n } n ≥ , { β n } n ≥ and { b n } n ≥ be strictly increasing sequences of positive integersgrowing to infinity with n , with1 = a < b < β < a (cid:28) b < β < a (cid:28) b . . . We will impose a number of conditions on these sequences in the course of the paper. Wecollect the main ones here. There is some redundancy in the conditions, for easy reference.(i) a n is even for all n .(ii) For each n ≥ a n − divides b n , and b n divides β n and a n .(iii) b ≥ .(iv) a n / √ n ≤ b n ≤ a n / √ n for all n , and b n ∼ a n / √ n .(v) b n +1 ≥ n b n for all n .(vi) b n > a n − for all n .(vii) b n is large enough so that the estimates (5.1) and (6.1) of [4] hold.(viii) 100 b n < β n ≤ b n n / < β n < a n /
10 for $n$ large enough.

In addition, at various points in the proof, we will assume that $a_n$ is sufficiently much larger than $b_{n-1}$ so that a process $X^{(n-1)}$ defined below is such that for $a \geq a_n$ the rescaled process $(a^{-1} X^{(n-1)}_{a^2 t}, t \geq 0)$ is sufficiently close to Brownian motion. We will mark the places in the proof where we impose these extra conditions by $(\clubsuit)$.

We begin our construction by defining a collection of squares in $\mathbb{Z}^2$. Let
$$B_n = [0, a_n]^2, \qquad B'_n = [0, a_n - 1]^2 \cap \mathbb{Z}^2, \qquad S_n(x) = \{ x + a_n y + B'_n : y \in \mathbb{Z}^2 \}.$$
Thus $S_n(x)$ gives a tiling of $\mathbb{Z}^2$ by disjoint squares of side $a_n - 1$. We say that the tiling $S_{n-1}(x_{n-1})$ is a refinement of $S_n(x_n)$ if every square $Q \in S_n(x_n)$ is a finite union of squares in $S_{n-1}(x_{n-1})$. It is clear that $S_{n-1}(x_{n-1})$ is a refinement of $S_n(x_n)$ if and only if $x_n = x_{n-1} + a_{n-1} y$ for some $y \in \mathbb{Z}^2$.

Take $O_1$ uniform in $B'_1$, and for $n \geq 2$ take $O_n$, conditional on $(O_1, \dots, O_{n-1})$, to be uniform in $B'_n \cap (O_{n-1} + a_{n-1}\mathbb{Z}^2)$. We now define random tilings by letting $S_n = S_n(O_n)$, $n \geq 1$.

Let $\eta_n, K_n$ be positive constants; we will have $\eta_n \ll 1 \ll K_n$. We define conductances on $E_2$ as follows. Recall that $a_n$ is even, and let $a'_n = a_n/2$. Let
$$C_n = \{ (x,y) \in B_n \cap \mathbb{Z}^2 : y \geq x, \ x + y \leq a_n \}.$$
We first define conductances $\nu^{n,0}_e$ for $e \in E(C_n)$. Let
$$D^1_n = \big\{ (a'_n - \beta_n, y) : a'_n - 10 b_n \leq y \leq a'_n + 10 b_n \big\},$$
$$D^2_n = \big\{ (x, a'_n + 10 b_n), (x, a'_n + 10 b_n + 1), (x, a'_n - 10 b_n), (x, a'_n - 10 b_n - 1) : a'_n - \beta_n - b_n \leq x \leq a'_n - \beta_n + b_n \big\}.$$
Thus the set $D^1_n \cup D^2_n$ resembles the letter I (see Fig. 1). For an edge $e \in E(C_n)$ we set
$$\nu^{n,0}_e = \eta_n \ \text{ if } e \in E_v(D^1_n), \qquad \nu^{n,0}_e = K_n \ \text{ if } e \in E(D^2_n), \qquad \nu^{n,0}_e = 1 \ \text{ otherwise.}$$
We then extend $\nu^{n,0}$ by symmetry to $E(B_n)$. More precisely, for $z = (x,y) \in B_n$, let $R_1 z = (y,x)$ and $R_2 z = (a_n - y, a_n - x)$, so that $R_1$ and $R_2$ are reflections in the lines $y = x$ and $x + y = a_n$. We define $R_i$ on edges by $R_i(\{x,y\}) = \{R_i x, R_i y\}$ for $x, y \in B_n$. We then extend $\nu^{n,0}$ to $E(B_n)$ so that $\nu^{n,0}_e = \nu^{n,0}_{R_1 e} = \nu^{n,0}_{R_2 e}$ for $e \in E(B_n)$. We define the obstacle set $D^0_n$ by setting
$$D^0_n = \bigcup_{i=1}^{2} \big( D^i_n \cup R_1(D^i_n) \cup R_2(D^i_n) \cup R_1 R_2(D^i_n) \big).$$
Note that $\nu^{n,0}_e = 1$ for every edge adjacent to the boundary of $B_n$; in fact all the modified edges lie within distance $2\beta_n$ of the centre of $B_n$. For an edge $e = \{x,y\}$ and $z \in \mathbb{Z}^2$ we will write $e - z = \{x - z, y - z\}$. Next we extend $\nu^{n,0}$ to $E_2$ by periodicity, i.e., $\nu^{n,0}_e = \nu^{n,0}_{e + a_n x}$ for all $x \in \mathbb{Z}^2$. We define the conductances $\nu^n$ by translation by $O_n$, so that $\nu^n_e = \nu^{n,0}_{e - O_n}$, $e \in E_2$.
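The following short Python sketch illustrates the shape of the level-$n$ pattern just described: weak vertical edges along the stem $D^1_n$ and strong edges along the bars $D^2_n$, inside the square $B_n$. It is only a cartoon; the sizes $a_n, b_n, \beta_n$ and the values $\eta_n, K_n$ below are placeholders (not the sequences of the paper), the two bars are reduced to single rows of edges, and the reflections $R_1, R_2$ and the periodic extension are omitted.

```python
import numpy as np

def level_n_pattern(a_n, b_n, beta_n, eta_n, K_n):
    """Toy version of the conductances nu^{n,0} on the square B_n = [0, a_n]^2.

    Vertical edges along the segment D^1_n (x = a_n/2 - beta_n,
    |y - a_n/2| <= 10*b_n) get the very small value eta_n; edges along two
    short horizontal bars at the ends of that segment (a cartoon of D^2_n)
    get the very large value K_n; all other edges keep conductance 1.
    Returns one array of horizontal-edge and one of vertical-edge values.
    """
    ap = a_n // 2                       # a'_n
    mu_h = np.ones((a_n, a_n + 1))      # mu_h[x, y]: edge {(x, y), (x+1, y)}
    mu_v = np.ones((a_n + 1, a_n))      # mu_v[x, y]: edge {(x, y), (x, y+1)}
    x0 = ap - beta_n
    # stem of the letter "I": weak vertical edges
    mu_v[x0, ap - 10 * b_n : ap + 10 * b_n] = eta_n
    # bars of the letter "I": strong horizontal edges at top and bottom
    for y in (ap + 10 * b_n, ap - 10 * b_n):
        mu_h[x0 - b_n : x0 + b_n, y] = K_n
    return mu_h, mu_v

# placeholder sizes, *not* the sequences a_n, b_n, beta_n of the paper
mu_h, mu_v = level_n_pattern(a_n=400, b_n=4, beta_n=60, eta_n=1e-3, K_n=1e3)
print((mu_v < 1).sum(), "weak edges,", (mu_h > 1).sum(), "strong edges")
```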
Figure 1. The set $D^1_n \cup D^2_n$ resembles the letter I. Blue edges have very low conductance. The red line represents edges with very high conductance. Drawing not to scale.

We also define the obstacle set at scale $n$ by
$$D_n = \bigcup_{x \in \mathbb{Z}^2} \big( a_n x + O_n + D^0_n \big). \tag{2.1}$$
We will sometimes call the set $D_n$ the set of $n$th level obstacles.

We define the environment $\mu^n_e$ inductively by
$$\mu^n_e = \nu^n_e \quad \text{if } \nu^n_e \neq 1, \qquad\qquad \mu^n_e = \mu^{n-1}_e \quad \text{if } \nu^n_e = 1.$$
Once we have proved the limit exists, we will set
$$\mu_e = \lim_n \mu^n_e. \tag{2.2}$$

Lemma 2.1. (See [4, Theorem 3.1].)
(a) The environments $(\nu^n_e, e \in E_2)$, $(\mu^n_e, e \in E_2)$ are stationary, symmetric and ergodic.
(b) The limit (2.2) exists $\mathbb{P}$-a.s.
(c) The environment $(\mu_e, e \in E_2)$ is stationary, symmetric and ergodic.

Now let
$$\mathcal{L}_n f(x) = \sum_y \mu^n_{xy} \big( f(y) - f(x) \big), \tag{2.3}$$
and let $X^{(n)}$ be the associated Markov process. Set
$$\eta_n = b_n^{-(1+1/n)}, \qquad n \geq 1. \tag{2.4}$$
From Section 4 of [4] we have:
Theorem 2.2.
For each $n$ there exists a constant $K_n$, depending on $\eta_1, K_1, \dots, \eta_{n-1}, K_{n-1}$, such that the QFCLT holds for $X^{(n)}$ with limit $W$.
For each $n$ the process $X^{(n)}$ has invariant measure which is counting measure on $\mathbb{Z}^2$. For $x \in \mathbb{R}^2$ and $a > 0$ write $[xa]$ for the point in $\mathbb{Z}^2$ closest to $xa$. (We use some procedure to break ties.) We have the following bounds on the transition probabilities of $X^{(n)}$ from [5]. We remark that the constant $M_n$ below is not effective, i.e. the proof does not give any control on its value. Write $k_t(x,y) = (2\pi t)^{-1} \exp(-|x-y|^2/2t)$ for the transition density of Brownian motion in $\mathbb{R}^2$, and
$$p^{\omega,n}_t(x,y) = P^x_\omega\big(X^{(n)}_t = y\big)$$
for the transition probabilities of $X^{(n)}$.

Lemma 2.3.
For each $0 < \delta < T$ there exists $M_n = M_n(\delta, T)$ such that for $a \geq M_n$
$$\tfrac12\, k_t(x,y) \;\leq\; a^2\, p^{\omega,n}_{a^2 t}\big([xa],[ya]\big) \;\leq\; 2\, k_t(x,y) \tag{2.5}$$
for all $\delta \leq t \leq T$, $|x|, |y| \leq T$.

3. Preliminary results
Since a proof of Theorem 1.3(b) was given in [4], all we need to prove is part (a) of Theorem 1.4. The argument consists of several lemmas. We start with some preliminary results on weak convergence of probability measures on the space of càdlàg functions. Recall the definitions of the measures $P$ and $P_\omega$.

Recall that $D := D_1 = D([0,1], \mathbb{R}^2)$ denotes the space of càdlàg functions equipped with the Skorokhod metric $d_S$ defined as follows (see [6, p. 111]). Let $\Lambda$ be the family of continuous strictly increasing functions $\lambda$ mapping $[0,1]$ onto itself. In particular, $\lambda(0) = 0$ and $\lambda(1) = 1$. If $x(t), y(t) \in D$ then
$$d_S(x,y) = \inf_{\lambda \in \Lambda} \max\Big( \sup_{t \in [0,1]} |\lambda(t) - t|, \ \sup_{t \in [0,1]} |y(\lambda(t)) - x(t)| \Big).$$
For $x(t) \in D$, let $\mathrm{Osc}(x,\delta) = \sup\{ |x(t) - x(s)| : s,t \in [0,1], \ |s-t| \leq \delta \}$.

Lemma 3.1. Suppose that $\sigma : [0,1] \to [0,1]$ is continuous, non-decreasing and $\sigma(0) = 0$ (we do not require that $\sigma(1) = 1$), and that $|\sigma(t) - t| \leq \delta_1$ for all $t \in [0,1]$. Let $\varepsilon \geq 0$, $\delta_2 > 0$, and let $x, y \in D$ with $d_S(x(\cdot), y(\cdot)) \leq \varepsilon$ and $\mathrm{Osc}(x,\delta_1) \vee \mathrm{Osc}(y,\delta_1) \leq \delta_2$. Then $d_S(x(\sigma(\cdot)), y(\sigma(\cdot))) \leq \varepsilon + 2\delta_2$.

Proof. For any $\varepsilon_1 > \varepsilon$ there exists $\lambda \in \Lambda$ such that
$$\max\Big( \sup_{t \in [0,1]} |\lambda(t) - t|, \ \sup_{t \in [0,1]} |y(\lambda(t)) - x(t)| \Big) \leq \varepsilon_1.$$
We have, for $\lambda$ satisfying the above condition,
$$\sup_{t \in [0,1]} |y(\sigma(\lambda(t))) - x(\sigma(t))| \leq \sup_{t \in [0,1]} \Big( |y(\sigma(\lambda(t))) - y(\lambda(t))| + |y(\lambda(t)) - x(t)| + |x(t) - x(\sigma(t))| \Big) \leq \mathrm{Osc}(y,\delta_1) + \varepsilon_1 + \mathrm{Osc}(x,\delta_1) \leq \varepsilon_1 + 2\delta_2.$$
Hence
$$\max\Big( \sup_{t \in [0,1]} |\lambda(t) - t|, \ \sup_{t \in [0,1]} |y(\sigma(\lambda(t))) - x(\sigma(t))| \Big) \leq \varepsilon_1 + 2\delta_2.$$
Taking the infimum over all $\varepsilon_1 > \varepsilon$ we obtain $d_S(x(\sigma(\cdot)), y(\sigma(\cdot))) \leq \varepsilon + 2\delta_2$. □
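The proofs in this section repeatedly use the trivial time change $\lambda(t) = t$, which gives the elementary bound $d_S(x,y) \leq \sup_{t \in [0,1]} |x(t) - y(t)|$, together with control of the modulus $\mathrm{Osc}(x,\delta)$. Below is a small Python sketch of these two quantities on discretized paths; the grid size, the Gaussian-increment sample paths and the function names are arbitrary choices made only for illustration.

```python
import numpy as np

def sup_distance(x, y):
    """Upper bound for the Skorokhod distance d_S(x, y): take lambda(t) = t."""
    return np.max(np.linalg.norm(x - y, axis=-1))

def osc(x, delta, dt):
    """Osc(x, delta) = sup{|x(t)-x(s)| : |s-t| <= delta} on a grid of step dt."""
    k = int(delta / dt)
    best = 0.0
    for shift in range(1, k + 1):
        diff = np.linalg.norm(x[shift:] - x[:-shift], axis=-1)
        best = max(best, diff.max())
    return best

rng = np.random.default_rng(1)
dt, n = 1e-3, 1000
# two independent discretized planar Brownian paths on [0, 1]
w1 = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, 2)), axis=0)
w2 = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, 2)), axis=0)
print("d_S upper bound:", sup_distance(w1, w2))
print("Osc(w1, 0.01) ~", osc(w1, 0.01, dt))
```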
Let $d$ denote the Prokhorov distance between probability measures on a metric space, defined as follows (see [6, p. 238]). Recall that $\Omega = (0,\infty)^{E_2}$ and $\mathcal{F}$ is the Borel $\sigma$-algebra defined using the usual product topology. We will use the measurable spaces $(D_T, \mathcal{B}(D_T))$ and $(\Omega, \mathcal{F}) \times (D_T, \mathcal{B}(D_T))$, for a fixed $T$ (often $T = 1$). Note that $D_T$ and $\Omega \times D_T$ are metrizable, with the metrics generating the usual topologies. A ball around a set $A$ with radius $\varepsilon$ will be denoted $B(A,\varepsilon)$ in either space. For probability measures $P$ and $Q$, $d(P,Q)$ is the infimum of $\varepsilon > 0$ such that $P(A) \leq Q(B(A,\varepsilon)) + \varepsilon$ and $Q(A) \leq P(B(A,\varepsilon)) + \varepsilon$ for all Borel sets $A$. Convergence in the metric $d$ is equivalent to the weak convergence of measures. By abuse of notation we will sometimes write the arguments of $d(\cdot,\cdot)$ as processes rather than their distributions: for example we will write $d(\{(1/a)X^{(n)}_{ta^2}, t \in [0,1]\}, P_{BM})$. We will use $d$ for the Prokhorov distance between probability measures on $(\Omega, \mathcal{F}) \times (D_T, \mathcal{B}(D_T))$. We will write $d_\omega$ for the metric on the space $(D_T, \mathcal{B}(D_T))$. It is straightforward to verify that if, for some processes $Y$ and $Z$, $d_\omega(Y,Z) \leq \varepsilon$ for $\mathbb{P}$-a.a. $\omega$, then $d(Y,Z) \leq \varepsilon$.

We will sometimes write $W(t) = W_t$, and similarly for other processes.

Lemma 3.2.
There exists a function $\rho : (0,\infty) \to (0,\infty)$ such that $\lim_{\delta \downarrow 0} \rho(\delta) = 0$ and the following holds. Suppose that $\delta, \delta' \in (0,1)$ and $\sigma : [0,1] \to [0,1]$ is a non-decreasing stochastic process such that $t - \sigma_t \in [0,\delta]$ for all $t$, with probability greater than $1 - \delta'$. Suppose that $\{W_t, t \geq 0\}$ has the distribution $P_{BM}$ and $W^*_t = W(\sigma_t)$ for $t \in [0,1]$. Then $d(\{W^*_t, t \in [0,1]\}, P_{BM}) \leq \rho(\delta) + \delta'$.

Proof. Suppose that $W$, $W^*$ and $\sigma$ are defined on a sample space with a probability measure $P$. It is easy to see that we can choose $\rho(\delta)$ so that $\lim_{\delta \downarrow 0} \rho(\delta) = 0$ and $P(\mathrm{Osc}(W,\delta) \geq \rho(\delta)) < \rho(\delta)$. Suppose that the event $F := \{\mathrm{Osc}(W,\delta) < \rho(\delta)\} \cap \{\forall t \in [0,1] : t - \sigma_t \in [0,\delta]\}$ holds. Then, taking $\lambda(t) = t$,
$$d_S(W, W^*) \leq \max\Big( \sup_{t \in [0,1]} |\lambda(t) - t|, \ \sup_{t \in [0,1]} |W(\lambda(t)) - W^*(t)| \Big) = \sup_{t \in [0,1]} |W(t) - W(\sigma(t))| \leq \mathrm{Osc}(W,\delta) < \rho(\delta).$$
We see that if $F$ holds and $W \in A \subset D$ then $W^*(\cdot) \in B(A, \rho(\delta))$. Since $P(F^c) \leq \rho(\delta) + \delta'$, we obtain
$$P(W \in A) \leq P(\{W \in A\} \cap F) + P(F^c) \leq P(\{W^* \in B(A,\rho(\delta))\} \cap F) + \rho(\delta) + \delta' \leq P(W^* \in B(A,\rho(\delta))) + \rho(\delta) + \delta'.$$
Similarly we have $P(W^* \in A) \leq P(W \in B(A,\rho(\delta))) + \rho(\delta) + \delta'$, and the lemma follows. □

Lemma 3.3.
Suppose that for some processes $X$, $Y$ and $Z$ on the interval $[0,1]$ we have $Z = X + Y$ and $P(\sup_{0 \leq t \leq 1} |X_t| \leq \delta) \geq 1 - \delta$. Then $d(\{Z_t, t \in [0,1]\}, \{Y_t, t \in [0,1]\}) \leq \delta$.

Proof. Suppose that the event $F := \{\sup_{0 \leq t \leq 1} |X_t| \leq \delta\}$ holds. Then, taking $\lambda(t) = t$,
$$d_S(Z,Y) \leq \max\Big( \sup_{t \in [0,1]} |\lambda(t) - t|, \ \sup_{t \in [0,1]} |Z(\lambda(t)) - Y(t)| \Big) = \sup_{t \in [0,1]} |Z(t) - Y(t)| \leq \delta.$$
We see that if F holds and Z ∈ A ⊂ D then Y ( · ) ∈ B ( A, δ ). Since P ( F c ) ≤ δ , we obtain P ( Z ∈ A ) ≤ P ( { Z ∈ A } ∩ F ) + P ( F c ) ≤ P ( { Y ∈ B ( A, δ ) } ∩ F ) + δ ≤ P ( Y ∈ B ( A, δ )) + δ. Similarly we have P ( Y ∈ A ) ≤ P ( Z ∈ B ( A, δ )) + δ , and the lemma follows. (cid:3) Recall that the function e → µ ne is periodic with period a n . Hence the random field { µ ne } e ∈ E takes only finitely many values – this is a much stronger statement than the factthat µ ne takes only finitely many values.By Theorem 2.2 for each n ≥ a →∞ d ( { (1 /a ) X ( n ) ta , t ∈ [0 , } , P BM ) = 0 . Thus ( ♣ ) we can take a n +1 so large that for every ω , n ≥ a ≥ a n +1 ,(3.1) d ω ( { (1 /a ) X ( n ) ta , t ∈ [0 , } , P BM ) ≤ − n . Let θ denote the usual shift operator for Markov processes, that is, X ( n ) t ◦ θ s = X ( n ) t + s forall s, t ≥ X ( n ) is the canonical process on an appropriateprobability space). Recall that B ( x, r ) = { y : || x − y || ∞ ≤ r } denote balls in the (cid:96) ∞ norm in Z (i.e. squares), a (cid:48) n = a n / B n = [0 , a n ] and u n = ( a (cid:48) n , a (cid:48) n ). Note that u n is the center of B n . We choose β n so that b n n / < β n ≤ (cid:98) b n n / (cid:99) < β n < a n / , (3.2)and we assume that n is large enough so that the above inequalities hold. Let C n = { u n + O n + a n Z } be the set of centers of the squares in S n , and let(3.3) K ( r ) = (cid:91) z ∈ C n B ( z, r ) . Now let Γ n = K (2 β n ) , Γ n = Z \ K (4 β n ) . Now define stopping times as follows. S n = T n = 0 ,U nk = inf { t ≥ S nk − : X ( n ) t ∈ Γ n } , k ≥ ,S nk = inf { t ≥ U nk : X ( n ) t ∈ Γ n } , k ≥ ,V n = inf (cid:110) t ∈ (cid:91) k ≥ [ U nk , S nk ] : X ( n ) t ∈ X ( n ) ( T n ) + a n − Z (cid:111) ,T nk = inf { t ≥ V nk : X ( n ) t ∈ Γ n } , k ≥ ,V nk = V n ◦ θ T nk − , k ≥ . Let J = ∞ (cid:91) k =1 [ V nk , T nk ]; NVARIANCE PRINCIPLE 9 for t ∈ J the process X ( n ) is a distance at least β n away from any n th level obstacle. Nowset for t ≥ σ n, t = (cid:90) t J ( s ) ds = ∞ (cid:88) k =1 ( T nk ∧ t − V nk ∧ t ) ,σ n, t = t − σ n, t = ∞ (cid:88) k =0 (cid:0) V nk +1 ∧ t − T nk ∧ t (cid:1) . Let (cid:98) σ n,j denote the right continuous inverses of these processes, given by (cid:98) σ n,jt = inf { s ≥ σ n,js ≥ t } , j = 1 , . Finally let X n, t = X ( n )0 + (cid:90) t J ( s ) dX ( n ) s = X ( n )0 + ∞ (cid:88) k =0 (cid:0) X ( n ) ( T nk ∧ t ) − X ( n ) ( V nk ∧ t ) (cid:1) , (cid:98) X n, t = X ( n )0 + X n, ( (cid:98) σ n, t ) ,X n, t = X ( n )0 + (cid:90) t J c ( s ) dX ( n ) s = X ( n )0 + ∞ (cid:88) k =0 (cid:0) X ( n ) ( V nk +1 ∧ t ) − X ( n ) ( T nk ∧ t ) (cid:1) , (cid:98) X n, t = X ( n )0 + X n, ( (cid:98) σ n, t ) . The point of this construction is the following. For every fixed ω , the function e → µ n − e is invariant under the shift by xa n − for any x ∈ Z , and X ( n ) ( V nk +1 ) = X ( n ) ( T nk ) + xa n − forsome x ∈ Z . It follows that for each ω ∈ Ω, we have the following equality of distributions:(3.4) { (cid:98) X n, t , t ≥ } ( d ) = { X ( n − t , t ≥ } . The basic idea of the argument which follows is to write X ( n ) = X n, + X n. . By Theorem2.2, or more precisely by (3.1), the process X n, is close to Brownian motion, so to proveTheorem 1.4 we need to prove that X n, is small.We state the next lemma at a level of generality greater than what we need in this article.A variant of our lemma is in the book [1] but we could not find a statement that would matchperfectly our needs. 
Consider a finite graph $G = (V, E)$ and suppose that for any edge $xy$, $\mu_{xy}$ is a non-negative real number. Assume that $\sum_{y \sim x} \mu_{xy} > 0$ for every $x$. For $f : V \to \mathbb{R}$ set
$$\mathcal{E}(f,f) = \sum_{\{x,y\} \in E} \mu_{xy} \big( f(y) - f(x) \big)^2.$$
Suppose that $A_1, A_2 \subset V$, $A_1 \cap A_2 = \emptyset$, and let
$$H = \{ f : V \to \mathbb{R} \ \text{such that} \ f(x) = 0 \ \text{for} \ x \in A_1, \ f(y) = 1 \ \text{for} \ y \in A_2 \}, \qquad r^{-1} = \inf\{ \mathcal{E}(f,f) : f \in H \}.$$
Thus $r$ is the effective resistance between $A_1$ and $A_2$. Let $Z$ be the continuous time Markov process on $V$ with the generator $\mathcal{L}$ given by
$$\mathcal{L} f(x) = \sum_y \mu_{xy} \big( f(y) - f(x) \big). \tag{3.5}$$
Let $T_i = \inf\{ t \geq 0 : Z_t \in A_i \}$ for $i = 1, 2$, and let $Z^{(i)}$ be $Z$ killed at time $T_i$.
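Numerically, the effective resistance $r$ can be obtained by solving the discrete Dirichlet problem: the infimum of $\mathcal{E}(f,f)$ over $H$ is attained at the function $h$ that is harmonic with respect to $\mathcal{L}$ off $A_1 \cup A_2$, with $h = 0$ on $A_1$ and $h = 1$ on $A_2$, and then $r^{-1} = \mathcal{E}(h,h)$. The following Python sketch does this on a toy graph; the graph, the function name and the vertex labels are illustrative assumptions, not objects from the paper.

```python
import numpy as np

def effective_resistance(n, mu, A1, A2):
    """Effective resistance between A1 and A2 on a finite weighted graph.

    `mu` is a dict {(x, y): conductance} over unordered edges of the vertex
    set {0, ..., n-1}.  Solves the Dirichlet problem (h harmonic off A1 u A2,
    h = 0 on A1, h = 1 on A2), which minimizes E(f, f) over H, and returns
    r = 1 / E(h, h) together with h.
    """
    L = np.zeros((n, n))                     # graph Laplacian built from mu
    for (x, y), c in mu.items():
        L[x, y] -= c; L[y, x] -= c
        L[x, x] += c; L[y, y] += c
    h = np.zeros(n)
    h[list(A2)] = 1.0
    free = [v for v in range(n) if v not in A1 and v not in A2]
    fixed = [v for v in range(n) if v not in free]
    # harmonicity on the free vertices: L[free, free] h_free = -L[free, fixed] h_fixed
    h[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, fixed)] @ h[fixed])
    energy = sum(c * (h[x] - h[y]) ** 2 for (x, y), c in mu.items())
    return 1.0 / energy, h

# toy example: a path 0 - 1 - 2 - 3 with unit conductances
mu = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
r, h = effective_resistance(4, mu, A1={0}, A2={3})
print(r)   # 3.0: series resistances add
```

On such toy examples one can also check the identity $E^{\nu_1} T_2 + E^{\nu_2} T_1 = r|V|$ of Lemma 3.4 below by Monte Carlo simulation of $Z$.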
Lemma 3.4. There exist probability measures $\nu_1$ on $A_1$ and $\nu_2$ on $A_2$ such that
$$E^{\nu_1} T_2 + E^{\nu_2} T_1 = r |V|.$$
Moreover, for $i = 1, 2$, $\nu_i$ is the normalized capacitary measure of $A_i$ for the process $Z^{(3-i)}$.

Proof. Let $h_1(x) = P^x(T_1 < T_2)$. Set $D_2 = V - A_2$ and recall that $Z^{(i)}$ is $Z$ killed at time $T_i$. Let $G_2$ be the Green operator for $Z^{(2)}$, and let $g_2(x,y)$ be the density of $G_2$ with respect to counting measure, so that
$$E^x T_2 = \sum_{y \in V} g_2(x,y).$$
Note that $g_2(x,y) = g_2(y,x)$. Let $e_1$ be the capacitary measure of $A_1$ for the process $Z^{(2)}$. Then $r^{-1} = \sum_{z \in A_1} e_1(z)$, and
$$h_1(x) = \sum_{z \in A_1} e_1(z) g_2(z,x).$$
So, if $\nu_1 = r e_1$, then
$$\sum_{y \in V} h_1(y) = \sum_{y \in V} \sum_{x \in A_1} e_1(x) g_2(x,y) = r^{-1} \sum_{x \in A_1} \nu_1(x) \sum_{y \in V} g_2(x,y) = r^{-1} \sum_{x \in A_1} \nu_1(x) E^x T_2 = r^{-1} E^{\nu_1} T_2.$$
Similarly, if $h_2(x) = P^x(T_2 < T_1)$ we obtain $r^{-1} E^{\nu_2} T_1 = \sum_{y \in V} h_2(y)$, and since $h_1 + h_2 = 1$, adding these equalities proves the lemma. □

4. Estimates on the process $X^{n,2}$

In this section we will prove
Proposition 4.1.
For every δ > there exists n such that for all n ≥ n , u ≥ a n , and ω such that / ∈ Γ n \ ∂ i Γ n , P ω (cid:18) σ n, u /u ≤ δ, sup ≤ s ≤ u u − / | X n, s | ≤ δ (cid:19) ≥ − δ. (4.1)The proof requires a number of steps. We begin with a Harnack inequality. Lemma 4.2.
Let ≤ λ ≤ . There exist p > and n ≥ with the following properties.(a) Let x ∈ Z , let B = B ( x, λβ n ) and B = B ( x, (2 / λβ n ) . Let F be the event that X ( n ) makes a closed loop around B inside B − B before its first exit from B . If n ≥ n and NVARIANCE PRINCIPLE 11 D n ∩ B = ∅ then P yω ( F ) ≥ p for all y ∈ B .(b) Let h be harmonic in B . Then (4.2) max B h ≤ p − min B h. Proof. (a) Using ( ♣ ) and (3.1) we can make a Brownian approximation to β − n X ( n ) · which isgood enough so that this estimate holds.(b) Let y ∈ B be such that h ( y ) = max z ∈ B h ( z ). Then by the maximum principle thereexists a connected path γ from y to ∂ i B with h ( w ) ≥ h ( y ) for all w ∈ γ . Now let y (cid:48) ∈ B .On the event F the process X ( n ) must hit γ , and so we have h ( y (cid:48) ) ≥ P y (cid:48) ω ( F ) min γ h ≥ p h ( y ) , proving (4.2). (cid:3) Lemma 4.3.
For some n and c , for all n ≥ n , k ≥ , and ω such that / ∈ Γ n \ ∂ i Γ n , E ω ( U nk − S nk − | F S nk − ) ≤ c β n . (4.3) Proof.
Assume that ω is such that 0 / ∈ Γ n \ ∂ i Γ n . By the strong Markov property applied at S nk − for k >
1, it is enough to prove the Lemma for k = 1, that is that E xω ( U n ) ≤ c β n forall x / ∈ Γ n \ ∂ i Γ n . Let V = B ( u n + O n , β n + 1) ,A = ∂ i B ( u n + O n , (3 / β n ) ,A = ∂ i V ,A = ∂ i B ( u n + O n , β n ) T i = inf { t ≥ X ( n ) t ∈ A i } , i = 1 , , . Let Z be the continuous time Markov chain defined on V by (3.5), relative to the environment µ n . Note that the transition probabilities from x to one of its neighbors are the same for Z and X ( n ) if x is in the interior of V , i.e., x / ∈ ∂ i V ∪ ( Z \ V ). Note also that Z and X ( n − havethe same transition probabilities in the region between A and A . The expectations andprobabilities in this proof will refer to Z . By Lemma 3.4, there exists a probability measure ν on A such that E ν T ≤ r | V | . We have | V | ≤ c β n .To estimate r note that by the choice of the constants η n − and K n − in Theorem 2.2, theresistance (with respect to µ n − e ) between two opposite sides of any square in S n − will be1. It follows that the resistance between two opposite sides of any square side β n which is aunion of squares in S n − will also be 1. So, using Thompson’s principle as in [2] we deducethat r ≤ c .So, by Lemma 3.4 we have E ν T ≤ c β n . (4.4)We have for some c , p > n and x ∈ V \ B ( u n + O n , (3 / β n ), P xω ( T ∧ T ≤ c β n ) > p , because an analogous estimate holds for Brownian motion and ( ♣ ) we have (3.1). This anda standard argument based on the strong Markov property imply that for x ∈ A , E xω ( T ∧ T ) ≤ c β n . Now for y ∈ A and x ∈ V set ν x ( y ) = P xω ( X ( n ) ( T ∧ T ) = y ) . (Note that there exist x with (cid:80) y ∈ A ν x ( y ) < n ≥ n and x ∈ A , E xω ( T ) = E xω ( T ∧ T ) + E xω (( T − T ) T There exist c > and p < such that for all x, y ∈ Z , P xω (cid:0) R yn ≥ c b n (cid:1) ≤ p , (4.7) P xω (cid:32) sup ≤ t ≤ R yn | x − X ( n ) t | ≥ c b n (cid:33) ≤ p . (4.8) Proof. Recall that the family { µ n − x + · } x ∈ Z of translates of the environment µ n − · containsonly a finite number of distinct elements. Since each square in S n − contains one point in( y + a n − Z ), if b n /a n − is sufficiently large ( ♣ ) then using the transition density estimates(2.5) as well as (3.1), we obtain (4.7) and (4.8). (cid:3) Lemma 4.5. For some n and c , for all n ≥ n , k ≥ , and ω such that / ∈ Γ n \ ∂ i Γ n , E ω ( V nk − T nk − | F T nk − ) ≤ c b n n / . (4.9) Proof. Assume that ω is such that 0 / ∈ Γ n \ ∂ i Γ n . Let (cid:98) R nk = inf (cid:110) t ≥ U nk : X ( n ) t ∈ ( X ( n ) ( T n ) + a n − Z ) ∪ Γ n (cid:111) . Let F k = { (cid:98) R nk < S nk } and G k = (cid:84) kj =1 F cj . Since b n n / < β n for large n , we obtain from (4.8)and definitions of Γ n , Γ n , U nk and S nk that there exists p > x ∈ Γ n , P xω ( F k | F U nk ) > p . NVARIANCE PRINCIPLE 13 Hence, P xω ( G k ) < (1 − p ) k . (4.10)Note that if F k occurs then V n ≤ (cid:98) R nk . We have, using (4.3), (4.7) and (4.10), E ω ( V n − T n ) ≤ ∞ (cid:88) k =1 E ω (( U nk − S nk − ) G k − ) + ∞ (cid:88) k =1 E ω (( (cid:98) R nk − U nk ) G k − ) ≤ ∞ (cid:88) k =1 c β n (1 − p ) k − + ∞ (cid:88) k =1 c b n (1 − p ) k − ≤ c β n ≤ c b n n / . This proves the lemma for k = 1. The general case is obtained by applying this estimate tothe process shifted by T nk − ; in other words, by using the strong Markov property. (cid:3) Lemma 4.6. For every δ > there exists n such that for all n ≥ n , u ≥ a n , and ω suchthat / ∈ Γ n \ ∂ i Γ n , P ω (cid:0) σ n, u /u ≤ δ (cid:1) ≥ − δ/ . (4.11) Proof. Assume that ω is such that 0 / ∈ Γ n \ ∂ i Γ n . 
Fix an arbitrarily small δ > 0, consider u ≥ a n and let j ∗ = (cid:100) u/ ( b n n / ) (cid:101) . Then (4.9) implies that for some c and n , all n ≥ n , u ≥ a n , E ω (cid:32) j ∗ j ∗ (cid:88) j =1 V nj − T nj − (cid:33) ≤ c b n n / . Hence, for some n , all n ≥ n , u ≥ a n , P ω (cid:32) j ∗ j ∗ (cid:88) j =1 V nj − T nj − ≥ δb n n / (cid:33) ≤ δ/ , and, since j ∗ δb n n / ≤ δu , P ω (cid:32) j ∗ (cid:88) j =1 V nj − T nj − ≥ δu (cid:33) ≤ δ/ . (4.12)Recall K ( r ) from (3.3). Let (cid:98) V nk = inf { t ≥ V nk : X ( n ) t ∈ Z \ K ( b n n / ) } ∧ T nk , k ≥ , (cid:101) V nk = inf { t ≥ (cid:98) V nk : | X ( n ) t − X ( n ) ( (cid:98) V nk ) | ≥ (1 / b n n / } , k ≥ . We can use estimates for Brownian hitting probabilities ( ♣ ) to see that for some c , c and n , all n ≥ n , k , P ω ( (cid:98) V nk < T nk | F V nk ) ≥ c log(4 β n ) − log(2 β n )log(2 b n n / ) − log(2 β n ) ≥ c / log n. (4.13)There exist ( ♣ ) c and n , such that for all n ≥ n , k ≥ P ω ( T nk − V nk ≥ c b n n / | (cid:98) V nk < T nk , F (cid:98) V nk ) ≥ P ω ( (cid:101) V nk − (cid:98) V nk ≥ c b n n / | (cid:98) V nk < T nk , F (cid:98) V nk ) ≥ / . This and (4.13) imply that the sequence { T nk − V nk } k ≥ is stochastically minorized by asequence of i.i.d. random variables which take value c b n n / with probability c / log n andthey take value 0 otherwise. This implies that for some n , all n ≥ n , u ≥ a n , P ω (cid:32) j ∗ j ∗ (cid:88) j =2 T nj − V nj ≤ b n n / / log n (cid:33) ≤ δ/ j ∗ b n n / / log n ≥ u assuming n is large enough, P ω (cid:32) j ∗ (cid:88) j =2 T nj − V nj ≤ u (cid:33) ≤ δ/ . We combine this with (4.12) and the definition of σ n, u to obtain for some n , all n ≥ n , u ≥ a n , P ω ( σ n, u /u ≤ δ ) ≥ − δ/ . (4.14)This completes the proof of the lemma. (cid:3) Let Y nk = ( Y nk, , Y nk, ) = X ( n ) ( V nk +1 ) − X ( n ) ( T nk ). Set ¯ Y nk = sup T nk ≤ t ≤ V nk +1 | X ( n ) ( t ) − X ( n ) ( T nk ) | .For x ∈ Z , let Π n ( x ) ∈ B (cid:48) n − u n + O n be the unique point with the property that x − Π n ( x ) = a n y for some y ∈ Z .We next estimate the variance of X n, ( V nm +1 ) = (cid:80) mk =0 Y nk . Lemma 4.7. There exist c , c and n such that for all n ≥ n , k ≥ , j = 1 , , and ω , E ω | Y nk,j | ≤ E ω | Y nk | ≤ E ω | ¯ Y nk | ≤ c β n , (4.15) Var Y nk,j ≤ Var ¯ Y nk ≤ c β n , under P xω . (4.16) Proof. Let X ( n ) k ( t ) = X ( n ) t + Π n ( X ( n ) ( T nk )) − X ( n ) ( T nk ) , t ∈ [ T nk , V nk +1 ] , (4.17)and note that Y nk = ( Y nk, , Y nk, ) = X ( n ) k ( V nk +1 ) − X ( n ) k ( T nk ) . It follows from the definition that we have sup S nk − ≤ t ≤ U nk | X ( n ) ( t ) − X ( n ) ( S nk − ) | ≤ β n ,a.s. This, (4.8) and the definition of V nk +1 imply that | ¯ Y nk | is stochastically majorized by anexponential random variable with mean c β n . This easily implies the lemma. (cid:3) Next we will estimate the covariance of Y nk, and Y nj, for j (cid:54) = k . Lemma 4.8. There exist c , c and n such that for all n ≥ n , j < k − and ω such that / ∈ Γ n \ ∂ i Γ n , under P ω , Cov( Y nj, , Y nk, ) ≤ c e − c ( k − j ) β n . (4.18) Proof. Assume that ω is such that 0 / ∈ Γ n \ ∂ i Γ n . LetΓ n = Γ n ∩ B ( u n + O n , a n / 2) = B ( u n + O n , β n ) , Γ n = ∂ i B ( u n + O n , β n ) ,τ ( A ) = inf { t ≥ X ( n )0 ( t ) ∈ A } . NVARIANCE PRINCIPLE 15 Suppose that x, v ∈ Γ n and y ∈ Γ n . By the Harnack inequality proved in Lemma 4.2, P xω ( X ( n )0 ( τ (Γ n )) = y ) P vω ( X ( n )0 ( τ (Γ n )) = y ) ≥ c . 
(4.19)Let T nk have the same meaning as T nk but relative to the process X ( n ) k rather than X ( n ) . Weobtain from (4.19) and the strong Markov property applied at τ (Γ n ) that, for any x, v, y ∈ Γ n we have P xω ( X ( n )0 ( T n ) = y ) P vω ( X ( n )0 ( T n ) = y ) ≥ c . Recall that T n = 0. The last estimate implies that, for x, v, y ∈ Γ n , P ω ( X ( n )1 ( T n ) = y | X ( n )0 ( T n ) = x ) P ω ( X ( n )1 ( T n ) = y | X ( n )0 ( T n ) = v ) ≥ c . Since the process X ( n ) is time-homogeneous, this shows that for x, v, y ∈ Γ n and all k , P ω ( X ( n ) k +1 ( T nk +1 ) = y | X ( n ) k ( T nk ) = x ) P ω ( X ( n ) k +1 ( T nk +1 ) = y | X ( n ) k ( T nk ) = v ) ≥ c . (4.20)We now apply Lemma 6.1 of [8] (see Lemma 1 of [7] for a better presentation of the sameestimate) to see that (4.20) implies that there exist constants C k , k ≥ 1, such that for every k and all x, v, y ∈ Γ n , P xω ( X ( n ) k ( T nk ) = y ) P vω ( X ( n ) k ( T nk ) = y ) ≥ C k . Moreover, C k ∈ (0 , C k ’s depend only on c , and 1 − C k ≤ e − c k for some c > k .By time homogeneity of X ( n ) , for m ≤ j < k and all x, v, y, z ∈ Γ n , P zω ( X ( n ) k ( T nk ) = y | X ( n ) j ( T nj ) = x ) P zω ( X ( n ) k ( T nk ) = y | X ( n ) j ( T nj ) = v ) ≥ C k − j , and, by the strong Markov property applied at T nj , P zω ( X ( n ) k ( T nk ) = y | X ( n ) j ( T nj ) = x ) P zω ( X ( n ) k ( T nk ) = y | X ( n ) m ( T nm ) = v ) ≥ C k − j . This and (4.15) imply that for j < k − x ∈ Z , | E xω ( Y nk, − E xω Y nk, | F T nj +1 ) | = | E xω ( Y nk, | F T nj +1 ) − E xω Y nk, |≤ (1 − C k − j − ) sup y ∈ Z E yω | Y nk, |≤ e − c ( k − j − c β n ≤ c e − c ( k − j ) β n . (4.21) Hence for j < k − Y nj, , Y nk, ) = E xω (( Y nj, − E xω Y nj, )( Y nk, − E xω Y nk, ))= E xω ( E xω (( Y nj, − E xω Y nj, )( Y nk, − E xω Y nk, ) | F T nj +1 ))= E xω (( Y nj, − E xω Y nj, ) E xω ( Y nk, − E xω Y nk, | F T nj +1 )) ≤ E xω ( | Y nj, − E xω Y nj, | · | E xω ( Y nk, − E xω Y nk, | F T nj +1 ) | ) ≤ E xω | Y nj, | c e − c ( k − j ) β n ≤ c e − c ( k − j ) β n . (cid:3) Proof of Proposition 4.1. Assume that ω is such that 0 / ∈ Γ n \ ∂ i Γ n . We combine (4.18) and(4.16) to see that for some c and c and all m ≥ 1, we have under P ω ,Var (cid:32) m (cid:88) k =0 Y nk, (cid:33) = m (cid:88) j =0 m (cid:88) k =0 Cov( Y nj, , Y nk, )(4.22) ≤ m (cid:88) j =0 m (cid:88) k =0 c e − c ( k − j ) β n ≤ c mβ n . For fixed n and ω , the process { X ( n ) k ( T nk ) , k ≥ } is Markov with a finite state space andone communicating class, so it has a unique stationary distribution. We will call it p ( n ).We will argue that E p ( n ) ω Y nk, = 0. Since X ( n ) and X ( n − satisfy the quenched invarianceprinciple and they are random walks among symmetric (in distribution) conductances, theyhave zero means. Recall that X ( n ) = X n, + X n, and (cid:98) X n, has the same distribution as X ( n − . It follows that for some c > c < / t , we have P p ( n ) ω (cid:18) sup ≤ s ≤ t | (cid:98) X n, s | ≥ c √ t (cid:19) = P p ( n ) ω (cid:18) sup ≤ s ≤ t | X ( n − s | ≥ c √ t (cid:19) < c . Since (cid:98) X n, t = X n, ( (cid:98) σ n, t ) and (cid:98) σ n, t ≥ t , the last estimate implies that P p ( n ) ω (cid:18) sup ≤ s ≤ t | X n, s | ≥ c √ t (cid:19) < c . We also have for some c > c < / 4, and all large t , P p ( n ) ω (cid:18) sup ≤ s ≤ t | X ( n ) s | ≥ c √ t (cid:19) < c . Since X n, = X ( n ) − X n, , we obtain for some c > c < / t , P p ( n ) ω (cid:18) sup ≤ s ≤ t | X n, s | ≥ c √ t (cid:19) < c . This shows that X n, does not have a linear drift. 
It is clear from the law of large numbersthat lim inf t →∞ σ n, t /t > 0, so (cid:98) X n, does not have a linear drift either. We conclude that E p ( n ) ω Y nk, = 0. NVARIANCE PRINCIPLE 17 Now suppose that X ( n )0 does not necessarily have the distribution p ( n ). The fact that E p ( n ) ω Y nk, = 0 and a calculation similar to that in (4.21) imply that, | E ω Y nk, | ≤ c e − c k β n . Let c be the constant denoted c in (4.15). The last estimate and (4.15) imply that forsome c and all m ≥ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) E ω m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:88) k ≥ | E ω Y nk, | + sup k ≥ E ω | ¯ Y nk |≤ (cid:88) k ≥ c e − c k β n + c β n ≤ c β n . (4.23)All estimates that we derived for Y nk, ’s apply to Y nk, ’s as well, by symmetry.Note that | X ( n ) ( U nk +1 ) − X ( n ) ( T nk ) | ≥ β n / 2. We have V nk +1 − T nk ≥ U nk +1 − T nk so we canassume ( ♣ ) that b n /a n − is so large that for some p > n , for all n ≥ n and k ≥ P xω ( V nk +1 − T nk ≥ β n | F T nk ) ≥ p . Let V m be a binomial random variable with parameters m and p . We see that σ n, ( V nm ) = (cid:80) mk =0 V nk +1 − T nk is stochastically minorized by β n V m .Recall that u ≥ a n . Let m be the smallest integer such that P ω ( V nm ≤ u ) < δ/ . (4.24)Then P ω ( V nm − ≤ u ) ≥ δ/ . (4.25)Since δ in (4.14) can be arbitrarily small, we have for for some n and all n ≥ n , P ω ( σ n, u /u ≤ δ ) ≥ − δ/ . (4.26)The following estimate follows from the fact that σ n, ( V nm − ) is stochastically minorized by β n V m − , and from (4.25)-(4.26), P ω ( β n V m − ≤ δ u ) ≥ P ω ( σ n, ( V nm − ) ≤ δ u ) ≥ P ω ( σ n, u ≤ δ u, V nm − ≤ u ) ≥ δ/ . This implies that for some c , we have m ≤ c δ u/β n . In other words, u ≥ m β n / ( c δ ).Note that for a fixed δ , we have for large n , ( ♣ ) u / δ/ − c β n ≥ u / δ/ 8. These observations, (4.22), (4.23) and the Chebyshev inequality imply that for m ≤ m , P ω (cid:32) u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≥ δ/ (cid:33) (4.27) ≤ P ω (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ u / δ/ (cid:33) + P ω (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ u / δ/ (cid:33) ≤ P ω (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, − E ω m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ u / δ/ − c β n (cid:33) + P ω (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, − E ω m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ u / δ/ − c β n (cid:33) ≤ Var (cid:0)(cid:80) mk =0 Y nk, (cid:1) uδ / 64 + Var (cid:0)(cid:80) mk =0 Y nk, (cid:1) uδ / ≤ c m β n ( c − δ − m β n ) δ / ≤ c δ. Let M = min { m ≥ u − / (cid:0)(cid:12)(cid:12)(cid:80) mk =0 Y nk, (cid:12)(cid:12) + (cid:12)(cid:12)(cid:80) mk =0 Y nk, (cid:12)(cid:12)(cid:1) ≥ δ } . 
By the strong Markovproperty applied at M and (4.27), P ω (cid:32) sup ≤ m ≤ m u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≥ δ, u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≤ δ/ (cid:33) (4.28) ≤ P ω (cid:32) u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m − M (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m − M (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≥ δ/ | M < m (cid:33) ≤ c δ. Recall that u ≥ m β n / ( c δ ). For a fixed δ and large n , ( ♣ ) u / δ − c β n ≥ u / δ/ 2. Itfollows from this, (4.15) and (4.16) that P ω (cid:0) ∃ k ≤ m : | ¯ Y nk | ≥ u / δ (cid:1) ≤ m sup k ≤ m P ω (cid:0) | ¯ Y nk | ≥ u / δ (cid:1) (4.29) ≤ m sup k ≤ m P ω (cid:0) | ¯ Y nk | − E ω | ¯ Y nk | ≥ u / δ − c β n (cid:1) ≤ m c β n uδ / ≤ m c β n ( c − δ − m β n ) δ ≤ c δ. NVARIANCE PRINCIPLE 19 We use (4.24), (4.27), (4.28) and (4.29) to obtain P ω (cid:18) sup ≤ s ≤ u u − / | X n, s | ≥ δ (cid:19) ≤ P ω ( V nm ≤ u ) + P ω (cid:32) u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≥ δ/ (cid:33) + P ω (cid:32) sup ≤ m ≤ m u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≥ δ, u − / (cid:32)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m (cid:88) k =0 Y nk, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:33) ≤ δ/ (cid:33) + P ω (cid:0) ∃ k ≤ m : | ¯ Y nk | ≥ u / δ (cid:1) ≤ δ/ c δ + c δ + c δ. Since δ > δ > 0, some n and all n ≥ n , P ω (cid:18) sup ≤ s ≤ u u − / | X n, s | ≥ δ (cid:19) ≤ δ/ . This and (4.11) yield the proposition. (cid:3) Recall from (1.2) the definition of the averaged measure P . Lemma 4.9. For every δ > there exists n such that for all n ≥ n and u ≥ a n , P (cid:18) σ n, u /u ≤ δ, sup ≤ s ≤ u u − / | X n, s | ≤ δ (cid:19) ≥ − δ. (4.30) Proof. By Proposition 4.1 applied to δ/ δ , for every δ > n suchthat for all n ≥ n , u ≥ a n , and ω such that 0 / ∈ Γ n \ ∂ i Γ n , P ω (cid:18) σ n, u /u ≤ δ, sup ≤ s ≤ u u − / | X n, s | ≤ δ (cid:19) ≥ − δ/ . (4.31)Let | A | denote the cardinality of A ⊂ Z . Since | Γ n | ≤ β n ≤ a n n − / = 25 n − / | B (cid:48) n | ,the definitions of O n and Γ n imply that P (0 ∈ Γ n \ ∂ i Γ n ) < δ/ n ≥ n and all n ≥ n . This and (4.31) imply (4.30). (cid:3) In the following lemma and its proof, when we write the Prokhorov distance between pro-cesses such as { (1 /a ) X ( n − ta , t ∈ [0 , } , we always assume that they are distributed accordingto P . Lemma 4.10. There exists a function ρ ∗ : (0 , ∞ ) → (0 , ∞ ) with lim δ ↓ ρ ∗ ( δ ) = 0 and asequence { a n } with the following properties, d ( { (1 /a ) X ( n − ta , t ∈ [0 , } , P BM ) ≤ − n , a ≥ a n . 
(4.32) Moreover, suppose that for δ < / and all u ≥ a n , P (cid:18) σ n, u /u ≤ δ, sup ≤ s ≤ u u − / | X n, s | ≤ δ (cid:19) ≥ − δ. (4.33) Then d ( { (1 /a ) X ( n ) ta , t ∈ [0 , } , P BM ) ≤ − n + ρ ∗ ( δ ) , for all a ≥ a n . Proof. Formula (4.32) is special case of (3.1).Fix some a ≥ a n . We will apply (4.33) with u = a . Note that on the event in (4.33) wehave 1 − σ n, a /a = u/u − σ n, u /u = σ n, u /u ≤ δ. (4.34)The function t → σ n, ta /a is Lipschitz with the constant 1 and σ n, ta /a ≤ t so (4.34) impliesfor t ∈ [0 , t − σ n, ta /a ≤ − σ n, a /a ≤ δ. (4.35)Recall the function ρ ( δ ) from the proof of Lemma 3.2, such that P BM (Osc( W, δ ) ≥ ρ ( δ )) <ρ ( δ ) and lim δ ↓ ρ ( δ ) = 0. By (4.35), we can apply Lemma 3.2 with σ t = σ n, ta /a . Recall that W ∗ ( t ) = W ( σ t ). By the definition of (cid:98) X n, , d ( { (1 /a ) X n, ta , t ∈ [0 , } , P BM ) ≤ d ( { (1 /a ) X n, t/a , t ∈ [0 , } , { W ∗ t , t ∈ [0 , } ) + d ( { W ∗ t , t ∈ [0 , } , P BM ) ≤ d ( { (1 /a ) X n, ta , t ∈ [0 , } , { W ∗ t , t ∈ [0 , } ) + ρ ( δ ) + δ = d ( { (1 /a ) (cid:98) X n, ( σ n, ta ) , t ∈ [0 , } , { W ( σ n, ta /a ) , t ∈ [0 , } ) + ρ ( δ ) + δ. (4.36)Recall from (3.4) that for a fixed ω ∈ Ω, the distribution of { (cid:98) X n, t , t ≥ } is the sameas that of { X n − t , t ≥ } . In view of Theorem 2.2, we can make a n so large ( ♣ ) that P (Osc( (cid:98) X n, , δ ) ≥ ρ ( δ )) < ρ ( δ ). This, Lemma 3.1 and the definition of the Prokhorovdistance imply that d ( { (1 /a ) (cid:98) X n, ( σ n, ta ) , t ∈ [0 , } , { W ( σ n, ta /a ) , t ∈ [0 , } ) ≤ d ( { (1 /a ) (cid:98) X n, ta , t ∈ [0 , } , { W t , t ∈ [0 , } ) + 4 ρ ( δ )= d ( { (1 /a ) X ( n − ta , t ∈ [0 , } , { W t , t ∈ [0 , } ) + 4 ρ ( δ ) ≤ − n + 4 ρ ( δ ) . In the final two lines line we used (3.4) and (4.32).Combining the estimates above, since P ω (cid:0) sup ≤ s ≤ u u − / | X n, s | ≤ δ (cid:1) ≥ − δ and X ( n ) = X n, + X n, , Lemma 3.3 shows that d ( { (1 /a ) X ( n ) ta , t ∈ [0 , } , P BM ) ≤ d ( { (1 /a ) X ( n ) ta , t ∈ [0 , } , { (1 /a ) X n, ta , t ∈ [0 , } )+ d ( { (1 /a ) X n, ta , t ∈ [0 , } , P BM ) ≤ δ + 2 − n + 5 ρ ( δ ) + δ. We conclude that the lemma holds if we take ρ ∗ ( δ ) = 5 ρ ( δ ) + 2 δ . (cid:3) Proof of Theorem 1.4. Choose an arbitrarily small ε > 0. We will show that there exists a ∗ such that for every a ≥ a ∗ , d ( { (1 /a ) X ta , t ∈ [0 , } , P BM ) ≤ ε. (4.37)Recall ρ ∗ from Lemma 4.10. Let n be such that 2 − n ≤ ε/ δ > − n + ρ ∗ ( δ ) < ε/ 2. Let n be defined as n in Lemma 4.9, relative to this δ . Then, according NVARIANCE PRINCIPLE 21 to Lemma 4.10, d ( { (1 /a ) X nta , t ∈ [0 , } , P BM ) ≤ − n + ρ ∗ ( δ ) < ε/ , (4.38)for all n ≥ n := n ∨ n and a ≥ a n .For a set K let B ( K, r ) = { z : dist( z, K ) < r } and recall the definition of D n given in(2.1). Let F = { ∈ B ( D n +1 , a n +1 / log( n + 1)) } ,F = { / ∈ B ( D n +1 , a n +1 / log( n + 1)) } ∩ {∃ t ∈ [0 , a n +1 ] : X ( n ) t ∈ D n +1 } ,G k = { ∈ B ( D k , b k /k ) } , k > n + 1 ,G k = { / ∈ B ( D k , b k /k ) } ∩ {∃ t ∈ [0 , a n +1 ] : X ( n ) t ∈ D k } , k > n + 1 . The area of B ( D n +1 , a n +1 / log( n + 1)) is bounded by c ( a n +1 / log( n + 1)) so P ( F ) ≤ c ( a n +1 / log( n + 1)) /a n +1 = c / log ( n + 1) . (4.39)We choose n > n such that c / log ( n + 1) < ε/ n ≥ n .Note that D n +1 is a subset of a square with side 4 β n +1 ≤ a n +1 n − / . This easily impliesthat there exists n ≥ n such that for n ≥ n , P BM (cid:0) ∃ t ∈ [0 , a n +1 ] : W ( t ) ∈ D n +1 | / ∈ B ( D n +1 , a n +1 / log( n + 1)) (cid:1) ≤ ε/ . 
We can assume ( ♣ ) that a n +1 /a n is so large that for some n ≥ n and all n ≥ n , P ( F ) ≤ P (cid:16) ∃ t ∈ [0 , a n +1 ] : X ( n ) t ∈ D n +1 | / ∈ B ( D n +1 , a n +1 / log( n + 1)) (cid:17) (4.40) ≤ ε/ . (4.41)The area of B ( D k , b k /k ) is bounded by c b k /k so P ( G k ) ≤ ( c b k /k ) /a k ≤ c ( b k /k ) / ( kb k ) = c /k . (4.42)We let n > n be so large that (cid:80) k ≥ n c /k < ε/ 8. For all k > n + 1 ≥ n + 1, we make b k /k so large ( ♣ ) that P ( G k ) ≤ P (cid:32) sup t ∈ [0 ,a n +1 ] | X nt | ≥ b k /k (cid:33) ≤ c /k . (4.43)We combine (4.39), (4.40), (4.42) and (4.43) to see that for n ≥ n , P ( ∃ t ∈ [0 , a n +1 ] ∃ k ≥ n + 1 : X ( n ) t ∈ D k )(4.44) ≤ P ( F ) + P ( F ) + (cid:88) k>n +1 P ( G k ) + (cid:88) k>n +1 P ( G k ) ≤ ε/ ε/ ε/ ε/ ε/ . Let R n +1 = inf { t ≥ X t ∈ (cid:83) k ≥ n +1 D k } . It is standard to construct X and X ( n ) on acommon probability space so that X t = X nt for all t ∈ [0 , R n +1 ). This and (4.44) imply thatfor n ≥ n and all a ∈ [ a n , a n +1 ] we have P ( ∃ t ∈ [0 , 1] : (1 /a ) X ta (cid:54) = (1 /a ) X ( n ) ta ) ≤ ε/ . We combine this with (4.38) to see that for all a ≥ a n , d ( { (1 /a ) X ta , t ∈ [0 , } , P BM ) ≤ ε/ ε/ ε. We conclude that (4.37) holds with a ∗ = a n .This completes the proof of AFCLT. The WFCLT then follows from Theorem 2.13 of[4]. (cid:3) References [1] D. Aldous and J. Fill, Reversible Markov Chains and Random Walks on Graphs (book in preparation,available online) [2] M. T. Barlow and R. F. Bass. On the resistance of the Sierpinski carpet. Proc. R. Soc. London A. (1990) 345-360.[3] M.T. Barlow and J.-D. Deuschel. Invariance principle for the random conductance model with unboundedconductances. Ann. Probab. (2010), 234-276[4] M.T. Barlow, K. Burdzy, ´A. Tim´ar. Comparison of quenched and annealed invariance principles forrandom conductance model. Preprint 2013. Math arXiv 1304.3498 .[5] M.T. Barlow, X. Zheng. The random conductance model with Cauchy tails. Ann. Applied Probab. (2010), 869–889.[6] P. Billingsley, Convergence of probability measures . Second edition. Wiley Series in Probability andStatistics: Probability and Statistics. A Wiley-Interscience Publication. John Wiley & Sons, Inc., NewYork, 1999.[7] K. Burdzy and D. Khoshnevisan, Brownian motion in a Brownian crack Ann. Appl. Probab. (1998),708–748.[8] K. Burdzy, E. Toby and R.J. Williams, On Brownian excursions in Lipschitz domains. Part II. Localasymptotic distributions, in Seminar on Stochastic Processes 1988 (E. Cinlar, K.L. Chung, R. Getoor,J. Glover, editors), 1989, 55–85, Birkh¨auser, Boston.[9] A. De Masi, P.A. Ferrari, S. Goldstein, W.D. Wick. An invariance principle for reversible Markovprocesses. Applications to random motions in random environments. J. Statist. Phys. (1989), 787–855.(1989), 787–855.