The velocity of 1D Mott variable range hopping with external field
ALESSANDRA FAGGIONATO, NINA GANTERT, AND MICHELE SALVI
Abstract.
Mott variable range hopping is a fundamental mechanism for low-temperature electron conduction in disordered solids in the regime of Anderson localization. In a mean field approximation, it reduces to a random walk (shortly, Mott random walk) on a random marked point process with possible long-range jumps. We consider here the one-dimensional Mott random walk and we add an external field (or a bias to the right). We show that the bias makes the walk transient, and investigate its linear speed. Our main results are conditions for ballisticity (positive linear speed) and for sub-ballisticity (zero linear speed), and the existence in the ballistic regime of an invariant distribution for the environment viewed from the walker, which is mutually absolutely continuous with respect to the original law of the environment. If the point process is a renewal process, the aforementioned conditions result in a sharp criterion for ballisticity. Interestingly, the speed is not always continuous as a function of the bias.
Keywords: random walk in random environment, disordered media, ballisticity, environment viewed from the walker, electron transport in disordered solids.
AMS Subject Classification: 60K37, 82D30, 60G50, 60G55.

1. Introduction
Mott variable range hopping is a fundamental mechanism at the basis of low-temperature electron conduction in disordered solids (e.g. doped semiconductors) in the regime of Anderson localization (see [2, 18, 19, 21, 24]). By localization, and using a mean-field approximation, Mott variable range hopping can be described by a suitable random walk $(Y_t)_{t\ge 0}$ in a random environment $\omega$. The environment $\omega$ is given by a marked simple point process $\{(x_i, E_i)\}_{i\in\mathbb{Z}}$ with law $\mathbb{P}$. The sites $x_i\in\mathbb{R}^d$ correspond to the points in the disordered solid around which the conduction electrons are localized, and $E_i\in[-A,A]$ is the ground energy of the associated localized wave function. The random walk $Y_t$ has state space $\{x_i\}$ and can jump from a site $x_i$ to any other site $x_k\neq x_i$ with probability rate
$$r_{x_i,x_k}(\omega) := \exp\{-|x_i-x_k| - \beta(|E_i|+|E_k|+|E_i-E_k|)\},$$
$\beta$ being the inverse temperature. We refer to [5, 6, 7, 13, 14] for rigorous results on the random walk $Y_t$, including the stretched exponential decay of the diffusion matrix as $\beta\to\infty$ in accordance with the physical Mott law for $d\ge 2$.
Here we focus on the one-dimensional case, i.e. $\{x_i\}_{i\in\mathbb{Z}}\subset\mathbb{R}$ (we order the sites $x_i$ in increasing order, with $x_0=0$), and study the effect of applying an external field. This corresponds to modifying the above jump rates $r_{x_i,x_k}(\omega)$ by a factor $e^{\lambda(x_k-x_i)}$, where $\lambda\in(0,1)$ has to be interpreted as the intensity of the external field.
Moreover, we generalize the form of the jump rates, finally taking
$$r^\lambda_{x_i,x_k}(\omega) := \exp\{-|x_i-x_k| + \lambda(x_k-x_i) + u(E_i,E_k)\},$$
with $u$ a symmetric bounded function. For simplicity, we keep the same notation $Y_t$ for the resulting random walk starting at the origin.

Under rather weak assumptions on the environment, we will show that $Y_t$ is a.s. transient for almost every environment $\omega$ (cf. Theorem 1-(i)). In the rest of Theorem 1 we give two conditions in terms of the exponential moments of the inter-point distances, both assuring that the asymptotic velocity $v_Y(\lambda) := \lim_{t\to\infty} Y_t/t$ is well defined and almost surely constant, that is, it does not depend on the realization of $\omega$. Call $\mathbb{E}$ the expectation with respect to $\mathbb{P}$. The first condition, namely $\mathbb{E}\big[e^{(1-\lambda)(x_1-x_0)}\big]<\infty$ and $u$ continuous, implies ballisticity, i.e. $v_Y(\lambda)>0$.
The second condition, namely $\mathbb{E}\big[e^{(1-\lambda)(x_1-x_0)-(1+\lambda)(x_0-x_{-1})}\big]=\infty$, implies sub-ballisticity, i.e. $v_Y(\lambda)=0$. In particular, if the points $\{x_i\}_{i\in\mathbb{Z}}$ are given by a renewal process, our two conditions give a sharp dichotomy (when $u$ is continuous). We point out that there are cases in which $v_Y(\lambda)$ is not continuous in $\lambda$ (see Example 2 in Subsection 2.2).

Under the condition leading to ballisticity we also show that the Markov process given by the environment viewed from the walker admits a stationary ergodic distribution $Q_\infty$, which is mutually absolutely continuous with respect to the original law $\mathbb{P}$ of the environment. Moreover, we give upper and lower bounds on the Radon-Nikodym derivative $\frac{dQ_\infty}{d\mathbb{P}}$, showing that it is in $L^1(\mathbb{P})$, and we characterize the asymptotic velocity as the expectation of the local drift with respect to the measure $Q_\infty$ (cf. Theorem 2).

The study of ballisticity for the Mott random walk is the first fundamental step towards proving the Einstein Relation, which states the proportionality of diffusivity and mobility of the process (see e.g. [16]). Among other important applications, the Einstein Relation would allow to conclude the proof of the physical Mott law, which was originally stated for the mobility of the process and has only been proved for its diffusivity (see [5], [13] and [14]). The Einstein Relation will be addressed in future work.

The techniques used to prove ballisticity and sub-ballisticity are different. In order to comment on them it is convenient to refer to the discrete-time random walk $(X_n)_{n\in\mathbb{N}}$ on $\mathbb{Z}$ (we use the convention $\mathbb{N}_+:=\{1,2,\dots\}$ and $\mathbb{N}:=\{0,1,2,\dots\}$) such that $X_n=i$ if after $n$ jumps the random walk $Y_t$ is at site $x_i$. Due to our assumptions on the environment, the ballistic/sub-ballistic behavior of $(Y_t)_{t\ge 0}$ is indeed the same as that of $(X_n)_{n\in\mathbb{N}}$, and therefore we focus on the latter.

We first comment on the ballistic regime. Considering first a generic random walk on $\mathbb{Z}$ starting at the origin and a.s. transient to the right, ballisticity is usually derived by proving a law of large numbers (LLN) for the hitting times $(T_n)_{n\ge 1}$, where $T_n$ is the first time the random walk reaches the half-line $[n,+\infty)$. In the case of nearest neighbor random walks, $T_n$ is simply the hitting time of $n$, and considering an ergodic environment one can derive the LLN for $(T_n)_{n\ge 1}$ by showing that the sequence $(T_{n+1}-T_n)_{n\ge 1}$ is stationary and mixing for the annealed law as in [1, 25]. This technique cannot be applied in the present case, since our random walk has infinite range and much information about the environment to the right is already known when a site in $[n,+\infty)$ is visited for the first time. A very useful tool is the method developed in [8], where the authors have studied ballisticity for a class of random walks on $\mathbb{Z}$ with arbitrarily long jumps. Their strategy is as follows. First one introduces for any positive integer $\rho$ a truncated random walk obtained from the original one by forbidding all jumps of length larger than $\rho$. The ergodicity of the environment and the finite range of the jumps allow to introduce a regenerative structure related to the times $T_{\rho n}$, and to analyze the asymptotic behavior of the $\rho$-truncated random walk. In particular, one proves that the environment viewed from the $\rho$-truncated random walk admits a stationary ergodic distribution $Q_\rho$ which is mutually absolutely continuous with respect to the original law of the environment. A basic ingredient here is the theory of cycle-stationarity and cycle-ergodicity (cf. [27, Chapter 8] and [9] for an example in a simplified setting).
Finally, one proves that the sequence $(Q_\rho)_{\rho\in\mathbb{N}_+}$ converges weakly to a probability distribution $Q_\infty$, which is indeed a stationary and ergodic distribution for the environment viewed from the random walker $(X_n)_{n\in\mathbb{N}}$ and is also mutually absolutely continuous with respect to the law of the environment $\mathbb{P}$. Since, as usual, the random walk can be written as an additive functional of the environment viewed from the random walker, one can apply Birkhoff's ergodic theorem and use the ergodicity of $Q_\infty$ to get the strong LLN for the random walk (hence its asymptotic velocity) for $Q_\infty$-a.e. environment. Using the fact that $\mathbb{P}\ll Q_\infty$, the above strong LLN holds for $\mathbb{P}$-a.e. environment, too. Finally, since the velocities of the $\rho$-truncated walks are uniformly bounded from below by a strictly positive constant and since they converge to the velocity of $(X_n)_{n\in\mathbb{N}}$ when $\rho\to\infty$, we obtain a ballistic behavior.

To analyze ballisticity we have used the same method as in [8], although one cannot apply [8, Theorems 2.3, 2.4] directly to the present case, since some hypotheses are not satisfied in our context. In particular, in [8] three conditions (called E, C, D) are assumed, and only condition C is satisfied by our model. By means of estimates based on capacity theory, we are able to extend the method developed in [8] to the present case.

We now move to sub-ballisticity (the regime of zero velocity is not covered in [8], and our method could in principle be applied to random walks on $\mathbb{Z}$ with arbitrarily long jumps). We define a coupling between the random walk $(X_n)_{n\ge 0}$, a sequence of suitable $\mathbb{N}_+$-valued i.i.d. random variables $\xi_1,\xi_2,\dots$ with finite mean, and an ergodic sequence of random variables $S_0,S_1,\dots$ with the following properties. Fix $\omega$ and call now $T_{k+1}$ the first time the random walk overjumps the point $\xi_1+\cdots+\xi_k$. $S_k$ is a geometric random variable of parameter $s_k = s(\tau_{\xi_1+\cdots+\xi_k}\omega)$, where $\tau_\cdot$ is the usual shift and $s$ a deterministic function. The coupling guarantees that $X_{T_{k+1}}$ does not exceed $\xi_1+\cdots+\xi_k+\xi_{k+1}$ and also ensures that the time $T_{k+1}-T_k$ is larger than $S_k$. Notice that
$$\frac{X_n}{n} \le \frac{X_{T_{k+1}}}{T_k} \le \frac{\xi_1+\cdots+\xi_{k+1}}{S_0+S_1+\cdots+S_{k-1}} \quad \text{if } T_k\le n<T_{k+1}, \qquad (1)$$
and therefore the sub-ballisticity of $(X_n)_{n\ge 0}$ follows from the LLN for $(\xi_k)_{k\in\mathbb{N}_+}$ and the LLN for $(S_k)_{k\ge 0}$, since our assumption $\mathbb{E}\big[e^{(1-\lambda)(x_1-x_0)-(1+\lambda)(x_0-x_{-1})}\big]=\infty$ implies that $\mathbb{E}[1/s(\omega)]=+\infty$.

1.1. Outline.
In Section 2 we rigorously introduce the (perturbed) Mott random walk in its continuous and discrete-time versions. Theorem 1 states the transience to the right and gives conditions implying ballisticity or sub-ballisticity. Theorem 2 deals with the Radon-Nikodym derivative of the invariant measure for the environment viewed from the walker with respect to the original law $\mathbb{P}$ of the environment and gives a characterization of the limiting speed of the walk. Subsection 2.1 comments on the assumptions we made for the Theorems, while two important (counter-)examples can be found in Subsection 2.2.

In Section 3 we collect results on the $\rho$-truncated walks. Estimates of the effective conductances induced by these walks and of the time they spend on a given interval are carried out in Subsections 3.1 and 3.2, respectively. In Subsection 3.3 we show that the probability for them to hit a specific site to the right is uniformly bounded from below. Section 4 introduces the regenerative structure for the $\rho$-truncated random walks and in Subsection 4.1 we give estimates on the regeneration times. The existence and positivity of the limiting speed for the truncated walks is proven in Subsection 4.2.

In Section 5 we characterize the density of the invariant measure for the process viewed from the $\rho$-truncated walker with respect to the original law of the environment.
In Subsection 5.1 we bound the Radon-Nikodym derivative from above by an $L^1$ function, while in Subsection 5.2 we give a uniform lower bound. In Subsection 5.3 we finally pass to the limit $\rho\to\infty$, obtaining an invariant measure for the environment viewed from the non-truncated walker, and show that it is also absolutely continuous with respect to $\mathbb{P}$ (see Lemma 5.9).

To conclude, in Sections 6, 7 and 8 we prove, respectively, parts (i), (ii) and (iii) of Theorem 1. Some technical results are collected in the Appendixes A, B and C.

2. Mott random walk and main results
One-dimensional Mott random walk is a particular random walk in a random environment. The environment $\omega$ is given by a double-sided sequence $(Z_k,E_k)_{k\in\mathbb{Z}}$ of random variables, with $Z_k\in(0,+\infty)$ and $E_k\in\mathbb{R}$ for all $k\in\mathbb{Z}$. We denote by $\Omega$ the space of all environments, by $\mathbb{P}$ and $\mathbb{E}$ the law of the environment and the associated expectation, respectively. Given $\ell\in\mathbb{Z}$, we define the shifted environment $\tau_\ell\omega$ as $\tau_\ell\omega := (Z_{k+\ell},E_{k+\ell})_{k\in\mathbb{Z}}$. Our main assumptions on the environment are the following:

(A1) The sequence $(Z_k,E_k)_{k\in\mathbb{Z}}$ is stationary and ergodic with respect to shifts;
(A2) $\mathbb{E}[Z_0]$ is finite;
(A3) $\mathbb{P}(\omega=\tau_\ell\omega)=0$ for all $\ell\in\mathbb{Z}\setminus\{0\}$;
(A4) There exists $d>0$ such that $\mathbb{P}(Z_0\ge d)=1$.

We postpone to Subsection 2.1 some comments on the above assumptions.

It is convenient to introduce the sequence $(x_k)_{k\in\mathbb{Z}}$ of points on the real line, where $x_0=0$ and $x_{k+1}=x_k+Z_k$. Then the environment $\omega$ can also be thought of as the marked simple point process $(x_k,E_k)_{k\in\mathbb{Z}}$, which will be denoted again by $\omega$ (with some abuse of notation). In this case, the $\ell$-shift reads $\tau_\ell\omega = (x_{k+\ell}-x_\ell, E_{k+\ell})_{k\in\mathbb{Z}}$. For physical reasons, $E_k$ is called the energy mark associated to the point $x_k$, while $Z_k$ is the interpoint distance (between $x_k$ and $x_{k+1}$).

Fix now a symmetric and bounded (from below by $u_{\min}$ and from above by $u_{\max}$) measurable function $u:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$. Given an environment $\omega$, the Mott random walk $(Y_t)_{t\ge 0}$ is the continuous-time random walk on $\{x_k\}_{k\in\mathbb{Z}}$ with probability rate
$$r_{x_i,x_k}(\omega) := \exp\{-|x_i-x_k| + u(E_i,E_k)\} \qquad (2)$$
for a jump from $x_i$ to $x_k\neq x_i$. For convenience, we set $r_{x,x}(\omega)\equiv 0$.
Note that the Mott walk is well defined for $\mathbb{P}$-a.a. $\omega$. Indeed, since the interpoint distance is a.s. at least $d$ and the function $u$ is uniformly bounded, the holding time parameter $r_x(\omega) := \sum_y r_{x,y}(\omega)$ can be bounded from above by a constant $C>0$ not depending on $x\in\{x_k\}_{k\in\mathbb{Z}}$, hence explosion does not take place.

We now introduce a bias $\lambda$ which corresponds to the intensity of the external field. For a fixed $\lambda\in[0,1)$, the biased Mott random walk $(Y_t)_{t\ge 0}$ with environment $\omega$ is the continuous-time random walk on $\{x_k\}_{k\in\mathbb{Z}}$ with probability rates
$$r^\lambda_{x,y}(\omega) = e^{\lambda(y-x)}\, r_{x,y}(\omega) \qquad (3)$$
for a jump from $x$ to $y\neq x$. For convenience, we set $r^\lambda_{x,x}(\omega)\equiv 0$ and $r^\lambda_x(\omega) := \sum_y r^\lambda_{x,y}(\omega)$. When $\lambda=0$, one recovers the original Mott random walk. Since, for $\lambda\in(0,1)$, $r^\lambda_x(\omega)\le\sum_{k\in\mathbb{Z}} e^{-(1-\lambda)|k|d+u_{\max}}<\infty$, the biased Mott random walk with environment $\omega$ is well defined for $\mathbb{P}$-a.a. $\omega$.
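To make the rates (2)-(3) concrete, the following sketch (an illustration, not part of the paper's arguments) samples a finite window of an i.i.d. environment, builds the biased rates $r^\lambda_{x_0,x_k}$ at the origin, and normalizes them into one-step jump probabilities; the window size, the specific law of $(Z_k,E_k)$ and the choice of $u$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative environment: i.i.d. gaps Z_k >= d and i.i.d. energy marks E_k in [-1, 1] ---
N, d, lam, beta = 200, 1.0, 0.3, 1.0   # window half-width, minimal gap, bias, inverse temperature
Z = d + rng.exponential(scale=1.0, size=2 * N)
x = np.concatenate(([0.0], np.cumsum(Z))) - Z[:N].sum()   # points x_{-N},...,x_N with x_0 = 0 at index N
E = rng.uniform(-1.0, 1.0, size=x.size)

def u(Ei, Ek):
    # the physical choice from the Introduction (here with beta = 1); any symmetric bounded u would do
    return -beta * (np.abs(Ei) + np.abs(Ek) + np.abs(Ei - Ek))

def jump_probabilities(i):
    """One-step probabilities from x_i, i.e. the normalized biased rates r^lam_{x_i, x_k}."""
    rates = np.exp(-np.abs(x - x[i]) + lam * (x - x[i]) + u(E[i], E))
    rates[i] = 0.0                     # no self-jump
    return rates / rates.sum()         # finite-window approximation of the full normalization

p = jump_probabilities(N)              # jump distribution from the origin x_0 = 0
print("local drift at the origin:", float(np.dot(x - x[N], p)))
```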
We can also consider the jump chain $(\hat Y_n)_{n\ge 0}$ associated to the biased Mott random walk (we call it the discrete time Mott random walk). Given the environment $\omega$, $(\hat Y_n)_{n\ge 0}$ is the discrete-time random walk on $\{x_k\}_{k\in\mathbb{Z}}$ with jump probabilities
$$p^\lambda_{x,y}(\omega) := \frac{r^\lambda_{x,y}(\omega)}{r^\lambda_x(\omega)}, \qquad x\neq y. \qquad (4)$$
A similar definition holds for the unbiased case ($\lambda=0$).

The following result concerns transience, sub-ballisticity and ballisticity:

Theorem 1.
Fix $\lambda\in(0,1)$. Then, for $\mathbb{P}$-a.a. $\omega$, the continuous time Mott random walk $(Y_t)_{t\ge 0}$ with environment $\omega$, bias $\lambda$ and starting at the origin satisfies the following properties:

(i) Transience to the right: $\lim_{t\to\infty} Y_t = +\infty$ a.s.

(ii) Ballistic regime: If $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]<+\infty$ and $u:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is continuous, then the asymptotic velocity
$$v_Y(\lambda) := \lim_{t\to\infty}\frac{Y_t}{t}$$
exists a.s.; it is deterministic, finite and strictly positive (an integral representation of $v_Y$ is given in Section 7, see (85) and (95)).

(iii) Sub-ballistic regime: If
$$\mathbb{E}\big[e^{(1-\lambda)Z_0-(1+\lambda)Z_{-1}}\big]=\infty, \qquad (5)$$
then
$$v_Y(\lambda) := \lim_{t\to\infty}\frac{Y_t}{t}=0. \qquad (6)$$
In particular, if $\mathbb{E}[Z_{-1}\,|\,Z_0]\le C$ for some constant $C$ which does not depend on $\omega$ and $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]=\infty$, then condition (5) is satisfied and $v_Y(\lambda)=0$.

In addition, for $\mathbb{P}$-a.a. $\omega$ the above properties remain valid (restricting to integer times $n\ge 0$) for the discrete time Mott random walk $(\hat Y_n)_{n\ge 0}$ with environment $\omega$, bias $\lambda$ and starting at the origin, and its velocity $v_{\hat Y}(\lambda) := \lim_{n\to\infty}\hat Y_n/n$.

Remark 2.1.
In the case $\lambda=0$ the Mott random walks $Y_t$ and $\hat Y_n$ are recurrent and have a.s. zero velocity. Recurrence follows from [6, Thm. 1.2-(iii)] and the recurrence of the spatially homogeneous discrete time random walk on $\mathbb{Z}$ with probability to jump from $x$ to $y$ proportional to $e^{-|x-y|}$. To see that the velocity is zero, set
$$Q(d\omega) = \frac{r_0(\omega)}{\mathbb{E}[r_0(\omega)]}\,\mathbb{P}(d\omega).$$
$Q$ is a reversible and ergodic distribution for the environment viewed from the discrete time Mott random walker $\hat Y_n$ (see [5]). By writing $\hat Y_n$ as an additive function of the process "environment viewed from the walker" and using the ergodicity of $Q$, one gets that $v_{\hat Y}(\lambda=0)$ is zero a.s., for $Q$-a.a. $\omega$ and therefore for $\mathbb{P}$-a.a. $\omega$. Similarly, $v_Y(\lambda=0)=0$ a.s., for $\mathbb{P}$-a.a. $\omega$ (use that $\mathbb{P}$ is reversible and ergodic for the environment viewed from $Y_t$, see [14]).

Remark 2.2.
If the random variables $Z_k$ are i.i.d. (or even when only $Z_k$, $Z_{k+1}$ are independent for every $k$) and $u$ is continuous, the above theorem implies the following dichotomy: $v_Y(\lambda)>0$ if and only if $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]<+\infty$, otherwise $v_Y(\lambda)=0$. The same holds for $v_{\hat Y}(\lambda)$. We point out that, if the $Z_k$'s are correlated, $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]=+\infty$ does not imply in general zero velocity (see Example 1 in Section 2.2).

Remark 2.3.
Theorem 1 shows that there are cases in which the limiting speed $v_Y(\lambda)$ is not continuous in $\lambda$. See Example 2 in Section 2.2.
Remark 2.4.
When considering the nearest neighbor random walk on $\{x_k\}_{k\in\mathbb{Z}}$ with probability rate for a jump from $x$ to a neighboring site $y$ given by (3), the random walk is ballistic if and only if
$$\mathbb{E}\Big[\sum_{i=1}^{\infty}\exp\big\{(1-\lambda)Z_0-(1+\lambda)Z_{-i}-2\lambda(Z_{-i+1}+\cdots+Z_{-1})\big\}\Big]<\infty. \qquad (7)$$
A proof of this fact is given in Appendix C. As outlined in Remark 3.12, one can indeed weaken the condition $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]<+\infty$ to prove ballisticity, albeit at the cost of dealing with rather ugly formulas having some analogy with (7).

One of the most interesting technical results we use in the proof of Theorem 1, Part (ii), concerns the invariant measure for the environment seen from the point of view of the walker:
Theorem 2.
Suppose that $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]<+\infty$ and $u:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is continuous. Then the following holds:

(i) The environment viewed from the discrete time Mott random walker $(\hat Y_n)_{n\ge 0}$, i.e. the process $\big(\tau_{\phi(\hat Y_n)}\omega\big)_{n\ge 0}$ where $\phi(x_i)=i$, admits an invariant and ergodic distribution $Q_\infty$ absolutely continuous with respect to $\mathbb{P}$ such that
$$0<\gamma\le\frac{dQ_\infty}{d\mathbb{P}}\le F, \qquad \mathbb{P}\text{-a.s.}, \qquad (8)$$
for a suitable universal constant $\gamma$ and a function $F\in L^1(\mathbb{P})$ (defined in (64)).

(ii) By writing $E_\infty$ for the expectation with respect to $Q_\infty$, the velocities $v_Y(\lambda)$, $v_{\hat Y}(\lambda)$ can be expressed as
$$v_Y(\lambda) = \frac{v_{\hat Y}(\lambda)}{E_\infty\big[1/r^\lambda_0(\omega)\big]}, \qquad (9)$$
$$v_{\hat Y}(\lambda) = \mathbb{E}[Z_0]\; E_\infty\Big[\sum_{k\in\mathbb{Z}} k\, p^\lambda_{0,x_k}(\omega)\Big], \qquad (10)$$
and the expectations in (9), (10) are finite and positive (recall that $r^\lambda_0(\omega)=\sum_k r^\lambda_{0,x_k}(\omega)$).

Proof. Theorem 2-(i) is part of Proposition 5.3 given at the end of Section 5. The proof of Theorem 2-(ii) is part of Section 7; more precisely, (9) and (10) are an immediate consequence of (85), (94) and the observation just after (88). □
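As an informal sanity check of Theorem 1 and of the drift representation (10), one can simulate the discrete time Mott random walk in a sampled environment and look at the empirical speed $\hat Y_n/n$. The sketch below is purely illustrative: it uses an i.i.d. exponential-gap environment (so that $\mathbb{E}[e^{(1-\lambda)Z_0}]<\infty$ for the chosen parameters), a crude finite jump range rather than the $\rho$-truncation (14) of Section 3, and an arbitrary bounded symmetric $u$.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, d, rho, n_steps = 0.4, 1.0, 30, 20_000   # bias, minimal gap, jump-range cutoff, number of steps

# i.i.d. environment on a large index window, with x_0 = 0 at index M
M = 60_000
Z = d + rng.exponential(scale=0.5, size=2 * M)
x = np.concatenate(([0.0], np.cumsum(Z))) - Z[:M].sum()
E = rng.uniform(-1.0, 1.0, size=x.size)

def step(i):
    """One step of the discrete time Mott walk from index i, with jumps restricted to range rho."""
    lo, hi = max(0, i - rho), min(x.size - 1, i + rho)
    k = np.arange(lo, hi + 1)
    rates = np.exp(-np.abs(x[k] - x[i]) + lam * (x[k] - x[i]) - np.abs(E[k] - E[i]))  # u = -|Ei - Ek|
    rates[k == i] = 0.0
    return rng.choice(k, p=rates / rates.sum())

i = M                                          # start at the origin
for _ in range(n_steps):
    i = step(i)
print("empirical speed after", n_steps, "steps:", x[i] / n_steps)
```

With light-tailed gaps one expects a stable positive empirical speed, whereas gap distributions with $\mathbb{E}[e^{(1-\lambda)Z_0}]=\infty$ should drive the estimate towards zero on comparable time scales, in line with Theorem 1-(iii).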
2.1. Comments on assumptions (A1)–(A4).
By Assumption (A1) both random sequences $(Z_k)_{k\in\mathbb{Z}}$ and $(E_k)_{k\in\mathbb{Z}}$ are stationary and ergodic with respect to shifts. The physically interesting case is given by two independent random sequences $(Z_k)_{k\in\mathbb{Z}}$ and $(E_k)_{k\in\mathbb{Z}}$, the former stationary and ergodic, while the latter given by i.i.d. random variables. In this case assumption (A1) is satisfied (see Lemma B.4 in Appendix B).

Assumption (A3) ensures that a.s. the environment is not periodic. If the energy marks $E_k$ are i.i.d. and non-constant, as in the physically interesting case, then (A3) is automatically fulfilled. Note that the sequence $(Z_k)_{k\in\mathbb{Z}}$ could be periodic, without violating our assumptions (e.g. take $Z_k=1$ for all $k\in\mathbb{Z}$).

Assumption (A4), corresponding to interpoint distances which are uniformly bounded from below, is not restrictive from a physical viewpoint and $d$ can be taken of the angstrom order. On the other hand, (A4) plays a crucial role in our proofs.
2.2. Examples.
In this section we give two examples highlighting the importance of the assumptions in Theorem 1 and showing some of its consequences.
Example 1. $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]=\infty$ does not in general imply that $v_Y(\lambda)=0$, $v_{\hat Y}(\lambda)=0$.

We set $u(\cdot,\cdot)\equiv 0$. Fix $p\in(0,1/2)$ and define $(Z_k)_{k\in\mathbb{Z}}$ as the stationary reversible Markov chain with values in $\{\gamma,2\gamma,3\gamma,\dots\}$, for some $\gamma\ge 1$, with transition probabilities
$$P(k\gamma,(k+1)\gamma)=p \text{ for } k\ge 1, \qquad P(k\gamma,(k-1)\gamma)=1-p \text{ for } k\ge 2, \qquad P(\gamma,\gamma)=1-p.$$
The equilibrium distribution is given by $\pi(k\gamma)=c\,(p/(1-p))^k$ for $k\ge 1$, $c$ being the normalizing constant. Hence, $\mathbb{P}(Z_0=k\gamma)=\pi(k\gamma)$ for each $k\ge 1$.
Notice that $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big] = c\sum_{k\ge 1} e^{(1-\lambda)k\gamma}\big(p/(1-p)\big)^k$ is infinite if and only if
$$\lambda \le 1-\frac{1}{\gamma}\log\frac{1-p}{p}. \qquad (11)$$
We now show that we can choose the parameters such that $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]=\infty$ and $\sum_k k\, r^\lambda_{0,x_k}(\omega)>0$ for every $\omega$, the latter implying that $v_Y(\lambda), v_{\hat Y}(\lambda)>0$ (recall the definition of $p^\lambda_{0,x_k}(\omega)$ in (4)). If $Z_0=j\gamma$ for some $j\ge 1$,
the local drift $\sum_k k\, r^\lambda_{0,x_k}(\omega)$ can be bounded from below by the drift of the configuration with longer and longer interpoint distances to the right and shorter and shorter interpoint distances to the left: $Z_k=(j+k)\gamma$ for all $k\ge -j+1$ and $Z_k=\gamma$ for all $k\le -j$. Note that in this case
$$x_k = \gamma\big[kj+k(k-1)/2\big] \text{ if } k\ge 1, \qquad x_{-k} = -\gamma\big[kj-k(k+1)/2\big] \text{ if } 1\le k\le j-1, \qquad x_{-k} = -\gamma\big[j(j-1)/2+k-j+1\big] \text{ if } k\ge j.$$
Hence we can write
$$\sum_k k\, r^\lambda_{0,x_k}(\omega) \ge A(\lambda,\gamma,j)-B(\lambda,\gamma,j)-C(\lambda,\gamma,j),$$
where $A(\lambda,\gamma,j)=\sum_{k\ge 1} k\exp\{-(1-\lambda)\gamma(kj+k(k-1)/2)\}$, $B(\lambda,\gamma,j)=\sum_{1\le k\le j-1} k\exp\{-(1+\lambda)\gamma(kj-k(k+1)/2)\}$ and $C(\lambda,\gamma,j)=\sum_{k\ge j} k\exp\{-(1+\lambda)\gamma(j(j-1)/2+k-j+1)\}$. We bound $A(\lambda,\gamma,j)$ from below with its first summand $\exp\{-(1-\lambda)\gamma j\}$ and prove that
$$\lim_{\gamma\to\infty}\sup_{j\in\mathbb{N}_+}\big[B(\lambda,\gamma,j)+C(\lambda,\gamma,j)\big]\exp\{(1-\lambda)\gamma j\}<1, \qquad (12)$$
since this will imply the positivity of the local drift $\sum_k k\, r^\lambda_{0,x_k}(\omega)$ for any possible $\omega$, for $\gamma$ big enough. Using that $Z_{-1}=\gamma(j-1)$,
we bound $B(\lambda,\gamma,j)\le j^2\exp\{-(1+\lambda)\gamma(j-1)\}$ and, using that $j(j-1)/2+1\ge j/2$, we bound
$$C(\lambda,\gamma,j)\le e^{-\frac{1+\lambda}{2}\gamma j}\Big(\sum_{k\ge j}(k-j)e^{-(1+\lambda)\gamma(k-j)} + j\sum_{k\ge 0}e^{-(1+\lambda)\gamma k}\Big)\le jK\,e^{-\frac{1+\lambda}{2}\gamma j},$$
for some constant $K>0$ which does not depend on $\lambda$ and $\gamma$ (recall that $\gamma\ge 1$). Since $B(\lambda,\gamma,1)=0$, and since the two bounds above, multiplied by $\exp\{(1-\lambda)\gamma j\}$, vanish as $\gamma\to\infty$ uniformly in $j\ge 2$ and $j\ge 1$ respectively whenever $\lambda>1/3$, condition (12) holds for every $\lambda>1/3$, for $\gamma$ big enough.
On the other hand, by (11) we can choose $p$ close enough to $1/2$ so that $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big]$ is infinite.
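The bound behind (12) is easy to probe numerically. The following sketch (an illustration only, with arbitrary parameter values) evaluates $A(\lambda,\gamma,j)-B(\lambda,\gamma,j)-C(\lambda,\gamma,j)$ for the worst-case configuration described above; for $\lambda>1/3$ and $\gamma$ large, the lower bound on the local drift stays positive for every $j$, as claimed.

```python
import numpy as np

def drift_lower_bound(lam, gamma, j, kmax=200):
    """A - B - C from Example 1, for the worst-case configuration with Z_0 = j * gamma."""
    k = np.arange(1, kmax + 1)
    A = np.sum(k * np.exp(-(1 - lam) * gamma * (k * j + k * (k - 1) / 2)))
    kB = np.arange(1, j)                                   # empty range when j = 1
    B = np.sum(kB * np.exp(-(1 + lam) * gamma * (kB * j - kB * (kB + 1) / 2)))
    kC = np.arange(j, j + kmax)
    C = np.sum(kC * np.exp(-(1 + lam) * gamma * (j * (j - 1) / 2 + kC - j + 1)))
    return A - B - C

lam, gamma = 0.5, 4.0          # lam > 1/3 and gamma large, as in the discussion above
for j in range(1, 11):
    print(j, drift_lower_bound(lam, gamma, j))
```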
Example 2.
The velocities $v_Y(\lambda)$, $v_{\hat Y}(\lambda)$ are not continuous in general. Take $u\equiv 0$.
Let the $Z_k$ be i.i.d. random variables larger than 1 such that $e^{Z_0}$ has probability density $f(x) := c\, x^{-\gamma}(\ln x)^{-2}\,\mathbb{1}_{[e,+\infty)}(x)$, with $\gamma\in(1,2)$ and $c$ the normalizing constant.
Since, for $x\ge e$, $x^{-\gamma}(\ln x)^{-2}\le x^{-1}(\ln x)^{-2} = -\frac{d}{dx}\big(1/\ln x\big)$, the constant $c$ is well defined, and $\mathbb{E}\big[e^{(1-\lambda)Z_0}\big] = \int_e^{\infty} c\, x^{1-\lambda}\, x^{-\gamma}(\ln x)^{-2}\, dx<\infty$ if and only if $\lambda\ge 2-\gamma$. Note that $\lambda_c := 2-\gamma\in(0,1)$. Since the $Z_k$'s are i.i.d., by Remark 2.2 the velocities $v_Y(\lambda)$, $v_{\hat Y}(\lambda)$ are zero for $\lambda\in(0,\lambda_c)$ and are strictly positive for $\lambda\in[\lambda_c,1)$.

3. A class of random walks on $\mathbb{Z}$ with jumps of size at most $\rho\in[1,+\infty]$

Take $\lambda\in[0,1)$. For $i,j\in\mathbb{Z}$ we replace, with a slight abuse of notation, $r^\lambda_{i,j}(\omega) := r^\lambda_{x_i,x_j}(\omega)$ and introduce the associated conductance $c_{i,j}(\omega) := e^{2\lambda x_i}\, r^\lambda_{i,j}(\omega)$ (note that in $c_{i,j}(\omega)$ the dependence on $\lambda$ has been omitted). Hence we have $c_{i,i}\equiv 0$ and
$$c_{i,j}(\omega) = e^{\lambda(x_i+x_j)-|x_j-x_i|+u(E_i,E_j)} = c_{j,i}(\omega), \qquad i\neq j \text{ in } \mathbb{Z}. \qquad (13)$$
Given $\rho\in\mathbb{N}_+\cup\{+\infty\}$ we introduce the discrete time random walk $(X^\rho_n)_{n\ge 0}$ with environment $\omega$ as the Markov chain on $\mathbb{Z}$ such that the $\omega$-dependent probability to jump from $i$ to $j$ in one step is given by
$$\begin{cases} c_{i,j}(\omega)\big/\sum_{k\in\mathbb{Z}} c_{i,k}(\omega), & \text{if } 0<|i-j|\le\rho,\\ 0, & \text{if } |i-j|>\rho,\\ 1-\sum_{j':\,0<|j'-i|\le\rho} c_{i,j'}(\omega)\big/\sum_{k\in\mathbb{Z}} c_{i,k}(\omega), & \text{if } i=j.\end{cases} \qquad (14)$$
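For concreteness, here is a small sketch of the one-step law (14) (with illustrative parameters and an illustrative choice of $u$): it builds the conductances $c_{i,j}$ of (13) on a finite index window and returns the jump distribution of the $\rho$-truncated walk from a given site, including the holding probability.

```python
import numpy as np

rng = np.random.default_rng(2)

lam, d, N = 0.3, 1.0, 100
Z = d + rng.exponential(size=2 * N)
x = np.concatenate(([0.0], np.cumsum(Z))) - Z[:N].sum()   # x_{-N},...,x_N with x_0 = 0 at index N
E = rng.uniform(-1.0, 1.0, size=x.size)

def conductance(i, j):
    """c_{i,j} from (13), with the illustrative choice u(Ei, Ej) = -|Ei - Ej|."""
    if i == j:
        return 0.0
    return np.exp(lam * (x[i] + x[j]) - abs(x[j] - x[i]) - abs(E[i] - E[j]))

def one_step_law(i, rho):
    """Jump distribution of the rho-truncated walk X^rho from index i, cf. (14)."""
    c_row = np.array([conductance(i, j) for j in range(x.size)])
    total = c_row.sum()                                    # finite-window stand-in for sum_k c_{i,k}
    probs = np.zeros(x.size)
    in_range = np.abs(np.arange(x.size) - i) <= rho
    probs[in_range] = c_row[in_range] / total
    probs[i] = 1.0 - probs.sum()                           # lazy holding probability at i
    return probs

p = one_step_law(N, rho=5)
print("holding probability:", p[N], " mass to the right:", p[N + 1:].sum())
```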
Warning 3.1. When the Markov chain $(X^\rho_n)_{n\ge 0}$ starts at $i\in\mathbb{Z}$, we write $P^{\omega,\rho}_i$ for its law and $E^{\omega,\rho}_i$ for the associated expectation. In order to make the notation lighter, inside $P^{\omega,\rho}_i(\cdot)$ and $E^{\omega,\rho}_i[\cdot]$ we will usually write $X_n$ instead of $X^\rho_n$.

It is convenient to introduce the random bijection $\psi:\mathbb{Z}\to\{x_k\}_{k\in\mathbb{Z}}$ defined as $\psi(i)=x_i$, and also the continuous time random walk $(X^\infty_t)_{t\ge 0}$ on $\mathbb{Z}$ with probability rate $r^\lambda_{i,j}(\omega)$ for a jump from $i$ to $j$. Since
$$\frac{c_{i,j}(\omega)}{\sum_{k\in\mathbb{Z}} c_{i,k}(\omega)} = \frac{r^\lambda_{i,j}(\omega)}{\sum_{k\in\mathbb{Z}} r^\lambda_{i,k}(\omega)},$$
we conclude that realizations of $Y$ and $\hat Y$ can be obtained as
$$Y_t = \psi(X^\infty_t), \qquad \hat Y_n = \psi(X^\infty_n). \qquad (15)$$
In particular, when the denominators are nonzero, we can write
$$\frac{Y_t}{t} = \frac{\psi(X^\infty_t)}{X^\infty_t}\,\frac{X^\infty_t}{t}, \qquad \frac{\hat Y_n}{n} = \frac{\psi(X^\infty_n)}{X^\infty_n}\,\frac{X^\infty_n}{n}. \qquad (16)$$
By Assumptions (A1) and (A2), $\lim_{i\to\infty}\psi(i)/i = \mathbb{E}[Z_0]<\infty$, $\mathbb{P}$-a.s. By this limit, together with (15) and (16), we will see in Sections 7, 8 that in order to prove Theorem 1 it is enough to show the same properties for $X^\infty_n$, $X^\infty_t$ instead of $\hat Y$, $Y$.

In what follows, we write $v_{X^\rho}(\lambda)=v$ if, for $\mathbb{P}$-a.a. $\omega$, $\lim_{n\to\infty} X^\rho_n/n = v$, $P^{\omega,\rho}_0$-a.s. A similar meaning is assigned to the asymptotic velocity of the continuous time walk $(X^\infty_t)_{t\ge 0}$.
3.1. Estimates on effective conductances.
We take again ρ ∈ N + ∪ { + ∞} and λ ∈ [0 , A, B disjoint subsets of Z , we introduce the effective conductance between A and B as C ρ eff ( A ↔ B ) := min n X i Note that C ρ eff ( A ↔ B ) , π ρ ( i ) and p ρ esc ( i ) depend on the environment ω ,although we have omitted ω from the notation. Proposition 3.3. There exists a constant K > not depending on ω, ρ, A and B suchthat C ( A ↔ B ) ≤ C ρ eff ( A ↔ B ) ≤ K C ( A ↔ B ) . Proof. Since C ρ eff ( A ↔ B ) is increasing in ρ , it is enough to show the second inequality for ρ = ∞ . To this aim take any valid f : Z → R and note that X i There exists a constant K > which does not depend on ω, ρ such that π ( k ) ≤ π ρ ( k ) ≤ Kπ ( k ) , ∀ k ∈ Z . Proof. Since π ρ ( k ) is increasing in ρ it is enough to prove that π ∞ ( k ) ≤ Kπ ( k ) for all k ∈ Z . We easily see that X j>k c k,j = e λ ( x k +1 + x k ) − ( x k +1 − x k ) X j>k e − (1 − λ )( x j − x k +1 )+ u ( E j ,E k ) ≤ K c k,k +1 X j>k e − d ( j − k − − λ ) = K c k,k +1 . (22)Analogously, X j Corollary 3.5. There exist constants K , K > which do not depend on ω, ρ such that K p ( i ) ≤ p ρ esc ( i ) ≤ K p ( i ) , ∀ i ∈ Z . The following lemma is well known and corresponds to formula (2.1.4) in [29]: Lemma 3.6. Let { ¯ c k,k +1 } k ∈ Z be any system of strictly positive conductances on the nearestneighbor bonds of Z . Let H A be the first hitting time of the set A ⊂ Z for the associateddiscrete time nearest–neighbor random walk among the conductances { ¯ c k,k +1 } k ∈ Z , whichjumps from k to k ± with probability ¯ c k,k ± / (¯ c k,k − + ¯ c k,k +1 ) . Take −∞ < M < x < N < ∞ , with M, x, N ∈ Z and write H M , H N for H { M } , H { N } . Then P n.n.x ( H M < H N ) = C n.n. eff ( x ↔ ( −∞ , M ]) C n.n. eff ( x ↔ ( −∞ , M ] ∪ [ N, ∞ )) , where P n.n.x is the probability for the nearest-neighbor random walk starting at x , and C n.n. eff ( A ↔ B ) is the effective conductance of the nearest-neighbor walk between A and B . We state another technical lemma which will be frequently used when dealing withconductances: D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 11 Lemma 3.7. P (cid:16)P ∞ j =0 1 c j,j +1 < + ∞ (cid:17) = 1 .Proof. By assumption (A1), ( x j +1 − x j ) j ∈ Z is a stationary ergodic sequence. By writing x j = P j − k =0 ( x k +1 − x k ), the ergodic theorem implies that lim j →∞ x j j = E [ x ], P –a.s. As aconsequence we get thatlim j →∞ − λ ( x j + x j +1 ) + ( x j +1 − x j ) j = − λ E [ x ] < , P –a.s.Since P ∞ j =0 1 c j,j +1 = P ∞ j =0 e − λ ( x j + x j +1 )+( x j +1 − x j ) , the claim follows. (cid:3) We conclude this section with a simple estimate leading to an exponential decay of thetransition probabilities: Lemma 3.8. There exists a constant K which does not depend on ω, ρ , such that P –a.s. P ω,ρi ( | X − i | > s ) ≤ X j : | j − i | >s c i,j P k ∈ Z c i,k ≤ Ke − ds (1 − λ ) ∀ s, ρ ∈ N + ∪ { + ∞} , ∀ i ∈ Z . (24) Proof. The first inequality follows from the definitions. To prove the second one, we canestimate X j>i + s c i,j = e λ ( x i +1 + x i ) − ( x i +1 − x i ) X j>i + s e − ( x j − x i +1 )(1 − λ )+ u ( E j ,E i ) ≤ K c i,i +1 X j>i + s e − d ( j − i − − λ ) = K e − ds (1 − λ ) c i,i +1 , (25) X ji − s e − d ( i − − j )(1 − λ ) = K e − ds (1 − λ ) c i − ,i . (26)The second bound in (24) now follows from (25), (26) and Lemma 3.4. (cid:3) Expected number of visits. We fix some notations which will be frequently usedbelow. For I ⊆ { , , , . . . 
} and A ⊂ Z , we denote by N ρI ( A ) the time spent by therandom walk X ρn in the set A during the time interval I : N ρI ( A ) := X k ∈ I X ρk ∈ A . If I := { , , , . . . } we simply write N ρ ∞ ( A ) and if A = { x } we write N ρ ∞ ( x ). Warning 3.9. When appearing inside P ω,ρ ( · ) or E ω,ρ ( · ) , N I ( A ) , N ∞ ( A ) will usually re-place N ρI ( A ) , N ρ ∞ ( A ) . We can state our main result on the expected number of visits to a site k for a givenenvironment: Proposition 3.10. There exists a constant K , not depending on λ, ρ, ω , such that thefunction g ω : { , , . . . } → R + , defined as g ω ( n ) := K π ( − n ) ∞ X j =0 e − λx j +(1 − λ )( x j +1 − x j ) , n ≥ , (27) satisfies E ω,ρ [ N ∞ ( k )] ≤ g ω ( | k | ) , ∀ k ≤ . (28)We recall that π ( k ) = c k − ,k + c k,k +1 for all k ∈ Z . Moreover, we point out that theseries in (27) is finite, since it can be bounded by P ∞ j =0 exp (cid:8) − j (2 λd − (1 − λ ) x j +1 − x j j ) (cid:9) ,while ( x j +1 − x j ) /j → P –a.a. ω (see the argument in the proof of Lemma 3.7).We remark that estimate (28) is not uniform in the environment ω , and in general onecannot expect a uniform bound. This technical fact represents a major difference withthe setting of [8], where the existence of an ω -independent upper bound of the expectednumber of visits is required (cf. Condition D therein). Proof. Fix k < 0. Starting from 0, the random variable N ρ ∞ ( k ) is equal to N ρ ∞ ( k ) = ( − P ω,ρ ( X · eventually reaches k ) Y ( k ) with probability P ω,ρ ( X · eventually reaches k )where Y ( k ) is a geometric random variable whose parameter is the escape probability p ρ esc ( k ) from k (recall Warning 3.2). Therefore E ω,ρ [ N ∞ ( k )] = 1 p ρ esc ( k ) P ω,ρ ( X · eventually reaches k ) . (29)Let us start by giving an upper bound for the probability of reaching k in finite time: P ω,ρ ( X · eventually reaches k ) ≤ P ω,ρ ( X · eventually reaches A := ( −∞ , k ] )= lim N →∞ P ω,ρ ( H B N > H A ) , (30)where B N := [ N, ∞ ) and the H ’s are the hitting times of the respective sets. By awell-known formula (see [3, Proof of Fact 2]) P ω,ρ ( H B N > H A ) ≤ C ρ eff (0 ↔ A ) C ρ eff (0 ↔ A ∪ B N ) . (31)Using now Proposition 3.3 we have that there exists a K such that P ω,ρ ( X · eventually reaches k ) ≤ lim N →∞ K C (0 ↔ A ) C (0 ↔ A ∪ B N )= K C (0 ↔ A ) C (0 ↔ A ∪ B ∞ ) . (32)where C (0 ↔ A ∪ B ∞ ) := lim N →∞ C (0 ↔ A ∪ B N ).Call C N := ( −∞ , − N + k ] ∪ [ N + k, ∞ ). By Corollary 3.5 and equation (20), we knowthat p ρ esc ( k ) ≥ K lim N →∞ C ( k ↔ C N ) π ( k ) = 1 K C ( k ↔ C ∞ ) π ( k ) , (33)where C ( k ↔ C ∞ ) := lim N →∞ C ( k ↔ C N ).Since we have conductances in series, we can write C ( k ↔ C ∞ ) = (cid:16) k − X j = −∞ c j,j +1 (cid:17) − + (cid:16) ∞ X j = k c j,j +1 (cid:17) − . (34)We claim that k − X j = −∞ c j,j +1 = + ∞ , ∞ X j = k c j,j +1 < + ∞ P –a.s. (35) D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 13 Indeed, the first series diverges a.s. since, for j ≤ − 1, 1 /c j,j +1 ≥ Ke − λ ( x j + x j +1 )+( x j +1 − x j ) ≥ K (note that x j , x j +1 ≤ E ω,ρ [ N ∞ ( k )] ≤ ¯ K π ( k ) C ( k ↔ C ∞ ) · C (0 ↔ A ) C (0 ↔ A ∪ B ∞ )= ¯ K π ( k ) (cid:16) k − X j = −∞ c j,j +1 (cid:17) − + (cid:16) ∞ X j = k c j,j +1 (cid:17) − · (cid:16) − X j = k c j,j +1 (cid:17) − (cid:16) − X j = k c j,j +1 (cid:17) − + (cid:16) ∞ X j =0 1 c j,j +1 (cid:17) − = ¯ K π ( k ) (cid:16) ∞ X j =0 1 c j,j +1 (cid:17) ≤ K π ( k ) ∞ X j =0 e − λ ( x j + x j +1 )+( x j +1 − x j ) ≤ g ω ( | k | ) . (36)We now consider the case k = 0. 
By (29), (33) and (35) we have E ω,ρ [ N ∞ (0)] = 1 p ρ esc (0) ≤ K π (0) C (0 ↔ C ∞ ) = Kπ (0) ∞ X j =0 1 c j,j +1 and we can conclude as in (36). (cid:3) We now collect some properties of the function g ω : Lemma 3.11. There exist constants K ∗ > which do not depend on ρ, ω , such that π ( k ) ≤ K ∗ e λdk , k ≤ , (37) E [ g ω ( k )] ≤ K ∗ e − λdk − e − λd E (cid:2) e (1 − λ ) x (cid:3) , k ≥ , (38) g ω ( k ) ≥ g τ ℓ ω ( k + ℓ ) , k, ℓ ≥ , (39) E E ω,ρk [ N ∞ ( Z − )] ≤ K ∗ (cid:18) − e − λd ) + | k | − e − λd (cid:19) E (cid:2) e (1 − λ ) x (cid:3) , k ≤ . (40)Trivially, the first and fourth estimates are effective when E (cid:2) e (1 − λ ) x (cid:3) < ∞ . Proof. We first prove (37). Recall π ( k ) = c k − ,k + c k,k +1 . Given i ≤ x i ≤ id ,implying c i − ,i ≤ e u max e λ ( x i − + x i ) − ( x i − x i − ) ≤ Ke λdi . By the same argument, for i < c i,i +1 ≤ Ke λdi and, for i = 0, c , = e λx − x + u ( E ,E ) ≤ K .(38) is obtained noting that, by (37), E [ g ω ( k )] ≤ K ∗ e − λdk P ∞ j =0 e − λjd E (cid:2) e (1 − λ ) x (cid:3) .To get (39) we first observe that x i − ℓ ( τ ℓ ω ) = x i ( ω ) − x ℓ ( ω ) and E i − ℓ ( τ ℓ ω ) = E i ( ω ) forall i ∈ Z . As a consequence, we get π ( − k − ℓ )[ τ ℓ ω ] = e − λx ℓ π ( − k ) (the r.h.s. refersto the environment ω ). Therefore, using also that x j ( τ ℓ ω ) = x j + ℓ ( ω ) − x ℓ ( ω ) and that x j +1 ( τ ℓ ω ) − x j ( τ ℓ ω ) = x j +1+ ℓ ( ω ) − x j + ℓ ( ω ), we have g τ ℓ ω ( k + ℓ ) = K π ( − k )e − λx ℓ ( ω ) ∞ X j =0 e − λx j ( τ ℓ ω )+(1 − λ )( x j +1 ( τ ℓ ω ) − x j ( τ ℓ ω )) = K π ( − k ) ∞ X j =0 e − λx j + ℓ +(1 − λ )( x j +1+ ℓ − x j + ℓ ) ≤ g ω ( k ) , (41) thus completing the proof of (39).Finally, for (40), we write, thanks to Proposition 3.10, E E ω,ρk [ N ∞ ( Z − )] = X z ≤ k E E ω,ρk [ N ∞ ( z )] + X k In the spirit of Remark 2.4, we point out that we could consider weakerconditions than E (cid:2) e (1 − λ ) Z (cid:3) < + ∞ , at the cost of dealing with rather involved formulas.Take for simplicity u ≡ . In our case, E (cid:2) e (1 − λ ) Z (cid:3) < + ∞ guarantees, by Lemma 3.11,that E [ g ω ( k )] is finite and summable over k ≥ . But what is actually required is that g ω ( k ) bounds from above the quantity α ω ( k ) := Kπ ( − k ) P j ≥ c j,j +1 (see the proof ofProp. 3.10). By stationarity, one has E [ c k,k +1 /c k + i,k + i +1 ] = E [ e − (1+ λ ) Z − λ ( Z + ··· + Z i − )+(1 − λ ) Z i ] . This identity allows to provide conditions for P k ≥ E [ α ω ( k )] to be finite, which are weakerthan E (cid:2) e (1 − λ ) Z (cid:3) < + ∞ . One could go on in weakening conditions, also inside Prop. 5.4,and still get the ballisticity of the Mott random walks Y t and Y n . Corollary 3.13. There exist constants K , K > which do not depend on ρ, ω such that E ω,ρ [ N ∞ ( k )] ≤ K E ω, [ N ∞ ( k )] ≤ K g ω ( | k | ) ∀ k ≤ , (42) E ω,ρ [ N ∞ ( k )] ≤ K E ω, [ N ∞ ( k )] ∀ k > . (43) Proof. First we consider (42). Its second inequality is a restatement of Prop. 3.10. Forthe first inequality we distinguish the cases k < k = 0. When k < P ω,ρ ( X · eventually reaches k ) ≤ K P ω, ( X · eventually reaches k ) . (44)Then put together equation (29) (and its analogous version for ρ = 1), equation (44) andCorollary 3.5. For k = 0 use that E ω,ρ [ N ∞ (0)] = p ρ esc (0) (also in the case ρ = 1) and useCorollary 3.5.Let us now consider equation (43). Start with (29). Due to Corollary 3.5 and thefact that P ω, ( X · eventually reaches k ) = 1 for each k > (cid:3) Probability to hit a site on the right. 
Following [8], given k, z ∈ Z , we set T ρz := inf { n ≥ X ρn ≥ z } , T ρ := T ρ , r ρk ( z ) := P ω,ρk ( X T z = z ) . Note that the dependence of ω has been omitted. Again (see Warnings 3.1 and 3.9), wesimply write T z , r k ( z ) inside P ω,ρk ( · ), E ω,ρk ( · ). Lemma 3.14. For P –a.a. ω and for each ρ ∈ N + ∪ {∞} it holds that P ω,ρk ( T z < ∞ ) = 1 ∀ k < z in Z . D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 15 Proof. Without loss of generality we take k < z and prove that P ω,ρk ( T = ∞ ) = 0.As in (31), setting C N := ( −∞ , − N ] and D = [0 , ∞ ), we can bound P ω,ρk ( T = ∞ ) = lim N →∞ P ω,ρk ( τ C N < τ D ) ≤ lim inf N →∞ C ρ eff ( k ↔ C N ) C ρ eff ( k ↔ C N ∪ D ) . We observe that C ( k ↔ C N ∪ D ) = C ( k ↔ C N ) + C ( k ↔ D ), while (recall (35))lim N →∞ C ( k ↔ C N ) = (cid:16) k − X j = −∞ c j,j +1 (cid:17) − = 0 , C ( k ↔ D ) = (cid:16) − X j = k c j,j +1 (cid:17) − > . Together with Proposition 3.3, this allows to conclude that P ω,ρk ( T = ∞ ) = 0. (cid:3) Our next result, Lemma 3.15, is the analog of Lemma 3.1 in [8]. Our proof followsa different strategy in order to avoid to deal with Conditions D, E of [8], which are notsatisfied in our context. Lemma 3.15. There exists ε > which does not depend on ρ, ω such that, P –a.s., r ρk (0) ≥ ε for all k < and for all ρ ∈ N + ∪ {∞} .Proof. We just make a pathwise analysis. By the Markov property we get r ρk (0) = X − ρ ≤ j< ∞ X n =1 P ω,ρk ( X n = 0 , X n − j , X , X , ..., X n − < X − ρ ≤ j< ∞ X n =1 P ω,ρj ( X = 0) P ω,ρk ( X n − j , X , X , ..., X n − < . (45)We claim that there exists ε > j and ω , P ω,ρj ( X = 0) ≥ ε P ω,ρj ( X ≥ . In fact, given j with − ρ ≤ j < 0, we can write P ω,ρj ( X = 0) P ω,ρj ( X ≥ ≥ c j, P ∞ l =0 c j,l ≥ K e λx j + x j P ∞ l =0 e λ ( x l + x j ) − ( x l − x j ) = K P ∞ l =0 e − (1 − λ ) x l ≥ K P ∞ l =0 e − (1 − λ ) dl =: 2 ε. Coming back to (45), using the Markov property and the fact that T < ∞ a.s., we get r ρk (0) ≥ ε X − ρ ≤ j< ∞ X n =1 P ω,ρj ( X ≥ P ω,ρk ( X n − j , X , X , ..., X n − < ε X − ρ ≤ j< ∞ X n =1 P ω,ρk ( X n ≥ , X n − j , X , X , ..., X n − < ε P ω,ρk ( X T ≥ 0) = 2 ε. (cid:3) Regenerative structure for the ρ –truncated random walk with ρ < ∞ In this section we take ρ < ∞ . We recall the regenerative structure of [8] for the ρ -truncated random walk with ρ finite. Warning 4.1. In order to avoid heavy notation, in this section ρ is fixed once and forall in N + and we write P ωx , T k , r k ( z ) , X n ,... instead of P ω,ρx , T ρk , r ρk ( z ) , X ρn ,... The wholesection refers to the ρ -truncated random walk. Only in Subsection 4.2, in which we collectthe main conclusions, we will indicate ρ explicitly according to the usual notation. Consider a sequence of i.i.d. Bernoulli r.v.’s ζ , ζ , ... with parameter P ( ζ = 1) = ε (thesame ε as in Lemma 3.15) which does not depend on the environment ω . P and E denotethe probability law and the expectation of the ζ ’s. We couple the sequence ζ = ( ζ , ζ , ... )with the random walk X n in such a way that ζ j = 1 implies X T jρ = jρ .To this aim we construct the quenched measure P ω,ζ of the random walk starting at 0once both ω and ζ are fixed. Recall Lemma 3.15. First, the law of ( X n ) n ≤ T ρ is defined by { ζ =1 } P ω ( ·| X T ρ = ρ )+ { ζ =0 } h r ( ρ ) − ε − ε P ω ( ·| X T ρ = ρ )+ 1 − r ( ρ )1 − ε P ω ( ·| X T ρ > ρ ) i . 
(46)Then, given j ≥ X T jρ = y ∈ [ jρ, ( j + 1) ρ ), the law of ( X n ) n ∈ [ T jρ +1 ,T ( j +1) ρ ] is { ζ j +1 =1 } P ωy ( ·| X T ( j +1) ρ = ( j + 1) ρ )+ { ζ j +1 =0 } h r y (( j + 1) ρ ) − ε − ε P ωy ( ·| X T ( j +1) ρ = ( j + 1) ρ )+ 1 − r y (( j + 1) ρ )1 − ε P ωy ( ·| X T ( j +1) ρ > ( j + 1) ρ ) i . (47)One can check that, by averaging P ω,ζ over ζ , one obtains the law P ω of the originalrandom walk ( X n ) n ≥ .We introduce by iteration the sequence ( ℓ k ) k ≥ as follows: ℓ := 0 , ℓ k +1 = min { j > ℓ k : ζ j = 1 } k ≥ . Note that by construction we have X T ℓkρ = ℓ k ρ .Given k ≥ C k := (cid:0) τ X j ω : T ℓ k ρ ≤ j < T ℓ k +1 ρ (cid:1) . As in [8] one can prove the followingresult (cf. [8, Lemma 3.2] and the corresponding proof): Lemma 4.2. Let ρ < ∞ . Then the sequence of random pieces ( C k ) k ≥ is stationary andergodic under the measure P ⊗ P ⊗ P ω,ζ . In particular, τ ℓ k ρ ω has the same law P as ω forall k = 1 , , ... . As in [27], the fact that ( C k ) k ≥ is stationary and ergodic can be restated as follows:under P ⊗ P ⊗ P ω,ζ the random path ( X n ) n ≥ with time points 0 < T ℓ ρ < T ℓ ρ < . . . is cycle–stationary and ergodic . This is the regenerative structure pointed out in [8].In what follows, we will consider also the random walk ( X n ) n ≥ starting at x and withlaw P ω,ζx . This random walk is built as follows. Fix a such that x ∈ [ aρ, ( a + 1) ρ ). Then,the law of ( X n ) n ≤ T ( a +1) ρ is defined by (47) with j replaced by a and y replaced by x . Notethat T aρ = 0. Given j ≥ a + 1 and X T jρ = y ∈ [ jρ, ( j + 1) ρ ), the law of ( X n ) n ∈ [ T jρ +1 ,T ( j +1) ρ ] is then given by (47). Again, the average over ζ of P ω,ζx gives P ωx .4.1. Estimates on the regeneration times. As in [8] we set P ′ := P ⊗ P , E ′ [ · ] = E [ E [ · ]] . In what follows we assume that E (cid:2) e (1 − λ ) x (cid:3) < ∞ . D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 17 Lemma 4.3. Let ρ < ∞ . There exist constants K , K > not depending on ω, ρ suchthat E [ E ω,ζ [ T ℓ ρ ]] ≥ K ρ , (48) E ′ [ E ω,ζ [ T ℓ ρ ]] ≤ K (cid:16) − e − λd ) + ρ − e − λd (cid:17) E (cid:2) e (1 − λ ) x (cid:3) . (49) Proof. Take a sequence Y , Y , . . . of i.i.d. positive random variables with P ( Y i ≥ s ) =( Ke − ds (1 − λ ) ) ∧ s ≥ K being the constant appearing in Lemma 3.8. Dueto this lemma, under P ω , X k is stochastically dominated by Y + · · · + Y k for any k ≥ ζ and for all j ≥ E ω,ζy [ T ( j +1) ρ ] ≤ ε (1 − ε ) E ωy [ T ( j +1) ρ ] (50)for all y ∈ [ jρ, ( j + 1) ρ − ℓ = k we can write T ℓ ρ = T ρ + ( T ρ − T ρ ) + ... + ( T kρ − T ( k − ρ ) . Now for each j ≥ E ω,ζ [ T ( j +1) ρ − T jρ ] = X y ∈ [ jρ, ( j +1) ρ ) E ω,ζy [ T ( j +1) ρ ] P ω,ζ ( X T jρ = y ) ≤ ε (1 − ε ) X y ∈ [ jρ, ( j +1) ρ ) E ωy [ T ( j +1) ρ ] P ω,ζ ( X T jρ = y )where we have used (50). Now we see that, for any y ∈ [ jρ, ( j + 1) ρ ), E ωy [ T ( j +1) ρ ] ≤ E ωy [ N ∞ (( −∞ , ( j + 1) ρ ])] ≤ K E ω, y [ N ∞ (( −∞ , ( j + 1) ρ ])] ≤ K E ω, jρ [ N ∞ (( −∞ , ( j + 1) ρ ])] , where the second inequality is due to Corollary 3.13.Hence E ω,ζ [ T ( j +1) ρ − T jρ ] ≤ K ε (1 − ε ) E ω, jρ [ N ∞ (( −∞ , ( j + 1) ρ ])] ≤ K ε (1 − ε ) (cid:16) X z ≤ jρ E ω, jρ [ N ∞ ( z )] + X jρ Let ρ < ∞ . Given k ≤ it holds E ω,ζ [ N ∞ ( k )] ≤ ε (1 − ε ) ∞ X j =0 g τ jρ ω ( | k | + jρ ) . (52) Proof. As for the derivation of (33) in [8] one can prove that, if y ∈ [ jρ, ( j + 1) ρ ), then E ω,ζy h N [ T jρ ,T ( j +1) ρ ) ( k ) i ≤ ε (1 − ε ) E ωy h N [ T jρ ,T ( j +1) ρ ) ( k ) i . (53)On the other hand, by applying Prop. 
3.10 , we get E ωy h N [ T jρ ,T ( j +1) ρ ) ( k ) i ≤ E ωy [ N ∞ ( k )] = E τ y ω [ N ∞ ( k − y )] ≤ g τ y ω ( | k | + y ) . (54)At this point we write y as y = jρ + ℓ and set ω ′ := τ jρ ω . Then, by applying (39) inLemma 3.11, we get g τ y ω ( | k | + y ) = g τ ℓ ω ′ ( | k | + jρ + ℓ ) ≤ g ω ′ ( | k | + jρ ) = g τ jρ ω ( | k | + jρ ) . (55)As a byproduct of (53), (54) and (55) we conclude that E ω,ζy h N [ T jρ ,T ( j +1) ρ ) ( k ) i ≤ ε (1 − ε ) g τ jρ ω ( | k | + jρ ) . (56)The above bound and the strong Markov property applied at time T jρ (which holds byconstruction of P ω,ζ ) imply that E ω,ζ h N [ T jρ ,T ( j +1) ρ ) ( k ) i = E ω,ζ h E ω,ζX Tjρ h N [ T jρ ,T ( j +1) ρ ) ( k ) ii ≤ ε (1 − ε ) g τ jρ ω ( | k | + jρ ) . (57)Since N ∞ ( k ) = P ∞ j =0 N [ T jρ ,T ( j +1) ρ ) ( k ), the above bound (57) implies (52). (cid:3) Speed for the truncated process. Recall that ρ < ∞ is fixed and recall Warning4.1. Here we follow the usual notation, indicating explicitly ρ , and we also write P ω,ζ,ρ instead of P ω,ζ to stress the dependence on ρ . Proposition 4.5. Fix ρ < + ∞ . For P –a.a. ω ∈ Ω it holds v Xρ ( λ ) := lim n →∞ X ρn n = ρE [ ℓ ] E ′ [ E ω,ζ [ T ρℓ ρ ]] = ρε E ′ [ E ω,ζ [ T ρℓ ρ ]] P ω,ρ –a.s. (58) where ε is the same as in Lemma 3.15. Moreover, v Xρ ( λ ) does not depend on ω and v Xρ ( λ ) ∈ ( c , c ) (59) for strictly positive constants c , c , which do neither depend on ω nor on ρ . D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 19 Proof. We work on the probability space (Θ , P ⊗ P ⊗ P ω,ζ,ρ ) where Θ := { , } N + × Ω × Z N .For n ∈ [ T ρℓ k ρ , T ρℓ k +1 ρ ) we have ℓ k +1 ρ − ( T ρℓ k +1 ρ − T ρℓ k ρ ) ρ < X ρn < ℓ k +1 ρ (note in particularthat X ρn has to be thought as a function on Θ). It then follows ℓ k +1 ρ − ( T ρℓ k +1 − T ρℓ k ρ ) ρT ρℓ k +1 ρ < X ρn n < ℓ k +1 ρT ρℓ k ρ . (60)Due to the cycle stationarity and ergodicity stated in Lemma 4.2, we let n → ∞ in(60) and obtain that the limit in (58) holds P ⊗ P ⊗ P ω,ζ,ρ –a.s. On the other hand | X ρn /n | ≤ ρ . Hence, by the dominated convergence theorem, E [ X ρn /n ] converges to ρE [ ℓ ] / E ′ [ E ω,ζ [ T ρℓ ρ ]]. To conclude the proof of (58), it is enough to recall that, by aver-aging P ω,ζ,ρ over ζ , one obtains the law P ω,ρ of the original random walk ( X ρn ) n ≥ .Finally, we observe that v Xρ does not depend on ω since the last term in (58) doesn’t,and that v Xρ ( λ ) ∈ ( c , c ) due to (48) and (49). (cid:3) Stationary distribution Q ρ of the environment viewed from the ρ –walker In this section we assume that E (cid:2) e (1 − λ ) x (cid:3) < ∞ and we fix ρ < ∞ . We consider theprocess environment viewed from the ρ –walker , which is the Markov chain ( τ X ρn ω ) n ∈ N onthe space of environments Ω with transition mechanism induced by P ω,ρ . When startingwith initial distribution Q , we denote by P ρQ its law as probability distribution on Ω N .Lemma 4.2 and bound (49) in Lemma 4.3 guarantee (cf. [27, Sec. 4, Chapter 8]) theexistence of a stationary distribution Q ρ of the process environment viewed from the ρ –walker , such that Q ρ is absolutely continuous with respect to P .From [27][Eq. (4.14), Chapter 8], Q ρ can be characterized by its expectation: E ρ [ f ( ω )] = 1 E ′ [ E ω,ζ,ρ [ T ℓ ρ ]] E ′ h E ω,ζ,ρ h T ℓ ρ X k =1 f ( τ X k ω ) ii . (61)As in [8][Prop. 3.4] one can prove that Q ρ is absolutely continuous to P with Radon–Nikodym derivative d Q ρ d P ( ω ) = 1 E ′ [ E ω,ζ,ρ [ T ℓ ρ ]] X k ∈ Z EE τ − k ω,ζ,ρ h N T ℓ ρ ( k ) i . (62)Note that the denominator in the r.h.s. 
is finite due to (49) and the numerator is positive.As a consequence, P is also absolutely continuous to Q ρ . Lemma 5.1. Fix ρ ∈ N + . Then Q ρ is ergodic with respect to shifts for the environmentseen from the ρ -walker. Remark 5.2. The above ergodicity means that any Borel subset of the path space Ω N ,which is left invariant by shifts, has P ρ Q ρ –probability equal to or .Due to Theorem 6.9 in [28] (cf. also [22, Chapter IV] ), the above ergodicity is equivalentto the following fact: Q ρ ( A ) ∈ { , } whenever A ⊂ Ω is an invariant Borel set, in the sensethat “ τ X ρn ω ∈ A for any n ∈ N ” holds Q ρ ⊗ P ω,ρ –a.s. on { ω ∈ A } and “ τ X ρn ω ∈ A c for any n ∈ N ” holds Q ρ ⊗ P ω,ρ –a.s. on { ω ∈ A c } . As usual, Q ρ ⊗ P ω,ρ is the probability measure on Ω × Z N such that the expectation of a function f is given by R Q ρ ( dω ) E ω,ρ (cid:2) f ( ω, ( X n ) n ≥ ) (cid:3) . Proof of Lemma 5.1. The proof can be obtained as in [8, page 735–736]. The only differ-ence is that in [8] the authors use their formula (29), which is not satisfied in our case. Moreprecisely , they use their formula (29) to argue that 0 < P ( A ) < Q ρ –nontrivialset A . On the other hand, this claim follows simply from the absolute continuity of Q ρ to P . (cid:3) The rest of this section is devoted to the proof of the following result: Proposition 5.3. Suppose E (cid:2) e (1 − λ ) x (cid:3) < ∞ and that u : R × R → R is continuous.Then the sequence ( Q ρ ) ρ ∈ N + converges weakly to a unique measure Q ∞ as ρ → ∞ . Q ∞ isabsolutely continuous to P and, P –a.s., < γ ≤ d Q ∞ d P ≤ F (cf. (64) ). Furthermore, Q ∞ isinvariant and ergodic for the dynamics from the point of view of the ∞ –walker. Having Lemma 3.8 and Lemma 5.9 below, Proposition 5.3 can be proved by the samearguments used in [8, p. 735], with some slight modifications. For completeness, we givethe proof in Appendix A.5.1. Upper bound for the Radon-Nikodym derivative d Q ρ /d P .Proposition 5.4. Suppose E [ e (1 − λ ) x ] < ∞ . Then, uniformly in ρ ∈ N + , d Q ρ d P ( ω ) ≤ F ( ω ) P − a.s., (63) where F ( ω ) := Kπ (0) ∞ X j =0 ( j + 2) e − λx j +(1 − λ )( x j +1 − x j ) , (64) for some constant K > . Moreover, E [ F ] < ∞ . Before proving Prop. 5.4 we state a technical result: Lemma 5.5. Let F ∗ ( ω ) := K P ∞ i =0 ( i + 1)e − λx i +(1 − λ )( x i +1 − x i ) , with K as in Proposition3.10. Then ∞ X j =0 g τ jρ ω ( | k | + jρ ) ≤ ∞ X r =0 g τ r ω ( | k | + r ) ≤ π ( −| k | ) F ∗ ( ω ) . (65) Proof. The first inequality in (65) is trivial. We prove the second one. By (37) and (41)we can write ∞ X r =0 g τ r ω ( | k | + r ) ≤ K π ( −| k | ) ∞ X r =0 ∞ X j =0 e − λx j + r +(1 − λ )( x j +1+ r − x j + r ) ≤ K π ( −| k | ) ∞ X i =0 ( i + 1)e − λx i +(1 − λ )( x i +1 − x i ) (cid:3) We can now prove Prop. 5.4: Proof of Prop. 5.4. Due to (48) and (62) we can bound, for some constant C > d Q ρ d P ( ω ) ≤ Cρ [ H + ( ω ) + H − ( ω )] (66) Ergodicity means that the law P ∞ Q ∞ on the path space Ω N is ergodic with respect to shifts (cf. Remark5.2). D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 21 where H + ( ω ) := X k> EE τ − k ω,ζ,ρ h N T ℓ ρ ( k ) i , H − ( ω ) := X k ≤ EE τ − k ω,ζ,ρ h N T ℓ ρ ( k ) i . As a byproduct of Lemma 4.4 and Lemma 5.5 it holds (see the proof of (39) for theequality below) H − ( ω ) ≤ ε (1 − ε ) X k ≤ π ( k )[ τ − k ω ] F ∗ ( τ − k ω ) = 1 ε (1 − ε ) π (0) X k ≤ e − λx − k F ∗ ( τ − k ω ) . (67)Let us bound H + ( ω ). 
We can write ∞ X k =0 EE τ − k ω,ζ,ρ h N T ℓ ρ ( k ) i = ∞ X m =0 X k ∈ [ mρ, ( m +1) ρ ) ∞ X i =1 E h ℓ = i E τ − k ω,ζ,ρ (cid:2) N T iρ ( k ) (cid:3)i = ∞ X m =0 X k ∈ [ mρ, ( m +1) ρ ) ∞ X i =1 i X j =0 E h ℓ = i E τ − k ω,ζ,ρ h N [ T jρ ,T ( j +1) ρ ) ( k ) ii . (68)Note that, given m > j ≥ k ∈ [ mρ, ( m + 1) ρ ), it holds N [ T jρ ,T ( j +1) ρ ) ( k ) = 0, hence inthe last expression of (68) we can restrict to 0 ≤ m ≤ j ≤ i . Moreover note that (cf. (53)) E τ − k ω,ζ,ρ h N [ T jρ ,T ( j +1) ρ ) ( k ) i ≤ ε (1 − ε ) E τ − k ω,ρ h N [ T jρ ,T ( j +1) ρ ) ( k ) i . (69)Consider then the case k ∈ [ mρ, ( m + 1) ρ ) with 0 ≤ m ≤ j ≤ i . Note that X ρT jρ ∈ [ jρ, ( j + 1) ρ ) due to the maximal length of the jump. Fix y ∈ [ jρ, ( j + 1) ρ ). Then, for anyenvironment ω , we have E ω,ρy h N [ T jρ ,T ( j +1) ρ ) ( k ) i = E ω,ρy h N T ( j +1) ρ ( k ) i ≤ ( g τ k ω (0) if j = m ,g τ jρ ω ( jρ − k ) if j > m . (70)Indeed, consider first the case j > m . Then k < y and by Prop. 3.10 E ω,ρy h N T ( j +1) ρ ( k ) i ≤ E ω,ρy [ N T ∞ ( k )] = E τ y ω,ρ [ N T ∞ ( k − y )] ≤ g τ y ω ( y − k ) (71)Write y = jρ + ℓ and ω ′ := τ jρ ω . Then we have g τ y ω ( y − k ) = g τ ℓ ω ′ ( jρ − k + ℓ ) ≤ g ω ′ ( jρ − k ) = g τ jρ ω ( jρ − k )(in the last step we have used (39)). This proves (70) for j > m . If j = m we bound (bythe Markov property at the first visit of k ) E ω,ρy h N T ( j +1) ρ ( k ) i ≤ E ω,ρy [ N T ∞ ( k )] ≤ E ω,ρk [ N T ∞ ( k )] = E τ k ω,ρ [ N T ∞ (0)] . At this point (70) for j = m follows from Prop. 3.10.The above bound (70), the Markov property and (69) imply E τ − k ω,ζ,ρ h N [ T jρ ,T ( j +1) ρ ) ( k ) i ≤ ε (1 − ε ) · ( g ω (0) if j = m ,g τ jρ − k ω ( jρ − k ) if j > m . (72)Coming back to (68) and due to the above observations we can bound H + ( ω ) ≤ ∞ X k =0 EE τ − k ω,ζ,ρ h N T ℓ ρ ( k ) i ≤ A ( ω ) + B ( ω ) , (73) where (distinguishing the cases m = j and m < j ) A ( ω ) := ∞ X i =1 i X j =0 X k ∈ [ jρ, ( j +1) ρ ) E [ ℓ = i ] g ω (0) = ρ ( E ( ℓ ) + 1) g ω (0) , (74) B ( ω ) := ∞ X i =1 i X j =1 j − X m =0 X k ∈ [ mρ, ( m +1) ρ ) E [ ℓ = i ] g τ jρ − k ω ( jρ − k ) . For what concerns B ( ω ) observe that P ∞ i = j E [ ℓ = i ] = (1 − ε ) j − , hence B ( ω ) ≤ ∞ X j =1 (1 − ε ) j − j − X m =0 X k ∈ [ mρ, ( m +1) ρ ) g τ jρ − k ω ( jρ − k )= ∞ X j =1 (1 − ε ) j − X k ∈ [0 ,jρ ) g τ jρ − k ω ( jρ − k ) = ∞ X j =1 (1 − ε ) j − jρ X h =1 g τ h ω ( h ) . (75)Since, by (39), g τ h ω ( h ) ≤ g ω (0), we get that B ( ω ) ≤ P ∞ j =1 (1 − ε ) j − jρg ω (0) = Cρg ω (0).Combining this estimate with (74) we conclude that H + ( ω ) ≤ Cρg ω (0). Coming back to(67) and (66) we have d Q ρ d P ( ω ) ≤ Cπ (0) X k ≤ e − λx − k F ∗ ( τ − k ω ) + Cg ω (0) ≤ ˆ Cπ (0) X k ≤ e − λx − k F ∗ ( τ − k ω ) . (76)Since x a ( τ − k ω ) = x a − k ( ω ) − x − k ( ω ), by definition of F ∗ (and setting r = i − k ) we canwrite X k ≤ e − λx − k F ∗ ( τ − k ω ) = K X k ≤ X i ≥ ( i + 1)e − λx i − k +(1 − λ )( x i − k +1 − x i − k ) = K X r ≥ e − λx r +(1 − λ )( x r +1 − x r ) ( r + 1)( r + 2)2 . (77)As byproduct of (76) and (77) we get (63).Finally, by using (37) and that x j ≥ jd for j ≥ 0, we can bound E [ F ] ≤ C P j ≥ ( j +2) e − λj E [e (1 − λ ) x ]. Since by assumption E [e (1 − λ ) x ] < ∞ , we conclude that E [ F ] < ∞ . (cid:3) Uniform lower bound for d Q ρ /d P . We remark that, following the proof of Propo-sition 3.4 in [8], we could easily obtain a lower bound on d Q ρ /d P which is independent of ρ , but which would in principle depend on the particular argument ω . 
Here we will domore: We will exhibit a lower bound that is uniform in both ρ and ω (see Corollary 5.8below).For fixed ω ∈ Ω, we denote by Q ωn the empirical measure at time n for the environmentviewed from the ρ –walker. More precisely, Q ωn is a random probability measure on Ωdefined as Q ωn := 1 n n X j =1 δ τ Xρj ω . D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 23 Averaging over the paths of the walk we obtain the probability E ω,ρ [ Q ωn ( · )]. For fixed ω ∈ Ω, we define another probability measure on Ω, given by R ωn := 1 m ( n ) m ( n ) X j =1 δ τ j ω , where m ( n ) := n · v Xρ / v Xρ is the positive limiting speed of the truncated randomwalk given in (58) (we are omitting the dependence on λ ; the 1 / R ωn and E ω,ρ [ Q ωn ( · )] can be thought of as random variables on (Ω , P )with values in P (Ω), the space of probability measures on Ω endowed with the weaktopology. Note also that P , Q ρ ∈ P (Ω). Furthermore, Q ωn can be thought of as a randomvariable on the probability space (Ω × Z N , P ⊗ P ω,ρ ) with values in P (Ω). Proposition 5.6. For P –almost every ω ∈ Ω we have that R ωn → P and E ω,ρ [ Q ωn ( · )] → Q ρ weakly in P (Ω) . Moreover, P ⊗ P ω,ρ -a.s., we have that Q ωn → Q ρ weakly in P (Ω) .Proof. The a.s. convergence of R ωn to P comes directly from the ergodicity of P with respectto shifts.We claim that Q ωn → Q ρ weakly in P (Ω), Q ρ ⊗ P ω,ρ -a.s. This follows from Birkhoff’sergodic theorem applied to the Markov chain τ X ρn ω starting from the ergodic distribution Q ρ (cf. Lemma 5.1). As already observed after equation (62), P is absolutely continuousto Q ρ . Hence, due to the above claim, Q ωn → Q ρ weakly in P (Ω) also P ⊗ P ω,ρ -a.s.Finally, the last a.s. convergence and the dominated convergence theorem imply that E ω,ρ [ Q ωn ( · )] → Q ρ weakly in P (Ω), P –a.s. (cid:3) Lemma 5.7. There exists γ > , depending neither on ω nor on ρ , such that the followingholds: For P -almost every ω , there exists an ¯ n ω such that, ∀ n ≥ ¯ n ω , E ω,ρ [ Q ωn ( τ k ω )] R ωn ( τ k ω ) > γ , k = 1 , ..., m ( n ) . Proof. For all k = 1 , ..., m ( n ), we have E ω,ρ [ Q ωn ( · )] ≥ n P ω,ρ ( ∃ j ≤ n : X j = k ) δ τ k ω . (78)We claim that, for n big enough and k = 1 , ..., m ( n ), it holds P ω,ρ ( ∃ j ≤ n : X j = k ) ≥ ε, (79)where ε > P ω,ρ ( ∃ j ≤ n : X j = k ) ≥ P ω,ρ ( X T k = k, T k ≤ n ) ≥ P ω,ρ ( X T k = k ) − P ω,ρ ( T k > n ) ≥ ε − P ω,ρ ( T m ( n ) > n ) , where in the last line we have used Lemma 3.15. On the other hand, we also know, bythe definition of the limiting speed, that for almost every ω ∈ Ω, there exists an ¯ n ω suchthat, ∀ n > ¯ n ω , P ω,ρ ( T m ( n ) > n ) ≤ P ω,ρ ( X n < m ( n )) < ε . This completes the proof of theclaim.Hence, putting together (78) and (79), for all n ≥ ¯ n ω and k = 1 , ..., m ( n ), we have E ω,ρ [ Q ωn ( · )] ≥ εn δ τ k ω . On the other hand, by definition, R ωn ( τ k ω ) = m ( n ) for all k = 1 , ..., m ( n ) and for P –a.a. ω (since periodic environments have P –measure zero by Assumption (A3)). It then followsthat, for all k = 1 , ..., m ( n ) and for P –a.a. ω , E ω,ρ [ Q ωn ( τ k ω )] R ωn ( τ k ω ) ≥ εn m ( n ) = εv Xρ ≥ εc γ > , (80)where c is from (59). Note that γ does not depend on ω . (cid:3) We finally need to show that the lower bound extends also to the Radon-Nikodymderivative of the limiting measures. Corollary 5.8. The Radon-Nikodym derivative d Q ρ d P is uniformly bounded from below: d Q ρ d P ≥ γ , where γ is from (80) .Proof. 
Take any f ≥ R ωn hassupport in { τ k ω : k = 1 , ..., m ( n ) } guarantee that, for all n large enough, E ω,ρ [ Q ωn ( f )] ≥ γR ωn ( f ) for P –a.e. ω . Passing to the limit n → ∞ , and observing that, by Proposition 5.6, E ω,ρ [ Q ωn ( f )] → Q ρ ( f )and R ωn ( f ) → P ( f ) for P –a.e. ω , we have that Q ρ ( f ) ≥ γ P ( f ). The claim follows from thearbitrariness of f . (cid:3) The weak limit of Q ρ as ρ → ∞ . Recall the definition of the function F given in(64) and of the constant γ given in Corollary 5.8. Lemma 5.9. Suppose E (cid:2) e (1 − λ ) x (cid:3) < ∞ . Then the following holds: (i) The family of probability measures ( Q ρ ) ρ ∈ N + is tight; (ii) Any subsequential limit Q ∞ of ( Q ρ ) ρ ∈ N + is absolutely continuous to P and < γ ≤ d Q ∞ d P ≤ F P –a.s.Proof. For proving part (i), fix an increasing sequence of compact subsets K n exhaustingall of Ω. Thanks to Proposition 5.4 we have Q ρ ( K cn ) = E (cid:20) d Q ρ d P K cn (cid:21) ≤ E (cid:2) F K cn (cid:3) . Setting f n := F K cn we have that 0 ≤ f n ≤ F and f n ( ω ) → ε > Q ρ ( K cn ) ≤ ε eventually in n , hence the tightness.We turn now to (ii). By Prohorov’s Theorem there exists a sequence ρ k → ∞ such that Q ρ k converges weakly to some probability measure Q ∞ . We want to prove the absolutecontinuity of Q ∞ with respect to P . To this aim fix a measurable set A ⊂ Ω with P ( A ) = 0.We need to show that Q ∞ ( A ) = 0. Due to [4][Thm. 1.1], for each integer m ≥ G m with A ⊂ G m ⊂ Ω and P ( G m ) = P ( G m \ A ) ≤ /m . Due to thePortmanteau Theorem (cf. [4][Thm. 2.1]) we conclude that Q ∞ ( A ) ≤ Q ∞ ( G m ) ≤ lim inf ρ →∞ Q ρ ( G m ) = lim inf ρ →∞ E (cid:20) d Q ρ d P G m (cid:21) ≤ E [ F G m ] . (81)Since the sequence of subsets { G m } m ≥ can be taken decreasing and since F ∈ L ( P )(cf. Prop. 5.4), we derive that the r.h.s. of (81) goes to zero as m → ∞ by the dominatedconvergence theorem. This proves that Q ∞ ( A ) = 0, thus implying that Q ∞ ≪ P . D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 25 Let us prove that d Q ∞ d P ≤ F , P –a.s. To this aim we take G ⊂ Ω open. By the Port-manteau theorem, we have lim inf ρ →∞ Q ρ ( G ) ≥ Q ∞ ( G ). On the other hand, Q ρ ( G ) = E [ d Q ρ d P G ] ≤ E [ F G ]. Hence E [ F G ] − Q ∞ ( G ) = E h(cid:16) F − d Q ∞ d P (cid:17) G i ≥ G open. Suppose by contradiction that P ( A ) > A := { F − d Q ∞ d P < } .By [4][Thm. 1.1] there exists a decreasing sequence ( G m ) m ≥ of open subsets such that A ⊂ G m and P ( G m \ A ) ≤ /m for any m . The last bound implies that G m \ A → L ( P )as m → ∞ , hence at the cost of extracting a subsequence we can assume that G m \ A → P –a.s. as m → ∞ . By applying now the dominated convergence theorem we conclude thatlim m →∞ E [( F − d Q ∞ d P ) G m ] = E [( F − d Q ∞ d P ) A ]. By definition of A and since P ( A ) > 0, itmust be E [( F − d Q ∞ d P ) A ] < 0. On the other hand, due to (82), E [( F − d Q ∞ d P ) G m ] ≥ d Q ∞ d P ≥ γ , P –a.s., follows similar arguments. In particular, by thePortmanteau theorem, one gets that γ P ( C ) ≤ Q ∞ ( C ) for all C ⊂ Ω closed. Moreover, by[4][Thm. 1.1], for any A ⊂ Ω Borel there exists an increasing sequence ( C m ) m ≥ of closedsets such that C m ⊂ A and P ( A \ C m ) ≤ /m . (cid:3) Proof of Theorem 1: transience to the right By the discussion at the end of Section 3, it is enough to show the a.s. transience tothe right of X ∞ n and X ∞ t . Since the former is the jump chain associated to the latter, weonly need to derive the a.s. transience to the right of X ∞ n . 
To this aim, it is sufficient to show that, for any $m \in \mathbb{N}$, there exists some $n_0(m, \omega) < \infty$ such that $X^\infty_n > m$ for all $n \ge n_0(m, \omega)$.

First of all notice that, by Proposition 3.10, for $P$–almost every $\omega \in \Omega$ and $i \in \mathbb{Z}$ we have
$$E^{\omega,\infty}_i\big[N^\infty((-\infty, i])\big] \le \sum_{k=0}^\infty g_{\tau_i\omega}(k) = K\Big(\sum_{k=0}^\infty \pi(-k)[\tau_i\omega]\Big) \cdot \Big(\sum_{j=0}^\infty e^{-\lambda x_j(\tau_i\omega) + (1-\lambda)(x_{j+1}(\tau_i\omega) - x_j(\tau_i\omega))}\Big),$$
which is $P$–almost surely finite (see (37) and the discussion after Prop. 3.10). Hence
$$P^{\omega,\infty}_i\big(N^\infty((-\infty, i]) < \infty\big) = 1. \qquad (83)$$
Now fix $m \in \mathbb{N}$ and consider $T_m$, the first time the random walk is larger than or equal to $m$. Applying the Markov property at time $T_m$ and using (83) one gets the claim.

7. Proof of Theorem 1: The ballistic regime

In this section we assume that $E[e^{(1-\lambda)x_1}] < +\infty$ and that $u : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is continuous. Recall that $(Y_t)_{t \ge 0}$ and $(Y_n)_{n \ge 0}$ denote the continuous time Mott random walk and the associated jump process, respectively. Recall also the definition of the Markov chains $(X^\infty_t)_{t \ge 0}$ and $(X^\infty_n)_{n \in \mathbb{N}}$, given in Section 3, and that $P^\rho_Q$ is the law of the process environment viewed from the $\rho$–walker $(\tau_{X^\rho_n}\omega)_{n \in \mathbb{N}}$ when started with some initial distribution $Q$.

Given $\rho \in \mathbb{N}_+ \cup \{+\infty\}$, by writing $(X^\rho_n)_{n \in \mathbb{N}}$ as a functional of $(\tau_{X^\rho_n}\omega)_{n \in \mathbb{N}}$ and using the ergodicity of $Q_\rho$ (cf. Lemma 5.1 and Proposition 5.3), we get that the asymptotic velocity of $(X^\rho_n)_{n \ge 0}$ exists $P^\rho_{Q_\rho}$–a.s., and therefore $P^\rho_P$–a.s. since $Q_\rho$ and $P$ are mutually absolutely continuous:
$$v_{X^\rho}(\lambda) := \lim_{n \to \infty} \frac{X^\rho_n}{n} \qquad P^\rho_{Q_\rho}\text{–a.s. and } P^\rho_P\text{–a.s.} \qquad (84)$$
Moreover, $v_{X^\rho}(\lambda)$ does not depend on $\omega$ and can be characterized as
$$v_{X^\rho}(\lambda) = E_\rho\big[E^{\omega,\rho}[X_1]\big] = E_\rho\Big[\sum_{m \in \mathbb{Z}} m\, P^{\omega,\rho}(X_1 = m)\Big], \qquad \forall \rho \in \mathbb{N}_+ \cup \{+\infty\}. \qquad (85)$$
Here, $E_\rho$ denotes the expectation with respect to $Q_\rho$. Recall that for $\rho < \infty$ we also have an alternative representation of $v_{X^\rho}(\lambda)$ (see Proposition 4.5).

We now prove that
$$\lim_{\rho \to \infty} v_{X^\rho}(\lambda) = v_{X^\infty}(\lambda). \qquad (86)$$
By the exponential decay of the jump probabilities (see (24)), for all $\delta > 0$ there exists $m_0 \in \mathbb{N}$ such that, for all $\rho$,
$$\sum_{|m| > m_0} |m|\, P^{\omega,\rho}(X_1 = m) < \delta \qquad P\text{-a.s.}$$
We now observe that, for $\rho > |m| > 0$, we have
$$P^{\omega,\rho}(X_1 = m) = P^{\omega,\infty}(X_1 = m) = \frac{c_{0,m}(\omega)}{\sum_{k \in \mathbb{Z}} c_{0,k}(\omega)}, \qquad (87)$$
and the r.h.s. of (87) is continuous in $\omega$ due to the continuity assumption on $u$ and since $\|c_{0,k}(\cdot)\|_\infty \le e^{-(1-\lambda)dk + \|u\|_\infty}$. Since $Q_\rho \to Q_\infty$ weakly, it is now simple to get (86).

Finally, we also have that $v_{X^\infty}(\lambda) \in [c_1, c_2]$ because of the limit (86) and since, by Proposition 4.5, $v_{X^\rho}(\lambda) \in (c_1, c_2)$ for suitable strictly positive constants $c_1, c_2$.

By the previous observations and by the second identity in (16), we also obtain that the limit
$$v_Y(\lambda) := \lim_{n \to \infty} \frac{Y_n}{n} \qquad (88)$$
exists $P^\infty_P$–a.s. and equals $E[Z_0]\, v_{X^\infty}(\lambda)$. As a consequence, $v_Y(\lambda)$ is deterministic, finite and strictly positive.
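As a sanity check of (84)–(88), the velocity can also be estimated by a direct Monte Carlo simulation of the jump chain. The sketch below is illustrative only: it takes $u \equiv 0$, i.i.d. gaps, and truncates the jump range to $R$ sites for numerical convenience; the gap law, the truncation and all names appearing in the code are our own choices and are not prescribed by the paper.

```python
import numpy as np

# Monte Carlo sketch of the jump chain (X_n) and its speed, cf. (84)-(87), with u = 0.

rng = np.random.default_rng(1)
lam = 0.3                                          # bias lambda in (0,1)
N_sites = 200000
Z = 1.0 + rng.exponential(0.5, size=N_sites)       # gaps Z_i >= d = 1, light exponential tail
x = np.concatenate(([0.0], np.cumsum(Z)))          # points x_0 = 0 < x_1 < ...

R = 30                                             # numerical truncation: jumps of at most R sites

def step(i):
    ks = np.arange(max(0, i - R), min(N_sites, i + R + 1))
    ks = ks[ks != i]
    w = np.exp(-np.abs(x[ks] - x[i]) + lam * (x[ks] - x[i]))   # c_{i,k} with u = 0
    return rng.choice(ks, p=w / w.sum())

n_steps, i = 20000, 0
for _ in range(n_steps):
    i = step(i)

v_X = i / n_steps            # speed in index units, estimate of v_X(lambda)
v_Y = x[i] / n_steps         # speed in space units, ~ E[Z_0] * v_X(lambda), cf. (88)
print("v_X ~", round(v_X, 3), "   v_Y ~", round(v_Y, 3))
```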
By a suitable time change we can recover the LLN for $(X^\infty_t)_{t \ge 0}$ from the LLN for $(X^\infty_n)_{n \ge 0}$ as follows. By enlarging the probability space $(\Omega^{\mathbb{N}}, P^\infty_{Q_\infty})$ with a product space, we introduce a sequence of i.i.d. exponential random variables $(\beta_n)_{n \ge 0}$ of mean one, all independent from the process environment viewed from the $\infty$–walker $(\tau_{X^\infty_n}\omega)_{n \in \mathbb{N}}$. We call $(\Omega^{\mathbb{N}} \otimes \mathbb{R}_+^{\mathbb{N}}, \bar P^\infty_{Q_\infty})$ the resulting probability space. Note that $\bar P^\infty_{Q_\infty}$ is stationary and ergodic with respect to shifts. On $(\Omega^{\mathbb{N}} \otimes \mathbb{R}_+^{\mathbb{N}}, \bar P^\infty_{Q_\infty})$ we define the random variable
$$S_n := \sum_{k=0}^{n-1} \frac{\beta_k}{r(\tau_{X^\infty_k}\omega)}, \qquad r(\omega) := \pi^\infty(0)[\omega] = \sum_{k \in \mathbb{Z}} c_{0,k}(\omega).$$
We note that $r(\omega)$ coincides with $r^\lambda_0(\omega)$ of Section 2. By the ergodicity of $\bar P^\infty_{Q_\infty}$ we have
$$\lim_{n \to \infty} \frac{S_n}{n} = E_\infty\big[1/r\big] \qquad \bar P^\infty_{Q_\infty}\text{–a.s.} \qquad (89)$$
Since, by Proposition 5.3, $Q_\infty \ll P$ and $\frac{dQ_\infty}{dP} \le F$ with $F$ defined in (64), using Lemma 3.4, Assumption (A4) and the hypothesis $E[e^{(1-\lambda)Z_0}] < +\infty$, we get
$$0 < E_\infty\big[1/r\big] \le K\, E\Big[\frac{\pi(0)}{r} \sum_{j=0}^\infty (j+2)\, e^{-\lambda x_j + (1-\lambda)(x_{j+1} - x_j)}\Big] \le K' \sum_{j=0}^\infty (j+2)\, e^{-\lambda d j}\, E\big[e^{(1-\lambda)Z_0}\big] < +\infty. \qquad (90)$$
For any $t \ge 0$ we define $n(t)$ on $(\Omega^{\mathbb{N}} \otimes \mathbb{R}_+^{\mathbb{N}}, \bar P^\infty_{Q_\infty})$ as the only integer $n$ such that $S_n \le t < S_{n+1}$; since $(X^\infty_t)_{t \ge 0}$ can be realized as $X^\infty_{n(t)}$ and, by (89) and (90), $n(t)/t \to 1/E_\infty[1/r] \in (0, \infty)$ a.s., the LLN for $(X^\infty_t)_{t \ge 0}$ follows from (84).

8. Proof of Theorem 1: The sub-ballistic regime

First we point out that it will be sufficient to prove that $v_{X^\infty}(\lambda) = 0$ a.s. for $P$–a.e. realization of the environment $\omega$. Recall the identities (15) and (16) of Section 3. By Assumptions (A1) and (A2), $\lim_{i \to \infty} \psi(i)/i = E[Z_0] < \infty$, $P$–a.s. On the other hand, as proved in Section 6, the random walks $X^\infty_n$ and $X^\infty_t$ are a.s. transient to the right. As a byproduct, due to (15) and (16), the velocities of $Y_t$ and of $Y_n$ are zero whenever the velocities of $X^\infty_t$ and of $X^\infty_n$ are zero, respectively. But we also have that $v_{X^\infty}(\lambda) = 0$ implies that the velocity of $X^\infty_t$ is zero. Indeed, the continuous time random walk $(X^\infty_t)_{t \ge 0}$ is obtained from the discrete time random walk $(X^\infty_n)_{n \ge 0}$ by the rule that, when site $k$ is reached, $X^\infty$ remains at $k$ for an exponential time with parameter $r^\lambda_k(\omega)$. Since $\sup_{k \in \mathbb{Z},\, \omega \in \Omega} r^\lambda_k(\omega) =: C < \infty$ (cf. Section 2), we can speed up $X^\infty$ by replacing all parameters $r^\lambda_k(\omega)$ by $C$. The resulting random walk can be realized as $X^\infty_{n(t)}$, where $(n(t))_{t \ge 0}$ is a Poisson process with intensity $C$. Hence, its velocity is zero whenever $v_{X^\infty}(\lambda) = 0$.

We first show in Proposition 8.1 a sufficient condition for $v_{X^\infty}(\lambda) = 0$. In Lemma 8.7 we prove that this condition is equivalent to the hypothesis (5) of Theorem 1–(iii) and in Corollary 8.8 we discuss some stronger conditions corresponding to the last statement in Theorem 1–(iii).

Proposition 8.1. Suppose that
$$E\Big[\Big(\sup_{z \le 0} P^{\omega,\infty}_z(X_1 \ge 1)\Big)^{-1}\Big] = \infty. \qquad (96)$$
Then $v_{X^\infty}(\lambda) = 0$.

A basic tool in the proof of the above proposition will be the following coupling:

Lemma 8.2 (Quantile coupling). For a distribution function $G$ and a value $u \in [0,1]$, define the function $\varphi(G, u) := \inf\{x \in \mathbb{R} : G(x) > u\}$. Let $F$ and $F'$ be two distribution functions such that $F(x) \le F'(x)$ for all $x \in \mathbb{R}$. Take $U$ to be a uniform random variable on $[0,1]$ and let $Y := \varphi(F, U)$ and $Y' := \varphi(F', U)$. Then $Y$ is distributed according to $F$, $Y'$ is distributed according to $F'$, and $Y \ge Y'$ almost surely.

The proof of the above fact can be found in [27]. Usually, as in [27], the quantile coupling is defined with $\varphi_q(G, u)$ instead of $\varphi(G, u)$, where $\varphi_q(G, u)$ is the quantile function $\varphi_q(G, u) := \inf\{x \in \mathbb{R} : G(x) \ge u\}$. One can easily prove that $\varphi(G, U) = \varphi_q(G, U)$ a.s.

Proof of Proposition 8.1. Call $F_\xi$ the distribution function of the random variable $\xi := L + G$, where $L \in \mathbb{N}$ is some constant such that
$$\frac{e^{u_{\max} - u_{\min}}\, e^{-(1-\lambda)dL}}{1 - e^{-(1-\lambda)d}} < 1, \qquad (97)$$
and $G$ is a geometric random variable with parameter $\gamma = 1 - e^{-(1-\lambda)d}$. Note that, given an integer $a$, it holds
$$1 - F_\xi(a) = \begin{cases} 1 & \text{if } a - L \le 0,\\ (1-\gamma)^{a-L} = e^{-(1-\lambda)d(a-L)} & \text{if } a - L \ge 0. \end{cases} \qquad (98)$$
In particular, given an integer $M \ge L + 2$, due to (97) we have
$$\frac{e^{u_{\max} - u_{\min}}\, e^{-(1-\lambda)d(M-1)}}{1 - e^{-(1-\lambda)d}} < e^{-(1-\lambda)d(M-1-L)} = 1 - F_\xi(M-1). \qquad (99)$$
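As an aside, the quantile coupling of Lemma 8.2 and the domination mechanism behind (99) can be checked numerically. The sketch below is purely illustrative: both distribution functions and all parameter values are our own toy choices, and only the coupling mechanism is taken from the lemma. It implements $\varphi(G, u)$, samples $\xi$ and a dominated "overshoot" variable from one common uniform variable, and verifies that the pathwise domination never fails.

```python
import numpy as np

# Sketch of the quantile coupling of Lemma 8.2: phi(G, u) = inf{x : G(x) > u}.
# We couple a toy "overshoot" law F_over (with F_over >= F_xi pointwise) with the law
# F_xi of xi = L + Geometric(gamma) and check that xi >= overshoot on every sample.

rng = np.random.default_rng(2)
lam, d, L = 0.3, 1.0, 3
gamma = 1.0 - np.exp(-(1.0 - lam) * d)

def F_xi(a):                      # distribution function of xi = L + G, G ~ Geom(gamma) on {1,2,...}
    return 0.0 if a < L + 1 else 1.0 - (1.0 - gamma) ** (int(a) - L)

def F_over(a):                    # toy overshoot law with lighter tail, so F_over(a) >= F_xi(a)
    return 0.0 if a < 1 else 1.0 - 0.5 * (1.0 - gamma) ** int(a)

def phi(F, u, x_max=10**6):       # phi(F, u) = inf{x integer : F(x) > u}
    a = 0
    while F(a) <= u and a < x_max:
        a += 1
    return a

U = rng.random(10000)
xi_samples   = np.array([phi(F_xi,   u) for u in U])   # distributed according to F_xi
over_samples = np.array([phi(F_over, u) for u in U])   # distributed according to F_over
print("domination xi >= overshoot holds on all samples:",
      bool(np.all(xi_samples >= over_samples)))
```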
We will now inductively construct a sequence of probability spaces $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^n, P^{(n)})$, on which we will define some random variables.

STEP 1. We first consider the space $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]$, whose generic element is denoted by $(\omega, \bar x, u_1)$ (we will denote the first–coordinate function again by $\omega$, without introducing a new symbol). We introduce a probability $P^{(1)}$ on $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]$ by the following rules. The marginal of $P^{(1)}$ on $\Omega$ is $P$, its marginal on $[0,1]$ is the uniform distribution and, under $P^{(1)}$, the coordinate functions $(\omega, \bar x, u_1) \mapsto \omega$ and $(\omega, \bar x, u_1) \mapsto u_1$ are independent random variables. Finally, we require that
$$P^{(1)}\big(X^{(1)}_\cdot \in A \,\big|\, \omega, u_1\big) = P^{\omega,\infty}\big(X_\cdot \in A \,\big|\, X_{T_1} = \varphi(F^{(1)}_\omega, u_1)\big) \qquad (100)$$
for any measurable set $A \subseteq \mathbb{Z}^{\mathbb{N}}$, where $(X^{(1)}_n)_{n \in \mathbb{N}}$ is the second–coordinate function $(\omega, \bar x, u_1) \mapsto \bar x$ and
$$F^{(1)}_\omega(y) = P^{\omega,\infty}(X_{T_1} \le y), \qquad T_1 = \inf\{n \in \mathbb{N} : X^\infty_n > 0\}.$$
From now on we consider the space $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]$ endowed with the probability $P^{(1)}$. It is convenient to introduce the random variables $U_1, \xi_1, W_1$ defined as follows:
$$U_1(\omega, \bar x, u_1) := u_1, \qquad \xi_1(\omega, \bar x, u_1) := \varphi(F_\xi, u_1), \qquad W_1(\omega, \bar x, u_1) := \varphi(F^{(1)}_\omega, u_1).$$
Note that, by the quantile coupling (cf. Lemma 8.2), $\xi_1$ is distributed as $\xi$ and $W_1$ under $P^{(1)}(\cdot \,|\, \omega)$ is distributed as $X^\infty_{T_1}$ under $P^{\omega,\infty}$.

The interpretation to keep in mind is the following: $(X^{(1)}_n)_{n \in \mathbb{N}}$ plays the role of our initial random walk in environment $\omega$; $W_1$ is the overshoot at time $T_1$, i.e. how far from $0$ the random walk will land the first time it jumps beyond the point $0$; $\xi_1$ is a positive random variable that dominates $W_1$ (see Claim 8.4) and that is distributed like $\xi$.

Claim 8.3. For any integer $M \ge 1$ it holds
$$P^{\omega,\infty}(X_{T_1} \ge M) \le \sup_{z \le 0} P^{\omega,\infty}_z(X_1 \ge M \,|\, X_1 \ge 1). \qquad (101)$$

Proof of Claim 8.3. Given $j \ge 1$ and $z_1, z_2, \dots, z_{j-1} \le 0$, call $E(z_1, z_2, \dots, z_{j-1})$ the event $\{X^\infty_1 = z_1, \dots, X^\infty_{j-1} = z_{j-1}\}$. Note that, by the Markov property,
$$\frac{P^{\omega,\infty}(X_j \ge M,\, E(z_1, \dots, z_{j-1}))}{P^{\omega,\infty}(X_j \ge 1,\, E(z_1, \dots, z_{j-1}))} = \frac{P^{\omega,\infty}_{z_{j-1}}(X_1 \ge M)}{P^{\omega,\infty}_{z_{j-1}}(X_1 \ge 1)} = P^{\omega,\infty}_{z_{j-1}}(X_1 \ge M \,|\, X_1 \ge 1).$$
By the above identity we can write
$$P^{\omega,\infty}(X_{T_1} \ge M) = \sum_{j=1}^\infty \sum_{z_1, \dots, z_{j-1} \le 0} P^{\omega,\infty}\big(X_j \ge M \,\big|\, X_j \ge 1,\, E(z_1, \dots, z_{j-1})\big)\, P^{\omega,\infty}\big(X_j \ge 1,\, E(z_1, \dots, z_{j-1})\big)$$
$$\le \sup_{z \le 0} P^{\omega,\infty}_z(X_1 \ge M \,|\, X_1 \ge 1) \sum_{j=1}^\infty \sum_{z_1, \dots, z_{j-1} \le 0} P^{\omega,\infty}\big(X_j \ge 1,\, E(z_1, \dots, z_{j-1})\big) \le \sup_{z \le 0} P^{\omega,\infty}_z(X_1 \ge M \,|\, X_1 \ge 1). \qquad \square$$

Claim 8.4. The following holds: (i) $P^{(1)}(\xi_1 \ge W_1) = 1$; (ii) $\xi_1$ is independent of $\omega$ under $P^{(1)}$; (iii) $P^{(1)}(X^{(1)}_\cdot \in B \,|\, \omega) = P^{\omega,\infty}(X_\cdot \in B)$ for each measurable set $B \subset \mathbb{Z}^{\mathbb{N}}$.

Proof of Claim 8.4. In order to show (i), thanks to Lemma 8.2 we just have to prove that $F^{(1)}_\omega(x) \ge F_\xi(x)$ for all $\omega \in \Omega$ and $x \in \mathbb{R}$ (in fact, it is enough to prove it for all $x \in \mathbb{N}$). To this aim, recall the definition of $L$ (see (97)) and notice that for all $\omega \in \Omega$ and all integers $M \ge L + 2$ one has
$$1 - F^{(1)}_\omega(M-1) = P^{\omega,\infty}(X_{T_1} \ge M) \le \sup_{z \le 0} P^{\omega,\infty}_z(X_1 \ge M \,|\, X_1 \ge 1) \le \sup_{z \le 0} \frac{\sum_{j \ge M} e^{-(1-\lambda)(x_j - x_z) + u(E_z, E_j)}}{\sum_{j \ge 1} e^{-(1-\lambda)(x_j - x_z) + u(E_z, E_j)}}$$
$$\le e^{u_{\max} - u_{\min}} \sup_{z \le 0} \frac{\sum_{j \ge M} e^{-(1-\lambda)(x_j - x_z)}}{e^{-(1-\lambda)(x_1 - x_z)}} = e^{u_{\max} - u_{\min}} \sum_{j \ge M} e^{-(1-\lambda)(x_j - x_1)} \le e^{u_{\max} - u_{\min}} \sum_{j \ge M} e^{-(1-\lambda)d(j-1)}$$
$$= \frac{e^{u_{\max} - u_{\min}}\, e^{-(1-\lambda)d(M-1)}}{1 - e^{-(1-\lambda)d}} \le 1 - F_\xi(M-1), \qquad (102)$$
where in the first line we have used Claim 8.3 and in the last bound we have used (99) and the fact that $M \ge L + 2$.
This proves that $F^{(1)}_\omega(a) \ge F_\xi(a)$ for all $a \in \mathbb{N}$ with $a \ge L + 1$. The same inequality trivially holds also for $a \le L$, since in this case $F_\xi(a) = 0$ (because $\xi > L$).

Part (ii) is clear since $\xi_1$ is determined only by $U_1$, while $U_1$ and $\omega$ are independent by construction.

For part (iii) take some measurable set $B \subset \mathbb{Z}^{\mathbb{N}}$ and notice that (recalling (100) and the independence of $\omega$ and $U_1$)
$$P^{(1)}(X^{(1)}_\cdot \in B \,|\, \omega) = \int_{[0,1]} P^{(1)}(X^{(1)}_\cdot \in B \,|\, \omega, U_1 = u)\, P^{(1)}(U_1 \in du) = \int_{[0,1]} P^{\omega,\infty}\big(X_\cdot \in B \,\big|\, X_{T_1} = \varphi(F^{(1)}_\omega, u)\big)\, du$$
$$= \sum_{j=1}^\infty P^{\omega,\infty}(X_\cdot \in B \,|\, X_{T_1} = j)\, P^{\omega,\infty}(X_{T_1} = j) = P^{\omega,\infty}(X_\cdot \in B). \qquad \square$$

STEP k+1. Suppose now we have achieved our construction up to step $k$. In particular, we have built the probability $P^{(k)}$ on the space $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^k$ and several random variables on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^k, P^{(k)})$ that we list:
• $U_1, \dots, U_k$, independent and uniformly distributed random variables such that $(U_1, \dots, U_k)$ is the projection function on $[0,1]^k$;
• $\xi_1, \dots, \xi_k$, defined as $\xi_j = \varphi(F_\xi, U_j)$, $j = 1, \dots, k$;
• $(X^{(k)}_n)_{n \ge 0}$, defined as the projection function on $\mathbb{Z}^{\mathbb{N}}$, whose law under $P^{(k)}(\cdot \,|\, \omega)$ is $P^{\omega,\infty}$;
• $W_1, W_2, \dots, W_k$ such that $P^{(k)}(\xi_i \ge W_i \text{ for all } i = 1, \dots, k) = 1$.

We introduce a probability $P^{(k+1)}$ on $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^{k+1}$ by the following rules. The marginal of $P^{(k+1)}$ on $\Omega$ is $P$, its marginal on $[0,1]^{k+1}$ is the uniform distribution and, under $P^{(k+1)}$, the projection functions $(\omega, \bar x, u_1, \dots, u_{k+1}) \mapsto \omega$ and $(\omega, \bar x, u_1, \dots, u_{k+1}) \mapsto (u_1, \dots, u_{k+1})$ are independent random variables. Finally, we require that
$$P^{(k+1)}\big(X^{(k+1)}_\cdot \in A \,\big|\, \omega, u_1, \dots, u_k, u_{k+1}\big) = P^{(k)}\Big(X^{(k)}_\cdot \in A \,\Big|\, \omega, u_1, \dots, u_k,\, X^{(k)}_{T_{k+1}} = \xi_1 + \dots + \xi_k + \varphi\big(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u_{k+1}\big)\Big) \qquad (103)$$
for any measurable set $A \subseteq \mathbb{Z}^{\mathbb{N}}$, where
$$F^{(k+1)}_{\omega, u_1, \dots, u_k}(y) := P^{(k)}\big(X^{(k)}_{T_{k+1}} \le \xi_1 + \dots + \xi_k + y \,\big|\, \omega, u_1, \dots, u_k\big), \qquad T_{k+1} := \inf\{n \in \mathbb{N} : X^{(k)}_n > \xi_1 + \dots + \xi_k\}.$$
Note that $T_{k+1}$ is a random variable on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^k, P^{(k)})$. We stress that the conditional probability in the r.h.s. of (103) has to be thought of as the regular conditional probability $P^{(k)}(\cdot \,|\, \omega, u_1, \dots, u_k)$ further conditioned on the event $\{X^{(k)}_{T_{k+1}} = \xi_1 + \dots + \xi_k + \varphi(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u_{k+1})\}$.

[Figure 1: $T_{k+1}$ is the first time the random walk overjumps the point $\xi_1 + \dots + \xi_k$. The overshoot $w(u_{k+1})$ is dominated by $\xi(u_{k+1})$ by construction.]

Claim 8.5. The marginal of $P^{(k+1)}$ on $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^k$ is exactly $P^{(k)}$.

Proof of Claim 8.5. Since the marginal of $P^{(k+1)}$ along the coordinate $u_{k+1}$ is the uniform distribution, by integrating (103) over $u_{k+1}$ we get
$$P^{(k+1)}\big(X^{(k+1)}_\cdot \in A \,\big|\, \omega, u_1, \dots, u_k\big) = \sum_{j=1}^\infty P^{(k)}\big(X^{(k)}_\cdot \in A \,\big|\, \omega, u_1, \dots, u_k,\, X^{(k)}_{T_{k+1}} = \xi_1 + \dots + \xi_k + j\big) \int_0^1 \mathbf{1}\big(\varphi(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u) = j\big)\, du. \qquad (104)$$
Above we have used Lemma 8.2 to deduce that $\varphi(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u)$ takes integer values. Applying again Lemma 8.2 and the definition of $F^{(k+1)}_{\omega, u_1, \dots, u_k}$ we have
$$\int_0^1 \mathbf{1}\big(\varphi(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u) = j\big)\, du = P^{(k)}\big(X^{(k)}_{T_{k+1}} = \xi_1 + \dots + \xi_k + j \,\big|\, \omega, u_1, \dots, u_k\big). \qquad (105)$$
Plugging (105) into (104), we get
$$P^{(k+1)}\big(X^{(k+1)}_\cdot \in A \,\big|\, \omega, u_1, \dots, u_k\big) = P^{(k)}\big(X^{(k)}_\cdot \in A \,\big|\, \omega, u_1, \dots, u_k\big). \qquad (106)$$
On the other hand, the projections of $P^{(k+1)}$ and $P^{(k)}$ on $\Omega \times [0,1]^k$, i.e. along the coordinates $\omega, u_1, \dots, u_k$, are equal by construction, thus concluding the proof of our claim. $\square$

Due to the above claim, any random variable $Y$ defined on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^k, P^{(k)})$ can be thought of as a random variable on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^{k+1}, P^{(k+1)})$, by considering the map $(\omega, \bar x, u_1, \dots, u_k, u_{k+1}) \mapsto Y(\omega, \bar x, u_1, \dots, u_k)$. With some abuse of notation, we denote by $Y$ also the latter random variable. As a consequence, $U_1, \dots, U_k$, $\xi_1, \dots, \xi_k$, $W_1, \dots, W_k$ can be thought of as random variables on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^{k+1}, P^{(k+1)})$. Finally, we introduce the new random variables $U_{k+1}, \xi_{k+1}, W_{k+1}$ on $(\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^{k+1}, P^{(k+1)})$ defined as
$$U_{k+1}(\omega, \bar x, u_1, \dots, u_{k+1}) := u_{k+1}, \qquad \xi_{k+1}(\omega, \bar x, u_1, \dots, u_{k+1}) := \varphi(F_\xi, u_{k+1}), \qquad W_{k+1}(\omega, \bar x, u_1, \dots, u_{k+1}) := \varphi\big(F^{(k+1)}_{\omega, u_1, \dots, u_k}, u_{k+1}\big).$$
The interpretation is similar as in STEP 1: $W_{k+1}$ is the overshoot at time $T_{k+1}$, i.e. how far from $\xi_1 + \dots + \xi_k$ the random walk will land the first time it jumps beyond that point; $\xi_{k+1}$ is a positive random variable that dominates $W_{k+1}$ (see Claim 8.6) and that is distributed as $\xi$.

Claim 8.6. The following three facts hold true: (i) $P^{(k+1)}(\xi_{k+1} \ge W_{k+1}) = 1$; (ii) $\xi_{k+1}$ is independent of $\omega, U_1, \dots, U_k$ under $P^{(k+1)}$; (iii) for each measurable set $B \subset \mathbb{Z}^{\mathbb{N}}$, $P^{(k+1)}(X^{(k+1)}_\cdot \in B \,|\, \omega) = P^{\omega,\infty}(X_\cdot \in B)$.

Proof of Claim 8.6. The three facts can be proved in a similar way as Claim 8.4. We give the proof for completeness.

For Part (i) we want to show that $F^{(k+1)}_{\omega, u_1, \dots, u_k}(M-1) \ge F_\xi(M-1)$ for all integers $M \ge L + 2$. In fact, as for Claim 8.4, this inequality can easily be extended to all $M \in \mathbb{N}$ and the conclusion follows. First of all we notice that, by iteratively applying (103) and using Claim 8.4–(iii), we have
$$1 - F^{(k+1)}_{\omega, u_1, \dots, u_k}(M-1) = P^{(k)}\big(X^{(k)}_{T_{k+1}} \ge \xi_1 + \dots + \xi_k + M \,\big|\, \omega, u_1, \dots, u_k\big) = P^{\omega,\infty}\big(X_{\inf\{n : X_n > \xi(u_1) + \dots + \xi(u_k)\}} \ge \xi(u_1) + \dots + \xi(u_k) + M \,\big|\, D_k\big), \qquad (107)$$
where we have used the shortened notation $\xi(u) := \varphi(F_\xi, u)$ and $D_k$ is the event
$$D_k := \big\{X_{T_1} = \varphi(F^{(1)}_\omega, u_1),\; X_{\inf\{n : X_n > \xi(u_1)\}} = \xi(u_1) + \varphi(F^{(2)}_{\omega, u_1}, u_2),\; \dots,\; X_{\inf\{n : X_n > \xi(u_1) + \dots + \xi(u_{k-1})\}} = \xi(u_1) + \dots + \xi(u_{k-1}) + \varphi(F^{(k)}_{\omega, u_1, \dots, u_{k-1}}, u_k)\big\}.$$
For convenience we call
$$D'_k := \big\{X_{\inf\{n : X_n > \xi(u_1) + \dots + \xi(u_{k-1})\}} = y_k\big\}, \qquad y_k := y_k(u_1, \dots, u_k) := \xi(u_1) + \dots + \xi(u_{k-1}) + \varphi(F^{(k)}_{\omega, u_1, \dots, u_{k-1}}, u_k),$$
and $w_k := w_k(u_1, \dots, u_k) := \varphi(F^{(k)}_{\omega, u_1, \dots, u_{k-1}}, u_k)$. We also note that $\xi(u_k) \ge w_k$ $P^{(k)}$–a.s. (see the list of properties at the beginning of STEP $k+1$). Coming back to (107), by using the strong Markov property we obtain (see also the proof of Claim 8.3)
$$P^{(k)}\big(X^{(k)}_{T_{k+1}} \ge \xi_1 + \dots + \xi_k + M \,\big|\, \omega, u_1, \dots, u_k\big) = P^{\omega,\infty}\big(X_{\inf\{n : X_n > \xi(u_1) + \dots + \xi(u_k)\}} \ge \xi(u_1) + \dots + \xi(u_k) + M \,\big|\, D'_k\big)$$
$$= P^{\tau_{y_k}\omega,\infty}\big(X_{\inf\{n : X_n > \xi(u_k) - w_k\}} \ge \xi(u_k) - w_k + M\big) = \sum_{i \in \mathbb{N}_+} P^{\tau_{y_k}\omega,\infty}\big(X_i \ge \xi(u_k) - w_k + M \,\big|\, \inf\{n : X_n > \xi(u_k) - w_k\} = i\big)\, P^{\tau_{y_k}\omega,\infty}\big(\inf\{n : X_n > \xi(u_k) - w_k\} = i\big)$$
$$\le \sup_{z \le \xi(u_k) - w_k} P^{\tau_{y_k}\omega,\infty}_z(X_1 \ge M \,|\, X_1 \ge 1). \qquad (108)$$
The last inequality follows by conditioning on the position of the random walk at time $i - 1$. Knowing this, we can proceed as in (102), getting that the last term in (108) is bounded from above by $1 - F_\xi(M-1)$, as desired.

Part (ii) is clear since $\xi_{k+1}$ is determined only by $U_{k+1}$, which is independent of $\omega, U_1, \dots, U_k$ by construction.

Finally, we prove Part (iii). Since the projections of $P^{(k+1)}$ and of $P^{(k)}$ on $[0,1]^k$, i.e. along the coordinates $u_1, \dots, u_k$, are both the uniform distribution on $[0,1]^k$, integrating (106) over $u_1, \dots, u_k$ we get $P^{(k+1)}(X^{(k+1)}_\cdot \in A \,|\, \omega) = P^{(k)}(X^{(k)}_\cdot \in A \,|\, \omega)$. The claim then follows by the induction hypothesis (see the discussion at the beginning of STEP $k+1$). $\square$

Due to the results discussed above, the list of properties at the beginning of STEP $k+1$ is valid also for $P^{(k+1)}$.

STEP $+\infty$: By the Ionescu–Tulcea Extension Theorem, there exists a measure $P^{(\infty)}$ on the space $\Omega \times \mathbb{Z}^{\mathbb{N}} \times [0,1]^{\mathbb{N}}$, random variables $\xi_1, \xi_2, \dots$, $W_1, W_2, \dots$, $T_1, T_2, \dots$ and a random walk $(X^{(\infty)}_n)_{n \in \mathbb{N}}$, such that: for all measurable $A \subset \Omega$, $P^{(\infty)}(\omega \in A) = P(\omega \in A)$; the $\xi_k$'s are i.i.d., distributed like $\xi$ and independent of $\omega$; $P^{(\infty)}(X^{(\infty)}_{T_k} = \xi_1 + \dots + \xi_{k-1} + W_k) = 1$; $P^{(\infty)}(\xi_k \ge W_k) = 1$; for all measurable $B \subset \mathbb{Z}^{\mathbb{N}}$, $P^{(\infty)}\big((X^{(\infty)}_n)_{n \in \mathbb{N}} \in B \,\big|\, \omega\big) = P^{\omega,\infty}\big((X_n)_{n \in \mathbb{N}} \in B\big)$.

We are now ready to finish the proof. Notice that, under $P^{(\infty)}(\cdot \,|\, \omega)$, the differences $(T_{k+1} - T_k)_{k = 0, 1, \dots}$ have a rather complicated structure, but they stochastically dominate a sequence of pretty simple objects, call them $(S_k)_{k = 0, 1, \dots}$. Each $S_k$ is a geometric random variable of parameter
$$s_k = \sup_{z \le 0} P^{\tau_{\xi_1 + \dots + \xi_k}\omega}_z(X_1 \ge 1). \qquad (109)$$
In fact, due to Lemma 3.15, we can imagine that for each $n \ge T_k$ the random walk "attempts" to overjump $\xi_1 + \dots + \xi_k$ and manages to do so with a probability that is clearly smaller than $s_k$. By Strassen's Theorem, on an enlarged probability space with new probability $\tilde P^{(\infty)}$, we can couple each $S_k$ with $T_{k+1} - T_k$ so that $S_k \le T_{k+1} - T_k$ almost surely. Moreover, due to the strong Markov property of the random walk, all the $S_k$'s can be taken independent once we have fixed the parameters $s_k$. Now note the key fact that, since the $\xi$'s are independent of the environment and the GCD of the values attained with positive probability by the $\xi$'s is $1$, the shifts $(\tau_{\xi_1 + \dots + \xi_k}\omega)_{k \in \mathbb{N}}$ form a stationary ergodic sequence under $P^{(\infty)}$. We refer to Appendix B for a proof of this fact (see Lemma B.1). This observation allows to prove that $(S_j)_{j \in \mathbb{N}}$ is a stationary ergodic sequence with respect to shifts under $\tilde P^{(\infty)}$ (see Lemma B.3 in Appendix B).

We now take $\omega \in \Omega$ such that $\lim_{n \to \infty} X_n = +\infty$ $P^\omega$–a.s. (which holds for $P$–a.a. $\omega$ by Theorem 1–(i)). This implies that $\liminf_{n \to \infty} \frac{X_n}{n} \ge 0$ $P^\omega$–a.s. We can bound (see (1))
$$P^\omega\Big(\limsup_{n \to \infty} \frac{X_n}{n} > 0\Big) = P^{(\infty)}\Big(\limsup_{n \to \infty} \frac{X_n}{n} > 0 \,\Big|\, \omega\Big) \le P^{(\infty)}\Big(\limsup_{k \to \infty} \frac{X_{T_{k+1}}}{T_k} > 0 \,\Big|\, \omega\Big) \le P^{(\infty)}\Big(\limsup_{k \to \infty} \frac{\xi_1 + \dots + \xi_{k+1}}{\sum_{j=0}^{k-1}(T_{j+1} - T_j)} > 0 \,\Big|\, \omega\Big) \le \tilde P^{(\infty)}\Big(\limsup_{k \to \infty} \Big(\frac{\sum_{i=1}^{k+1}\xi_i}{k}\Big)\Big(\frac{\sum_{j=0}^{k-1} S_j}{k}\Big)^{-1} > 0 \,\Big|\, \omega\Big).$$
Let us concentrate on the last line. The arithmetic mean of $\xi_1, \dots, \xi_{k+1}$ converges almost surely to $L + 1/\gamma$, the mean of $\xi$, by the law of large numbers.
The arithmetic mean of $S_0, \dots, S_{k-1}$ converges instead to $E[S_0]$ by the ergodic theorem (for simplicity, we write simply $E$ for the expectation with respect to $\tilde P^{(\infty)}$). Since $E[S_0] = E[E[S_0 \,|\, s_0]] = E[1/s_0] = \infty$ by assumption, we obtain that $P^\omega\big(\limsup_{n \to \infty} \frac{X_n}{n} > 0\big) = 0$ for almost all $\omega \in \Omega$. Taking into account that $\liminf_{n \to \infty} \frac{X_n}{n} \ge 0$ $P^\omega$–a.s., we get that $\lim_{n \to \infty} \frac{X_n}{n} = 0$, $P^\omega$–a.s. $\square$

Lemma 8.7. Condition (96) is equivalent to
$$E\big[e^{(1-\lambda)Z_0 - (1+\lambda)Z_{-1}}\big] = \infty. \qquad (110)$$

Proof. We want to show that condition (110) implies (96). First of all, we claim that for all $\omega \in \Omega$ and $z \le 0$
$$P^\omega_0(X_1 \ge 1) \ge e^{2(u_{\min} - u_{\max})}\, P^\omega_z(X_1 \ge 1). \qquad (111)$$
In fact,
$$P^\omega_0(X_1 \ge 1) \ge e^{u_{\min} - u_{\max}}\, \frac{\sum_{j \ge 1} e^{-(1-\lambda)x_j}}{\sum_{j \ge 1} e^{-(1-\lambda)x_j} + \sum_{j \le -1} e^{(1+\lambda)x_j}}$$
and
$$P^\omega_z(X_1 \ge 1) \le e^{u_{\max} - u_{\min}}\, \frac{e^{(1-\lambda)x_z} \sum_{j \ge 1} e^{-(1-\lambda)x_j}}{\sum_{j \ge z+1} e^{-(1-\lambda)(x_j - x_z)} + \sum_{j \le z-1} e^{(1+\lambda)(x_j - x_z)}}.$$
Hence, (111) is satisfied if
$$\frac{\sum_{j \ge 1} e^{-(1-\lambda)x_j}}{\sum_{j \ge 1} e^{-(1-\lambda)x_j} + \sum_{j \le -1} e^{(1+\lambda)x_j}} \ge \frac{e^{(1-\lambda)x_z} \sum_{j \ge 1} e^{-(1-\lambda)x_j}}{\sum_{j \ge z+1} e^{-(1-\lambda)(x_j - x_z)} + \sum_{j \le z-1} e^{(1+\lambda)(x_j - x_z)}},$$
which is true if and only if
$$e^{-(1-\lambda)x_z}\Big(\sum_{j \ge z+1} e^{-(1-\lambda)(x_j - x_z)} + \sum_{j \le z-1} e^{(1+\lambda)(x_j - x_z)}\Big) \ge \sum_{j \ge 1} e^{-(1-\lambda)x_j} + \sum_{j \le -1} e^{(1+\lambda)x_j}.$$
Simplifying the expression (the terms with $j \ge 1$ appear on both sides), the last inequality reduces to
$$\sum_{z+1 \le j \le -1} e^{-(1-\lambda)x_j} + 1 + e^{-2x_z} \sum_{j \le z-1} e^{(1+\lambda)x_j} \ge \sum_{z+1 \le j \le -1} e^{(1+\lambda)x_j} + e^{(1+\lambda)x_z} + \sum_{j \le z-1} e^{(1+\lambda)x_j},$$
and the last inequality clearly holds since the l.h.s. terms dominate one by one the r.h.s. ones.

(111) shows that
$$P^\omega_0(X_1 \ge 1) \le \sup_{z \le 0} P^\omega_z(X_1 \ge 1) \le C \cdot P^\omega_0(X_1 \ge 1)$$
for a constant $C$ which does not depend on $\omega$. On the other hand, using estimates (22) and (23),
$$P^\omega_0(X_1 \ge 1) = \frac{\sum_{j > 0} c_{0,j}}{\sum_{j \ne 0} c_{0,j}} \le K_1 \cdot \frac{c_{0,1}}{c_{0,-1}} = K'_1 \cdot e^{-(1-\lambda)Z_0 + (1+\lambda)Z_{-1}}$$
and
$$P^\omega_0(X_1 \ge 1) \ge K_2 \cdot \frac{c_{0,1}}{c_{0,-1} + c_{0,1}} = K'_2 \cdot \frac{e^{-(1-\lambda)Z_0}}{e^{-(1+\lambda)Z_{-1}} + e^{-(1-\lambda)Z_0}}$$
for constants $K_1, K'_1, K_2, K'_2$ which do not depend on $\omega$. Hence, we have
$$(96) \iff E\Big[\frac{1}{P^\omega_0(X_1 \ge 1)}\Big] = \infty \iff E\big[e^{(1-\lambda)Z_0 - (1+\lambda)Z_{-1}}\big] = \infty. \qquad \square$$

Corollary 8.8. Suppose that $E[Z_{-1} \,|\, Z_0] \le C$ for some constant which does not depend on $\omega$ (e.g. if the $(Z_i)_{i \in \mathbb{Z}}$ are i.i.d.) and that $E[e^{(1-\lambda)Z_0}] = \infty$. Then condition (110) is satisfied and in particular $v_{X^\infty}(\lambda) = 0$.

Proof. Conditioning on $Z_0$ and using Jensen's inequality, we get
$$E\big[e^{(1-\lambda)Z_0 - (1+\lambda)Z_{-1}}\big] = E\big[e^{(1-\lambda)Z_0}\, E[e^{-(1+\lambda)Z_{-1}} \,|\, Z_0]\big] \ge E\big[e^{(1-\lambda)Z_0}\, e^{-(1+\lambda)E[Z_{-1}|Z_0]}\big] \ge e^{-(1+\lambda)C}\, E\big[e^{(1-\lambda)Z_0}\big] = \infty. \qquad \square$$
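The dichotomy expressed by Lemma 8.7 and Corollary 8.8 can be observed in simulation. The sketch below is illustrative only: it takes $u \equiv 0$, i.i.d. gaps $Z = 1 + \mathrm{Exp}(\mu)$ (so that $E[e^{(1-\lambda)Z}] < \infty$ precisely when $\mu > 1-\lambda$), and truncates the jump range to $R$ sites for numerical convenience; all parameter values and names are our own choices, and the printed numbers are finite-time estimates, not the limiting velocities.

```python
import numpy as np

# Illustration of the dichotomy of Lemma 8.7 / Corollary 8.8 for i.i.d. gaps Z = 1 + Exp(mu):
# E[e^{(1-lam)Z}] < infinity iff mu > 1 - lam.  We simulate the (truncated) jump chain with
# u = 0 for a light-tailed and a heavy-tailed gap law and report X_n / n.

rng = np.random.default_rng(3)
lam, R, n_steps, N_sites = 0.3, 30, 30000, 400000

def simulate(mu):
    Z = 1.0 + rng.exponential(1.0 / mu, size=N_sites)
    x = np.concatenate(([0.0], np.cumsum(Z)))
    i = 0
    for _ in range(n_steps):
        ks = np.arange(max(0, i - R), min(N_sites, i + R) + 1)
        ks = ks[ks != i]
        w = np.exp(-np.abs(x[ks] - x[i]) + lam * (x[ks] - x[i]))   # c_{i,k} with u = 0
        i = rng.choice(ks, p=w / w.sum())
    return i / n_steps

print("mu = 2.0 (ballistic regime)     X_n/n ~", round(simulate(2.0), 4))
print("mu = 0.5 (sub-ballistic regime) X_n/n ~", round(simulate(0.5), 4))
# The second run slows down markedly: large gaps act as traps, and in the limit the speed is zero.
```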
Appendix A. Proof of Proposition 5.3

By the tightness stated in Lemma 5.9, $(Q_\rho)_{\rho \in \mathbb{N}_+}$ admits some limit point, and any limit point $Q_\infty$ is absolutely continuous with respect to $P$, with Radon–Nikodym derivative $\frac{dQ_\infty}{dP}$ bounded by $F$ from above and by $\gamma$ from below.

We now show that any limit point is an invariant distribution of the process given by the environment viewed from the walker without truncation, $(\tau_{X^\infty_n}\omega)_{n \in \mathbb{N}}$. To this end, let $(Q_{\rho_k})_{k \ge 1}$ be a subsequence weakly converging to some probability $Q_\infty$ on $\Omega$. We take a bounded continuous function $f$ on $\Omega$ (without loss of generality we assume $\|f\|_\infty \le 1$) and we write
$$\big|E_\infty[f(\omega)] - E_\infty E^{\omega,\infty}[f(\tau_{X_1}\omega)]\big| \le \big|E_\infty[f(\omega)] - E_{\rho_k}[f(\omega)]\big| + \big|E_{\rho_k} E^{\omega,\rho_k}[f(\tau_{X_1}\omega)] - E_\infty E^{\omega,\rho_k}[f(\tau_{X_1}\omega)]\big| + \big|E_\infty E^{\omega,\rho_k}[f(\tau_{X_1}\omega)] - E_\infty E^{\omega,\infty}[f(\tau_{X_1}\omega)]\big| =: B_1 + B_2 + B_3. \qquad (112)$$
Above, $E_\infty$ is the expectation with respect to the measure $Q_\infty$ and in the second line we have used the fact that $E_{\rho_k}$, the expectation with respect to the measure $Q_{\rho_k}$, is invariant for the process $(\tau_{X^{\rho_k}_n}\omega)_{n \in \mathbb{N}}$. The term $B_1$ goes to zero as $k \to \infty$ since $Q_{\rho_k} \to Q_\infty$. To deal with the term $B_2$ we observe that, by Lemma 3.8, for any $\delta > 0$ there exists $h$ such that, for any $\rho \in \mathbb{N}_+ \cup \{\infty\}$,
$$P^{\omega,\rho}(|X_1| > h) < \delta, \qquad P\text{-a.s.} \qquad (113)$$
Then, for $\rho_k \ge h$, we write
$$B_2 \le \Big|E_{\rho_k}\Big[\sum_{|j| \le h} P^{\omega,\rho_k}(X_1 = j) f(\tau_j\omega)\Big] - E_\infty\Big[\sum_{|j| \le h} P^{\omega,\rho_k}(X_1 = j) f(\tau_j\omega)\Big]\Big| + 2\delta$$
$$\le \Big|E_{\rho_k}\Big[\sum_{|j| \le h} P^{\omega,\infty}(X_1 = j) f(\tau_j\omega)\Big] - E_\infty\Big[\sum_{|j| \le h} P^{\omega,\infty}(X_1 = j) f(\tau_j\omega)\Big]\Big| + E_{\rho_k}\big[P^{\omega,\infty}(|X_1| > h)\big] + E_\infty\big[P^{\omega,\infty}(|X_1| > h)\big] + 2\delta$$
$$\le \Big|E_{\rho_k}\Big[\sum_{|j| \le h} P^{\omega,\infty}(X_1 = j) f(\tau_j\omega)\Big] - E_\infty\Big[\sum_{|j| \le h} P^{\omega,\infty}(X_1 = j) f(\tau_j\omega)\Big]\Big| + 4\delta.$$
Note that we have used (113) in the first and third estimates. For the second bound we have used that, for $h \le \rho_k$, $P^{\omega,\rho_k}(X_1 = j) = P^{\omega,\infty}(X_1 = j)$ for $0 < |j| \le \rho_k$, while $P^{\omega,\rho_k}(X_1 = 0) = 1 - \sum_{j : 0 < |j| \le \rho_k} P^{\omega,\infty}(X_1 = j)$ and $P^{\omega,\infty}(X_1 = 0) = 0$ (cf. (14)).

By the continuity assumption on $u$ and since $\|c_{0,k}(\cdot)\|_\infty \le e^{-(1-\lambda)dk + u_{\max}}$, the map $\Omega \ni \omega \mapsto P^{\omega,\infty}(X_1 = j) = \frac{c_{0,j}(\omega)}{\sum_{i \in \mathbb{Z}} c_{0,i}(\omega)} \in \mathbb{R}_+$ is continuous. Hence, using that $Q_{\rho_k}$ converges to $Q_\infty$ as $k \to \infty$, we can choose $k$ large enough so that $B_2 \le 5\delta$. The term $B_3$ is also bounded by $2\delta$ for $k$ big enough, again by (113). Altogether, letting $k \to \infty$ and then $\delta \downarrow 0$, (112) implies that $Q_\infty$ is invariant for $(\tau_{X^\infty_n}\omega)_{n \in \mathbb{N}}$ with transition mechanism induced by $P^{\omega,\infty}$.

Having that $Q_\infty \ll P$, the ergodicity of $Q_\infty$ can be proved in the same way as Lemma 5.1.

It remains to prove uniqueness of the limit point. To this aim, take two limit points $Q_\infty$ and $Q'_\infty$ of $(Q_\rho)_{\rho \in \mathbb{N}_+}$. Recall that we write $P^\infty_{Q_\infty}$ and $P^\infty_{Q'_\infty}$ for the laws on the path space $\Omega^{\mathbb{Z}}$ of the Markov chain $(\tau_{X^\infty_n}\omega)_{n \in \mathbb{N}}$, induced by $P^{\omega,\infty}$, with initial distributions $Q_\infty$ and $Q'_\infty$, respectively. As proved above, $P^\infty_{Q_\infty}$ and $P^\infty_{Q'_\infty}$ are stationary and ergodic with respect to shifts. In particular, they must be either mutually singular or equal. They cannot be singular, since $Q_\infty$ and $Q'_\infty$ are both mutually absolutely continuous with respect to $P$ by Lemma 5.9 and therefore absolutely continuous with respect to each other. Hence, $P^\infty_{Q_\infty}$ and $P^\infty_{Q'_\infty}$ are equal, and therefore $Q_\infty = Q'_\infty$.

Appendix B. Ergodic issues

In Lemmas B.1 and B.3 we prove the results we used in the proof of Proposition 8.1, see the discussion after equation (109). In Lemma B.4 we prove instead an assertion on Assumption (A1) made in Subsection 2.1.

For the first technical result, we slightly change the notation to make it lighter: take $\Omega := \mathbb{R}^{\mathbb{Z}}$, the space of two-sided sequences with real values, and let $\mu$ be a stationary measure on $\Omega$, ergodic with respect to the usual shift $\tau$ for sequences. We indicate by $\omega$ an element of $\Omega$. Let $\Xi := \mathbb{N}^{\mathbb{N}}$ and let $P$ be a probability measure on it.
Under the measure $P$, $\eta = (\eta_i)_{i \in \mathbb{N}} \in \Xi$ is an i.i.d. sequence of natural numbers. We assume that the $\eta_i$'s are independent of the $\omega$'s. On the space $\Omega \times \Xi$ endowed with the product measure $L = \mu \otimes P$, we define the transformation $T : \Omega \times \Xi \to \Omega \times \Xi$ by $T(\omega, \eta) = (\tau^{\eta_1}\omega, \tau\eta)$.

Lemma B.1. Assume that the greatest common divisor of $\{k : P(\eta_1 = k) > 0\}$ equals $1$. Assume also (just for simplicity) that the $\eta_i$'s have finite expectation. Then the transformation $T$ is ergodic.

Remark B.2. The statement is not true in general without the GCD condition. Indeed, take the very simple space with only two elements, $\omega^1 = (\dots, 0, 1, 0, 1, 0, 1, \dots)$ and $\omega^2 = \tau\omega^1$, and take $\mu$ giving probability $1/2$ to each of the two elements. Then $\mu$ is ergodic with respect to $\tau$. But, if we take $\eta_i$'s that can attain only even values, then the sequence $(\tau^{\eta_1 + \dots + \eta_j}\omega)_{j \in \mathbb{N}}$ is not ergodic under $L = \mu \otimes P$.
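Before the proof, the role of the GCD assumption can be checked numerically on the two-point environment of Remark B.2. The sketch below is a toy illustration with our own naming conventions: it compares step distributions supported on $\{2\}$ (GCD 2) and on $\{1,2\}$ (GCD 1) by computing the time average of the mark observed at the origin along the trajectory $(\tau^{\eta_1 + \dots + \eta_j}\omega)_j$ for the two possible starting phases.

```python
import numpy as np

# Toy illustration of Lemma B.1 and Remark B.2.  The environment is the period-2 sequence
# omega = (...,0,1,0,1,...), mu = uniform on {omega, tau omega}.  We compare eta_i supported
# on {2} (GCD 2) with eta_i supported on {1,2} (GCD 1).

rng = np.random.default_rng(4)

def time_average(phase, step_values, n=100000):
    steps = rng.choice(step_values, size=n)           # i.i.d. shift sizes eta_1, eta_2, ...
    shifts = np.cumsum(steps)                         # eta_1 + ... + eta_j
    return np.mean([(s + phase) % 2 for s in shifts]) # mark seen at the origin, omega_k = (k+phase) mod 2

for label, step_values in [("GCD 2 (only even steps)", [2]), ("GCD 1 (steps 1 or 2)", [1, 2])]:
    a0 = time_average(0, step_values)   # start from omega^1
    a1 = time_average(1, step_values)   # start from omega^2 = tau omega^1
    print(label, ": averages from the two starting phases =", round(a0, 3), round(a1, 3))

# With only even steps the two averages differ (0 vs 1): the skew product is not ergodic.
# With GCD 1 both averages are close to 1/2, in agreement with Lemma B.1.
```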
Since ( s k ) k ≥ , under P ( ∞ ) , is stationary, one gets easily thestationarity of ( S k ) k ≥ under ˜ P ( ∞ ) . Take now a shift invariant Borel set A ⊂ N N (i.e. A = { ( x , x , . . . ) ∈ N N : ( x , x , . . . ) ∈ A } ). We claim that˜ P ( ∞ ) (cid:0) ( S , S , . . . ) ∈ A (cid:1) ∈ { , } . (117)We define f : N N → R as the Borel function such that f ( s , s , s , . . . ) = ˜ P ( ∞ ) (cid:0) ( S , S , . . . ) ∈ A | s , s , . . . (cid:1) . Since A is shift invariant, A belongs to the tail σ –algebra of N N . By Kolmogorov’s 0–1law and due to the independence of S , S , . . . under ˜ P ( ∞ ) ( ·| s , s , . . . ), we get that f hasvalues in { , } .Below, for the sake of intuition we condition to events of zero probability although all canbe formalized by means of regular conditional probabilities. Using that { ( S , S , . . . ) ∈ A } = { ( S , S , . . . ) ∈ A } due to the shift invariance of A and using the definition of( S k ) k ≥ , we get f ( a , a , . . . ) = ˜ P ( ∞ ) (( S , S , . . . ) ∈ A | s = a , s = a , . . . )= ˜ P ( ∞ ) (( S , S , . . . ) ∈ A | s = a , s = a , s = a . . . )= ˜ P ( ∞ ) (( S , S , . . . ) ∈ A | s = a , s = a , . . . ) = f ( a , a , . . . )Hence f is shift invariant. By the ergodicity of ( s k ) k ≥ , we conclude that the 0 / f ( s , s , . . . ) is constant P ( ∞ ) –a.s. An integration over ( s , s , . . . ) allows to get (117). (cid:3) Lemma B.4. Consider two independent random sequences ( Z k ) k ∈ Z and ( E k ) k ∈ Z , the for-mer stationary and ergodic with respect to shifts, the latter given by i.i.d. random variables.Then the random sequence ( Z k , E k ) k ∈ Z is stationary and ergodic with respect to shifts.Proof. Call P the law of (( Z k ) k ∈ Z , ( E k ) k ∈ Z ), which is a probability measure on the space R Z × R Z , whose generic element will be denoted by ( z, e ). We write T for the shift[ T ( z, e )] k = ( z k +1 , e k +1 ). Let A be a shift–invariant Borel subset of R Z × R Z . We want toshow that P ( A ) ∈ { , } .We first claim that, given r ≥ A is independent of any set B in the σ –algebragenerated by e i with | i | ≤ r . To this aim, given ε > 0, we fix a Borel set A n ⊂ R Z × R Z belonging to the σ -algebra generated by e i , z i with | i | ≤ n , and such that P ( A ∆ A n ) ≤ ε .We take m large enough so that [ − r, r ] ∩ [ − n + m, n + m ] = ∅ . We observe that P ( A ∩ B ) = P ( A n ∩ B ) + O ( ε ) , (118) P ( A ∩ B ) = P ( T m A ∩ B ) = P ( T m A n ∩ B ) + O ( ε ) = P ( T m A n ) P ( B ) + O ( ε ) . (119)Indeed, the first identity in (119) follows from the shift invariance of A , while the secondidentity follows from the shift stationarity of P implying that P ( T m A n ∆ T m A ) ≤ ε . Toget the third identity in (119) we observe that T m A n belongs to the σ –algebra generatedby e i , z i with i ∈ [ − n + m, n + m ]. By our choice of m and due to the properties of P , weget that T m A n and B are independent, thus implying the third identity.As a byproduct of (118) and (119) and the fact that P ( T m A n ) = P ( A ) + O ( ε ), we getthat P ( A ∩ B ) = P ( A ) P ( B ) + O ( ε ). By the arbitrariness of ε we conclude the proof ofour claim.Due to our claim, A = P ( A |F ), F being the σ –algebra generated by z i , i ∈ Z . Wecan think of P ( A |F ) as function of z ∈ R Z . Due to the shift invariance of A , P ( A |F ) isshift invariant in R Z except on an event of probability zero. Due to the ergodicity of the D MOTT VARIABLE RANGE HOPPING WITH EXTERNAL FIELD 39 marginal of P along z , we conclude that P ( A |F ) is constant a.s. 
Since $\mathbf{1}_A = P(A \,|\, \mathcal{F})$, $\mathbf{1}_A$ is constant a.s., hence $P(A) \in \{0, 1\}$. $\square$

Appendix C. The nearest neighbor random walk $(X^\rho_n)_{n \ge 0}$, $\rho = 1$

The biased Mott random walk $(Y_t)_{t \ge 0}$ can be compared to the nearest neighbor random walk obtained by considering only nearest neighbor jumps on $\{x_j\}_{j \in \mathbb{Z}}$, with probability rate for a jump from $x$ to $y$ given by (3) when $x, y$ are nearest neighbors. By the same arguments as in Section 7, it is simple to show that this random walk is ballistic/sub-ballistic if and only if the same holds for $(X^\rho_n)_{n \in \mathbb{N}}$, $\rho = 1$. The latter can be easily analyzed and the following holds:

Proposition C.1. The limit $v_{X^1}(\lambda) := \lim_{n \to \infty} \frac{X^1_n}{n}$ exists $P^{\omega,1}$–a.s. for $P$–a.a. $\omega$, and it does not depend on $\omega$. Moreover, the velocity $v_{X^1}(\lambda)$ is positive if and only if condition (7) is fulfilled, otherwise it is zero.

Proof. We apply Theorem 2.1.9 in [29], using the notation therein. Since $\rho_i = c_{i,i-1}/c_{i,i+1}$, we get that $\bar S = c_{0,1}^{-1} \sum_{i=0}^\infty (c_{-i,-i-1} + c_{-i,-i+1})$. Therefore, $E(\bar S) < \infty$ if and only if $\sum_{i=0}^\infty E\big(c_{-i,-i-1}/c_{0,1}\big) < \infty$. The last condition is equivalent to (7) since the energy marks are bounded. On the other hand, $\bar F = c_{-1,0}^{-1} \sum_{i=1}^\infty (c_{i,i-1} + c_{i,i+1})$. Hence, $E(\bar F) = \infty$ if and only if $\sum_{i=0}^\infty E(c_{i,i+1}/c_{-1,0}) = \infty$. Since, when $u \equiv 0$,
$$c_{i,i+1}/c_{-1,0} = \exp\big\{(1+\lambda)Z_{-1} + 2\lambda(Z_0 + \dots + Z_{i-1}) - (1-\lambda)Z_i\big\},$$
by Assumption (A4) it follows that $E(\bar F) = +\infty$ always. The claim then follows since, by Theorem 2.1.9 in [29], $v_{X^1}(\lambda) > 0$ if $E(\bar S) < \infty$, while $v_{X^1}(\lambda) = 0$ if $E(\bar S) = \infty$ and $E(\bar F) = \infty$. $\square$
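To see the criterion of Proposition C.1 at work, one can simulate the nearest-neighbour chain directly. In the sketch below we take $u \equiv 0$, so that the right and left jump weights from site $i$ are $e^{-(1-\lambda)Z_i}$ and $e^{-(1+\lambda)Z_{i-1}}$, and i.i.d. gaps $Z = 1 + \mathrm{Exp}(\mu)$, for which condition (7) amounts to $E[e^{(1-\lambda)Z}] < \infty$, i.e. $\mu > 1-\lambda$. The parameter values, the reflection at the left edge of the sampled environment and all names are our own illustrative choices.

```python
import numpy as np

# Sketch of the comparison in Appendix C: nearest-neighbour chain with u = 0.

rng = np.random.default_rng(5)
lam, n_steps, N_sites = 0.3, 200000, 500000

def nn_speed(mu):
    Z = 1.0 + rng.exponential(1.0 / mu, size=N_sites)   # Z[i] = x_{i+1} - x_i
    i = 1                                                # start at site 1 so that Z[i-1], Z[i] exist
    for _ in range(n_steps):
        w_right = np.exp(-(1.0 - lam) * Z[i])            # weight of the jump x_i -> x_{i+1}
        w_left  = np.exp(-(1.0 + lam) * Z[i - 1])        # weight of the jump x_i -> x_{i-1}
        i += 1 if rng.random() < w_right / (w_right + w_left) else -1
        i = max(i, 1)                                    # crude reflection at the left boundary of the sample
    return i / n_steps

print("mu = 2.0 :  X_n/n ~", round(nn_speed(2.0), 4))    # E[e^{(1-lam)Z}] < infinity: positive speed
print("mu = 0.5 :  X_n/n ~", round(nn_speed(0.5), 4))    # E[e^{(1-lam)Z}] = infinity: speed ~ 0
```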
Acknowledgements. The authors are very much indebted to Noam Berger and thank him for many useful and inspiring discussions. A.F. kindly acknowledges the Short Visit Grant of the European Science Foundation (ESF) in the framework of the ESF program "Random Geometry of Large Interacting Systems and Statistical Physics", and the Department of Mathematics of the Technical University Munich for the kind hospitality. M.S. thanks the Department of Mathematics of University La Sapienza in Rome for the kind hospitality, the Department of Mathematics of the Technical University Munich where he started to work on this project as a Post Doc, and the funding received from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant agreement No 656047.

References

[1] S. Alili, Asymptotic behavior for random walks in random environments. J. Appl. Prob. 36, 334–349 (1999).
[2] V. Ambegaokar, B.I. Halperin, J.S. Langer, Hopping conductivity in disordered systems. Phys. Rev. B, 2612–2620 (1971).
[3] N. Berger, N. Gantert, Y. Peres, The speed of biased random walk on percolation clusters. Probab. Theory Relat. Fields, 221–242 (2003).
[4] P. Billingsley, Convergence of probability measures. John Wiley & Sons, New York, 1999.
[5] P. Caputo, A. Faggionato, Diffusivity of 1-dimensional generalized Mott variable range hopping. Ann. Appl. Probab., 1459–1494 (2009).
[6] P. Caputo, A. Faggionato, A. Gaudillière, Recurrence and transience for a random walk on a random point process. Electron. J. Probab., 2580–2616 (2009).
[7] P. Caputo, A. Faggionato, T. Prescott, Invariance principle for Mott variable range hopping and other walks on point processes. Ann. Inst. H. Poincaré Probab. Statist.
[8] F. Comets, S. Popov, Ballistic regime for random walks in random environment with unbounded jumps and Knudsen billiards. Ann. Inst. H. Poincaré Probab. Statist., 721–744 (2012).
[9] E. Csáki, M. Csörgő, On additive functionals of Markov chains. J. Theoret. Probab. 8, 905–919 (1995).
[10] D.J. Daley, D. Vere-Jones, An introduction to the theory of point processes. Springer, New York, 1988.
[11] A. De Masi, P.A. Ferrari, S. Goldstein, W.D. Wick, An invariance principle for reversible Markov processes. Applications to random motions in random environments. J. Stat. Phys., 787–855 (1989).
[12] R. Durrett, Probability: theory and examples.
[13] A. Faggionato, P. Mathieu, Mott law as upper bound for a random walk in a random environment. Comm. Math. Phys., 263–286 (2008).
[14] A. Faggionato, H. Schulz-Baldes, D. Spehner, Mott law as lower bound for a random walk in a random environment. Comm. Math. Phys., 21–64 (2006).
[15] P. Franken, D. König, U. Arndt, V. Schmidt, Queues and point processes. John Wiley and Sons, Chichester, 1982.
[16] N. Gantert, P. Mathieu, A. Piatnitski, Einstein relation for reversible diffusions in random environment. Comm. Pure Appl. Math. 65, 187–228 (2012).
[17] R. Lyons, Y. Peres, Probability on trees and networks. Version of 6th May 2016. Available online.
[18] A. Miller, E. Abrahams, Impurity conduction at low concentrations. Phys. Rev., 745–755 (1960).
[19] N.F. Mott, E.A. Davis, Electronic processes in non-crystalline materials. Oxford University Press, New York, 1979.
[20] C. Kipnis, S.R.S. Varadhan, Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusion. Commun. Math. Phys. (1986).
[21] M. Pollak, M. Ortuño, A. Frydman, The electron glass. Cambridge University Press, Cambridge, 2013.
[22] M. Rosenblatt, Markov processes. Structure and asymptotic behavior. Grundlehren der mathematischen Wissenschaften, Springer, Berlin, 1971.
[23] L. Shen, Asymptotic properties of certain anisotropic walks in random media. Ann. Probab. 12, 477–510 (2002).
[24] B. Shklovskii, A.L. Efros, Electronic properties of doped semiconductors. Springer, Berlin, 1984.
[25] F. Solomon, Random walks in a random environment. Ann. Probab. 3, 1–31 (1975).
[26] F. Spitzer, Principles of random walk. Second edition, Graduate Texts in Mathematics, Vol. 34, Springer-Verlag, 1976.
[27] H. Thorisson, Coupling, stationarity, and regeneration. Springer, Berlin, 2000.
[28] S.R.S. Varadhan, Probability theory. American Mathematical Society, Providence, 2001.
[29] O. Zeitouni, Random walks in random environment. XXXI Summer School in Probability, St. Flour (2001). Lecture Notes in Math. 1837, Springer, 193–312, 2004.
[30] A.-S. Sznitman, M.P.W. Zerner, A law of large numbers for random walks in random environment. Ann. Probab. 27, 1851–1869 (1999).

Alessandra Faggionato. Dipartimento di Matematica, Università di Roma "La Sapienza", P.le Aldo Moro 2, 00185 Roma, Italy
E-mail address: [email protected]

Nina Gantert. Fakultät für Mathematik, Technische Universität München, 85748 Garching, Germany
E-mail address: [email protected]

Michele Salvi. Centre de Recherche en Mathématiques de la Décision, Université Paris-Dauphine, Place du Maréchal De Lattre De Tassigny, 75775 Paris, France
E-mail address: