MIXING OF THE EXCLUSION PROCESS WITH SMALL BIAS
DAVID A. LEVIN AND YUVAL PERES
Abstract. We analyze the mixing behavior of the biased exclusion process on a path of length n as the bias $\beta_n$ tends to 0 as $n \to \infty$. We show that the sequence of chains has a pre-cutoff, and interpolates between the unbiased exclusion and the process with constant bias. As the bias increases, the mixing time undergoes two phase transitions: one when $\beta_n$ is of order $1/n$, and the other when $\beta_n$ is of order $(\log n)/n$.

1. Introduction
Suppose k particles are placed on vertices of the n-path, with no site multiply occupied. The biased exclusion process is the Markov chain $(X_t)_{t \ge 0}$ with transitions as follows:
• choose uniformly among the $n-1$ edges,
• if both vertices of the selected edge are either occupied or unoccupied, do nothing,
• if there is exactly one particle on the edge, place it on the right vertex with probability $p = (1+\beta)/2$ and on the left vertex with probability $q = (1-\beta)/2$.

Suppose for the moment that n is even and $k = n/2$. This defines a reversible ergodic Markov chain, which has a unique stationary distribution π. It is natural to ask about its mixing time,
$$ t_{\mathrm{mix}}(\varepsilon) = \min\big\{ t \ge 0 : \max_\sigma \| P_\sigma(X_t \in \cdot) - \pi \|_{TV} < \varepsilon \big\}. $$
We write $t_{\mathrm{mix}}$ for $t_{\mathrm{mix}}(1/4)$. For $\beta = 0$, Wilson (2004) proved
$$ \frac{n^3}{\pi^2}(1+o(1)) \log n \;\le\; t_{\mathrm{mix}}(\varepsilon) \;\le\; \frac{2n^3}{\pi^2}[1+o(1)] \log(n/\varepsilon), $$
and conjectured that the lower bound is sharp. Recently, Lacoin (2016) answered this, proving that the process has a cutoff, i.e.
$$ \lim_{n \to \infty} \frac{t_{\mathrm{mix}}(\varepsilon)}{n^3 \log n} = \frac{1}{\pi^2}. $$
It is worth observing that the eigenfunction lower bound method introduced in Wilson (2004) turns out to be widely applicable, giving sharp lower bounds for many models.

When $\beta > 0$ is constant, the mixing time was first studied by Benjamini, Berger, Hoffman, and Mossel (2005), who proved $t_{\mathrm{mix}} = O(n^2)$. A simpler path coupling proof was given by Greenberg, Pascoe, and Randall (2009). (This proof is repeated here as the upper bound in Theorem 9.) The purpose of this paper is to understand the mixing behavior when the bias may depend on n, and in particular when $\beta_n \to 0$ as $n \to \infty$. We show that in all cases there is a pre-cutoff, meaning that there are universal constants $c_1 < c_2$ so that
$$ c_1 \le \frac{t_{\mathrm{mix}}(1-\varepsilon)}{t_{\mathrm{mix}}(\varepsilon)} \le c_2. $$
We find that, depending on the rate at which $\beta \to 0$, the mixing time interpolates between the unbiased and constant-bias cases. Theorem 1 below summarizes our results. We write $a_n \asymp b_n$ to mean that there exist constants $0 < c_1, c_2 < \infty$, not depending on β, so that $c_1 \le a_n/b_n \le c_2$.

Theorem 1. Consider the β-biased exclusion process on $\{1, 2, \ldots, n\}$ with k particles. We assume that $k/n \to \rho \le 1/2$.
(i) If $n\beta \le 1$, then
$$ t_{\mathrm{mix}} \asymp n^3 \log n. \quad (1) $$
(ii) If $1 \le n\beta \le \log n$, then
$$ t_{\mathrm{mix}} \asymp \frac{n \log n}{\beta^2}. \quad (2) $$
(iii) If $n\beta > \log n$, then
$$ t_{\mathrm{mix}} \asymp \frac{n^2}{\beta}. \quad (3) $$

We provide more precise estimates on $t_{\mathrm{mix}}(\varepsilon)$ in Proposition 6, Proposition 7, and Theorem 9. In particular, the lower bound in (1) follows from Proposition 6, the lower bound in (2) follows from Proposition 7, and the lower bound in (3) follows from Proposition 11. The upper bounds in (2) and (3) follow from Theorem 9, and the upper bound in (1) follows from Proposition 8.

Since the behavior of the individual particles remains diffusive in the $\beta n \le 1$ regime, the transition at $\beta n \asymp 1$ is perhaps expected; the transition at $\beta n = \log n$ is a more unexpected transition. A path coupling gives useful upper bounds for $\beta \ge c/n$. When $\beta n$ is small, we use a simple coupling adapted from a coupling for (unbiased) random adjacent transpositions given in Aldous (1983). In the unbiased case, k coupled unbiased random walks must hit zero. The bias introduced when $\beta n$ is small doesn't overwhelm the diffusive motion, so the same idea works.

For lower bounds, when $\beta n \le \log n$, we use Wilson's method (introduced in Wilson (2004)). Thus we need the eigenfunction corresponding to the second eigenvalue, which we explicitly compute. When $\beta n > \log n$, we follow the left-most particle, and show it needs at least order $n^2/\beta$ moves to mix.

The organization of the paper is as follows. After giving definitions in Section 2, in Section 3 we compute the eigenfunction needed for Wilson's method, and provide the corresponding lower bounds. In particular, the lower bounds in Theorem 1 (i) and (ii) are given in Propositions 6 and 7, respectively. We give the two upper bounds in Section 4: the upper bound in (1) is given in Proposition 8, and the other upper bounds in Theorem 1 are all immediate from Theorem 9. We conclude with the single-particle lower bound needed for Theorem 1 (iii) in Section 5.
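The transition rule described above is mechanical to put in code. The following sketch is our own illustrative implementation of the dynamics (the function names are ours, not the paper's), using the particle representation:

```python
import random

def step(config, p, rng):
    """One update of the biased exclusion process.

    config: list of 0/1 values of length n (1 = particle).
    An edge (i, i+1) is chosen uniformly among the n-1 edges; if it
    carries exactly one particle, the particle is placed on the right
    endpoint with probability p = (1+beta)/2 and on the left endpoint
    with probability q = (1-beta)/2.  Otherwise nothing happens.
    """
    n = len(config)
    i = rng.randrange(n - 1)              # uniform edge (i, i+1)
    if config[i] + config[i + 1] == 1:    # exactly one particle on the edge
        if rng.random() < p:
            config[i], config[i + 1] = 0, 1
        else:
            config[i], config[i + 1] = 1, 0
    return config

rng = random.Random(0)
beta = 0.2
p = (1 + beta) / 2
config = [1, 1, 1, 0, 0, 0]
for _ in range(1000):
    step(config, p, rng)
assert sum(config) == 3                   # particle number is conserved
```

With p = 1 the dynamics are totally asymmetric and every particle is eventually absorbed at the right end of the segment; with p = 1/2 one recovers the symmetric exclusion process.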
2. Definitions

2.1. Path description. It will sometimes be convenient to use a bijection of the state-space $\{0,1\}^n$ of the particle process to the space of nearest-neighbor paths of length n which begin at 0 and have exactly k up increments and $n-k$ down increments. For a particle configuration $\sigma \in \{0,1\}^n$, let $h : \{0, 1, \ldots, n\} \to \mathbb{Z}$ be defined by $h(0) = 0$ and
$$ h(j) - h(j-1) = (-1)^{1-\sigma(j)}, $$
so occupied sites correspond to increments and vacant sites correspond to decrements of the path. See Figure 1 for an illustration.

Figure 1. The correspondence between particle representation and path representation for neighboring configurations x, y. Node 2 of the path is updated in configuration x to obtain y. This corresponds to exchanging the particle at vertex 2 with the hole at vertex 3.

The dynamics on the path are as follows: pick uniformly among the $n-1$ internal nodes of the path; if the path has a local extremum at the chosen node, refresh it, making it a local maximum with probability q and a local minimum with probability p. If the chosen node is not an extremum, do nothing. See again Figure 1 for an illustration of a transition, and Figure 2 for the possible transitions from a particular path. It will be convenient to move back and forth between the particle description and the path description, and we will freely do so.

3. Spectral Lower bounds
Here we set $\alpha = \sqrt{p/q}$; our assumption is always that $\alpha > 1$.

Proposition 2. Let $a(\alpha) \overset{def}{=} (1+\alpha^{2k-n})/(1+\alpha^{-n})$. The function Φ, defined for the path h as
$$ \Phi(h) \overset{def}{=} \sum_{x=1}^{n-1} \left( \alpha^{h(x)} - \alpha^{-x}\, a(\alpha) \right) \sin(\pi x/n), \quad (4) $$
is the second eigenfunction for the biased exclusion process, with eigenvalue
$$ 1 - \frac{1 - 2\sqrt{pq}\cos(\pi/n)}{n-1}. $$

Figure 2. The possible transitions from a given configuration.

We let $\theta = q/p$; note our convention is $\theta < 1$.
For a path h and vertex $0 \le i \le n$, let
$$ f_h(i) = \sum_{1 \le j \le i} \mathbf{1}\{\, h(j) - h(j-1) = 1 \,\} $$
be the number of up-edges before i. We have $f_h(0) = 0$ and $f_h(n) = k$. Define
$$ g^\star_h(i) = \theta^{\,i - f_h(i)} \quad \text{for } i = 0, 1, \ldots, n. $$

Lemma 3. Let $\tilde h(i)$ be the path obtained by applying an update to h at internal vertex i. Then
$$ E_h\big[ g^\star_{\tilde h(i)}(i) \big] = q\, g^\star_h(i-1) + p\, g^\star_h(i+1). \quad (5) $$

Proof. Consider the case where i is a local extremum in h. If the path at i is refreshed to a local maximum, then $f_{\tilde h(i)}(i) = f_h(i-1) + 1$, while if the path is refreshed to a local minimum, then $f_{\tilde h(i)}(i) = f_h(i+1) - 1$. Therefore,
$$ E_h\big[ g^\star_{\tilde h(i)}(i) \big] = q\,\theta^{\,i - (f_h(i-1)+1)} + p\,\theta^{\,i - (f_h(i+1)-1)} = q\, g^\star_h(i-1) + p\, g^\star_h(i+1). $$
In the case where $h(i-1) < h(i) < h(i+1)$, the update at i must leave the path unchanged. In this case, $f_h(i-1) = f_h(i) - 1$ and $f_h(i+1) = f_h(i) + 1$. Therefore,
$$ q\, g^\star_h(i-1) + p\, g^\star_h(i+1) = q\,\theta^{\,i-1-(f_h(i)-1)} + p\,\theta^{\,i+1-(f_h(i)+1)} = g^\star_h(i) = E_h\big[ g^\star_{\tilde h(i)}(i) \big]. $$
Finally, suppose $h(i-1) > h(i) > h(i+1)$; again, the update at i does not change the path. Since $f_h(i-1) = f_h(i) = f_h(i+1)$ in this case,
$$ q\, g^\star_h(i-1) + p\, g^\star_h(i+1) = q\,\theta^{(i-1) - f_h(i)} + p\,\theta^{(i+1) - f_h(i)} = ( q\theta^{-1} + p\theta )\, g^\star_h(i) = g^\star_h(i), $$
since $q\theta^{-1} + p\theta = p + q = 1$. □

For any constant c, the function $g_h(i) = g^\star_h(i) - c$ also satisfies
$$ E_h\big[ g_{\tilde h(i)}(i) \big] = q\, g_h(i-1) + p\, g_h(i+1). $$
Define
$$ a(\theta) = \frac{1+\theta^{n/2-k}}{1+\theta^{n/2}} = \frac{1+\alpha^{2k-n}}{1+\alpha^{-n}}, $$
and let
$$ c(n,k,\theta) = \frac{1+\theta^{n/2-k}}{1+\theta^{-n/2}} = \theta^{n/2} \left( \frac{1+\theta^{n/2-k}}{1+\theta^{n/2}} \right) = a(\theta)\,\theta^{n/2}. $$
Define $g_h(i) = g^\star_h(i) - c(n,k,\theta)$.

Proof of Proposition 2.
Let $\varphi : \{0, 1, \ldots, n\} \to \mathbb{R}$ satisfy $\varphi(0) = 0$, $\varphi(n) = 0$, and
$$ \lambda \varphi(x) = p\,\varphi(x-1) + q\,\varphi(x+1), \quad x = 1, \ldots, n-1. $$
That is, φ is an eigenfunction for the random walk on $\{0, 1, \ldots, n\}$ which moves up with probability q and down with probability p, with absorbing states 0 and n. A direct verification shows that
$$ \varphi(x) = \theta^{-x/2} \sin(\pi x/n), \qquad \lambda = 2\sqrt{pq}\,\cos(\pi/n) $$
is a solution. Note that
$$ g_h(0)\varphi(1)\,q + g_h(n)\varphi(n-1)\,p = [1 - c]\,\theta^{-1/2} q \sin(\pi/n) + [\theta^{n-k} - c]\,\theta^{-n/2}\theta^{1/2} p \sin(\pi - \pi/n) = \sqrt{pq}\,\sin(\pi/n)\big[ 1 + \theta^{n/2-k} - c\,[1 + \theta^{-n/2}] \big] = 0. \quad (6) $$
Define
$$ \Phi(h) = \sum_{x=1}^{n-1} g_h(x)\varphi(x). \quad (7) $$
Let $\tilde h$ be the configuration obtained after one step of the chain when started from h; as before let $\tilde h(x)$ be the update given that internal vertex x is selected for an update. Then
$$ E_h[\Phi(\tilde h)] = \sum_{x=1}^{n-1} \Big[ \Big(1 - \frac{1}{n-1}\Big) g_h(x) + \frac{1}{n-1}\, E_h[g_{\tilde h(x)}(x)] \Big] \varphi(x) = \Big(1 - \frac{1}{n-1}\Big)\Phi(h) + \frac{1}{n-1}\sum_{x=1}^{n-1} \big[ q\, g_h(x-1) + p\, g_h(x+1) \big] \varphi(x). $$
The sum on the right equals
$$ \sum_{x=1}^{n-1} g_h(x)\big[ q\varphi(x+1) + p\varphi(x-1) \big] + \big[ g_h(0)\varphi(1)\,q + g_h(n)\varphi(n-1)\,p \big] = \lambda \sum_{x=1}^{n-1} g_h(x)\varphi(x) = \lambda\,\Phi(h), $$
by (6). Therefore,
$$ E_h[\Phi(\tilde h)] = \Big( 1 - \frac{1-\lambda}{n-1} \Big) \Phi(h). $$
Note that $\varphi(x) > 0$ for $x = 1, \ldots, n-1$, and $g_h$ is increasing in h, so Φ is increasing. An increasing eigenfunction always corresponds to the second eigenvalue, so Φ must be the one with largest (non-unity) eigenvalue. The second largest eigenvalue equals
$$ 1 - \frac{1 - 2\sqrt{pq}\,\cos(\pi/n)}{n-1}. $$
Note that $h(x) = 2 f_h(x) - x$, so we have
$$ \Phi(h) = \sum_{x=1}^{n-1} g_h(x)\varphi(x) = \sum_{x=1}^{n-1} \big[ \theta^{\,x - f_h(x)} - c(n,k,\theta) \big]\, \theta^{-x/2}\sin(\pi x/n) = \sum_{x=1}^{n-1} \Big[ \alpha^{h(x)} - \theta^{(n-x)/2}\,\frac{1+\theta^{n/2-k}}{1+\theta^{n/2}} \Big] \sin(\pi x/n) = \sum_{x=1}^{n-1} \alpha^{h(x)}\sin(\pi x/n) - \xi(n,k,\alpha). $$
Let
$$ \Psi(h) \overset{def}{=} \sum_{x=1}^{n-1} \alpha^{h(x)} \sin(\pi x/n). $$
Since $\xi(n,k,\alpha)$ does not depend on h, and the eigenfunction Φ must be orthogonal to the constants, it follows that $\xi(n,k,\alpha) = E_\pi(\Psi)$. Since $\sin(\pi(n-x)/n) = \sin(\pi x/n)$,
$$ E_\pi \Psi = a(\theta) \sum_{x=1}^{n-1} \theta^{(n-x)/2} \sin(\pi x/n) = a(\theta) \sum_{x=1}^{n-1} \alpha^{-x} \sin(\pi x/n). \qquad \square $$

To apply Wilson's lower bound, we need to bound $\max_h \Phi(h)$ from below, and $R := \max |\Phi(\tilde h) - \Phi(h)|$ from above. Define
$$ h^0(x) = \begin{cases} x & x \le k, \\ 2k - x & k < x \le n. \end{cases} \quad (8) $$
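As a concrete check on Proposition 2 (a sketch we add for illustration; all function names are ours), one can verify for small n that Φ from (4) satisfies $P\Phi = \lambda_2 \Phi$ exactly, and that Φ is maximized at the extremal path $h^0$ of (8), i.e. at the configuration with all particles packed to the left:

```python
import itertools
import math

def heights(cfg):
    """Path representation: h(0) = 0, +1 increment per particle, -1 per hole."""
    h = [0]
    for s in cfg:
        h.append(h[-1] + (1 if s else -1))
    return h

def phi(cfg, n, k, alpha):
    """The eigenfunction (4): sum_x (alpha^h(x) - a(alpha) alpha^-x) sin(pi x/n)."""
    a = (1 + alpha ** (2 * k - n)) / (1 + alpha ** (-n))
    h = heights(cfg)
    return sum((alpha ** h[x] - a * alpha ** (-x)) * math.sin(math.pi * x / n)
               for x in range(1, n))

n, k, beta = 6, 3, 0.3
p, q = (1 + beta) / 2, (1 - beta) / 2
alpha = math.sqrt(p / q)
states = [tuple(1 if i in pos else 0 for i in range(n))
          for pos in itertools.combinations(range(n), k)]

def one_step(x):
    """Transition distribution from configuration x (dict: state -> probability)."""
    dist = {}
    for i in range(n - 1):                          # edge (i, i+1), prob 1/(n-1)
        if x[i] + x[i + 1] == 1:
            r = list(x); r[i], r[i + 1] = 0, 1      # particle to the right
            l = list(x); l[i], l[i + 1] = 1, 0      # particle to the left
            dist[tuple(r)] = dist.get(tuple(r), 0.0) + p / (n - 1)
            dist[tuple(l)] = dist.get(tuple(l), 0.0) + q / (n - 1)
        else:
            dist[x] = dist.get(x, 0.0) + 1.0 / (n - 1)
    return dist

lam2 = 1 - (1 - 2 * math.sqrt(p * q) * math.cos(math.pi / n)) / (n - 1)
for x in states:
    avg = sum(pr * phi(y, n, k, alpha) for y, pr in one_step(x).items())
    assert abs(avg - lam2 * phi(x, n, k, alpha)) < 1e-9   # P Phi = lambda_2 Phi

# Phi is increasing, so it is maximized at the top path h^0
assert max(states, key=lambda s: phi(s, n, k, alpha)) == (1, 1, 1, 0, 0, 0)
```

Note the eigen-relation holds exactly (not just asymptotically), reflecting the exact boundary cancellation in (6).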
Lemma 4. For $h^0$ defined in (8),
$$ \Phi(h^0) = \sum_{x=1}^{k} \alpha^x (1-\alpha^{-2x})\, a(\alpha) \sin(\pi x/n) + \sum_{x=k+1}^{n/2} \frac{(\alpha^{2k}-1)(\alpha^{-x}+\alpha^{x-n})}{1+\alpha^{-n}} \sin(\pi x/n). \quad (9) $$

Proof. Using that $\sin(\pi x/n) = \sin(\pi(n-x)/n)$, we pair together the terms at x and $n-x$ in (4) so that
$$ \Phi(h^0) = \sum_{x=1}^{k} \big( \alpha^x + \alpha^{2k-n+x} - a(\alpha)(\alpha^{-x} + \alpha^{x-n}) \big) \sin(x\pi/n) + \sum_{x=k+1}^{n/2} \big( \alpha^{2k-x} + \alpha^{2k-n+x} - a(\alpha)(\alpha^{-x} + \alpha^{x-n}) \big) \sin(x\pi/n). $$
The first sum simplifies to
$$ \sum_{x=1}^{k} \alpha^x (1-\alpha^{-2x}) \left( \frac{1+\alpha^{2k-n}}{1+\alpha^{-n}} \right) \sin(\pi x/n), $$
and the second to
$$ \sum_{x=k+1}^{n/2} \frac{(\alpha^{2k}-1)(\alpha^{-x}+\alpha^{x-n})}{1+\alpha^{-n}} \sin(\pi x/n). \qquad \square $$

Lemma 5. Let $h^0$ be as in (8), and for a path h, let $\tilde h$ be one step of the exclusion chain started from h. Let $\gamma = 1 - \lambda_2$ be the spectral gap. Define
$$ R \overset{def}{=} \max_h \big| \Phi(\tilde h) - \Phi(h) \big|. $$
If $0 < n\beta \le \log n$, then
$$ \log\left( \frac{\gamma\, \Phi(h^0)^2}{R^2} \right) \ge [1+o(1)] \log n. $$

Proof.
Fix $b < k$. From (9),
$$ \Phi(h^0) \ge \frac{\sin(\pi b/n)}{2} \sum_{x=b}^{k} \alpha^x (1-\alpha^{-2x}) = \frac{\sin(\pi b/n)}{2} \cdot \frac{\alpha^k \big(\alpha - \alpha^{-(k-b)}\big)\big(1-\alpha^{-(b+k)}\big)}{\alpha - 1}. \quad (10) $$
If $\tilde h$ is obtained by a single update to h at x, then $|\tilde h(x) - h(x)| \le 2$, and
$$ \big|\alpha^{h(x)} - \alpha^{\tilde h(x)}\big| \le 2\alpha^k \log(\alpha). $$
Thus, if $R = \max_h |\Phi(\tilde h) - \Phi(h)|$, then
$$ R \le 2\alpha^k(\alpha - 1). \quad (11) $$
Letting $b = k/2$, so that $b/n \to \rho/2$, equations (10) and (11) show that
$$ \frac{\Phi(h^0)}{R} \ge c_1 \left[ \frac{\big(\alpha - \alpha^{-k/2}\big)\big(1 - \alpha^{-3k/2}\big)}{(\alpha-1)^2} \right]. \quad (12) $$
The spectral gap $1 - \lambda_2 = \gamma$ satisfies
$$ \gamma = \frac{1 - 2\sqrt{pq}\,\cos(\pi/n)}{n-1} = \frac{\beta^2/2 + O(\beta^4) + \frac{\pi^2}{2n^2} + O(n^{-4})}{n-1}. \quad (13) $$
Suppose that $n^{-1} \le \beta \le \frac{\log n}{n}$. Then from (12) and (13) we have
$$ \log\left( \frac{\gamma\,\Phi(h^0)^2}{R^2} \right) \ge \log\left( \frac{c_2\, n}{\log^2 n} \right) = [1+o(1)]\log n. $$
If $n\beta \to \zeta$, where $0 \le \zeta \le 1$, then
$$ \liminf_{n\to\infty} \frac{\gamma\,\Phi(h^0)^2}{n R^2} \ge c_3 \left[ \frac{\big(1-e^{-\zeta\rho/2}\big)\big(1-e^{-3\zeta\rho/2}\big)}{\zeta^2} \right]^2 \quad \text{for } \zeta > 0, $$
where for $\zeta = 0$ the bracketed ratio is interpreted as its limiting value $3\rho^2/4$. The right-hand side is bounded below by a positive constant for $0 \le \zeta \le 1$, so we conclude that
$$ \log\left( \frac{\gamma\,\Phi(h^0)^2}{R^2} \right) \ge [1+o(1)]\log n. \qquad \square $$

Proposition 6. If $n\beta \to \zeta$ where $0 \le \zeta \le 1$, then
$$ t_{\mathrm{mix}}(\varepsilon) \ge \frac{n^3}{\pi^2+\zeta^2}\,[1+o(1)]\,\big( \log n + \log[(1-\varepsilon)/\varepsilon] \big). \quad (14) $$

Proof.
From (13), the spectral gap $1 - \lambda_2 = \gamma$ satisfies
$$ \gamma = \frac{\pi^2+\zeta^2}{2n^3}\,[1+o(1)]. $$
Using Lemma 5 in Wilson (2004) (see also Theorem 13.5 of Levin, Peres, and Wilmer (2009) for a discussion) yields
$$ t_{\mathrm{mix}}(\varepsilon) \ge \frac{1}{2\log(1/\lambda_2)} \left[ \log\left( \frac{(1-\lambda_2)\,\Phi(x)^2}{2R^2} \right) + \log\big((1-\varepsilon)/\varepsilon\big) \right] = \frac{n^3}{\pi^2+\zeta^2}\,[1+o(1)]\,\big( \log n + \log[(1-\varepsilon)/\varepsilon] \big), \quad (15) $$
which yields (14). Note that this matches the lower bound in Theorem 4 of Wilson (2004) for the symmetric exclusion when $\lim_n \beta n = 0$. □

Proposition 7. If $n\beta \to \infty$ but $n\beta \le \log n$, then
$$ t_{\mathrm{mix}}(\varepsilon) \ge \frac{n}{\beta^2}\,[1+o(1)]\,\big( \log n + \log[(1-\varepsilon)/\varepsilon] \big). $$

Proof. This again follows from (13), (15), and Lemma 5. □
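The expansion of the spectral gap in (13) is easy to sanity-check numerically. The sketch below (our own, with arbitrarily chosen n) compares the exact one-step gap expression with its leading-order terms:

```python
import math

def gap(n, beta):
    """Exact one-step gap: (1 - 2 sqrt(pq) cos(pi/n)) / (n - 1), cf. (13)."""
    p, q = (1 + beta) / 2, (1 - beta) / 2
    return (1 - 2 * math.sqrt(p * q) * math.cos(math.pi / n)) / (n - 1)

def gap_leading(n, beta):
    """Leading terms of (13): (beta^2/2 + pi^2/(2 n^2)) / (n - 1)."""
    return (beta ** 2 / 2 + math.pi ** 2 / (2 * n ** 2)) / (n - 1)

n = 500
for beta in (0.0, 1.0 / n, 2.0 / n, math.log(n) / n):
    exact, approx = gap(n, beta), gap_leading(n, beta)
    assert abs(exact - approx) / approx < 1e-3   # error is O(beta^4) + O(n^-4)
```

In the regime $n\beta \to \zeta$ the two terms $\beta^2/2$ and $\pi^2/(2n^2)$ are comparable, which is the source of the $\pi^2 + \zeta^2$ constant in Proposition 6.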
4. Upper Bounds

4.1. Nearly unbiased.

Proposition 8. There exists a constant $c_0$ such that if $n\beta \le 1$, then $t_{\mathrm{mix}}(\varepsilon) \le c_0\, n^3 \log n$.

Proof. We now define a Markov chain $(\sigma_t, \eta_t)$ so that
• $\sigma_t$ and $\eta_t$ are labelled k-particle configurations,
• if the labels are erased, $(\sigma_t)$ and $(\eta_t)$ each are biased exclusion processes.
We say a labelled particle is coupled at time t if it occupies the same vertex in both $\sigma_t$ and $\eta_t$.

We now describe a move of this chain from state $(\sigma, \eta)$: pick an edge e uniformly among the $n-1$ edges.
• Both σ and η have no particles on e. The chain remains at $(\sigma, \eta)$.
• One of σ, η contains two particles on e, and the other contains one particle on e. Suppose, without loss of generality, that σ contains one particle on e. Toss a p-coin to determine where the particle is placed in σ. If the single particle on e in σ is coupled, or has the same label as one of the particles on e in η, arrange the two particles on e in η to preserve or facilitate the coupling. Otherwise, toss a fair coin to determine the placement of the two particles in η.
• Both σ and η have two particles on e. Toss a fair coin to determine the placement of the two particles on e in σ. Place the particles in η on e to preserve or facilitate any couplings; if no coupling is possible, toss a fair coin to determine the particle placement on e.

The distance $D_i(t)$ between particle i in σ and particle i in η performs a delayed nearest-neighbor walk, with possible bias β at each move (sometimes the bias is to the right, sometimes to the left). The probability it moves is at least $1/(n-1)$. We can therefore define a walk $(S_t)$ with constant upward bias β so that $D_i(t) \le S_t$ until $D_i(t)$ hits zero.

Consider the biased random walk $(S_t)$ on $\mathbb{Z}$ with positive bias β, holding probability $1 - \frac{1}{n-1}$, and $S_0 = n$; if $\tau = \min\{t \ge 0 : S_t = 0\}$ and $\tau_i = \min\{t \ge 0 : D_i(t) = 0\}$, then $P(\tau_i > u) \le P(\tau > u)$. We have
$$ P(\tau \le t) \ge P_n(S_t \le 0) = P\left( Z_t \le \frac{-n - t\beta/(n-1)}{2\sqrt{tpq/(n-1)}} \right), \quad \text{where } Z_t = \frac{S_t - E_n(S_t)}{\sqrt{\mathrm{Var}(S_t)}}. $$
By the Central Limit Theorem, since $\beta n \le 1$, there is a constant $c_1 > 0$ such that for n large enough,
$$ P_n\big( S_{n^3} \le 0 \big) \ge c_1. $$
Thus by taking $c_2$ large enough,
$$ P_n\big( \tau > c_2 n^3 \big) \le (1-c_1)^{c_2} < \tfrac{1}{4}. $$
If we run $2\log n$ blocks of $c_2 n^3$ moves, then we have
$$ P\big( \tau_i > 2 c_2 n^3 \log n \big) \le n^{-2}. $$
Setting $\tau_{\mathrm{couple}} \overset{def}{=} \min\{ t \ge 0 : \sigma_t = \eta_t \}$,
$$ P\big( \tau_{\mathrm{couple}} > 2 c_2 n^3 \log n \big) \le \sum_{i=1}^{k} P\big( \tau_i > 2 c_2 n^3 \log n \big) < n^{-1}. $$
If $\bar d(t) = \sup_h \| P^t(h,\cdot) - \pi \|_{TV}$, then $\bar d(2 c_2 n^3 \log n) \le n^{-1}$, and $t_{\mathrm{mix}}(\varepsilon) \le c_0\, n^3 \log n$ for n large enough. □

Figure 3. Neighboring configurations x and y. (Panel labels in the figure: h = 2 on the left pair, h = −2 on the right pair.)

4.2. Path coupling.
We consider configurations x and y to be adjacent if y can be obtained from x by taking a particle and moving it to an adjacent unoccupied site. In the path representation, moving a particle to the right corresponds to changing a local maximum (i.e., an "up-down") to a local minimum (i.e., a "down-up"). Moving a particle to the left changes a local minimum to a local maximum. See Figure 1, where v = 3.
Theorem 9. Consider the biased exclusion process with bias $\beta = \beta_n = 2p_n - 1 > 0$ on the segment of length n and with k particles. Set $\alpha = \sqrt{p_n/(1-p_n)}$. For $\varepsilon > 0$, if n is large enough, then
$$ t_{\mathrm{mix}}(\varepsilon) \le \frac{2n}{\beta^2} \left[ \log(1/\varepsilon) + \log\left[ \frac{\alpha(\alpha^k-1)(\alpha^{n-k}-1)}{(\alpha-1)^2} \right] \right]. $$
In particular, if $\beta \le \mathrm{const.} < 1$, then $\alpha = 1 + \beta + O(\beta^2)$, so
$$ t_{\mathrm{mix}}(\varepsilon) \le \frac{2n}{\beta^2} \Big[ \log(\varepsilon^{-1}) + n[\beta + O(\beta^2)] - 2\log\beta + O(\beta) \Big]. $$

Remark 10. Note that whenever $c_1(\log n)/n < \beta < c_2 < 1$ for constants $c_1$ and $c_2$, the ratio of the upper and lower bounds is bounded. Thus there is a pre-cutoff for this chain in this regime.

Proof.
For $\alpha = \sqrt{p/q} > 1$, define the distance between two configurations x and y which differ by a single transition to be
$$ \ell(x,y) = \alpha^{\,n-k+h}, $$
where h is the height of the midpoint of the diamond that is removed or added. (See Figure 3.) Note that $\alpha > 1$ and $h \ge -(n-k)$ guarantee that $\ell(x,y) \ge 1$, so we can use path coupling -- see, e.g., Theorem 14.6 of Levin, Peres, and Wilmer (2009). We again let ρ denote the path metric on $\mathcal{X}$ corresponding to ℓ.

We couple from a pair of initial configurations x and y which differ at a single vertex v as follows: choose the same vertex in both configurations, and propose a local maximum with probability $1-p$ and a local minimum with probability p. For both x and y, if the current vertex is a local extremum, refresh it with the proposed extremum; otherwise, remain at the current state. Let $(X_1, Y_1)$ be the state after one step of this coupling. There are several cases to consider.

The first case is shown in Figure 3. Let x be the upper configuration, and y the lower. Here the edge entering $v-1$ is "up" and the edge leaving $v+1$ is "down", in both x and y. If v is selected, the distance decreases by $\alpha^{n-k+h}$. If either $v-1$ or $v+1$ is selected, and a local minimum is proposed, then the lower configuration y is changed, while the upper configuration x remains unchanged. Thus the distance increases by $\alpha^{n-k+h-1}$ in that case. We conclude that
$$ E_{x,y}[\rho(X_1,Y_1)] - \rho(x,y) = -\frac{1}{n-1}\alpha^{h+n-k} + \frac{2}{n-1}\, p\, \alpha^{h+n-k-1} = \frac{\alpha^{h+n-k}}{n-1}\big( 2p\alpha^{-1} - 1 \big) = \frac{\alpha^{h+n-k}}{n-1}\big( 2\sqrt{pq} - 1 \big). \quad (16) $$
In the case where x and y at $v-2, v-1, v, v+1, v+2$ are as in the right panel of Figure 3, we obtain
$$ E_{x,y}[\rho(X_1,Y_1)] - \rho(x,y) = -\frac{1}{n-1}\alpha^{h+n-k} + \frac{2}{n-1}(1-p)\,\alpha^{h+n-k+1} = \frac{\alpha^{h+n-k}}{n-1}\big( 2\alpha(1-p) - 1 \big) = \frac{\alpha^{h+n-k}}{n-1}\big( 2\sqrt{pq} - 1 \big). \quad (17) $$
(We create an additional disagreement at height $h+1$ if either $v-1$ or $v+1$ is selected and a local maximum is proposed; the top configuration can accept the proposal, while the bottom one rejects it.) Since $p > 1/2$, we have $\delta \overset{def}{=} 1 - 2\sqrt{pq} > 0$, and both (16) and (17) reduce to
$$ E_{x,y}[\rho(X_1,Y_1)] - \rho(x,y) = -\frac{\alpha^{h+n-k}}{n-1}\,\delta. \quad (18) $$
Now consider the case on the left of Figure 4. We have
$$ E_{x,y}[\rho(X_1,Y_1)] - \rho(x,y) = -\frac{1}{n-1}\alpha^{h+n-k} + \frac{1}{n-1}\, q\,\alpha^{h+n-k+1} + \frac{1}{n-1}\, p\,\alpha^{h+n-k-1} = \frac{\alpha^{h+n-k}}{n-1}\big( q\alpha + p\alpha^{-1} - 1 \big) = -\frac{\alpha^{h+n-k}}{n-1}\,\delta, $$
which gives again the same expected decrease as (18). (In this case, a local maximum proposed at one neighbor of v is accepted by exactly one of the two configurations, creating a disagreement at height $h+1$, while a local minimum proposed at the other neighbor creates a disagreement at height $h-1$.) The case on the right of Figure 4 is the same.

Thus, (18) holds in all cases. That is, since $\rho(x,y) = \ell(x,y) = \alpha^{h+n-k}$,
$$ E_{x,y}[\rho(X_1,Y_1)] = \rho(x,y)\Big( 1 - \frac{\delta}{n-1} \Big) \le \rho(x,y)\, e^{-\delta/(n-1)}. $$
The diameter of the state-space is the distance from the configuration with k "up" edges followed by $n-k$ "down" edges to the configuration with $n-k$ "down" edges followed by k "up" edges. To move from the former to the latter, first flip the top-most maximum, next the subsequent two maxima, continuing down: at the j-th of the first $k-1$ levels, there are j maxima to flip. Each of the next $n-2k+1$ levels will have k maxima to flip. The number of maxima in the last $k-1$ levels decreases from $k-1$ down to 1. The total distance is
$$ \sum_{j=1}^{k-1} j\,\alpha^{n-k+(k-j)} + \sum_{j=k}^{n-k} k\,\alpha^{n-k+(k-j)} + \sum_{j=n-k+1}^{n-1} (n-j)\,\alpha^{n-k+(k-j)} = \frac{\alpha(\alpha^k-1)(\alpha^{n-k}-1)}{(\alpha-1)^2}. $$
Since $\delta \ge \beta^2/2$, Corollary 14.7 of Levin, Peres, and Wilmer (2009) gives
$$ t_{\mathrm{mix}}(\varepsilon) \le \frac{2n}{\beta^2} \left[ \log(1/\varepsilon) + \log\left[ \frac{\alpha(\alpha^k-1)(\alpha^{n-k}-1)}{(\alpha-1)^2} \right] \right]. $$
Note that $\alpha = 1 + \beta + O(\beta^2)$ as $\beta \to 0$, so
$$ t_{\mathrm{mix}}(\varepsilon) \le \frac{2n}{\beta^2} \big[ \log(\varepsilon^{-1}) + n[\beta + O(\beta^2)] - 2\log\beta + O(\beta) \big]. $$
In particular, if $\beta = 1/n$, then $t_{\mathrm{mix}}(\varepsilon) = O(n^3 \log n)$, which is the same order as the mixing time in the symmetric case. □

Figure 4.
More neighboring configurations.

5. Lower bound via a single particle

Proposition 11. Suppose that $n\beta \to \infty$. For any $\varepsilon > 0$ and $\delta > 0$, if n is large enough, then
$$ t_{\mathrm{mix}}(\varepsilon) \ge (1-\delta)\,\frac{n^2}{4\beta}. $$

Proof.
We use the particle description here. The stationary distribution is given by
$$ \pi(x) = \frac{1}{Z} \prod_{i=1}^{k} \left( \frac{p}{q} \right)^{z_i(x)} = \frac{1}{Z}\, (p/q)^{\sum_{i=1}^k z_i(x)}, $$
where $(z_1(x), \ldots, z_k(x))$ are the locations of the k particles in the configuration x, and Z is a normalizing constant. To see this, if $x'$ is obtained from x by moving a particle from j to $j+1$, then
$$ \frac{\pi(x) P(x,x')}{\pi(x') P(x',x)} = \frac{1}{(p/q)} \cdot \frac{p/(n-1)}{q/(n-1)} = 1. $$
Let $L(x)$ be the location of the left-most particle of the configuration x, and let $R(x)$ be the location of the right-most unoccupied site of the configuration x. Let
$$ \mathcal{X}_{j,\ell} = \{ x : L(x) = j,\ R(x) = \ell \}, $$
and consider the transformation $T : \mathcal{X}_{j,\ell} \to \mathcal{X}$ which takes the particle at j and moves it to ℓ. Note that T is one-to-one on $\mathcal{X}_{j,\ell}$. We have
$$ \pi(\mathcal{X}_{j,\ell}) \left( \frac{p}{q} \right)^{\ell - j} \le \sum_{x \in \mathcal{X}_{j,\ell}} \pi(T(x)) \le 1, $$
so $\pi(\mathcal{X}_{j,\ell}) \le \alpha^{-2(\ell-j)}$. Letting $G = \{ x : L(x) \le (1/2 - b)n \}$, we have
$$ \pi(G) \le \sum_{j \le (1/2-b)n,\ \ell \ge n/2} \pi(\mathcal{X}_{j,\ell}) \le n^2 \alpha^{-2bn}. $$
We consider now starting from a configuration x with $L(x) = bn/2$. The position of the left-most particle, $(L_t)$, can be coupled with a delayed biased nearest-neighbor walk $(S_t)$ on $\mathbb{Z}$, with $S_0 = bn/2$, so that $L_t \le S_t$ as long as $S_t > 1$. The holding probability for $(S_t)$ equals $1 - \frac{1}{n-1}$. By the gambler's ruin, the chance that $S_t$ ever reaches 1 is bounded above by
$$ (q/p)^{bn/2} \le e^{-\beta b n}. $$
Therefore,
$$ P_x\{ L_t > (1/2 - b)n \} \le e^{-\beta b n} + P_{bn/2}\{ S_t > (1/2 - b)n \}. \quad (19) $$
By Chebyshev's Inequality (recalling $S_0 = bn/2$),
$$ P\big\{ \big| S_t - bn/2 - \beta t/(n-1) \big| > M \big\} \le \frac{\mathrm{Var}(S_t)}{M^2} \le \frac{t}{M^2 (n-1)}. $$
Taking $t_n = \frac{(1-b)(n-1)n}{4\beta}$ and $M = bn/4$, for small b,
$$ P_{bn/2}\{ S_{t_n} > (1/2 - b)n \} \le \frac{4(1-b)}{b^2 \beta n} \to 0, $$
as long as $\beta n \to \infty$. Combining with (19) shows that
$$ P\{ L_{t_n} > (1/2 - b)n \} \le e^{-b\beta n} + o(1). $$
We conclude that as long as $\beta n \to \infty$,
$$ d(t_n) \ge P_x\{ X_{t_n} \in G \} - \pi(G) \ge 1 - o(1) $$
as $n \to \infty$, whence
$$ t_{\mathrm{mix}}(\varepsilon) \ge \frac{(1-b)(n-1)n}{4\beta} $$
for sufficiently large n. □

Acknowledgements
We thank Perla Sousi and Nayantara Bhatnagar for helpful comments on an earlier version of this paper.
References

Aldous, D. 1983. Random walks on finite groups and rapidly mixing Markov chains, Seminar on Probability, XVII, Lecture Notes in Math., vol. 986, Springer, Berlin, pp. 243–297.

Benjamini, I., N. Berger, C. Hoffman, and E. Mossel. 2005. Mixing times of the biased card shuffling and the asymmetric exclusion process, Trans. Amer. Math. Soc. 357, no. 8, 3013–3029 (electronic).

Greenberg, S., A. Pascoe, and D. Randall. 2009. Sampling biased lattice configurations using exponential metrics, Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (New York, 2009), pp. 76–85.

Lacoin, H. 2016. Mixing time and cutoff for the adjacent transposition shuffle and the simple exclusion, Ann. Probab. 44, no. 2, 1426–1487.

Levin, D. A., Y. Peres, and E. L. Wilmer. 2009. Markov chains and mixing times, American Mathematical Society, Providence, RI. With a chapter by James G. Propp and David B. Wilson.

Wilson, D. B. 2004. Mixing times of lozenge tiling and card shuffling Markov chains, Ann. Appl. Probab. 14, no. 1, 274–325.