Bounds for left and right window cutoffs
Dedicated to the memory of Béatrice Lachaud
Javiera Barrera ∗ Bernard Ycart † February 8, 2018
Abstract
The location and width of the time window in which a sequence of processes converges to equilibrium are given under conditions of exponential convergence. The location depends on the side: the left-window and right-window cutoffs may have different locations. Bounds on the distance to equilibrium are given for both sides. Examples prove that the bounds are tight.

Keywords: cutoff; exponential ergodicity
MSC: 60J25
∗ Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez. Av. Diagonal las Torres 2640, Peñalolén, Santiago, Chile. [email protected]
† Laboratoire Jean Kuntzmann, Univ. Grenoble-Alpes, 51 rue des Mathématiques, 38041 Grenoble cedex 9, France. [email protected]

1 Introduction

The term “cutoff” was introduced by Aldous and Diaconis [1] to describe the phenomenon of abrupt convergence of shuffling Markov chains. Many families of stochastic processes have since been shown to have similar properties: see [13, Chap. 8] for an introduction to the subject, [16] for a review of random walk models in which the phenomenon occurs, and [4] for an overview of the theory. Consider a sequence of stochastic processes in continuous time, each converging to a stationary distribution. Denote by d_n(t) the distance between the distribution at time t of the n-th process and its stationary distribution, the ‘distance’ having one of the usual definitions (total variation, separation, Hellinger, relative entropy, L^p, etc.). The phenomenon can be expressed at three increasingly sharp levels (more precise definitions will be given in section 2).

1. The sequence has a cutoff at (t_n) if d_n(c t_n) tends to the maximum M of the distance if c < 1, and to 0 if c > 1.
2. The sequence has a window cutoff at (t_n, w_n) if lim inf d_n(t_n + c w_n) tends to M as c tends to −∞, and lim sup d_n(t_n + c w_n) tends to 0 as c tends to +∞.
3. The sequence has a profile cutoff at (t_n, w_n) with profile F if F(c) = lim d_n(t_n + c w_n) exists for all c, and F tends to M at −∞, to 0 at +∞.

There are essentially two ways to interpret the cutoff time t_n: as a mixing time [13, Chap. 18], or as a hitting time [14]. For samples of Markov chains, the latter interpretation can be used to determine explicit online stopping times for MCMC algorithms [18, 11, 12, 9]. Sequences of processes for which an explicit profile can be determined are scarce. The first example of a window cutoff concerned the random walk on the hypercube for the total variation distance; it was treated by Diaconis and Shahshahani shortly after the introduction of the notion [8]. It was soon sharpened into a profile cutoff by Diaconis, Graham, and Morrison [6]. Cutoffs for random walks on more general products or sums of graphs have been investigated in [19], and more recently by Miller and Peres [15]. Random walks on the hypercube can be interpreted as samples of binary Markov chains. Diaconis et al.'s results were generalized to samples of continuous and discrete time finite state Markov chains for the chi-squared and total variation distances in [17], then to samples of more general processes, for four different distances, in [2, section 5] (see also [13, Chap. 20]). Other examples of profile cutoffs include the riffle shuffle for the total variation distance [3], and birth and death chains for the separation distance [7] or the total variation distance [10]. When the maximum M of the distance is 1 (total variation, separation), the profile F decreases from 1 to 0. Thus it can be seen as the survival function of some probability distribution on the real line. A Gaussian distribution has been found for the riffle shuffle with the total variation distance [3, Theorem 2] and for some birth and death chains with the separation distance [7, Theorem 6.1]. A Gumbel distribution has been found for samples of finite Markov chains and the total variation distance [6, 17]. For the Hellinger, chi-squared, or relative entropy distances, other profiles were obtained in [2].

Explicit profiles are usually out of reach, in particular for the total variation distance: only a window cutoff can be hoped for. However the definition above, which is usually agreed upon ([4, Definition 2.1] or [13, p. 218]), may not capture the variety of all possible situations. As will be shown here, the location of a left-window cutoff should be distinguished from that of a right-window cutoff: see Figure 18.2, p. 256 of [13].
The main result of this note, Theorem 2.1, expresses the characteristics of the left and right windows in terms of a decomposition into exponentials of the distances d_n(t). It refines some of the results in Chen and Saloff-Coste [5], in particular Theorem 3.8. Explicit bounds on the distance to equilibrium are given. They are proved to be tight, using examples of cutoffs for Ornstein-Uhlenbeck processes (see Lachaud [11]).

The paper is organized as follows. Section 2 contains formal definitions and statements. Examples are given in section 3. Theorem 2.1 is proved in section 4.

2 Definitions and main result

For each positive integer n, a stochastic process X_n = {X_n(t); t ≥ 0} is given. We assume that X_n(t) converges in distribution to ν_n as t tends to infinity. The convergence is measured by one of the usual distances (total variation, separation, Hellinger, relative entropy, L^p, etc.), the maximum of which is denoted by M (M = 1 for total variation and separation, M = +∞ for relative entropy, chi-squared, etc.). The distance between the distribution of X_n(t) and ν_n is denoted by d_n(t).

Definition 2.1.
Denote by (t_n) and (w_n) two sequences of positive reals, such that w_n = o(t_n). They will be referred to respectively as location and width. The sequence (X_n) has:

1. a left-window cutoff at (t_n, w_n) if:

   lim_{c→−∞} lim inf_{n→∞} inf_{t < t_n + c w_n} d_n(t) = M ;

2. a right-window cutoff at (t_n, w_n) if:

   lim_{c→+∞} lim sup_{n→∞} sup_{t > t_n + c w_n} d_n(t) = 0 ;

3. a profile cutoff at (t_n, w_n) with profile F if:

   ∀c ∈ R ,  F(c) = lim_{n→∞} d_n(t_n + c w_n)

   exists and satisfies:

   ∀c ∈ R ,  0 < F(c) < M   and   lim_{c→−∞} F(c) = M ,  lim_{c→+∞} F(c) = 0 .

If both left- and right-window cutoffs hold for the same location t_n and width w_n, then a (t_n, w_n)-cutoff holds in the sense of Definition 2.1 in Chen and Saloff-Coste [4]. The location and width are not uniquely determined. Observe that if a left-window cutoff holds at location t_n, it also holds at any location t′_n such that t′_n < t_n. Symmetrically, if a right-window cutoff holds at location t_n, it also holds at any location t′_n such that t′_n > t_n. Moreover, if a cutoff holds for width w_n, it also holds for any width w′_n such that w′_n > w_n. The location and width of a left-window cutoff will be said to be optimal if for any c < 0:

   lim inf_{n→∞} inf_{t < t_n + c w_n} d_n(t) < M ,

and symmetrically for a right-window cutoff: the location and width are optimal if for any c > 0 the corresponding lim sup is positive. The result is expressed for a sequence of continuous-time processes; it could be written in discrete time, at the expense of heavier notations.

Theorem 2.1. Assume that for each n, there exist an increasing sequence of positive reals (ρ_{i,n}) and a sequence of non-negative reals (a_{i,n}) with a_{1,n} > 0, such that:

   d_n(t) = Σ_{i=1}^{+∞} a_{i,n} e^{−ρ_{i,n} t} .   (1)

Denote by A_{i,n} the cumulated sums of (a_{i,n}), truncated to values no smaller than 1:

   A_{i,n} = max{ 1 , a_{1,n} + · · · + a_{i,n} } .

For each n, define:

   t_n = sup_i log(A_{i,n}) / ρ_{i,n} ,   (2)

   w_n = 1 / ρ_{1,n} ,   (3)

   r_n = w_n ( log(ρ_{1,n} t_n) − log(log(ρ_{1,n} t_n)) ) .   (4)

Assume that:

1. for n large enough,

   0 < t_n < +∞ ,   (5)

2.

   lim_{n→∞} ρ_{1,n} t_n = +∞ ,   (6)

3.
there exists a positive real α such that for n large enough, and for all i > 1,

   a_{i,n} ≤ α A_{i−1,n} .   (7)

Then (X_n) has a left-window cutoff at (t_n, w_n), and a right-window cutoff at (t_n + r_n, w_n). More precisely:

   ∀c < 0 ,  lim inf_{n→∞} d_n(t_n + c w_n) ≥ e^{−c} ,   (8)

   ∀c > 0 ,  lim sup_{n→∞} d_n(t_n + r_n + c w_n) ≤ e^{−c} .   (9)

Conditions (5) and (7) are technical. Condition (6) is known as the Peres criterion: Chen and Saloff-Coste [4] have proved that it implies cutoff for L^p distances with p > 1, but not for the L^1 distance. A consequence is that w_n = o(t_n) as requested by Definition 2.1, and more precisely that w_n = o(r_n) and r_n = o(t_n).

A decomposition into exponentials of the distance to equilibrium such as (1) holds for many processes: functions of finite state space Markov chains, functions of exponentially ergodic Markov processes, etc. Assuming that the decomposition only has non-negative terms is a stronger requirement: see [5, section 4]. It implies that d_n(t) is a decreasing function of t. We do not view it as a limitation. Indeed, if (1) has negative terms, it can be decomposed as d_n(t) = d_n^+(t) − d_n^−(t), with:

   d_n^+(t) = Σ_{i=1}^{+∞} max{a_{i,n}, 0} e^{−ρ_{i,n} t}   and   d_n^−(t) = − Σ_{i=1}^{+∞} min{a_{i,n}, 0} e^{−ρ_{i,n} t} .

Assume that Theorem 2.1 applies to both d_n^+(t) and d_n^−(t), leading to left-window cutoffs at (t_n^+, w_n^+) and (t_n^−, w_n^−), and right-window cutoffs at (t_n^+ + r_n^+, w_n^+) and (t_n^− + r_n^−, w_n^−). Since d_n(t) is non-negative, t_n^− ≤ t_n^+, t_n^− + r_n^− ≤ t_n^+ + r_n^+, and w_n^− < w_n^+. The sequence (X_n) has a right-window cutoff, and (9) holds for d_n with (t_n + r_n, w_n) = (t_n^+ + r_n^+, w_n^+). Moreover, if t_n^− + r_n^− = o(t_n^+), then the sequence (X_n) has a left-window cutoff, and (8) holds for d_n with (t_n, w_n) = (t_n^+, w_n^+).

Theorem 3.8 in [5] contains a less tight assertion: it describes a (t_n, r_n)-cutoff, which can be deduced from Theorem 2.1 above.
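To make the quantities of Theorem 2.1 concrete, here is a small numerical sketch (an illustration added for this edition, not part of the paper; the helper names cutoff_params and dist are ours). It computes t_n, w_n and r_n of (2)-(4) from a finite decomposition of the form (1), checks that the single-exponential case gives the exact profile e^{−c}, and checks the bounds (8) and (9) on a two-term toy example.

```python
import math

def cutoff_params(a, rho):
    """Location t_n, width w_n and correction r_n of Theorem 2.1, computed
    from a finite decomposition d_n(t) = sum_i a_i exp(-rho_i t) via
    equations (2), (3), (4); a finite list stands in for the series."""
    A, s = [], 0.0
    for a_i in a:
        s += a_i
        A.append(max(1.0, s))          # cumulated sums truncated at 1
    t = max(math.log(A_i) / r_i for A_i, r_i in zip(A, rho))     # (2)
    w = 1.0 / rho[0]                                             # (3)
    x = rho[0] * t                     # (4) requires log(rho_1 t_n) > 1
    r = w * (math.log(x) - math.log(math.log(x)))
    return t, w, r

def dist(a, rho, t):
    """d_n(t) for the finite decomposition."""
    return sum(a_i * math.exp(-r_i * t) for a_i, r_i in zip(a, rho))

# Single exponential (the Ornstein-Uhlenbeck example of section 3):
# t_n = log(a_n)/rho_n, w_n = 1/rho_n, and d_n(t_n + c w_n) = e^{-c}.
a1, rho1 = [math.exp(30.0)], [2.0]
t, w, r = cutoff_params(a1, rho1)
assert math.isclose(t, 15.0) and math.isclose(w, 0.5)
for c in (-2.0, 0.0, 3.0):
    assert math.isclose(dist(a1, rho1, t + c * w), math.exp(-c))

# Two-term decomposition: the lower bound (8) holds for each n, and in this
# particular example the upper bound (9) already holds at finite n as well.
a2, rho2 = [math.exp(10.0), 5.0], [1.0, 4.0]
t, w, r = cutoff_params(a2, rho2)
for c in (-3.0, -1.0):
    assert dist(a2, rho2, t + c * w) >= math.exp(-c)         # bound (8)
for c in (1.0, 2.0):
    assert dist(a2, rho2, t + r + c * w) <= math.exp(-c)     # bound (9)
```

For the single exponential, d_n(t_n + c w_n) = a_n e^{−ρ_n t_n − c} = e^{−c}, so the assertion is exact up to floating-point rounding; for general decompositions, (8) and (9) are asymptotic statements and a finite-n check is only indicative.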
However, it hides the fact that when there is a (two-sided) window cutoff, the optimal width is no larger than w_n, thus strictly smaller than r_n. The latter quantity is a correction bound on the location rather than a width: the optimal location may be anywhere between t_n and t_n + r_n.

In the next section, sequences of processes having a profile cutoff at (t_n, w_n) or (t_n + r_n, w_n), with profile F(c) = e^{−c}, will be constructed, thus proving that (8) and (9) are tight.

3 Examples

Several examples from the existing literature could be written as particular cases of Theorem 2.1: reversible Markov chains for the L^2 distance [17, 5], n-tuples of independent processes for the relative entropy distance [2], random walks on sums or products of graphs [19], samples of Ornstein-Uhlenbeck processes [11]. The objective of this section is not an extensive review of possible applications, but rather the explicit construction of some sequences illustrating the tightness of (8) and (9), and the possible locations of window cutoffs. We shall use here the relative entropy distance, also called Kullback-Leibler divergence: if µ and ν are two probability measures with densities f and g with respect to a common measure λ, then:

   d(µ, ν) = ∫_{S_µ} f log(f/g) dλ ,

where S_µ denotes the support of µ. The main advantage of choosing that distance is its simplicity for dealing with tensor products:

   d(µ_1 ⊗ µ_2, ν_1 ⊗ ν_2) = d(µ_1, ν_1) + d(µ_2, ν_2) .

Let a and ρ be two positive reals. Our building block will be a one-dimensional Ornstein-Uhlenbeck process, denoted by X_{a,ρ} (see Lachaud [11] on cutoff for samples of Ornstein-Uhlenbeck processes). The process X_{a,ρ} is a solution of the equation:

   dX(t) = −(ρ/2) X(t) dt + √ρ dW(t) ,

where W is the standard Brownian motion. The distribution of X_{a,ρ}(0) is normal with expectation √(2a) and variance 1. It can be easily checked that the distribution of X_{a,ρ}(t) is normal with expectation √(2a) e^{−ρt/2} and variance 1.
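The Gaussian computations behind this building block can be checked numerically. The sketch below (our own illustration, not part of the paper) verifies by quadrature the standard closed form d(N(m,1), N(0,1)) = m²/2, from which the distance to equilibrium of X_{a,ρ} follows.

```python
import math

def kl_normal_vs_std(m):
    """d(N(m,1), N(0,1)) by left-endpoint quadrature of the integral of
    f log(f/g); the closed form is m^2 / 2."""
    s, h, x = 0.0, 0.001, m - 12.0
    while x < m + 12.0:
        f = math.exp(-(x - m) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
        # log(f/g) = (x^2 - (x - m)^2)/2 for two unit-variance Gaussians
        s += f * ((x * x - (x - m) ** 2) / 2.0) * h
        x += h
    return s

# Closed form: d(N(m,1), N(0,1)) = m^2 / 2.
assert abs(kl_normal_vs_std(1.3) - 1.3 ** 2 / 2.0) < 1e-6

# Distance to equilibrium of X_{a,rho}: at time t the law is a Gaussian
# with mean sqrt(2a) e^{-rho t/2} and variance 1, the equilibrium is
# N(0,1), hence d(t) = a e^{-rho t}.
a, rho = 3.0, 0.7
for t in (0.0, 1.0, 5.0):
    m_t = math.sqrt(2.0 * a) * math.exp(-rho * t / 2.0)
    assert abs(kl_normal_vs_std(m_t) - a * math.exp(-rho * t)) < 1e-6
```

The tensorization identity then gives the distance of a tuple of independent such coordinates as the sum of the coordinatewise divergences, which is formula (10) below.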
Therefore the (relative entropy) distance to equilibrium is:

   d(t) = a e^{−ρt} .

Consider now two sequences (a_n) and (ρ_n) of positive reals, and assume that (a_n) tends to infinity. Theorem 2.1 applies to the sequence of processes (X_{a_n,ρ_n}) with a_{1,n} = a_n, ρ_{1,n} = ρ_n, and a_{i,n} = 0 for i > 1. The location and width are:

   t_n = log(a_n)/ρ_n   and   w_n = 1/ρ_n .

The sequence has a profile cutoff at (t_n, w_n) with profile F(c) = e^{−c}. Indeed:

   d_n(t_n + c w_n) = a_n e^{−(ρ_n t_n + c)} = e^{−c} .

Hence (8) is tight. For ρ_n ≡ ρ, X_{a_n,ρ} is a Markov process with a fixed semigroup and an increasingly remote starting point: cutoffs for such sequences were studied in [14].

Using tuples of independent Ornstein-Uhlenbeck processes, one can construct sequences X_n for which the distance to equilibrium is any finite sum of exponentials. Let m_n be an integer. For i = 1, …, m_n, let a_{i,n} and ρ_{i,n} be two positive reals. Define the process X_n as:

   X_n = ( X_{a_{1,n},ρ_{1,n}} , … , X_{a_{m_n,n},ρ_{m_n,n}} ) ,

where the coordinates are independent, each being an Ornstein-Uhlenbeck process as defined above. The distance to equilibrium of X_n is:

   d_n(t) = Σ_{i=1}^{m_n} a_{i,n} e^{−ρ_{i,n} t} .   (10)

Let n be an integer larger than 1. Let β_n be a real such that 0 ≤ β_n ≤ 1. Define:

   a_{1,n} = e^n ,   ρ_{1,n} = n / ( 1 + (β_n/n) log(n/log(n)) ) ,   (11)

and for i = 2, …, m_n = 9^n:

   a_{i,n} = e^{−n} ,   ρ_{i,n} = log( e^n + (i−1) e^{−n} ) .   (12)

The following notation is introduced for clarity:

   ℓ_n = log( n / log(n) ) .

Using (2), (3), and (4), one gets:

   t_n = 1 + β_n ℓ_n / n = n/ρ_{1,n} ,   w_n = t_n / n ,   r_n = t_n ℓ_n / n = ℓ_n w_n .   (13)

Lemma 3.1. Let d_n be defined by (10), with a_{i,n} and ρ_{i,n} given by (11) and (12). Assume the following limit (possibly equal to +∞) exists:

   γ = lim_{n→∞} (1 − β_n) ℓ_n .   (14)

Then:

   ∀c ∈ R ,  lim_{n→∞} d_n( t_n + (1 − β_n) r_n + c w_n ) = e^{−c} (1 + e^{−γ}) .   (15)

A few particular cases are listed below.
They illustrate the variety of possible behaviors.

• β_n ≡ 1: a cutoff with profile 2e^{−c} occurs at (t_n, w_n).
• β_n ≡ β ∈ [0, 1): a cutoff with profile e^{−c} occurs at (t_n + (1 − β) r_n, w_n). For β = 0, this proves that (9) is tight.
• β_n = (1 + (−1)^n)/2: a left-window cutoff occurs at (t_n, w_n), a right-window cutoff at (t_n + r_n, w_n). The locations and width are optimal.
• β_n = 1 − γ/ℓ_n, with γ > 0: a cutoff with profile e^{−c}(1 + e^{γ}) occurs at (t_n, w_n).
• β_n = 1 − (2 + (−1)^n)/ℓ_n: a (t_n, w_n)-cutoff occurs, and t_n and w_n are optimal. Yet no value of c is such that d_n(t_n + c w_n) converges: there is no profile.

Proof. The main step is the following limit:

   lim_{n→∞} d_n( 1 + ℓ_n/n + c/n ) = e^{−c} (1 + e^{−γ}) .   (16)

In the sum defining d_n, let us isolate the first term: d_n(1 + ℓ_n/n + c/n) = D_1 + D_2, with:

   D_1 = a_{1,n} exp( −ρ_{1,n} (1 + ℓ_n/n + c/n) )   and   D_2 = Σ_{i=2}^{m_n} a_{i,n} exp( −ρ_{i,n} (1 + ℓ_n/n + c/n) ) .

The first term is:

   D_1 = exp( −((1 − β_n) ℓ_n + c) / t_n ) .

Its limit is e^{−(γ+c)} because (1 − β_n) ℓ_n tends to γ and t_n tends to 1. The second term is:

   D_2 = Σ_{i=2}^{m_n} e^{−n} ( e^n + (i−1) e^{−n} )^{−(1 + ℓ_n/n + c/n)} .

Thus D_2 is a Riemann sum with step e^{−n} for the decreasing function x ↦ x^{−(1+ℓ_n/n+c/n)}. Therefore:

   ∫_{e^n+e^{−n}}^{e^n+m_n e^{−n}} x^{−(1+ℓ_n/n+c/n)} dx < D_2 < ∫_{e^n}^{e^n+(m_n−1)e^{−n}} x^{−(1+ℓ_n/n+c/n)} dx .   (17)

Now:

   (e^n)^{−(ℓ_n/n + c/n)} / ( ℓ_n/n + c/n ) = e^{−c} log(n) / (ℓ_n + c) ,

which tends to e^{−c}. Moreover:

   ( e^n + (m_n−1) e^{−n} )^{−(ℓ_n/n + c/n)} / ( ℓ_n/n + c/n ) ≤ ( n/(ℓ_n + c) ) ( m_n^{1/n}/e )^{−(ℓ_n + c)} ,

which tends to 0 for m_n = 9^n > e^{2n}. So the upper bound in (17) tends to e^{−c}. There remains to prove that the difference between the two integrals tends to 0. That difference is smaller than:

   ∫_{e^n}^{e^n+e^{−n}} x^{−(1+ℓ_n/n+c/n)} dx = ( (e^n)^{−(ℓ_n/n + c/n)} / (ℓ_n/n + c/n) ) ( 1 − (1 + e^{−2n})^{−(ℓ_n/n + c/n)} ) .

We have seen that the first factor tends to e^{−c}.
The second factor tends to 0, hence the result.

Let us now deduce (15) from (16). Using (13):

   1 + ℓ_n/n + c/n = t_n + (1 − β_n) r_n/t_n + c w_n/t_n .

Hence:

   lim_{n→∞} d_n( t_n + (1 − β_n) r_n/t_n + c w_n/t_n ) = e^{−c} (1 + e^{−γ}) .   (18)

Let us write:

   t_n + (1 − β_n) r_n/t_n + c w_n/t_n = t_n + (1 − β_n) r_n + c w_n − ( (1 − β_n) r_n + c w_n ) ( β_n ℓ_n / (n t_n) ) .

Therefore:

   0 ≤ d_n( t_n + (1 − β_n) r_n/t_n + c w_n/t_n ) − d_n( t_n + (1 − β_n) r_n + c w_n )
     ≤ ( exp( ρ_{1,n} ( (1 − β_n) r_n + c w_n ) β_n ℓ_n / (n t_n) ) − 1 ) d_n( t_n + (1 − β_n) r_n + c w_n )
     = ( exp( β_n ℓ_n ( (1 − β_n) ℓ_n + c ) / (n t_n) ) − 1 ) d_n( t_n + (1 − β_n) r_n + c w_n ) .

Hence the difference tends to 0, since ℓ_n²/n tends to 0. □

4 Proofs

Proofs of inequalities (8) and (9) are given below.

Proof of (8). Let c be a negative real. Fix ε such that 0 < ε < −c. Using (2), define i*_n as:

   i*_n = min{ i : t_n − ε w_n ≤ log(A_{i,n}) / ρ_{i,n} ≤ t_n } .   (19)

From (6), t_n + c w_n is positive for n large enough. Then:

   d_n(t_n + c w_n) = Σ_{i=1}^{+∞} a_{i,n} exp( −ρ_{i,n} (t_n + c w_n) )
     ≥ Σ_{i=1}^{i*_n} a_{i,n} exp( −ρ_{i,n} (t_n + c w_n) )
     ≥ A_{i*_n,n} exp( −ρ_{i*_n,n} (t_n + c w_n) )
     ≥ exp( (−ε w_n − c w_n) ρ_{i*_n,n} )
     ≥ exp( (−ε w_n − c w_n) ρ_{1,n} )
     = e^{−c−ε} .

Since the inequality holds for all ε > 0, the result follows. □

Proof of (9). Let c be a positive real. Our goal is to prove the following inequality:

   d_n(t_n + r_n + c w_n) ≤ e^{−(r_n + c w_n) ρ_{1,n}} ( t_n / (r_n + c w_n) ) ( (r_n + c w_n)/t_n + e^{C_n} ) ,   (20)

where C_n tends to 0 as n tends to infinity. Let us first check that (20) implies (9). Observe that (r_n + c w_n)/t_n tends to 0. Using (3) and (4):

   e^{−(r_n + c w_n) ρ_{1,n}} ( t_n / (r_n + c w_n) ) = e^{−c} log(ρ_{1,n} t_n) / ( log(ρ_{1,n} t_n) − log(log(ρ_{1,n} t_n)) + c ) .
By (6), the right-hand side tends to e^{−c}, hence the result.

To prove (20), split the sum defining d_n(t_n + r_n + c w_n) into two parts S_1 and S_2, with:

   S_1 = Σ_{i=1}^{l} a_{i,n} exp( −ρ_{i,n} (t_n + r_n + c w_n) )   and   S_2 = Σ_{i=l+1}^{+∞} a_{i,n} exp( −ρ_{i,n} (t_n + r_n + c w_n) ) .

Using the fact that the ρ_{i,n} are increasing:

   S_1 ≤ A_{l,n} exp( −ρ_{1,n} (t_n + r_n + c w_n) ) .   (21)

To bound S_2, the idea is the same as in the proof of (15). From (2), exp(−ρ_{i,n} t_n) ≤ A_{i,n}^{−1}. Therefore:

   S_2 ≤ Σ_{i=l+1}^{+∞} a_{i,n} A_{i,n}^{−(1 + (r_n + c w_n)/t_n)} .   (22)

The function x ↦ x^{−(1+(r_n+c w_n)/t_n)} is decreasing, and its integral converges at +∞. The right-hand side of (22) is a Riemann sum for that integral. Therefore:

   S_2 ≤ ( t_n / (r_n + c w_n) ) A_{l,n}^{−(r_n + c w_n)/t_n} .   (23)

Consider first the particular case t_n = log(A_{1,n})/ρ_{1,n}, or equivalently A_{1,n} = exp(t_n ρ_{1,n}). Applying (21) and (23) for l = 1 yields:

   d_n(t_n + r_n + c w_n) ≤ e^{−(r_n + c w_n) ρ_{1,n}} ( t_n / (r_n + c w_n) ) ( (r_n + c w_n)/t_n + 1 ) ,   (24)

which is (20) for C_n = 0. Otherwise, A_{1,n} < exp(t_n ρ_{1,n}). Let ε be such that 0 < ε w_n < t_n − log(A_{1,n})/ρ_{1,n}. The index i*_n defined by (19) is then larger than 1. The set of integers l such that A_{l,n} < e^{ρ_{1,n} t_n} contains 1 and is bounded by i*_n. Therefore, there exists l_n > 1 such that:

   A_{l_n−1,n} < e^{ρ_{1,n} t_n} ≤ A_{l_n,n} .   (25)

Applying (21) and (23) to l = l_n − 1:

   d_n(t_n + r_n + c w_n) ≤ e^{−(r_n + c w_n) ρ_{1,n}} + ( t_n / (r_n + c w_n) ) exp( −((r_n + c w_n)/t_n) log(A_{l_n−1,n}) )
     = e^{−(r_n + c w_n) ρ_{1,n}} + ( t_n / (r_n + c w_n) ) exp( −(r_n + c w_n) ρ_{1,n} log(A_{l_n−1,n}) / (ρ_{1,n} t_n) )
     = e^{−(r_n + c w_n) ρ_{1,n}} ( t_n / (r_n + c w_n) ) ( (r_n + c w_n)/t_n + e^{C_n} ) ,   (26)

with:

   C_n = (r_n + c w_n) ρ_{1,n} ( 1 − log(A_{l_n−1,n}) / (ρ_{1,n} t_n) ) .   (27)

We must prove that C_n tends to 0. By (3) and (4):

   (r_n + c w_n) ρ_{1,n} = log(ρ_{1,n} t_n) − log(log(ρ_{1,n} t_n)) + c .   (28)

From (25):

   0 < 1 − log(A_{l_n−1,n}) / (ρ_{1,n} t_n) ≤ ( 1 / (ρ_{1,n} t_n) ) log( 1 + a_{l_n,n} / A_{l_n−1,n} ) .   (29)

Plugging (28) and (29) into (27), for n large enough:

   0 < C_n ≤ ( ( log(ρ_{1,n} t_n) − log(log(ρ_{1,n} t_n)) + c ) / (ρ_{1,n} t_n) ) log( 1 + a_{l_n,n} / A_{l_n−1,n} ) .

By (6), the first factor of the right-hand side tends to 0. Moreover, condition (7) entails that for n large enough:

   log( 1 + a_{l_n,n} / A_{l_n−1,n} ) ≤ log(1 + α) .

Hence the result. □

References

[1] D. Aldous and P. Diaconis, Shuffling cards and stopping times, Amer. Math. Monthly (1986), no. 5, 333–348.
[2] J. Barrera, B. Lachaud, and B. Ycart, Cutoff for n-tuples of exponentially converging processes, Stochastic Process. Appl. (2006), no. 10, 1433–1446.
[3] D. Bayer and P. Diaconis, Trailing the dovetail shuffle to its lair, Ann. Appl. Probab. (1992), no. 2, 294–313.
[4] G.-Y. Chen and L. Saloff-Coste, The cutoff phenomenon for ergodic Markov processes, Electron. J. Probab. (2008), no. 3, 26–78.
[5] G.-Y. Chen and L. Saloff-Coste, The L²-cutoff for reversible Markov processes, J. Funct. Anal. (2010), no. 7, 2246–2315.
[6] P. Diaconis, R. Graham, and J. Morrison, Asymptotic analysis of a random walk on a hypercube with many dimensions, Random Struct. Algor. (1990), no. 1, 51–72.
[7] P. Diaconis and L. Saloff-Coste, Separation cut-offs for birth and death chains, Ann. Appl. Probab. (2006), no. 4, 2098–2122.
[8] P. Diaconis and M. Shahshahani, Time to reach stationarity in the Bernoulli-Laplace diffusion model, SIAM J. Math. Anal. (1987), no. 1, 208–218.
[9] A. Diédhiou and P. Ngom, Cutoff time based on generalized divergence measure, Statist. Probab. Lett. (2009), no. 10, 1343–1350.
[10] J. Ding, E. Lubetzky, and Y. Peres, Total-variation cutoff in birth-and-death chains, Probab. Theory Rel. Fields (2010), no. 1-2, 61–85.
[11] B. Lachaud, Cutoff and hitting times for a sample of Ornstein-Uhlenbeck processes and its average, J. Appl. Probab. (2005), no. 4, 1069–1080.
[12] B. Lachaud and B. Ycart, Convergence times for parallel Markov chains, in: Positive systems.
Proceedings of the second multidisciplinary international symposium on positive systems: Theory and applications (POSTA 06), Grenoble, France, August 30 – September 1, 2006, Springer, Berlin, 2006, pp. 169–176.
[13] D. A. Levin, Y. Peres, and E. L. Wilmer, Markov chains and mixing times, American Mathematical Society, 2006.
[14] S. Martínez and B. Ycart, Decay rates and cutoff for convergence and hitting times of Markov chains with countably infinite state space, Adv. Appl. Probab. (2001), no. 1, 188–205.
[15] J. Miller and Y. Peres, Uniformity of the uncovered set of random walk and cutoff for lamplighter chains, Ann. Probab. (2012), no. 2, 535–577.
[16] L. Saloff-Coste, Random walks on finite groups, in: Probability on discrete structures, Encyclopaedia Math. Sci., vol. 110, Springer, Berlin, 2004, pp. 263–346.
[17] B. Ycart, Cutoff for samples of Markov chains, ESAIM: P&S (1999), 89–106.
[18] B. Ycart, Stopping tests for Markov chain Monte-Carlo methods, Methodol. Comput. Appl. Probab. (2000), no. 1, 23–36.
[19] B. Ycart, Cutoff for large sums of graphs, Ann. Inst. Fourier (2007), no. 7, 2197–2208.

Acknowledgements: J. Barrera was partially supported by grants Anillo ACT88, Fondecyt no. 1100618, and Basal project CMM (Universidad de Chile). B. Ycart was supported by Laboratoire d'Excellence TOUCAN (Toulouse Cancer).