arXiv [math.CO]

Hamilton cycles in weighted Erdős-Rényi graphs
Tony Johansson∗
Stockholm University, Stockholm, Sweden
December 23, 2020
Abstract
Given a symmetric n × n matrix P with 0 ≤ P(u,v) ≤ 1, we define a random graph G_{n,P} on [n] by independently including any edge {u,v} with probability P(u,v). For k ≥ 1, let A_k be the property of containing ⌊k/2⌋ Hamilton cycles, and one perfect matching if k is odd, all edge-disjoint. With an eigenvalue condition on P, and conditions on its row sums, G_{n,P} ∈ A_k happens with high probability if and only if G_{n,P} has minimum degree k whp. We also provide a hitting time version. As a special case, the random graph process on pseudorandom (n,d,µ)-graphs with µ ≤ d(d/n)^α for some constant α > 0 acquires A_k as soon as it acquires minimum degree k, with high probability.

∗ Funded by the Swedish Research Council (grant 2015-05015).

Introduction

The problem of determining whether a random graph contains a Hamilton cycle, i.e. a cycle passing through every vertex exactly once, dates back to the inception of the study of random graphs. In 1960, Erdős and Rényi asked whether their eponymous random graph G_{n,p}, obtained by including any edge independently with probability p, contains a Hamilton path [8]. The problem was settled by Komlós and Szemerédi [15], who showed that

lim_{n→∞} Pr{G_{n,p} is Hamiltonian} = lim_{n→∞} Pr{δ(G_{n,p}) ≥ 2},

where δ denotes the minimum degree of a graph. This was strengthened to a hitting time result, independently by Bollobás [5] and by Ajtai, Komlós and Szemerédi [1], stated as follows. Suppose G_{n,m} is an increasing sequence of graphs, where G_{n,m+1} is obtained by adding a uniformly chosen edge to G_{n,m}. Let τ_2 be the smallest m for which δ(G_{n,m}) ≥
2. Then with high probability, G_{n,τ_2} is Hamiltonian. For integers k ≥ 1, let A_k denote the graph property of containing ⌊k/2⌋ Hamilton cycles, as well as a matching of size ⌊n/2⌋ when k is odd. Bollobás and Frieze [6] strengthened the hitting time result above by showing that G_{n,τ_k} ∈ A_k with high probability, where τ_k is the smallest m for which δ(G_{n,m}) ≥ k. For a more thorough history, see Frieze's recent survey on Hamilton cycles [10].

In recent years, some attention has been turned to random subgraphs of a host graph Γ_n. The random subgraph Γ_{n,p} is obtained by including any edge of Γ_n independently with probability p, and the random graph process Γ_{n,m} on Γ_n is obtained by ordering the edges of Γ_n uniformly at random. An early example is the random bipartite graph G_{n,n,p}, obtained by letting Γ_n = K_{n,n}. Frieze [9] determined the threshold in this case. Frieze and Krivelevich [13] showed that if Γ_n is in a certain class of pseudorandom graphs (specified below) then Γ_{n,τ_2} ∈ A_2 whp. The author [14] showed that the same holds when δ(Γ_n) ≥ (1/2 + ε)n for some constant ε > 0. Alon and Krivelevich [3] showed that Γ_{n,τ_k} ∈ A_k whp for any k = O(1) in three dense classes of host graphs, which include some pseudorandom graphs and graphs with δ(Γ_n) ≥ (1/2 + ε)n.

Both [13] and [3] consider pseudorandom graphs known as (n,d,µ)-graphs. A graph Γ is an (n,d,µ)-graph if it has n vertices, every vertex has degree d, and the second largest eigenvalue of its adjacency matrix is at most µ in absolute value. We strengthen both results in the following special case of our main theorem. Let τ_{A_k} be the smallest m for which Γ_{n,m} ∈ A_k.

Theorem (Theorem 1.2, pseudorandom graph case). Let k = O(1). Suppose Γ_n is an (n,d,µ)-graph with µ ≤ d(d/n)^α for some constant α > 0. Then the random graph process on Γ_n has τ_{A_k} = τ_k with high probability.
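As a small illustration of the hitting-time phenomenon discussed above, the following sketch adds the edges of the complete graph K_n in uniformly random order, stops at the first moment τ_2 at which the minimum degree reaches 2, and can check Hamiltonicity at that moment with a Held-Karp bitmask DP (feasible only for tiny n). This is the classical Erdős-Rényi setting, not a general pseudorandom host, and all function names are ours, not from the paper.

```python
import random
from itertools import combinations

def hamiltonian(adj, n):
    # Held-Karp bitmask DP: is there a Hamilton cycle through vertex 0?
    if n == 1:
        return True
    full = (1 << n) - 1
    # dp[mask] = set of endpoints v of paths 0 -> v covering exactly `mask`
    dp = {1: {0}}
    for mask in range(1, full + 1):
        if not (mask & 1) or mask not in dp:
            continue
        for v in dp[mask]:
            for w in range(n):
                if not (mask >> w) & 1 and w in adj[v]:
                    dp.setdefault(mask | (1 << w), set()).add(w)
    return any(0 in adj[v] for v in dp.get(full, ()))

def hitting_time_tau2(n, rng):
    # Add the edges of K_n in uniformly random order; stop at min degree 2.
    edges = list(combinations(range(n), 2))
    rng.shuffle(edges)
    adj = {v: set() for v in range(n)}
    for m, (u, v) in enumerate(edges, start=1):
        adj[u].add(v); adj[v].add(u)
        if min(len(adj[x]) for x in range(n)) >= 2:
            return m, adj
    return len(edges), adj

rng = random.Random(0)
tau2, adj = hitting_time_tau2(10, rng)
print(tau2, hamiltonian(adj, 10))
```

For such small n the Hamiltonicity check at τ_2 can of course fail on individual runs; the theorems above are asymptotic statements.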
In [13] it was required that µ = o(d^{5/2}/(n ln n)^{3/2}), which only holds if d ≫ (n ln n)^{3/4}, while [3] asked that µ = O(d^2/n) and d = Ω(n ln ln n / ln n). This result strengthens both, with an implicit degree bound of d = n^{Ω(1)} owing to the fact that µ = Ω(d^{1/2}) (see e.g. [18]).

Our full result concerns a more general inhomogeneous random graph. Suppose P is a symmetric n × n matrix with entries P(u,v) ∈ [0,1], and define G_{n,P} by independently including each edge {u,v} with probability P(u,v). If R is a symmetric n × n matrix with R(u,v) ≥ 0 for all u, v, we also define a random graph process G_{n,R}(t) as follows. Each pair {u,v} is independently assigned a random value E(u,v), exponentially distributed with rate R(u,v), taken to equal ∞ if R(u,v) = 0. We let

G_{n,R}(t) = ([n], {uv : E(u,v) ≤ t}).

Note that G_{n,R}(t) equals G_{n,P} in distribution when P(u,v) = 1 − e^{−R(u,v)t}. Note also that this framework generalizes Γ_{n,p} and Γ_{n,m} (now in continuous time), re-obtained by letting R be the adjacency matrix of Γ_n. Anastos, Frieze and Gao [4] considered Hamiltonicity in the stochastic block model, which is G_{n,P} with P in a specific class of block matrices.

For vertex sets A, B we let R(A,B) = Σ_{u∈A, v∈B} R(u,v). Let d_R(u) = R(u,V) and d_R(A) = R(A,V). A key tool in our proof is the random walk induced by R, which jumps from u to v with probability M(u,v) = R(u,v)/d_R(u). The transition matrix M has real eigenvalues 1 = λ_1 ≥ λ_2 ≥ ··· ≥ λ_n ≥ −1. Let

λ(R) = max{|λ_2|, |λ_n|}.

Let σ(u) = d_R(u)/d_R(V) be the stationary distribution of the random walk, and define ‖M‖ = max_{u,v} M(u,v).

Definition 1.1.
Let RM be the set of rate matrices R with transition matrix M such that there exist constants α ∈ [0, 1/2), γ > 0, b > 0 such that λ(R) = o(1) and λ(R) ≤ (n‖M‖)^{−α−γ}, and R satisfies the following:

(a) there exists a d = d(n) such that d_R(u) ≥ d for all u, and d_R(V) ≤ bdn,

(b) for any A ⊆ V,

σ(A) = d_R(A)/d_R(V) ≤ b(|A|/n)^{1−α},   (1.1)

(c) ‖R‖ ≤ dn^{−γ}.

We state our main result.
Theorem 1.2.
Let k = O(1). If R ∈ RM, then whp G_{n,R}(t) satisfies τ_k = τ_{A_k}.

For any symmetric non-negative matrix R on V, let

γ_k(R) = Σ_{u∈V} d_R(u)^{k−1} e^{−d_R(u)}.

Let RM(1) be the set of R ∈ RM with γ_k(R) = 1. Note that for any x > 0, G_{n,R}(τ_k) and G_{n,xR}(τ_k) are equal in distribution, so it is enough to prove Theorem 1.2 for R ∈ RM(1).

Theorem 1.3.
Let k = O(1). Suppose P ∈ RM has γ_k(P) → γ_k ∈ [0,∞]. Then

lim_{n→∞} Pr{G_{n,P} ∈ A_k} = e^{−γ_k}.

Note that if P has constant row sums d_P(u) = d = ln n + (k − 1) ln ln n + c_{n,k}, then γ_k = exp{−lim_n c_{n,k}}.

Let us first discuss the overarching proof idea. Traditionally, many proofs of Hamiltonicity in random graphs rely on finding so-called booster edges for a graph G, which are edges e ∉ G such that G ∪ {e} is closer to being Hamiltonian, typically meaning it contains a longer path than G does. If random edges are added to G, one argues that some booster is likely to be added. Montgomery [17] more generally defined boosters as sets T of edges whose addition gets G closer to Hamiltonicity, and used sets |T| ≤ 2. A similar idea was used earlier in [11]. Booster pairs were also used by Alon and Krivelevich in their recent paper [3]. In this paper we move to general boosters, i.e. edge sets T of any (constant) size whose addition improves G. These are found using random alternating walks.

For proving Hamiltonicity, the most important property of G_{n,R}(τ_k) is expansion (see Lemma 2.1 below for the definition). A major drawback of G_{n,R}(τ_k) for our purposes is its high average degree, with most vertices having degree Ω(ln n). We therefore define a random subgraph H ⊆ G_{n,R}(τ_k), which retains expansion and connectivity with high probability while containing only O(n) edges. We do not show that H itself is Hamiltonian, but the graph will be important to our proof.

Recall the construction of G_{n,R}(t), in which each edge {u,v} is included at a random time E(u,v). Let D ≥ k be a constant integer and let

T_D(u) = inf{t > 0 : d_t(u) ≥ D}

be the random time at which u attains degree D, where d_t(u) denotes the degree of u in G_{n,R}(t). Define H(t) ⊆ G_{n,R}(t) by including any edge {u,v} with E(u,v) ≤ t and E(u,v) ≤ max{T_D(u), T_D(v)}. Let H = H(τ_k). In other words, an edge is included in H if it is among the first D edges attached to one of its endpoints in G_{n,R}(τ_k).

For a graph G = (V,E) and A ⊆ V we let N(A) = {u ∉ A : {u,v} ∈ E for some v ∈ A} and N̂(A) = A ∪ N(A). The following lemma is proved in Section 8.

Lemma 2.1.
Suppose R ∈ RM, and let k = O(1). For a large enough constant D, the following holds.

(i) (Light tail) Let θ tend to infinity with n and let S_θ be the set of u with d_H(u) ≥ θ or d_R(u) ≥ θd. Then with high probability, |N̂_H(S_θ)| = o(n).

(ii) (Expansion) There exists a constant β = β(R,k) > 0 such that with high probability, every vertex set A with |A| < βn satisfies |N_H(A)| ≥ k|A|.

Let SE_k(R) ("sparse expanders") be the set of graphs satisfying (i)-(ii). Note that if G ∈ SE_k and ∆(F) ≤ ℓ < k then G \ F ∈ SE_{k−ℓ} and G ∪ F ∈ SE_k. Note also that SE_k ⊆ SE_ℓ for ℓ ≤ k.

For t > 0 let

S(t) = {u : T_D(u) > t} = {u : d_t(u) < D},   (2.1)

and L(t) = V \ S(t). For 0 ≤ t_0 ≤ t_1 define a random graph G*_{n,R}(t_0, t_1) by including any edge {u,v} with E(u,v) ≤ t_1, and any {u,v} with one endpoint in S(t_0) and E(u,v) ≤ τ_k. Then G_{n,R}(t_1) ⊆ G*_{n,R}(t_0, t_1) ⊆ G_{n,R}(τ_k) whenever t_1 ≤ τ_k, and H ⊆ G*_{n,R}(t_0, t_1) for any 0 < t_0 ≤ t_1. Suppose t_0 < t_1 < t_2 and define

G_0 = G*_{n,R}(t_0, t_0),  G_1 = G*_{n,R}(t_0, t_1),  G_2 = G*_{n,R}(t_0, t_2).

For i = 0, 1, 2, let F_i denote the σ-algebra generated by G*_{n,R}(t_0, t_0) (including E(u,v) for all edges included) and G*_{n,R}(t_0, t_i) (excluding E(u,v)). Then H is F_i-measurable for all i, and the following lemma lets us jump between G_1 and G_2.

Lemma 2.2.
Suppose R ∈ RM(1). Let 0 < t_0 < t_1 < t_2 and G_i = G*_{n,R}(t_0, t_i) for i = 0, 1, 2. Let L ∈ F_1. Then for any set F of edges

Pr{F ⊆ G_2 | F_1} = ∏_{{u,v} ∈ F\G_1, u,v ∈ L(t_0)} (1 − e^{−R(u,v)(t_2 − t_1)}),   (2.2)

Pr{F ⊆ G_1 | {F ⊆ G_2} ∩ L} ≥ ((t_1 − t_0)/(t_2 − t_0))^{|F|}.

Proof. Any edge in G_2 \ G_0 is fully contained in L(t_0). Conditional on G_0 and S(t_0), the edges {u,v} ∉ G_0 with u,v ∈ L(t_0) are independent exponential random variables with individual rates R(u,v), conditioned to be at least t_0. Then (2.2) follows from the memoryless property of exponential random variables. Since ‖R‖ = o(1), we have

Pr{F ⊆ G_1 | {F ⊆ G_2} ∩ L} = ∏_{{u,v} ∈ F\G_0} (e^{−R(u,v)t_0} − e^{−R(u,v)t_1})/(e^{−R(u,v)t_0} − e^{−R(u,v)t_2}) ≥ ((t_1 − t_0)/(t_2 − t_0))^{|F\G_0|} ≥ ((t_1 − t_0)/(t_2 − t_0))^{|F|}.

Suppose R ∈ RM(1) and let t_0 = 1/4, t_1 = 1/2, t_2 = 3/
4. Define G_0, G_1, G_2 as in the previous section. We will show that G_2 ∈ A_k whp. In Section 3.1 we show that τ_k > 3/4 whp. Since G_2 ⊆ G_{n,R}(τ_k) when τ_k > 3/
4, it follows that

Pr{G_{n,R}(τ_k) ∈ A_k} ≥ Pr{G_2 ∈ A_k and τ_k > 3/4} = 1 − o(1).   (2.3)

Say that F is a k-graph if it can be written as the disjoint union of F_1, ..., F_{⌊k/2⌋}, where F_i is a path or a cycle for all i, as well as a matching F_{(k+1)/2} when k is odd. Define

s_k(G) = max{|F| : F ⊆ G a k-graph},

so that G ∈ A_k if and only if s_k(G) = ⌊kn/2⌋.

For i = 1, 2, let M_i^ℓ be the event that s_k(G_i) = ℓ. Recall the definition of S(t) from (2.1), and let L = {H ∈ SE_k} ∩ {|S(t_0)| = o(n)}, and note that L is F_1-measurable. Then M_i^ℓ ∩ L ∈ F_i for i = 1,
2, and Lemma 2.2 gives

Pr{M_1^ℓ | M_2^ℓ ∩ L} ≥ min_{F k-graph} Pr{F ⊆ G_1 | L ∩ {F ⊆ G_2}} ≥ (1/2)^{kn/2}.

Then for any ℓ < ⌊kn/2⌋,

Pr{M_2^ℓ ∩ L} = Pr{M_1^ℓ ∩ M_2^ℓ ∩ L} / Pr{M_1^ℓ | M_2^ℓ ∩ L} ≤ Pr{M_2^ℓ | M_1^ℓ ∩ L} · 2^{kn/2}.   (2.4)

Suppose we are able to prove that for any ℓ < ⌊kn/2⌋,

Pr{M_2^ℓ | M_1^ℓ ∩ L} ≤ e^{−Ω(n√ln n)}.   (2.5)

Then plugging (2.5) into (2.4) gives

Pr{G_2 ∉ A_k} ≤ Pr{L̄} + Σ_{ℓ < ⌊kn/2⌋} Pr{M_2^ℓ ∩ L} ≤ Pr{L̄} + kn e^{−Ω(n√ln n) + O(n)} = Pr{L̄} + o(1).

In Lemma 3.1 we will show that |S(t_0)| = o(n) whp. Together with Lemma 2.1, this shows that L occurs whp. As noted in (2.3), this together with a proof that τ_k > 3/4 whp shows that G_{n,R}(τ_k) ∈ A_k whp.

It remains to prove (2.5). The graph G_1 contains H by construction. If M_1^ℓ ∩ L holds then G_1 contains some k-graph F of size ℓ. If k is even, suppose F = F_1 ∪ ... ∪ F_{k/2} for some disjoint paths and Hamilton cycles F_i. Suppose without loss of generality that |F_1| < n. Then G_1 contains the graph G = H ∪ F_1 \ (F_2 ∪ ··· ∪ F_{k/2}) ∈ SE_2. Let G_p be the graph obtained by independently adding any edge {u,v} to G with probability p(u,v), where p is some symmetric function. Letting

p(u,v) = 0 if u ∈ S(t_0) or v ∈ S(t_0),
p(u,v) = 0 if {u,v} ∈ F_2 ∪ ··· ∪ F_{k/2},
p(u,v) = 1 − e^{−R(u,v)(t_2 − t_1)} otherwise,

we have G_p ⊆ G_2 by Lemma 2.2. If s_2(G_p) > s_2(G) then s_k(G_2) ≥ s_k(G ∪ G_p) > s_k(G_1), since G_p is disjoint from F_2 ∪ ··· ∪ F_{k/2} ⊆ G_1.

If k is odd and F = F_1 ∪ ··· ∪ F_{(k+1)/2}, assume without loss of generality that ∆(F_1) ≤ i and |F_1| < in/2 for some i ∈ {1, 2}. Then G = H ∪ F_1 \ (F_2 ∪ ··· ∪ F_{(k+1)/2}) ∈ SE_2, and s_i(G_p) > s_i(G) implies s_k(G_2) > s_k(G_1).

So, (2.5) follows from the following lemma.

Lemma 2.3.
Let i ∈ {1, 2}. Suppose R ∈ RM(1) and G ∈ SE_i(R) with s_i(G) < in/2, and suppose p(u,v) ≥ R(u,v)/8 for all {u,v} ∉ E, where Σ_{{u,v}∈E} R(u,v) = o(n ln n). Then

Pr{s_i(G_p) = s_i(G)} ≤ e^{−n√ln n}.

The remainder of the paper is mainly devoted to proving Lemma 2.1 (Section 8) and Lemma 2.3 (Sections 4 through 7).
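The sparse subgraph H defined above (keep an edge exactly when it is among the first D edges to arrive at one of its endpoints) can be sketched directly from the edge arrival times; the following is an illustrative implementation under the simplifying assumption of unit rates on every pair, with helper names of our own choosing.

```python
import random

def sparse_subgraph(n, edge_times, D):
    """Keep {u,v} iff E(u,v) <= max(T_D(u), T_D(v)), where T_D(w) is the
    arrival time of the D-th edge at w (infinity if its degree stays < D).

    edge_times: dict {(u, v): t} with u < v, all times distinct.
    """
    times_at = {w: [] for w in range(n)}
    for (u, v), t in edge_times.items():
        times_at[u].append(t)
        times_at[v].append(t)
    T = {}
    for w in range(n):
        ts = sorted(times_at[w])
        T[w] = ts[D - 1] if len(ts) >= D else float("inf")
    return {e for e, t in edge_times.items() if t <= max(T[e[0]], T[e[1]])}

rng = random.Random(1)
n, D = 50, 3
# exponential "clocks" with rate 1 on every pair, as in the process G_{n,R}(t)
edge_times = {(u, v): rng.expovariate(1.0)
              for u in range(n) for v in range(u + 1, n)}
H = sparse_subgraph(n, edge_times, D)
print(len(H))
```

Since each kept edge is among the first D arrivals at some endpoint, |E(H)| ≤ Dn, matching the O(n) edge count claimed for H.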
Preliminaries
We state some preliminary, for the most part standard, results, and leave the proofs for Section 9.
Recall that d_R(u) = Σ_v R(u,v). We will often assume that γ_k(R) = 1, and note that this implies that d = Θ(ln n), where d = min_u d_R(u).

Lemma 3.1.
Let R ∈ RM(1).

(i) For any integer D ≥ 1, u ∈ V and S ⊆ V with |V \ S| = O(1), and t = Ω(1),

Pr{e_t(u, S) < D} = exp{−t d_R(u) + O(ln d_R(u))},

where e_t(u, S) denotes the number of edges between u and S in G_{n,R}(t).

(ii) Let S(t) denote the set of vertices in G_{n,R}(t) with degree less than D. If t = Ω(1) then |S(t)| = o(n) whp.
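Part (i) of the lemma above is, in essence, a Poisson-type lower-tail estimate for the degree of u at time t. A minimal numeric sketch (assuming, for concreteness only, homogeneous rates R(u,v) = ln(n)/n, so that d_R(u) ≈ ln n) computes Pr{d_t(u) < D} exactly via a Poisson-binomial recursion and prints it next to the e^{−t d_R(u)} scale; names here are illustrative.

```python
import math

def prob_degree_below(D, ps):
    """Exact Pr{sum of independent Bernoulli(p) variables < D}.
    Poisson-binomial recursion; mass reaching D is absorbed (successes
    never decrease, so dropped mass is exactly Pr{>= D})."""
    dp = [1.0] + [0.0] * (D - 1)  # dp[j] = Pr{j successes so far, never hit D}
    for p in ps:
        new = [0.0] * D
        for j, q in enumerate(dp):
            new[j] += q * (1.0 - p)
            if j + 1 < D:
                new[j + 1] += q * p
        dp = new
    return sum(dp)

n, t, D = 2000, 1.0, 3
rate = math.log(n) / n                       # assumed: R(u,v) = ln(n)/n
ps = [1.0 - math.exp(-rate * t)] * (n - 1)   # edge {u,v} present iff E(u,v) <= t
p_low = prob_degree_below(D, ps)
d_R = rate * (n - 1)
print(p_low, math.exp(-t * d_R) * d_R ** (D - 1))
```

The second printed quantity is the exp{−t d_R(u) + O(ln d_R(u))} scale with the polynomial correction written out as d_R(u)^{D−1}; the two agree in order of magnitude.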
Let k ≥ . Suppose P ∈ RM has γ k ( P ) → γ k ∈ [0 , ∞ ] . Then lim n →∞ Pr { δ ( G n,P ) ≥ k } = e − γ k . If R ∈ RM(1) and ε ≫ ln ln n ln n , then G n,R ( t ) has − ε < τ k < ε whp. We also note the following simple consequence of conditions (a) and (b)of Definition 1.1.
Lemma 3.3.
Suppose R ∈ RM has stationary distribution σ, and A ⊆ V. Then σ(A) = o(1) if and only if |A| = o(n).

For a matrix A indexed by V and sets S, T ⊆ V we write

A(S,T) = Σ_{u∈S, v∈T} A(u,v),

noting that pairs (u,v) with u,v ∈ S ∩ T are counted twice. We will use the following version of the well-known Expander Mixing Lemma [2].

Lemma 3.4. Suppose R ∈ RM has transition matrix M with stationary distribution σ. Then for any A, B ⊆ V,

M(A,B) = |A|σ(B) + O(λ(R)√(n|A|σ(B))).   (3.1)

In particular, the following holds.

(i) R(A,B) = Ω(dn) for any A, B ⊆ V with |A|, |B| = Ω(n).

(ii) There exists a constant c > 0 such that

|{u ∈ A : M(u,A) ≥ (|A|/n)^c}| = o(|A|)

for all |A| ≤ n/2.

We consider the following lemma well-known, and state it without proof.
Lemma 3.5.
Suppose X_1, ..., X_m are independent exponential random variables with finite respective rates r_1, ..., r_m > 0. Let r = r_1 + ··· + r_m and suppose r_i ≤ εr for all i, for some ε > 0. Let X_(D) be the D-th smallest value in the family. Then for D ≤ 1/(2ε),

Pr{X_i ≤ X_(D)} ≤ 2D r_i / r.

We will also use the following Chernoff bounds.
Lemma 3.6.
Suppose X is a finite set and let σ_x ∈ [0,1] for all x ∈ X. Let µ = Σ_{x∈X} σ_x.

(i) Suppose S ⊆ X is a random set obtained by including any x ∈ X independently with probability σ_x. If φ ≤ µ/2, then

Pr{|S| ≤ φ} ≤ exp{−µ/8}.   (3.2)

(ii) Suppose T ⊆ X is a random set with Pr{A ⊆ T} ≤ ∏_{x∈A} σ_x for all A ⊆ X. If φ ≥ µ, then

Pr{|T| ≥ φ} ≤ (µ/φ)^{φ/2}.   (3.3)

Proof.
Note that the condition on T implies that E[|T|^ℓ] ≤ E[|S|^ℓ] for all ℓ ≥ 0. The bounds then follow by standard methods, see e.g. [12, Section 21.4].

A matrix lemma
Suppose I is a totally ordered set, A a set and a = (a_1, ..., a_ℓ) ∈ A^ℓ a sequence in A. Say that a function f : I → A respects a if the sequence (f(i))_{i∈I} is a subsequence of a.

Lemma 3.7.
Suppose τ : I × J → A and a = (a_1, ..., a_ℓ) ∈ A^ℓ, ℓ ≥ 1. Suppose I can be totally ordered so that i ↦ τ(i,j) respects a for each j ∈ J. Let π_I, π_J be finite measures on I, J, respectively. Then there exist S ⊆ I, T ⊆ J with π_I(S) ≥ π_I(I)/ℓ and π_J(T) ≥ π_J(J)/ℓ, such that τ is constant on S × T.

Let M be a transition matrix with stationary distribution σ with σ(u) > 0 for all u ∈ V. For a probability measure π on V, define

µ_σ(π) = √(Σ_{v∈V} π(v)²/σ(v) − 1).

Note that µ_σ(π) ≥ 0, with equality if and only if π = σ. We use µ_σ as a measure of distance from stationarity.

Lemma 3.8.
Suppose R ∈ RM has transition matrix M. Let π_0 be a probability measure on V, and define π_1(v) = Σ_u π_0(u) M(u,v). Then µ_σ(π_1) ≤ λ(R) µ_σ(π_0).

Suppose F = (V,E) is a graph, and R is a rate matrix on V. A walk W = (w_0, w_1, ..., w_ℓ) on V is F-alternating if {w_i, w_{i+1}} ∈ F for all odd i, and strictly F-alternating if also {w_i, w_{i+1}} ∉ F for all even i.

We need a way to measure the size of a family of F-alternating walks. This will be slightly cumbersome to define. Firstly, for any walk W = (w_0, ..., w_ℓ) define

R_alt[W] = ∏_{j=0}^{⌊ℓ/2⌋−1} R(w_{2j}, w_{2j+1}).

For a family of walks W let R_alt[W] = Σ_{W∈W} R_alt[W].

For any edge set E with E ∩ G = ∅ we also define

R_G[E] = ∏_{{u,v}∈E} R(u,v).

If E ∩ G ≠ ∅ let R_G[E] = 0. If E is a family of edge sets, let R_G[E] = Σ_{E∈E} R_G[E].

For a walk W let odd(W) be the set of edges {{w_i, w_{i+1}} : i even}. Note that if W is a walk which repeats no vertex and is strictly G-alternating, then R_G[odd(W)] = R_alt[W]. If E is an edge set of size r, then the number of walks W with odd(W) = E is 2^r r!. We conclude that if W is a family of non-repeating strictly G-alternating walks of length 2r − 1 and odd(W) = {odd(W) : W ∈ W}, then

R_G[odd(W)] ≥ R_alt[W] / (2^r r!).   (4.1)

For a walk W = (w_0, ..., w_i) let f(W) = w_i denote its final vertex. For a family W of walks and v ∈ V let W^{→v} be the set of W ∈ W with f(W) = v. Let (W, v_1, ..., v_j) = (w_0, ..., w_i, v_1, ..., v_j), and for two walks W_1 = (w_0, ..., w_i) and W_2 = (w'_0, ..., w'_j) write

W_1 ∘ W_2 = (w_0, ..., w_i, w'_j, ..., w'_0).

We define a random walk on V as a probability measure π on the set V^∞ of infinite walks on V. For walks W of length ℓ, write π(W) = π(W(W)) where W(W) is the family of walks agreeing with W for the first ℓ steps. Define

π(w_{j+1} | w_0, ..., w_j) = π(w_0, ..., w_{j+1}) / π(w_0, ..., w_j)

whenever π(w_0, ..., w_j) > 0. Define π(v | W) = 0 when π(W) = 0.

Suppose G is a graph and R a rate matrix. Recall the definition of N̂(A) from Section 2.1. Starting at some (possibly random) initial point w_0, say that a random walk π is an (R,G)-alternating random walk if for all even i ≥ 0,

π(w_{i+1} | w_0, ..., w_i) = M(w_i, w_{i+1}),
π(w_{i+2} | w_0, ..., w_{i+1}) = 0 if w_{i+2} ∉ N̂_G(w_{i+1}).
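Such a walk can be sampled step by step; the sketch below is purely illustrative (toy rate matrix and graph of our own choosing), with the even steps driven by M and the odd steps made uniform over the closed G-neighbourhood as one concrete choice.

```python
import random

def alternating_walk(R, G, w0, steps, rng):
    """Sample an (R, G)-alternating walk: even steps jump u -> v with
    probability R[u][v]/d_R(u); odd steps stay inside N_G(w) plus w itself
    (here: chosen uniformly, one concrete choice)."""
    walk = [w0]
    for i in range(steps):
        u = walk[-1]
        if i % 2 == 0:
            # M-step driven by the rate matrix R
            vs = list(R[u])
            weights = [R[u][v] for v in vs]
            walk.append(rng.choices(vs, weights=weights)[0])
        else:
            # step constrained to the closed G-neighbourhood of u
            walk.append(rng.choice(sorted(G[u] | {u})))
    return walk

n = 6
R = {u: {v: 1.0 for v in range(n) if v != u} for u in range(n)}  # uniform rates
G = {u: {(u + 1) % n, (u - 1) % n} for u in range(n)}            # 6-cycle
rng = random.Random(2)
W = alternating_walk(R, G, 0, 8, rng)
print(W)
```

Every second step is a genuine random-walk step of M, so the walk mixes, while the intermediate steps never leave the closed G-neighbourhood, which is exactly the constraint in the displayed definition.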
We let π_i denote the measure on V induced by w_i. A special case is the simple, lazy (R,G)-alternating random walk π_G defined by

π_G(w_{i+1} | w_0, ..., w_i) = 1/(d_G(w_i) + 1) for w_{i+1} ∈ N̂_G(w_i),

for all odd i ≥ 1. If the initial vertex x is specified, we denote the measure by π_{G,x}. Note that if W = (w_0, ..., w_j) is a G-alternating walk and π a random (R,G)-alternating walk, then

π(W | w_0) ≤ ∏_{i=0}^{⌈j/2⌉−1} R(w_{2i}, w_{2i+1})/d_R(w_{2i}) ≤ R_alt[W]/d^{⌈j/2⌉}.   (4.2)

For rate matrices R define

mix(R) = ⌈−ln(n‖M‖)/ln λ(R)⌉,   (4.3)

and note that mix(R) = O(1) for R ∈ RM since then λ(R) = (n‖M‖)^{−Ω(1)}. The name is in reference to the following lemma.
Lemma 4.1.
Suppose R ∈ RM , ∆( F ) ≤ , and that π is an ( R, F ) -alternating random walk. Suppose θ ≤ λ ( R ) − / tends to infinity with n ,and that j ≥ mix( R ) . Suppose c > is a constant. There exists a constant ρ > such that if W is a family of F -alternating walks W of length j , suchthat d R ( v ) < θd for all v ∈ W , and π ( W ) ≥ c , then there exists a vertex set U j of size at least ρn such that π ( W → u ) ≥ ρn , for all u ∈ U j . Say that a measure µ on V is near-uniform if there exists a constant c > µ ( v ) ≤ c/n for all v . Say that a random walk π is near-uniform if π is near-uniform. Note that for a near-uniform π , (4.2) showsthat for any family of walks W of length j , π ( W ) = X u π ( u ) π ( W | u ) ≤ cn R alt [ W ] d ⌈ j/ ⌉ (4.4)For S ⊆ V let τ ( S ) be the random time at which a walk first visits S , andlet τ odd ( S ) be the first odd index for which it occurs.12 emma 4.2. Suppose R ∈ RM and that G is light-tailed. Suppose π is anear-uniform random ( R, G ) -alternating walk.(i) If j ≥ is constant and | S | = o ( n ) , then π ( τ ( S ) ≤ j ) = o (1) .(ii) If | S | = Ω( n ) then π ( τ ( S ) ≤
1) = Ω(1) . Lemma 4.2 (i) is particularly interesting for S = S θ , which has | S θ | = o ( n ) if R ∈ RM and G is light-tailed. We let D θj denote the event that τ ( S θ ) ≤ j , and let Z θj ( c ) be the corresponding vertex set. Then | Z θj ( c ) | = o ( n ). Lemma 4.3.
Suppose R ∈ RM and that G is light-tailed. Let j ≥ . Let C θj be the set of G -alternating walks W = ( w , . . . , w j ) with d G ( w i ) < θ and d R ( w i ) < θd for all i , such that either (a) |{ w , . . . , w j }| < j + 1 or (b) W is not strictly G -alternating. If θ j k M k = o (1) then R alt [ C θj ] = o ( nd ⌈ j/ ⌉ ) . Suppose G is a graph on an even number n of vertices. Let s ( G ) ≤ ⌊ n/ ⌋ denote the size of the largest matching in G . Suppose s ( G ) < ⌊ n/ ⌋ , andlet F ( G ) be the family of matchings F with | F | = µ ( G ). For F ∈ F ( G ) letIP F be the set of vertices isolated by F , i.e. vertices x such that d F ( x ) = 0.Let IP = IP ( G ) be the set of vertex pairs { x, y } such that { x, y } ⊆ IP F for some F ∈ F ( G ). Lemma 4.4.
Suppose G ∈ SE with s ( G ) < ⌊ n/ ⌋ . Then | IP | ≥ ( βn ) / . Proof.
Let F ∈ F ( G ) with and x ∈ IP F . Let Y be the set of vertices y suchthat there exists an F -alternating walk ( x = w , . . . , w j = y ) of even length,with x ∈ Y . Then N G ( Y ) = N F ( Y ). Indeed, suppose v ∈ N G ( Y ) \ N F ( Y ),and let W = ( x, . . . , y ) be an even-length F -alternating walk such that yv ∈ G . If v ∈ IP F then W + v is an augmenting walk, contradictingthe maximality of F . If v / ∈ IP F then vw ∈ F for some w , and the walk( W, v, w ) shows that w ∈ Y , contradicting v / ∈ N F ( Y ). It follows that | N G ( Y ) | ≤ | N F ( Y ) | ≤ | Y | −
1. Since G expands, we then have | Y | ≥ βn .The same argument shows that for every y ∈ Y , there is a set | X y | ≥ βn such that { x, y } ∈ IP for each x ∈ X y . We conclude that | IP | ≥ ( βn ) /
2. 13or a graph G , integer r ≥
1, and θ tending to infinity, let T θr ( G ) be thefamily of edge sets | T | = r such that no e ∈ T is contained in G or incidentto S θ , and s ( G ∪ T ) > s ( G ). Proposition 4.5.
Suppose R ∈ RM and G ∈ SE with s ( G ) < ⌊ n/ ⌋ .Then there exists an r ≤ R ) + 1 such that if θ tends to infinity suffi-ciently slowly, then R G [ T θr ] = Ω( nd r ) . Proof.
Let AW θ r − ( G ) be the set of walks ( w , . . . , w r − ) which (a) repeatno vertex, (b) avoid S θ , (c) are F -alternating with w , w r − ∈ IP F for some F ∈ F ( G ), and (d) are strictly G -alternating. Then odd(AW θ r − ( G )) ⊆T θr , and (4.1) shows that it is enough to prove that R alt [AW θ r − ( G )] =Ω( nd r ) for some r ≤ R ) + 1 and θ .For F ∈ F ( G ) let A i ( F ) be the set of F -alternating walks ( w , w , . . . , w i )with w ∈ IP F and w i / ∈ IP F . Let ( w , . . . , w i ) ∈ B i ( F ) if ( w , . . . , w i − ) ∈A i − ( F ) and w i ∈ IP F . Let A i , B i be the walks which are in A i ( F ) , B i ( F )for some F ∈ F ( G ), respectively. Note that B i \ ( C i ∪ D θi ) ⊆ AW θi ( G ).Suppose x ∈ IP F for some F ∈ F ( G ). Let π F,x be the simple, lazy(
R, F )-alternating random walk with starting vertex x , as defined in Sec-tion 4.1. Then for all i ≥
0, it holds that π F,x ( A i +1 ∪ B i +1 ) = π F,x ( A i )and π F,x ( A i +2 ) ≥ π F,x ( A i +1 ). For any j ≥
1, we conclude that thereis a constant c j such that either π F,x ( B i − ) ≥ c j for some 1 ≤ i < j , or π F,x ( A j ) ≥ c j . We set j = mix( R ), as defined in (4.3).For each pair { x, y } ∈ IP pick some F ( x, y ) ∈ F ( G ) with { x, y } ∈ IP F .Define a random ( R, G )-alternating walk π by π = X { x,y }∈ IP | IP | (cid:16) π F ( x,y ) ,x π F ( x,y ) ,y (cid:17) . In other words, we pick a pair { x, y } ∈ IP uniformly at random, then pickone of x, y as our starting point with probability 1 /
2, and run the simple,lazy (
R, F ( x, y ))-alternating random walk. Note that π is near-uniform, as | IP | = Ω( n ). The one-sided case.
Suppose π ( B i − ) = Ω(1) for some i < j . Since B i − \ ( C i − ∪ D θ i − ) ⊆ AW θ i − , Lemmas 4.2 (i) and 4.3 imply π (AW θ i − ( G )) ≥ π ( B i − ) − π ( C i − ) − π ( D θ i − ) = Ω(1) . Then (4.4) gives R alt [AW θ i − ( G )] = Ω( nd i ).14 he two-sided case. Suppose π ( B i − ) < c j for all i < j . Let A θ j bethe set of walks in A j which avoid D θ j , and letXY = n { x, y } ∈ IP : π F ( x,y ) ,x ( A θ j ) ≥ c j π F ( x,y ) ,y ( A θ j ) ≥ c j o . Then | XY | = Ω( n ). Indeed, π ( D θ j ) = o (1) by Lemma 4.2 (i), so c j − o (1) ≤ π ( A θ j ) ≤ | XY || IP | c j | XY || IP | ≤ c j | XY || IP | . Fix some { x, y } ∈ XY and let F = F ( x, y ). Let A x,y ⊆ A θ j be theset of F -alternating walks in A θ j originating at x . Note that A x,y ◦ A y,x ⊆B j +1 \ D θ j +1 . We have R alt [ A x,y ◦ A y,x ] = X u,v R alt [ A → ux,y ] R ( u, v ) R alt [ A → vy,x ] . (4.5)By Lemma 4.1 there exists a constant ρ > | U x | ≥ ρn such that π F,x ( A → ux,y ) ≥ ρ/n for all u ∈ U x , and by (4.2) we have R alt [ A → ux,y ] ≥ d j π F,x ( A → ux,y ) = Ω (cid:18) d j n (cid:19) . Likewise, R alt [ A → vy,x ] = Ω( d j /n ) for all v ∈ U y where | U y | ≥ ρn . Then (4.5)and Lemma 3.4 imply R alt [ A x,y ◦ A y,x ] ≥ R ( U x , U y )Ω (cid:18) d j n (cid:19) = Ω (cid:18) d j +1 n (cid:19) . (4.6)Since A x,y ◦ A y,x \ C j +1 ⊆ AW θ j +1 ( G ), Lemma 4.3 gives R G [AW θ j +1 ( G )] ≥ X ( x,y ) ∈ XY R alt [ A x,y ◦ A y,x ] − R alt [ C j +1 ] = Ω( nd j +1 ) . .3 Paths and boosters We view paths P = ( x = v , . . . , v ℓ = y ) as being directed from x to y , andfor any P define a total ordering ≤ P of V by v i ≤ P v j whenever i ≤ j , and u ≤ P v whenever u ∈ P, v / ∈ P (arbitrarily ordering the vertices not on P ).Suppose z = ( z , z , . . . , z ℓ ) is a sequence of distinct vertices on P . Define τ ( z ) as the permutation of [ ℓ ] for which z τ (1) ≤ P z τ (2) ≤ P · · · ≤ P z τ ( ℓ ) . If z repeats a vertex or contains a vertex not on P , define τ ( z ) = ⊥ . 
For a pair of P -alternating walks ( X, Y ) with X = ( x, x , . . . , x i ) and Y = ( y, y , . . . , y j ),let τ ( X, Y ) = ⊥ if X has odd length and f ( X ) ∈ N P ( Y ), and τ ( X, Y ) = τ ( y , . . . , y j , x , . . . , x i ) otherwise (note that τ = ⊥ is possible in this caseas well). See Figure 1. Suppose P = ( v , . . . , v ℓ ) is a path of length ℓ , and let e = { v ℓ , v i } be an edgewith 0 < i < ℓ −
1. Then P △ ( v ℓ , v i , v i +1 ) = ( v , . . . , v i , v ℓ , v ℓ − , . . . , v i +1 ) isalso a path of length ℓ . We say that ( v ℓ , v i , v i +1 ) is a rotation walk of length2. In general, suppose P is a path with endpoints x and y , and suppose W is a non-repeating even-length strictly P -alternating walk starting at y andavoiding x . If P △ W is a path, we say that W is a rotation walk for P . Let R i ( P, y ) be the set of rotations walks for P of length 2 i with starting point y . Let R i +1 ( P, y ) be the set of walks ( w , . . . , w i +1 ) with ( w , . . . , w i ) ∈R i ( P, y ) and w i +1 ∈ P with dist P ( w i +1 , { w , . . . , w i , x } ) > W = ( w , . . . , w i +1 ) ∈ R i +1 ( P, y ) for some i ≥
0. Then thereis a unique vertex v ∈ N P ( f ( W )) for which ( W, v ) ∈ R i +2 ( P, y ), and we x yy y y y x x [1] [2] [3][4][5][6]Figure 1: Two walks X, Y with τ ( X, Y ) = (126543). If X ′ = ( X, x ), then τ ( X ′ , Y ) will take the following values in order as x increases from x to y and out of P : ⊥ , (7126543), ⊥ , (1276543), ⊥ , (1265743), ⊥ , (1265437), ⊥ .This sequence is the same for any compatible X, Y with the same τ ( X, Y ),though some values may be skipped.16efine r P ( W ) as the path ( W, v ). Note that τ ( r P ( W )) is fully determinedby τ ( W ), as v is the immediate successor of u along the path P △ W , viewedas going from x to f ( W ).We will need to consider pairs of rotation walks starting at x and y .Let A i ( P, x ) be the set of P -alternating walks of length i starting at x .For X ∈ A i ( P, x ) and Y ∈ R j ( P, y ), say that (
X, Y ) is a compatible pair if X ∈ R i ( P △ Y, x ), no vertex appears twice in X ∪ Y , and dist P ( f ( X ) , Y ) > X has odd length. For a compatible pair ( X, Y ) let r P,Y ( X ) = r P △ Y ( X )be the unique walk ( X, v ) for which ( r P ( X ) , Y ) is compatible. Note that τ ( r P,Y ( X ) , Y ) is fully determined by τ ( X, Y ).We summarize this with a lemma. For families of walks X and Y , saythat ( X , Y ) is compatible if every ( X, Y ) ∈ X × Y is compatible, and thereexists a permutation τ such that τ ( X, Y ) = τ for all ( X, Y ) ∈ X × Y . Lemma 4.6.
Let i, j ≥ . Suppose X ⊆ A i +1 ( P, x ) and Y ⊆ R j ( P, y ) aresuch that ( X , Y ) is compatible. Let r P, Y ( X ) = { r P,Y ( X ) : X ∈ X , Y ∈ Y} ⊆A i +2 ( P, x ) . Then ( r P, Y ( X ) , Y ) is compatible.Proof. This follows from the fact that τ ( r P,Y ( X ) , Y ) is fully determined by τ ( X, Y ), which is constant on
X × Y .The relevance of compatible walks is this: if ( X , Y ) is a compatible pairof families of even-length walks, then P △ ( X ◦ Y ) is a cycle of length ℓ + 1for every ( X, Y ) ∈ X × Y . Suppose G ∈ SE has s ( G ) < n , and let F ( G ) be the set of paths andcycles F ⊆ G with | F | = s ( G ). We first take care of a special case. Lemma 4.7.
Suppose G ∈ SE and that F ( G ) contains a cycle. Let BW ( G ) denote the set of edges e such that s ( G + e ) > s ( G ) . Then R G [BW ( G )] = Ω( dn ) .Proof. The family F ( G ) contains a cycle C only if the vertex set U of C forms a connected component in G . Since G expands, any connectedcomponent has size between βn and (1 − β ) n . Since U × U ⊆ BW ( G ), byLemma 3.4 we have R G [BW ( G )] ≥ R ( U, U ) = Ω( dn ).Now suppose F ( G ) contains only paths. For U ⊆ V , let EP ( U ) be theset of ordered pairs ( x, y ) such that some P ∈ F ( G ) has endpoints x and y , and V ( P ) = U . 17 emma 4.8. Suppose G ∈ SE with s ( G ) < n . Then there exists some U ⊆ V with | EP ( U ) | ≥ ( βn ) .Proof. P´osa’s lemma (see e.g. [12]) shows that if { x, y } ∈ EP ( U ) for some x, y , then the set Y = { y : { x, y } ∈ EP ( U ) } has | N G ( Y ) | < | Y | . Since G ∈ SE we then have | Y | ≥ βn , and the lemma follows just like in Lemma 4.4,this time considering ordered pairs.We prove the analogue of Proposition 4.5 for paths. For a graph G ,integer r ≥
1, and θ tending to infinity, let T θr ( G ) be the family of edgesets | T | = r such that no e ∈ T is contained in G or incident to S θ , and s ( G ∪ T ) > s ( G ). Proposition 4.9.
Suppose R ∈ RM and G ∈ SE with s ( G ) < n . Thenthere exists an r ≤ R ) + 1 such that if θ tends to infinity sufficientlyslowly, then R G [ T θr ] = Ω( nd r ) . Proof.
Lemma 4.7 takes care of the case when F ( G ) contains a cycle; it onlyremains to note that the set E θ of edges incident to S θ has R [ E θ ] = o ( nd )by the degree condition (1.1). Assume F ( G ) contains no cycles.For a graph G and integer r ≥
0, let BW^θ_{2r−1}(G) be the family of walks W of length 2r − 1 such that (a) W is P-alternating for some P ∈ F(G), (b) W is strictly G-alternating, (c) no vertex appears more than once in W, (d) W avoids S_θ, and (e) s(P △ W) > s(P). By (4.1), it is enough to show that R_alt[BW^θ_{2r−1}(G)] = Ω(nd^r) for some r ≤ mix(R) + 1.

Suppose P is a path from x to y on vertex set U. Define R_i(P, y) as in Section 4.3.1. For i ≥
0, let B i +1 ( P, y ) be the set of walks ( w , . . . , w i +1 )with ( w , . . . , w i ) ∈ R i and w i +1 ∈ { x } ∪ U , and note that s ( P △ W ) >s ( P ) for W ∈ B i +1 .We define rot P,y as a random (
R, P)-alternating walk initiated at y with rot_{P,y}(r_P(Y) | Y) = 1, Y ∈ R_{i+1}(P, y). This satisfies rot
P,y ( R i +2 ) = rot P,y ( R i +1 ) , rot P,y ( R i +1 ∪ B i +1 ) ≥ (1 − o (1))rot P,y ( R i ) . Indeed, if W = ( w , . . . , w i +1 ) / ∈ R i +1 ∪ B i +1 while ( w , . . . , w i ) ∈ R i ,then dist P ( w i +1 , { w , . . . , w i , x } ) <
2, which happens with probability o(1). We conclude that for any constant j ≥ 0,

rot_{P,y}(R_j) + Σ_{i=0}^{j−1} rot_{P,y}(B_{i+1}) = 1 − o(1).

For each (x, y) ∈ EP(U) pick some P(x, y) ∈ F(G) on U, with P(x, y) and P(y, x) each other's reverses. Define a random (R, G)-alternating walk

rot = Σ_{(x,y) ∈ EP(U)} (1/|EP(U)|) rot_{P(x,y),y}.

Since |EP(U)| = Ω(n²), this is near-uniform. Let j = mix(R), and pick θ ≤ λ(R)^{−1/3} so that θ^{j+1}‖M‖ = o(1). The one-sided case.
Suppose s ( G ) = n − Ω( n ). Then | U | = Ω( n ),and { τ ( U ) ≤ } ⊆ B . By Lemma 4.2, we haverot( B \ D θ ) = Ω(1) . Since rot is near-uniform, (4.4) shows that R alt [ B \ D θ ] = Ω( nd ). Since B \ D θ ⊆ BW θ ( G ), that finishes this case. The two-sided case.
Suppose s ( G ) = n − o ( n ). Then U = o ( n ), andLemma 4.2 gives rot( τ ( U ∪ S θ ) ≤ j ) = o (1), so rot( R j ) = 1 − o (1). Thenthere exists a set XY ⊆ EP such that all ( x, y ) ∈ XY have rot P ( x,y ) ,x ( R θ j ) =1 − o (1) and rot P ( x,y ) ,y ( R θ j ) = 1 − o (1).Fix ( x, y ) ∈ XY and let P = P ( x, y ), directed from x to y . We willconstruct a compatible pair ( X j , Y j ) of families of walks of length 2 j . Then X j ◦ Y j ⊆ B j +1 ( P, y ), and we can apply the same techniques that we usedfor matchings.
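The families R_i(P, y) appearing here are generated by repeated Pósa rotations of the path P. As a concrete illustration, a single rotation step can be sketched as follows (a minimal sketch: the list-based path representation and the name `posa_rotate` are illustrative, not notation from the paper):

```python
def posa_rotate(path, v):
    """Posa rotation: given a path x = path[0], ..., y = path[-1] and a
    chord {y, v} with v an interior vertex of the path, keep the segment
    up to v and reverse the rest.  The result is a path on the same
    vertex set whose new endpoint is the old successor of v."""
    i = path.index(v)
    return path[:i + 1] + path[i + 1:][::-1]

# Rotating the path 0-1-2-3-4 using the chord {4, 1}: the new path is
# [0, 1, 4, 3, 2], with the same vertex set and new endpoint 2.
p = [0, 1, 2, 3, 4]
q = posa_rotate(p, 1)
```

Roughly, iterating this step over all chords incident to the current endpoint, i times, is what produces the walk families R_i(P, y) referred to above.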
Construction of X i , Y i . Initially let X = { ( x ) } . Let τ be a per-mutation such that rot P,y ( τ (( x ) , Y ) = τ ) ≥ rot P,y ( R j ( P, y )) / j , and let Y be the set of Y ∈ R j ( P, y ) with τ (( x ) , Y ) = τ . For i = 1 , . . . , j weinductively construct compatible families X i , Y i with rot P,x ( X i ) ≥ c i,j androt P,y ( Y i ) ≥ c i,j , where c i,j = (3 j ) − i for i = 1 , . . . , j . Construction, odd i . Suppose ( X i − , Y i − ) is a compatible pair offamilies. Define X ′ i as the set of one-step extensions of X i − : X ′ i = { ( x , . . . , x i − , v ) : ( x , . . . , x i − ) ∈ X i − , v ∈ V } . We aim to apply Lemma 3.7 to τ ( X, Y ) for (
X, Y) ∈ X′_i × Y_{i−1}. In order to do this, we need to define a total ordering on X′_i such that X ↦ τ(X, Y) respects some common sequence for all Y ∈ Y_{i−1}.

Let ≤_τ be some arbitrary ordering of the set τ(X′_i) = {τ(X) : X ∈ X′_i}. Note that |τ(X′_i)| ≤ i + 1. Define a total ordering ≤ on X′_i such that X₁ ≤ X₂ whenever τ(X₁) <_τ τ(X₂), or τ(X₁) = τ(X₂) and f(X₁) ≤_P f(X₂). Let τ′ ∈ τ(X′_i) and consider the set X′_i(τ′) of X ∈ X′_i with τ(X) = τ′. Restricted to this set, our ordering orders walks by their final vertex. Suppose X₁, X₂ ∈ X′_i(τ′) and Y ∈ Y_{i−1} are such that f(X₁) ≤_P f(X₂) with no vertex of Y in the interval [f(X₁), f(X₂)] (note that no vertex of X₁ ∪ X₂ is in this interval as then τ(X₁) = τ(X₂)). Then τ(X₁, Y) = τ(X₂, Y). As X runs through X′_i(τ′) according to the ordering ≤, the value of τ(X, Y) changes only when f(X) enters or exits N̂_P(Y), which happens at most 2j + 1 times. This shows that the map X ↦ τ(X, Y), restricted to X′_i(τ′), respects some sequence of length at most 2j + 2, common to all Y ∈ Y_{i−1} (see Figure 1). Since |τ(X′_i)| ≤ i + 1, the sequence X ↦ τ(X, Y) on all of X′_i respects some common sequence of length at most ℓ = 2i(j + 1) ≤ (3j)². We apply Lemma 3.7 with measures rot_{P,x} and rot
P,y on X ′ i and Y i − ,respectively. The lemma asserts that there exist sets X i ⊆ X ′ i and Y i ⊆ Y i − such that τ is constant on X i × Y i , androt P,x ( X i ) ≥ rot P,x ( X i − ) ℓ and rot P,y ( Y i ) ≥ rot P,y ( Y i − ) ℓ . By induction, both quantities are at least c i − ,j /ℓ ≥ c i,j . Note that if τ ( X, Y ) = ⊥ then either X ∈ B i ( P △ Y, x ), or dist P ( f ( X ) , Y ) <
2. The latterhas probability O ( k M k| Y | ) = o (1). Since ( x, y ) ∈ XY, for any Y ∈ Y i − wethen have rot P,x ( τ ( · , Y ) = ⊥ ) ≤ rot P,x ( τ ( U ) ≤ i ) + o (1) = o (1) . We conclude that the common value of τ on X i × Y i is not ⊥ , so ( X i , Y i ) iscompatible. Construction, even i . Suppose X i − , Y i − have been constructed.Lemma 4.6 shows that X i = r P, Y i − ( X i − ) and Y i = Y i − are compatible.We have rot P,x ( X i ) = rot P,x ( X i − ). Gluing.
We proceed exactly as in (4.6) to obtain R alt [ X j ◦ Y j ] = X u,v R alt [ X → u j ] R ( u, v ) R alt [ Y → v j ] = Ω (cid:18) d j +1 n (cid:19) . Since X j ◦Y j \ ( C j +1 ∪ D θ j +1 ) ⊆ BW θ j +1 ( G ) for any ( x, y ) ∈ XY, summingover ( x, y ) ∈ XY and applying Lemmas 4.2 (i) and 4.3 gives R alt [BW θ j +1 ( G )] = Ω( nd j +1 ) − o ( nd j +1 ) . Sprinkling
Suppose X is a finite set, p : X → [0 ,
1] a function, and G = ( X, E ) a graph.Let X p ⊆ X be the random set obtained by independently including any x ∈ X with probability p ( x ). Suppose for some r ≥ T is a set ofpaths on r vertices in G . We are interested in the probability that some T ∈ T is contained in X p .For a set A ⊆ X write p ( A ) = P x ∈ A p ( x ) and p [ A ] = Q x ∈ A p ( x ), and fora family A of sets write p [ A ] = P A ∈A p [ A ]. Let ∆ be the maximum valueof p ( N G ( x )) for x ∈ X . Lemma 5.1.
Suppose r ≥ is an integer, and T is a set of paths on r vertices in G = ( X, E ) , such that p [ T ] ≥ r − p ( X ) . ThenPr { T * X p , all T ∈ T } ≤ exp (cid:26) − p [ T ](3∆) r − r r (cid:27) . We begin our proof with the following lemma.
Lemma 5.2.
Suppose r > . Let T be the random set of paths ( x , . . . , x r ) such that ( x , . . . , x r ) ∈ T for some x ∈ X p . If p [ T ] ≥ r − p ( X ) , thenPr (cid:26) p [ T ] < p [ T ]3∆ (cid:27) ≤ exp (cid:26) − p [ T ]3∆ r − (cid:27) . Proof.
For x ∈ X let Q ( x ) be the set of ( x , . . . , x r ) such that ( x, x , . . . , x r ) ∈T . For S ⊆ X let Q ( S ) = ∪ x ∈ S Q ( x ). For S ⊆ X and x / ∈ S , say that x is S -useful if p [ Q ( x ) \ Q ( S )] ≥ p [ T ]3 p ( X ) . Claim 5.1.
Suppose S ⊆ X has p(Q(S)) < p[T]/3∆. Then the set U ⊆ X \ S of S-useful elements has p(U) ≥ p[T]/3∆^{r−1}.

Proof of Claim 5.1. For T = (x₂, . . . , x_r) let Q^{−1}(T) be the set of x such that (x, x₂, . . . , x_r) ∈ T. Then Q^{−1}(T) ⊆ N_G(x₂), and p(Q^{−1}(T)) ≤ ∆. Let M(S) be the set of (x₁, . . . , x_r) ∈ T with (x₂, . . . , x_r) ∉ Q(S). Then

p[M(S)] ≥ p[T] − ∆p(Q(S)) ≥ (2/3)p[T].   (5.1)

For any x ∈ X we have p[Q(x)] ≤ ∆^{r−1}, so

p[M(S)] ≤ Σ_{x ∉ S} p(x) p[Q(x) \ Q(S)] ≤ p(U)∆^{r−1} + p(X \ S) · p[T]/(3p(X)) ≤ p(U)∆^{r−1} + (1/3)p[T].   (5.2)

Combining (5.1) and (5.2) gives p(U) ≥ p[T]/3∆^{r−1}.

Consider sampling S ⊆ X_p by the following procedure.

1. Initially let S₀ = ∅ and Z₀ = ∅. Set i = 1.
2. Let x_i ∉ Z_{i−1} be an S_{i−1}-useful element. Let Z_i = Z_{i−1} ∪ {x_i}, and with probability p(x_i) let S_i = S_{i−1} ∪ {x_i}, otherwise let S_i = S_{i−1}.
3. If p[Q(S_i)] ≥ p[T]/3∆, declare success and end the procedure. If p(Z_i) ≥ p[T]/3∆^{r−1}, declare failure and end the procedure. Otherwise, increase i by 1 and go to step 2.

By Claim 5.1, Step 2 can be carried out as long as neither success nor failure has been declared, since then p(U \ Z_{i−1}) > 0, where U is the set of S_{i−1}-useful elements. Since each x_i is S_{i−1}-useful at time of sampling, we have

p[Q(S_ℓ)] ≥ (p[T]/3p(X)) Σ_{i=1}^{ℓ} ξ_i,

where the ξ_i are independent indicator random variables. Letting ξ(ℓ) = Σ_{i≤ℓ} ξ_i, we have E[ξ(ℓ)] = p(Z_ℓ). If failure is declared, there exists some ℓ for which p(Z_ℓ) ≥ p[T]/3∆^{r−1} while ξ(ℓ) < p(X)/∆ = o(p(Z_ℓ)). By the Chernoff bound (3.2) we have

Pr{ ξ(ℓ) < p(X)/∆ and p(Z_ℓ) ≥ p[T]/3∆^{r−1} } ≤ exp{ −p[T]/24∆^{r−1} }.

Since p[T₁] ≥ p[Q(S_ℓ)], the lemma follows.

We can now prove Lemma 5.1.

Proof of Lemma 5.1.
If r = 1 then T is a collection of elements of X , andPr {T ∩ X p = ∅} = Y x ∈T (1 − p ( x )) ≤ e − p [ T ] . (5.3)Suppose r >
1. Let X₁, . . . , X_r be independent random subsets of X, each sampling any x ∈ X with probability p′(x) = p(x)/r. Then any x is independently in X₁ ∪ · · · ∪ X_r with probability 1 − (1 − p(x)/r)^r ≤ p(x). Let T₀ = T, and for 0 < i < r let T_i be the random set of (x_{i+1}, . . . , x_r) such that (x₁, . . . , x_i, x_{i+1}, . . . , x_r) ∈ T for some x_j ∈ X_j, 1 ≤ j ≤ i.

Let E_i denote the event that p′[T_i] ≥ p′[T]/(3∆)^i. Lemma 5.2 shows that for 0 < i < r,

Pr{ Ē_i | E_{i−1} } ≤ exp{ −p′[T]/(3^i ∆^{r−1}) }.

We then have

Pr{ Ē_{r−1} } ≤ Σ_{i=1}^{r−1} Pr{ Ē_i | E_{i−1} } ≤ (r −
1) exp ( − (cid:18) (cid:19) r − p [ T ] ) . Finally, note that T r − is a set of elements in X . Repeating the argumentbehind (5.3) givesPr { T * X p , all T ∈ T | E r − } ≤ exp ( − (cid:18) (cid:19) r − p ′ [ T ] ) . With p ′ [ T ] = p [ T ] /r r , this finishes the proof. We can now prove Lemma 2.3. Suppose R ∈ RM and G ∈ SE i with s i ( G ) < ⌊ in/ ⌋ for some i ∈ { , } . Note that d = Θ(ln n ) since γ ( R ) = 1. Let θ tend to infinity arbitrarily slowly, and let E θ be the set of edges incident to S θ = S θ ( R, G ). Propositions 4.5 and 4.9 show, for i = 1 , r ≤ R ) + 1 and a set T r of edge sets T with | T | = r and T ∩ ( G ∪ E θ ) = ∅ such that R G [ T r ] = X T ∈T r Y uv ∈ T R ( u, v ) = Ω( nd r ) . Suppose E is an edge set with R ( E ) = o ( n ln n ), and that p satisfies p ( u, v ) ≥ R ( u, v ) / { u, v } / ∈ E . Let X = (cid:0) V (cid:1) \ ( G ∪ E θ ). We then havePr { s i ( G p ) = s i ( G ) } = Pr { T * X p , all T ∈ T r } . Let G = ( X, E ) be the graph on X where u v , u v ∈ X are adjacent if G contains an edge between { u , v } and { u , v } . Then∆ = max uv ∈ X p ( N G ( uv )) ≤ θd. T r ( E ) be the set of T ∈ T r which intersect E . Picking θ so that R ( E ) θ r − = o ( nd ), we have R G [ T r ( E )] ≤ R ( E )∆ r − = o ( nd r ) . It follows that p [ T r ] ≥ R G [ T r \ T r ( E )] = Ω( nd r ) . Note that p ( X ) = O ( nd ). Lemma 5.1 then givesPr { T * X p , all T ∈ T r } = exp (cid:26) − Ω (cid:18) nd r ∆ r − (cid:19)(cid:27) = exp (cid:26) − Ω (cid:18) ndθ r − (cid:19)(cid:27) . Letting θ r − = o ( √ d ) and recalling that d = Θ(ln n ) finishes the proof. Suppose G is a graph and R a rate matrix on V , with associated transitionmatrix M . In Section 4.1 we defined the simple, lazy ( R, G )-alternatingrandom walk, which is a special case of the following definition. Recall that b N ( A ) = A ∪ N ( A ). Definition 7.1.
Given a graph G , a random walk π on V is a random( R, G )-alternating walk if the following hold for all j ≥ : π ( w j +1 | w , . . . , w j ) = M ( w j , w j +1 ) ,π ( w j +2 | w , . . . , w j +1 ) = 0 , w j +2 / ∈ b N G ( w j +1 ) . In short, a random (
R, G )-alternating walk alternates between memory-less transitions weighted by M , and (lazy) steps restricted to the edges of G . We use π j to denote the measure induced by the j –th vertex w j , andnote that the initial distribution π may be any distribution on V .Before going into mixing of the random ( R, G )-alternating walk, we re-state and prove Lemma 4.3. Define S θ as the set of vertices u with d G ( u ) ≥ θ or d R ( u ) ≥ θd , and say that a walk avoids S θ if it contains no vertex of S θ . Lemma 7.2.
Suppose R ∈ RM and that G is light-tailed, and let j = O(1). Let C^θ_j be the set of G-alternating walks of length j which avoid S_θ and either (a) repeat some vertex, or (b) are not strictly G-alternating. If θ^{j+1}‖M‖ = o(1) then R_alt[C^θ_j] = o(nd^{⌈j/2⌉}).

Proof. If a G-alternating walk W = (w₀, . . . , w_j) repeats a vertex or has {w_i, w_{i+1}} ∈ G for some i, there must exist some i such that w_{i+1} ∈ N̂_G(w₀, . . . , w_i). If d_G(w_i) < θ for all i, then

M(w_i, N̂_G(w₀, . . . , w_i)) ≤ ‖M‖(2i + 1)θ,

so π(C^θ_j) = O(θ‖M‖) for any random (R, G)-alternating walk π. Note that for any walk W = (w₀, . . . , w_j) which avoids S_θ,

R_alt[W] ≤ ( ∏_{i=0}^{⌈j/2⌉−1} (θd/d_R(w_{2i})) R(w_{2i}, w_{2i+1}) ) ( ∏_{i=0}^{⌊j/2⌋−1} θ/(d_G(w_{2i+1}) + 1) ) ≤ θ^j d^{⌈j/2⌉} × nπ_G(W),

where π_G is the simple, lazy (R, G)-alternating walk initiated at a vertex chosen uniformly at random. Since θ^{j+1}‖M‖ = o(1), it follows that R_alt[C^θ_j] = o(nd^{⌈j/2⌉}).

The R-steps of the (R, G)-alternating walk get π_j closer to stationarity by Lemma 3.8, while the G-steps may pull it back. The following lemma bounds the harm done.

Lemma 7.3.
Suppose R ∈ RM with λ ( R ) = λ and stationary distribution σ . Suppose π is a random ( R, G ) -alternating walk for some graph G withmaximum degree θ − and d G ( u ) = 0 whenever d R ( u ) ≥ θd . Then for i ≥ , µ σ ( π i +1 ) ≤ λµ σ ( π i ) , (7.1) µ σ ( π i +2 ) ≤ θ (cid:0) µ σ ( π i +1 ) + 1 (cid:1) . (7.2) In particular, if λθ = o (1) then for all i ≥ , µ σ ( π i +1 ) ≤ λ i θ i µ σ ( π ) + O ( λ θ ) . (7.3) Proof.
Lemma 3.8 immediately gives (7.1), and we prove (7.2). Note thatfor any i ≥ u ∈ V , π i ( u ) = X v ∈ b N ( u ) π i − ( v ) π ( w i = u | w i − = v ) ≤ θ max v ∈ b N ( u ) π i − ( v ) . v ∈ b N ( u ). Then u = v or d R ( u ) ≤ θd , since d G ( u ) = 0 whenever d R ( u ) ≥ θd . Since d R ( v ) ≥ d , in either case we conclude that σ ( u ) /σ ( v ) = d R ( u ) /d R ( v ) ≤ θ , and µ σ ( π i ) ≤ X v π i ( v ) σ ( v ) ≤ X v θ σ ( v ) max u ∈ b N ( v ) π i − ( u ) . Any vertex u is counted at most θ times in this sum, so µ σ ( π i ) ≤ θ X u π i − ( u ) σ ( u ) max v ∈ b N ( u ) σ ( u ) σ ( v ) ≤ θ (cid:0) µ σ ( π i − ) + 1 (cid:1) . This shows (7.2). Repeatedly applying (7.1) and (7.2) with λθ = o (1) gives(7.3).Recall that for a family W of walks and a vertex v , W → v is the set ofwalks in W ending at v . Say that a walk W = ( w , . . . , w j ) is non-lazy if w i = w i +1 for all 0 ≤ i < j .For any random ( R, G )-alternating walk π we define a variant π θ bythe following holding for any W = ( w , . . . , w j − ): if w j − ∈ S θ then w j = w j − , and if w j − / ∈ S θ then π θ ( w j | W ) = , w j ∈ S θ ,π ( w j | W ) + P v ∈ S θ π ( v | W ) , w j = w j − ,π ( w j | W ) , w j / ∈ { w j − } ∪ S θ . (7.4)Then π θ is a random ( R, G θ )-alternating walk, where G θ ⊆ G is obtainedfrom G by removing any edge incident to S θ . The walk π θ is designed tosatisfy the conditions of Lemma 7.3 as well as satisfying π θ ( W ) = π ( W ) forany family W of walks which either (a) are non-lazy and avoid S θ , or (b)have w i / ∈ b N ( S θ ) for odd i . Lemma 7.4.
Suppose R ∈ RM with λ(R) = λ, suppose F is a graph with maximum degree ∆(F) ≤ 2, and suppose π is a random (R, F)-alternating walk. Suppose θ ≤ λ^{−1/3} tends to infinity with n, and let j ≥ −ln(n‖M‖)/ln λ be an integer. Let c > 0 be constant. Suppose W is a set of non-lazy S_θ(R, F)-avoiding walks of length j, such that π(W) ≥ c. Then there exists a constant ρ > 0 such that there exists a set U_j with |U_j| ≥ ρn, such that any u ∈ U_j has π(W^{→u}) ≥ ρ/n.

Proof. Let S_θ = S_θ(R, F) and consider the walk π^θ defined above. Since θ ≤ λ^{−1/3} and λ = o(1), Lemma 7.3 implies

µ_σ(π^θ_{j−1}) ≤ λ^{j−1} µ_σ(π^θ_1) + O(λ).   (7.5)

We have π_1(u) = Σ_v π_0(v) M(v, u) ≤ ‖M‖ for all u, and since σ(u) ≥ 1/bn for all u,

µ_σ(π_1) ≤ Σ_u π_1(u)²/σ(u) ≤ bn‖M‖.

So for j ≥ −ln(n‖M‖)/ln(λ), (7.5) becomes µ_σ(π^θ_{j−1}) = O(λ). Define T as the set of vertices u with |π^θ_{j−1}(u) − σ(u)| < λ^{1/3} σ(u). Then by definition of µ_σ,

O(λ) = µ_σ(π^θ_{j−1}) ≥ Σ_{v ∉ T} (π^θ_{j−1}(v)/σ(v) − 1)² σ(v) ≥ λ^{2/3} σ(T̄).

Then σ(T) = 1 − O(λ^{1/3}). By definition of T,

π^θ_{j−1}(T) ≥ (1 − λ^{1/3}) σ(T) = 1 − O(λ^{1/3}),

and for any vertex set A,

π^θ_{j−1}(A) ≤ (1 + λ^{1/3}) σ(A ∩ T) + π^θ_{j−1}(T̄) ≤ σ(A) + O(λ^{1/3}).   (7.6)

Let W_{j−1} ⊆ W be the set of walks obtained by removing the final vertex from walks in W. Note that c ≤ π(W_{j−1}) = π^θ(W_{j−1}). For any v ∈ V let W^{→v}_{j−1} be the set of walks in W_{j−1} which end at v, and let U_{j−1} be the set of v such that π^θ(W^{→v}_{j−1}) ≥ c/3n. Then

c ≤ π^θ(W_{j−1}) = Σ_v π^θ(W^{→v}_{j−1}) ≤ (c/3n)|U_{j−1}| + π^θ_{j−1}(U_{j−1}) ≤ c/3 + σ(U_{j−1}) + O(λ^{1/3}).

Since λ = o(1), we conclude that σ(U_{j−1}) ≥ c/
3. By Lemma 3.3 we thenhave | U j − | ≥ c ′ n for some constant c ′ >
0. Let U j = (cid:26) v : π θ ( W → v ) ≥ c n (cid:27) . Since ∆ F ≤
2, every u ∈ U j − has at least one neighbour in U j , countingself-loops, and | U j | ≥ | U j − | / ≥ c ′ n/
3. Since π(W^{→v}) = π^θ(W^{→v}) for all v, this finishes the proof with ρ = min{c′/3, c/6}.

7.2 Hitting times for sets

For a random walk (w₀, w₁, . . .) and S ⊆ V, recall that τ(S) is the smallest i for which w_i ∈ S.

Lemma 7.5.
Suppose R ∈ RM and that G is light-tailed. Suppose π is arandom ( R, G ) -alternating walk with π ( u ) = 1 / | A | for u ∈ A , where A ⊆ V has size | A | = Ω( n ) .(i) If j ≥ is constant and | S | = o ( n ) , then π ( τ ( S ) ≤ j ) = o (1) . (ii) If | S | = Ω( n ) then π ( τ ( S ) ≤
1) = Ω(1) . Proof.
We prove (i). Let θ = o ( n/ | S | ) tend to infinity with n . We mayassume that S θ ⊆ S , since replacing S by S ∪ S θ only increases the probabilityin question. Note that since G is light-tailed, | b N ( S ) | ≤ | b N ( S \ S θ ) | + | b N ( S θ ) | ≤ θ | S | + o ( n ) = o ( n ) . Note that either τ ( S ) = 0 or τ ( S ) ≥ τ odd ( b N ( S )). We then have π ( τ ( S ) ≤ j ) ≤ π ( S ) + π ( τ odd ( b N ( S )) < j | S ) . The first term equals | S | / | A | = o (1).Let π θ be the modification of π defined in (7.4). Note that since S θ ⊆ S ,any walk W ∈ { τ odd ( b N ( S )) < j } has π θ ( W ) = π ( W ). Since σ ( u ) ≥ /bn for all u for some constant b ≥ µ σ ( π θ ) = X u ∈ Z (1 / | Z | ) σ ( u ) ! − ≤ bn | Z | − O (1) . By Lemma 7.3 and since θ ≤ λ − / , for any odd i ≥ µ σ ( π θi ) ≤ µ σ ( π θ ) + O ( λ ) = O ( λ ). As in (7.6), we have π θi ( b N ( S )) ≤ σ ( b N ( S )) + o (1).Then π ( τ ( S ) ≤ j ) ≤ o (1) + π θ ( τ odd ( b N ( S )) < j ) ≤ X i< ji odd π θi ( b N ( S )) = o (1) . Part (ii) follows from µ σ ( π ) = O ( λ ). Applying (7.6) to the complementof S , we have π ( τ ( S ) ≤ ≥ π ( S ) ≥ σ ( S ) − o (1) . A low-degree expander
Recall that G n,R ( t ) is constructed by letting E ( u, v ) be independent expo-nential random variables with rate R ( u, v ) for all { u, v } , including any edge { u, v } with E ( u, v ) ≤ t (note that E ( u, v ) = E ( v, u )). Let D ≥ k be aninteger and define T D ( u ) = inf { t > |{ v : E ( u, v ) ≤ t }| ≥ D } be the random time at which u attains degree D . We define a graph H ( t ) ⊆ G n,R ( t ) by including an edge { u, v } whenever E ( u, v ) ≤ t and E ( u, v ) ≤ max { T D ( u ) , T D ( v ) } , and let H = H ( τ k ). Lemma 8.1.
Suppose R ∈ RM and k ≥ 1. There exists some D = O(1) such that the following hold.

(i) Let θ tend to infinity with n. Letting S_θ denote the set of u with d_H(u) ≥ θ or d_R(u) ≥ θd, with high probability |N̂_H(S_θ)| = o(n).

(ii) There exists a constant β = β(R, k) > 0 such that with high probability, every |A| < βn has |N_H(A)| ≥ k|A|.

We prove Lemma 8.1 over the next few sections.

8.1 H and the D-out graph

For all ordered pairs (u, v), let X(u, v) be independent exponential random variables with rate R(u, v)/
2. Define

T⁺_D(u) = inf{t > 0 : |{v : X(u, v) ≤ t}| ≥ D},   T⁻_D(v) = inf{t > 0 : |{u : X(u, v) ≤ t}| ≥ D}.

Define two undirected graphs on V by

H⁺_D = {{u, v} : X(u, v) ≤ T⁺_D(u)},   H⁻_D = {{u, v} : X(u, v) ≤ T⁻_D(v)}.

Then H⁺_D and H⁻_D are equal in distribution, the common distribution being the R-weighted D-out random graph G_{R,D}, defined as follows. Each u independently samples D vertices N(u) chosen without replacement with probability proportional to R(u, ·). Let G⃗_{R,D} be the graph with edges (u, v) for v ∈ N(u). Then G_{R,D} is obtained by ignoring orientations and merging parallel edges in G⃗_{R,D}.

We couple H⁺_D and H⁻_D to H by letting E(u, v) = min{X(u, v), X(v, u)}. Then H ⊆ H⁺_D ∪ H⁻_D. Indeed, suppose {u, v} ∈ H. If X(u, v) ≤ X(v, u) then

X(u, v) = E(u, v) ≤ max{T_D(u), T_D(v)} ≤ max{T⁺_D(u), T⁻_D(v)},

so {u, v} ∈ H⁺_D ∪ H⁻_D. The same argument with the signs reversed holds if X(v, u) ≤ X(u, v).

We have S_θ = A_θ ∪ B_θ where A_θ = {v : d_H(v) ≥ θ} and B_θ = {u : d_R(u) ≥ θd}, and

|N̂(S_θ)| ≤ |N̂(B_θ \ A_θ)| + |N̂(A_θ)| ≤ θ|B_θ| + |N̂(A_θ)|.

Letting c = 1/D, we bound

|N̂_H(A_θ)| ≤ Σ_{ℓ≥θ} (ℓ + 1)|{v : d_H(v) = ℓ}| = θ|A_θ| + Σ_{ℓ≥θ} |A_ℓ| ≤ θ|A_θ \ B_{cθ}| + θ|B_{cθ}| + Σ_{ℓ≥θ} (|A_ℓ \ B_{cℓ}| + |B_{cℓ}|).

We bound |A_ℓ \ B_{cℓ}|. The discussion in Section 8.1 shows that H ⊆ H⁺_D ∪ H⁻_D where H⁺_D and H⁻_D are each distributed as G_{R,D}. Letting d(u) denote degrees in G_{R,D},

Pr{d_H(u) ≥ ℓ} ≤ 2 Pr{d(u) ≥ ℓ/2}.

Let X_u be the number of vertices v with u ∈ N(v). If d(u) ≥ ℓ/2 then X_u ≥ ℓ/2 − D ≥ ℓ/
4. Vertices v = u independently have u ∈ N ( v ) with probabilityat most DM ( v, u ) by Lemma 3.5. Since M ( v, u ) = d R ( u ) M ( u, v ) /d R ( v ), and d R ( u ) /d R ( v ) ≤ ℓ/ D for u / ∈ B cℓ , we have E [ X u ] = X v Pr { u ∈ N ( v ) } ≤ D X v M ( v, u ) ≤ ℓ . By the Chernoff bound (3.3), we havePr { d H ( u ) ≥ ℓ } ≤ (cid:26) X u ≥ ℓ (cid:27) ≤ (cid:18) (cid:19) ℓ/ , u / ∈ B cℓ .
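The R-weighted D-out graph G_{R,D} defined above is straightforward to sample directly, which makes couplings like H ⊆ H⁺_D ∪ H⁻_D convenient to experiment with numerically. A minimal sketch (numpy-based; the function name and the flat rate matrix are illustrative assumptions, not part of the paper):

```python
import numpy as np

def sample_d_out(R, D, rng):
    """Sample the R-weighted D-out graph G_{R,D}: each vertex u picks D
    distinct neighbours without replacement, with probability proportional
    to R(u, .); orientations are then dropped and parallel edges merged."""
    n = R.shape[0]
    edges = set()
    for u in range(n):
        w = R[u].astype(float).copy()
        w[u] = 0.0                        # no self-loops
        probs = w / w.sum()
        nbrs = rng.choice(n, size=D, replace=False, p=probs)
        for v in nbrs:
            edges.add(frozenset((u, int(v))))
    return edges

rng = np.random.default_rng(0)
R = np.ones((10, 10))                     # illustrative flat rate matrix
E = sample_d_out(R, 3, rng)
```

Each vertex contributes D out-choices, so after merging every vertex of the resulting undirected graph still has degree at least D.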
It follows that E|A_ℓ \ B_{cℓ}| = ne^{−Ω(ℓ)}. Since R ∈ RM, there are constants b₁, b₂ > 0 and α ∈ [0, 1/
2) such that t | B t | b n ≤ σ ( B t ) ≤ b (cid:18) | B t | n (cid:19) − α . (8.1)If α > | B t | ≤ b t − − α n for some b >
0. Then

E|N̂(S_θ)| ≤ θ|B_θ| + θ|B_{cθ}| + θE|A_θ| + Σ_{ℓ≥θ} (|B_{cℓ}| + E|A_ℓ \ B_{cℓ}|) ≤ ( bθ^{−(1−α)/α} + b(cθ)^{−(1−α)/α} + θe^{−Ω(θ)} + Σ_{ℓ≥θ} [b(cℓ)^{−1/α} + e^{−Ω(ℓ)}] ) n.

For θ tending to infinity, Markov's inequality shows that |N̂(S_θ)| = o(n) whp. If α = 0 then (8.1) gives |B_t| = 0 for any t tending to infinity, and we again conclude that |N̂(S_θ)| = o(n) whp.

Note that the distribution of H is unaffected by scaling R, and we may assume that R is scaled so that γ(R) = 1, and in particular 1 − ε ≤ τ_k ≤ 1 + ε whp for any ε ≫ ln ln n/ln n, by Lemma 3.2. Let S be the set of vertices u with degree less than D in G_{n,R}(1 − ε). For A ⊆ V, let A₁ = A ∩ S and A₂ = A \ S, and note that

|N_H(A)| = |N_H(A₁) \ A| + |N_H(A₂) \ N̂_H(A₁)| ≥ |N_H(A₁)| + |N_H(A₂)| − |A| − e_H(A₂, N̂(S)).

We proceed in three parts. Firstly, we show that whp no vertex in H has two neighbours in S, and that S contains no edges, which implies that |N_H(A₁)| ≥ k|A₁| since H has minimum degree at least k. Secondly, we note that N_H(A₂) = N_{H(∞)}(A₂) since A₂ ∩ S = ∅, and show that whp |N_{H(∞)}(A₂)| ≥ (D/8)|A₂| for all |A| ≤ βn, if D is large enough. Lastly, we show that whp e_H(u, N̂(S)) ≤ φ for all u, if D is large enough. We conclude that if D is large enough then whp, for all |A| ≤ βn,

|N_H(A)| ≥ k|A₁| + (D/8)|A₂| − |A| − φ|A₂| ≥ k|A|.

8.3.1 Part 1

Let t = 1 − ε. For an edge set F, let S_F(t) ⊆ S(t) be the set of vertices u with degree less than D, not counting the edges in F. Letting T denote the event that t ≤ τ_k ≤
2, we have { u, v ∈ S ( t ) } ∩ { F ⊆ H } ∩ T ⊆ { u, v ∈ S F ( t ) } ∩ { F ⊆ G n,R (2) } . The two events in the right-hand side are independent, and we first useLemma 3.1 to boundPr { e H ( S ( t )) > } ≤ Pr (cid:8) T (cid:9) + X u,v Pr (cid:8) u, v ∈ S { uv } ( t ) (cid:9) Pr { E ( u, v ) ≤ }≤ o (1) + 2 k R k X u,v p ε ( u ) p ε ( v ) , (8.2)for some p ε ( u ) = e − (1 − ε ) d R ( u )+ O (ln d R ( u )) . Likewise, the probability that some w has two neighbours in S is bounded by4 X u,v,w p ε ( u ) p ε ( v ) R ( u, w ) R ( v, w ) ≤ k R k X u,v p ε ( u ) p ε ( v ) d R ( v ) ≤ k R k X u,v p ε ( u ) p ε ( v ) , (8.3)where 4 d R ( v ) is absorbed into the error term of p ε ( v ). We bound P u p ε ( u ).Recall that γ ( R ) = P u e − d R ( u ) = 1 by choice of scaling. Let U be the setof u with d R ( u ) ≤ n . Then, as p ε ( u ) ≤ e − (1 − ε ) d R ( u ) , X u p ε ( u ) ≤ e ε ln n X u ∈ U e − d R ( u ) + | U | e − (1 − ε )2 ln n ≤ n ε . We have k R k n ε = o (1), and conclude that both (8.2) and (8.3) are o (1). Let m ≥ m -out graph G R, m .Let N ( u ) be the 2 m vertices chosen by u , independent for all u . Fix someset A ⊆ V with | A | = a ≤ βn , let κ = (( m + 1) a/n ) c with c > κ can be made smaller than any positiveconstant by letting β be small enough, and we choose β sufficiently small toallow the Chernoff bounds below.Consider the following procedure. Initially set B = A . For 1 ≤ i ≤ a/ u i ∈ A \ { u , . . . , u i − } be such that M ( u, B i − ) < κ .32ote that this is possible for i ≤ a/ A ⊆ B i − and | B i − | ≤ ( m + 1) a . Reveal vertices of N ( u i ) until (a) at least m verticesnot in B i − have been found, in which case we add those m vertices to B i − to form B i , or (b) all of N ( u i ) has been revealed. Let X i = 1 if (a) occursand 0 otherwise.When a vertex of N ( u i ) is revealed it has probability at most 2 κ ofbeing in B i (with the factor 2 accounting for the choices already made). 
So,conditional on the procedure so far, the probability that X i = 0 is at mostPr { Bin (2 m, κ ) ≥ m } ≤ (cid:18) κmm (cid:19) m/ , by the Chernoff bound (3.3). With p = (4 κ ) m/ we then havePr n | N ( A ) | < m | A | o ≤ Pr a/ X i =1 X i < a ≤ Pr (cid:26) Bin (cid:18) a , p (cid:19) ≥ a (cid:27) . Again applying (3.3), we obtainPr n ∃| A | ≤ βn : | N ( A ) | < m | A | o ≤ X a ≤ βn (cid:18) na (cid:19) (cid:18) ap/ a/ (cid:19) a/ ≤ X a ≤ βn (cid:16) nea (3 p ) / (cid:17) a . We have p / = f ( m )( a/n ) cm/ for some function f ( m ). Choosing m > /c , and β small enough, we conclude that this sum is o (1). So G R, m ∈E m/ whp, where E m/ = n | N ( A ) | ≥ m | A | for all | A | ≤ βn o . Let D = 4 m . Condition on the whp events H + D/ ∈ M m/ and H − D/ ∈M m/ , and let | A | ≤ βn . Note that for each u , N H ( ∞ ) ( u ) contains at leastone of N + ( u ) and N − ( u ). Let A + be the set of u with N + ( u ) ⊆ N H ( ∞ ) ( u ),and suppose without loss of generality that | A + | ≥ | A | /
2. Then

|N_{H(∞)}(A)| ≥ |N⁺(A⁺)| ≥ m|A⁺| ≥ (D/8)|A|.

For D large enough, we conclude that H(∞) ∈ E_{D/8} whp.

8.3.3 Part 3

Let t = 1 + ε. Fix u ∈ V and let S_u ⊇ S be the set of vertices v with degree less than D in G_{n,R}(t), not counting edges incident to u. Let X(u) = |N_t(u) ∩ N̂_t(S_u)|. If τ_k ≤ t we then have e_H(u, N̂_H(S)) ≤ X(u) since H ⊆ G_{n,R}(t). Note that N_t(u) and N̂_t(S_u) are independent. Conditional on E(v, w) for all v, w ≠ u, the expected value of X(u) is

tR(u, N̂_t(S_u)) ≤ 2‖R‖|N̂_t(S_u)| ≤ 2D‖R‖|S_u|.

Let φ be such that ‖R‖^{φ/2} ≤ n^{−2}. The Chernoff bound (3.3) then gives

Pr{ X(u) ≥ φ | |S_u| = n^{o(1)} } ≤ (E[X]/φ)^{φ/2} = O( (‖R‖|N̂_t(S_u)|)^{φ/2} ) = o(n^{−1}).

Since τ_k ≤ 1 + ε whp by Lemma 3.2, we conclude that e_H(u, N̂_H(S)) < φ for all u whp.

Proof of Lemma 3.1.
Suppose X = X₁ + · · · + X_n where the X_i are independent indicator random variables with E[X_i] = 1 − e^{−µ_i} and µ = µ₁ + · · · + µ_n, where µ_i/µ ≤ ε = o(1) for all i, and µ tends to infinity with n. It is not hard to show that

Pr{X ≤ ℓ} = e^{−µ} (µ^ℓ/ℓ!) (1 + O(εµ)).   (9.1)

Define Po(µ, ℓ) = e^{−µ}µ^ℓ/ℓ!. Let R ∈ RM(1) and let t = Ω(1). For any vertex u and any set S ⊆ V with |V \ S| = O(1), e_t(u, S) satisfies the above with

E[e_t(u, S)] = tR(u, S) = td_R(u) − o(1).

With d_R(u) ≥ d tending to infinity we then have

Pr{e_t(u, S) ≤ ℓ} = (1 + o(1))Po(td_R(u), ℓ) = e^{−td_R(u) + O(ln d_R(u))}.

Proof of Lemma 3.2. Let P ∈ RM be a matrix with γ_k(P) → γ_k ∈ (0, ∞). Note that d = min_u d_P(u) = Θ(ln n) tends to infinity with n. Let U be a set of ℓ distinct vertices, let k >
0, and let 0 < k u ≤ k for each u ∈ U , with P u ∈ U ( k − k u ) = 2 m for some m ≥
0. Consider the graph G n,P . By (9.1),Pr (cid:8) e ( u, U ) < k u , all u ∈ U (cid:9) = Y u ∈ U Pr (cid:8) e ( u, U ) < k u (cid:9) = (1 + o (1)) Y u ∈ U Po( d P ( u ) , k u −
1) (9.2) ≤ o (1) d m Y u ∈ U Po( d P ( u ) , k − . (9.3)Let E m be the event that U contains exactly m edges. Then E m is indepen-dent of { e ( u, U ) : u ∈ U } , and Pr {E m } = O ( k P k m ) and Pr {E } = 1 − o (1).For k > { d ( u ) < k ∀ u ∈ U } = X m Pr {E m } X P k u = kℓ − m Pr (cid:8) e ( u, U ) < k u ∀ u ∈ U (cid:9) ≤ Y u ∈ U Po( d P ( u ) , k − ! Pr {E } + X m> O (cid:18) k P k m d m (cid:19)! = (1 + o (1)) Y u ∈ U Po( d P ( u ) , k − . Letting X k denote the number of vertices in G n,P with d ( u ) < k , E (cid:20)(cid:18) X k ℓ (cid:19)(cid:21) = X | U | = ℓ Pr { d ( u ) < k, all u ∈ U } = X | U | = ℓ (1 + o (1)) Y u ∈ U Po( d P ( u ) , k −
1) = (1 + o (1)) γ k ( P ) ℓ ℓ ! . If γ k ( P ) converges to some γ k < ∞ , the method of moments (see e.g. [7])implies that X k converges to a Poisson random variable with expected value γ k , and lim n →∞ Pr { δ ( G n,P ) ≥ k } = lim n →∞ Pr { X k = 0 } = e − γ k . If γ k ( P ) diverges to infinity, we note that Var ( X k ) = o ( E [ X k ]), and Cheby-shev’s inequality implies that Pr { X k > } → τ k , let ε ≫ ln ln n ln n and suppose γ ( R ) = 1. Notethat G n,R (1 + ε ) d = G n,P with P ( u, v ) = 1 − e − (1+ ε ) R ( u,v ) . This matrix has d P ( u ) + O (ln d P ( u )) ≥ (1 + ε/ d R ( u ) for all u , where we use the fact that k R k = o ( ε ). We have γ k ( P ) = X u e − d P ( u ) d P ( u ) k − ≤ X u e − (1+ ε/ d R ( u ) ≤ e − εd/ γ ( R ) = o (1) . We conclude that δ ( G n,R (1 + ε )) ≥ k whp. By the same token, δ ( G n,R (1 − ε )) < k whp. Proof of Lemma 3.7. If ℓ = 1, take S = I and T = J . We prove the case ℓ > π I ( I ) = 1 and π J ( J ) = 1.For each j ∈ J let I ℓ ( j ) be the set of i ∈ I with τ ( i, j ) = a ℓ . Let J ′ = (cid:26) j ∈ J : π I ( I ℓ ( j )) < ℓ (cid:27) . If π J ( J ′ ) < − ℓ − , let S = ∩ j / ∈ J ′ I ℓ ( j ) and T = J \ J ′ . Then τ = a ℓ on S × T and π I ( S ) ≥ ℓ − , π J ( T ) ≥ ℓ − .If π J ( J ′ ) ≥ − ℓ − , let I ′ = ∩ j ∈ J ′ ( I \ I ℓ ( j )) and consider the matrix τ ′ ( i, j ) = τ ( i, j ) , i ∈ I ′ , j ∈ J ′ . This takes values { a , . . . , a ℓ − } , and by induction there exist S ⊆ I ′ , T ⊆ J ′ with π I ( S ) ≥ ( ℓ − − π I ( I ′ ) and π J ( T ) ≥ ( ℓ − − π J ( J ′ ) such that τ ′ , andtherefore τ , is constant on S × T . We have π I ( I ′ ) ≥ min j ∈ J ′ π I ( I \ I ℓ ( j )) ≥ − ℓ − , π J ( J ′ ) ≥ − ℓ − , so π I ( S ) ≥ ( ℓ − − π jI ( I ′ ) ≥ ℓ − ,π J ( T ) ≥ ( ℓ − − π J ( J ′ ) ≥ ℓ − . .3 Mixing in simple random walks Proof of Lemma 3.8.
This proof is more or less taken from [16], with slight modifications. We first note that for $R \in \mathrm{RM}$, the transition matrix $M$ is reversible:
\[
\sigma(u) M(u,v) = \frac{d_R(u)}{d_R(V)} \cdot \frac{R(u,v)}{d_R(u)} = \frac{d_R(v)}{d_R(V)} \cdot \frac{R(v,u)}{d_R(v)} = \sigma(v) M(v,u).
\]
For vectors $f, g : V \to \mathbb{R}$ we define an inner product
\[
\langle f, g \rangle_\sigma = \sum_v f(v) g(v) \sigma(v),
\]
and the associated norm $\|f\|_\sigma = \langle f, f \rangle_\sigma^{1/2}$. Then for probability measures $\pi$,
\[
\left\| \frac{\pi(\cdot)}{\sigma(\cdot)} - 1 \right\|_\sigma = \sqrt{\left( \sum_v \frac{\pi(v)^2}{\sigma(v)} \right) - 1} = \sqrt{\mu_\sigma(\pi)}.
\]
Let $1 = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge -1$ be the eigenvalues of $M$, with a corresponding eigenbasis $\mathbf{1} = f_1, f_2, \dots, f_n$, orthonormal with respect to the $\langle \cdot, \cdot \rangle_\sigma$ inner product. Then (see e.g. [16])
\[
\frac{\pi M(v)}{\sigma(v)} - 1 = \sum_{j=2}^n \sum_u \pi(u) f_j(u) f_j(v) \lambda_j
= \sum_{j=2}^n \lambda_j f_j(v) \sum_u \left[ \left( \frac{\pi(u)}{\sigma(u)} - 1 \right) f_j(u) \sigma(u) + f_j(u) \sigma(u) \right]
= \sum_{j=2}^n \lambda_j f_j(v) \left[ \left\langle \frac{\pi}{\sigma} - 1, f_j \right\rangle_\sigma + \langle f_j, \mathbf{1} \rangle_\sigma \right].
\]
For any $j > 1$, orthonormality implies $\langle f_j, \mathbf{1} \rangle_\sigma = 0$. Let $F(j) = \langle \pi/\sigma - 1, f_j \rangle_\sigma$, and note that $\mu_\sigma(\pi) = \sum_{j=1}^n F(j)^2$. Then
\[
\mu_\sigma(\pi M) = \sum_v \sigma(v) \left( \sum_{j=2}^n \lambda_j f_j(v) F(j) \right)^2
= \sum_{j \ge 2} \lambda_j^2 F(j)^2 \|f_j\|_\sigma^2 + 2 \sum_{k > j \ge 2} \lambda_j \lambda_k F(j) F(k) \langle f_j, f_k \rangle_\sigma.
\]
Since the $f_j$ are orthonormal in the $\langle \cdot, \cdot \rangle_\sigma$ inner product, we are left with
\[
\mu_\sigma(\pi M) = \sum_{j \ge 2} \lambda_j^2 F(j)^2 \le \lambda^2 \sum_{j=1}^n F(j)^2 = \lambda^2 \mu_\sigma(\pi).
\]

Proof of Lemma 3.4.
For $A \subseteq V$ let $\mathbf{1}_A : V \to \{0, 1\}$ be the indicator for $A$. Let $\pi(u) = \mathbf{1}_A(u)/|A|$. One easily checks that
\[
\frac{1}{|A|} M(A,B) - \sigma(B) = \left\langle \frac{\pi M(\cdot)}{\sigma(\cdot)} - \mathbf{1}_V, \mathbf{1}_B \right\rangle_\sigma.
\]
The Cauchy–Schwarz inequality and Lemma 3.8 then give
\[
\left| \frac{1}{|A|} M(A,B) - \sigma(B) \right| \le \left\| \frac{\pi M}{\sigma} - \mathbf{1}_V \right\|_\sigma \|\mathbf{1}_B\|_\sigma \le \lambda \sqrt{\mu_\sigma(\pi)\, \sigma(B)}.
\]
Since $\mu_\sigma(\pi) \le \sum_{u \in A} \pi(u)^2/\sigma(u) \le bn/|A|$, (3.1) follows.

To see how (i) follows, note that $R(u,v) \ge d\, M(u,v)$ for all $u, v$. Lemma 3.3 gives $\sigma(B) = \Omega(1)$ whenever $|B| = \Omega(n)$. Since $\lambda(R) = o(1)$, for $|A|, |B| = \Omega(n)$ we then have
\[
R(A,B) \ge d\, M(A,B) \ge d \left( |A| \sigma(B) - O\!\left( \lambda \sqrt{n |A| \sigma(B)} \right) \right) = \Omega(dn).
\]
For (ii), let $c > 0$, let $|A| \le n/2$ and define $A'$ as the set of $u \in A$ with $M(u,A) \ge (|A|/n)^c$. If $|A| < (n^c \|M\|)^{-1/(1-c)}$ then any $u \in A$ has
\[
M(u,A) \le \|M\|\, |A| < \left( \frac{|A|}{n} \right)^c,
\]
so $A' = \emptyset$. Suppose $(n^c \|M\|)^{-1/(1-c)} \le |A| \le n/2$. Then (3.1) and the power law condition (1.1) give, for some constant $0 \le \alpha < 1/2$,
\[
\left( \frac{|A|}{n} \right)^c \le \frac{1}{|A'|} M(A', A) \le \sigma(A) + \lambda \sqrt{\frac{bn \sigma(A)}{|A'|}} \le \left[ \left( \frac{|A|}{n} \right)^{1 - \alpha - c} + \lambda \sqrt{\frac{b |A|}{|A'|} \left( \frac{n}{|A|} \right)^{\alpha + 2c}} \right] \left( \frac{|A|}{n} \right)^c. \tag{9.4}
\]
For $2c < 1 - \alpha$, the first term in square brackets is at most 1. Since $\lambda \le (n \|M\|)^{-\alpha - \gamma}$ for some constant $\gamma > 0$, we have for $|A| \ge (n^c \|M\|)^{-1/(1-c)}$ that
\[
\left( \frac{n}{|A|} \right)^{\alpha + 2c} \le \left( n \cdot n^{\frac{c}{1-c}} \|M\|^{\frac{1}{1-c}} \right)^{\alpha + 2c} = (n \|M\|)^{\frac{\alpha + 2c}{1-c}}.
\]
We have $n \|M\| \ge 1$ since $M$ is a transition matrix. Since $\lambda = o(1)$ and $\lambda \le (n \|M\|)^{-\alpha - \gamma}$ for some constant $\gamma > 0$, we conclude that for $c > 0$ small enough, $\lambda (n/|A|)^{\alpha + 2c} = o(1)$. From (9.4) we then have, for $|A| \le n/2$ and $c > 0$ small enough,
\[
\left( \frac{n}{|A|} \right)^c \le o\!\left( \sqrt{\frac{b |A|}{|A'|}} \right).
\]
We conclude that $|A'| = o(|A|)$.

References

[1] M. Ajtai, J. Komlós, and E. Szemerédi. First occurrence of Hamilton cycles in random graphs.
Annals of Discrete Mathematics, 115:173–178, 1985.

[2] N. Alon and F. R. K. Chung. Explicit construction of linear sized tolerant networks. Discrete Mathematics, 72(1):15–19, 1988.

[3] Yahav Alon and Michael Krivelevich. Hitting time of edge disjoint Hamilton cycles in random subgraph processes on dense base graphs. arXiv e-prints, 2020.

[4] Michael Anastos, Alan Frieze, and Pu Gao. Hamiltonicity of random graphs in the stochastic block model. arXiv e-prints, 2019.

[5] B. Bollobás. The evolution of random graphs. Transactions of the American Mathematical Society, 286(1):257–274, 1984.

[6] B. Bollobás and A. M. Frieze. On matchings and Hamiltonian cycles in random graphs. Annals of Discrete Mathematics, 28:23–46, 1985.

[7] Rick Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 4th edition, 2010.

[8] P. Erdős and A. Rényi. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci., 5:17–61, 1960.

[9] A. M. Frieze. Limit distribution for the existence of Hamiltonian cycles in random bipartite graphs. Europ. J. Combinatorics, 6:327–334, 1985.

[10] A. M. Frieze. Hamilton cycles in random graphs: a bibliography. arXiv e-prints, 1901.07139 [v13], July 2019.

[11] A. M. Frieze and T. Johansson. On random k-out subgraphs of large graphs. Random Structures & Algorithms, 50(2):143–157, 2017.

[12] A. M. Frieze and M. Karoński. Introduction to Random Graphs. Cambridge University Press, Cambridge, UK, 2015.

[13] Alan Frieze and Michael Krivelevich. Hamilton cycles in random subgraphs of pseudo-random graphs. Discrete Mathematics, 256:137–150, September 2002.

[14] T. Johansson. On Hamilton cycles in Erdős–Rényi subgraphs of large graphs. Random Structures & Algorithms, 57:132–149, 2020.

[15] J. Komlós and E. Szemerédi. Limit distribution for the existence of Hamiltonian cycles in a random graph. Discrete Mathematics, 43(1):55–63, 1983.

[16] David A. Levin and Yuval Peres. Markov Chains and Mixing Times, volume 107. American Mathematical Society, 2017.

[17] Richard Montgomery. Hamiltonicity in random graphs is born resilient. Journal of Combinatorial Theory, Series B, 139:316–341, 2019.

[18] A. Nilli. On the second eigenvalue of a graph.