Loose Hamilton Cycles in Regular Hypergraphs
Andrzej Dudek, Alan Frieze, Andrzej Ruciński, Matas Šileikis
Andrzej Dudek† (Department of Mathematics, Western Michigan University, Kalamazoo, MI; [email protected])
Alan Frieze‡ (Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA; [email protected])
Andrzej Ruciński§ (Department of Discrete Mathematics, Adam Mickiewicz University, Poznań, Poland; [email protected])
Matas Šileikis¶ (Department of Mathematics, Uppsala University, Sweden; [email protected])
Abstract
We establish a relation between two uniform models of random $k$-graphs (for constant $k \ge 3$) on $n$ labeled vertices: $H^{(k)}(n,m)$, the random $k$-graph with exactly $m$ edges, and $H^{(k)}(n,d)$, the random $d$-regular $k$-graph. By extending to $k$-graphs the switching technique of McKay and Wormald, we show that, for some range of $d = d(n)$ and a constant $c > 0$, if $m \sim cnd$, then one can couple $H^{(k)}(n,m)$ and $H^{(k)}(n,d)$ so that the latter contains the former with probability tending to one as $n \to \infty$. In view of known results on the existence of a loose Hamilton cycle in $H^{(k)}(n,m)$, we conclude that $H^{(k)}(n,d)$ contains a loose Hamilton cycle when $d \gg \log n$ (or just $d \ge C \log n$, if $k = 3$) and $d = o(n^{1/2})$.

∗ MSC2010 codes: 05C65, 05C80, 05C45.
† Supported in part by Simons Foundation Grant.
‡ Supported in part by NSF grant CCF1013110.
§ Research supported by the Polish NSC grant N201 604940. Part of research performed at Emory University, Atlanta.
¶ Research supported by the Polish NSC grant N201 604940. Part of research performed at Adam Mickiewicz University, Poznań.

Introduction

A $k$-uniform hypergraph (or $k$-graph for short) on a vertex set $V = \{1, \ldots, n\}$ is a family of $k$-element subsets of $V$. A $k$-graph $H = (V, E)$ is $d$-regular if the degree of every vertex is $d$:
\[ \deg(v) := |\{e \in E : v \in e\}| = d, \quad v = 1, \ldots, n. \]
Let $\mathcal{H}^{(k)}(n,d)$ be the family of all such graphs. Further we tacitly assume that $k$ divides $nd$. By $H^{(k)}(n,d)$ we denote the random regular $k$-graph, which is chosen uniformly at random from $\mathcal{H}^{(k)}(n,d)$. Let $M := nd/k$ stand for the number of edges of $H^{(k)}(n,d)$.

Let us recall two more standard models of random $k$-graphs on $n$ vertices. For $p \in [0,1]$, the binomial random $k$-graph $H^{(k)}(n,p)$ is a random $k$-graph obtained by including every one of the $\binom{n}{k}$ possible edges with probability $p$, independently of the others. For an integer $m \in [0, \binom{n}{k}]$, the uniform random $k$-graph $H^{(k)}(n,m)$ is chosen uniformly at random among all $k$-graphs with precisely $m$ edges.

We study the behavior of random $k$-graphs as $n \to \infty$. The parameters $d, m, p$ are treated as functions of $n$.
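To make the two non-regular models concrete, here is a minimal sketch (ours, not from the paper; the function names are invented for this illustration) of sampling the binomial model $H^{(k)}(n,p)$ and the uniform model $H^{(k)}(n,m)$:

```python
import itertools
import random

# A minimal illustration (ours; names are invented for this sketch)
# of the two non-regular models on the vertex set [n] = {1, ..., n}.

def binomial_kgraph(n, k, p, rng):
    """H^(k)(n, p): keep each of the C(n, k) possible edges with probability p."""
    return [e for e in itertools.combinations(range(1, n + 1), k)
            if rng.random() < p]

def uniform_kgraph(n, k, m, rng):
    """H^(k)(n, m): a uniformly random set of exactly m edges."""
    all_edges = list(itertools.combinations(range(1, n + 1), k))
    return rng.sample(all_edges, m)

rng = random.Random(1)
G = uniform_kgraph(8, 3, 10, rng)
assert len(G) == len(set(G)) == 10          # exactly m distinct edges
assert all(len(set(e)) == 3 for e in G)     # k-element sets, no loops
```

Sampling without replacement (`rng.sample`) is what makes the second model uniform over edge sets of size exactly $m$.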
We use the asymptotic notation $O(\cdot)$, $o(\cdot)$, $\Theta(\cdot)$, $\sim$ (as it is defined in, say, [15]), with respect to $n$ tending to infinity, and assume that the implied constants may depend on $k$. Given a sequence of events $(A_n)$, we say that $A_n$ happens asymptotically almost surely (a.a.s.) if $P(A_n) \to 1$ as $n \to \infty$.

The main result of the paper is that we can couple $H^{(k)}(n,d)$ and $H^{(k)}(n,m)$ so that the latter is a subgraph of the former a.a.s.

Theorem 1. For every $k \ge 3$, there are positive constants $c$ and $C$ such that if $d \ge C \log n$, $d = o(n^{1/2})$ and $m = \lfloor cM \rfloor = \lfloor cnd/k \rfloor$, then one can define a joint distribution of the random graphs $H^{(k)}(n,d)$ and $H^{(k)}(n,m)$ in such a way that $H^{(k)}(n,m) \subset H^{(k)}(n,d)$ a.a.s.

To prove Theorem 1, we consider a generalization of a $k$-graph that allows loops and multiple edges. By a $k$-multigraph on the vertex set $[n]$ we mean a multiset of $k$-element multisubsets of $[n]$. An edge is called a loop if it contains more than one copy of some vertex; otherwise it is called a proper edge.

The idea of the proof and the structure of the paper are as follows. In Section 2 we generate two models of random $k$-multigraphs by drawing random sequences from $[n]$ and cutting them into consecutive segments of length $k$. By accepting an edge only if it is not a loop and does not coincide with a previously accepted edge, after $m$ successful trials we obtain $H^{(k)}(n,m)$. On the other hand, by allowing $d$ copies of each vertex and accepting every edge, after $dn/k$ steps we obtain a $d$-regular $k$-multigraph $H^{(k)}_*(n,d)$. Then we show that $H^{(k)}_*(n,d)$ a.a.s. has no multiple edges and relatively few loops. In Section 3 we couple the two random processes in such a way that $H^{(k)}(n,m)$ is a.a.s. contained in an initial segment of $H^{(k)}_*(n,d)$, which we call red. In Section 4 we eliminate at once all red loops of $H^{(k)}_*(n,d)$ by swapping them with randomly selected non-red (green) proper edges. Finally, in Section 5, we eliminate the green loops one by one using a certain random procedure (called switching) which does not destroy the previously embedded copy of $H^{(k)}(n,m)$ and, at the same time, transforms $H^{(k)}_*(n,d)$ into a $k$-graph $\widetilde{H}^{(k)}(n,d)$ which is distributed approximately as $H^{(k)}(n,d)$, that is, almost uniformly.
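The two sequence-based constructions from the outline above can be sketched as follows (our own toy code, not the paper's; all names are illustrative assumptions):

```python
import random
from collections import Counter

# A sketch (ours, following the proof outline): cut a random sequence into
# consecutive blocks of length k and read each block as an edge.

def is_loop(block):
    """An edge is a loop if some vertex occurs in it more than once."""
    return len(set(block)) < len(block)

def kgraph_by_rejection(n, k, m, rng):
    """Draw i.i.d. uniform symbols from [n]; accept a k-block only if it is
    proper and new; stop after m accepted edges (the uniform model)."""
    edges = set()
    while len(edges) < m:
        block = tuple(sorted(rng.randrange(1, n + 1) for _ in range(k)))
        if not is_loop(block) and block not in edges:
            edges.add(block)
    return edges

def regular_multigraph(n, k, d, rng):
    """Shuffle d copies of every vertex and cut into k-blocks: a d-regular
    k-multigraph, i.e. the model H^(k)_*(n, d)."""
    assert (n * d) % k == 0
    seq = [v for v in range(1, n + 1) for _ in range(d)]
    rng.shuffle(seq)
    return [tuple(seq[i:i + k]) for i in range(0, n * d, k)]

rng = random.Random(7)
H = kgraph_by_rejection(10, 3, 12, rng)
assert len(H) == 12 and not any(is_loop(e) for e in H)

Hstar = regular_multigraph(9, 3, 6, rng)        # M = nd/k = 18 edges
deg = Counter(v for e in Hstar for v in e)
assert len(Hstar) == 18
assert all(deg[v] == 6 for v in range(1, 10))   # every vertex has degree d = 6
```

Note that `regular_multigraph` may produce loops and multiple edges; the point of Sections 2–5 is precisely to control and remove them.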
Theorem 1 follows by a (maximal) coupling of $\widetilde{H}^{(k)}(n,d)$ and $H^{(k)}(n,d)$.

A consequence of Theorem 1 is that $H^{(k)}(n,d)$ inherits from $H^{(k)}(n,m)$ properties that are increasing, that is to say, properties that are preserved as new edges are added. An example of such a property is hamiltonicity, that is, containment of a Hamilton cycle.

A loose Hamilton cycle on $n$ vertices is a set of edges $e_1, \ldots, e_l$ such that for some cyclic order of the vertices every edge $e_i$ consists of $k$ consecutive vertices, and $|e_i \cap e_{i+1}| = 1$ for every $i \in [l]$, where $e_{l+1} := e_1$. A necessary condition for the existence of a loose Hamilton cycle on $n$ vertices is $(k-1) \mid n$, which we will assume whenever relevant.

The history of hamiltonicity of random regular graphs is rich and exciting (see [21]). However, we state only the final results here. Asymptotic hamiltonicity was proved by Robinson and Wormald [20] in 1994 for any fixed $d \ge 3$, by Krivelevich, Sudakov, Vu and Wormald [16] in 2001 for $d \ge n^{1/2} \log n$, and by Cooper, Frieze and Reed [7] in 2002 for $C \le d \le n/C$ and some large constant $C$.

The threshold for the existence of a loose Hamilton cycle in $H^{(k)}(n,p)$ was determined by Frieze [12] (for $k = 3$) as well as Dudek and Frieze [9] (for $k \ge 4$) under the divisibility condition $2(k-1) \mid n$, which was relaxed to $(k-1) \mid n$ by Dudek, Frieze, Loh and Speiss [10].

However, we formulate these results for the model $H^{(k)}(n,m)$, such a possibility being provided to us by the asymptotic equivalence of the models $H^{(k)}(n,p)$ and $H^{(k)}(n,m)$ (see, e.g., Corollary 1.16 in [13]).

Theorem 2 ([12], [10]). There is a constant $C > 0$ such that if $m \ge Cn \log n$, then $H^{(3)}(n,m)$ contains a loose Hamilton cycle a.a.s.

Theorem 3 ([9], [10]). Let $k \ge 4$. If $n \log n = o(m)$, then $H^{(k)}(n,m)$ contains a loose Hamilton cycle a.a.s.

Theorems 1, 2, and 3 immediately imply the following fact.
Corollary 4. There is a constant $C > 0$ such that if $C \log n \le d = o(n^{1/2})$, then $H^{(3)}(n,d)$ contains a loose Hamilton cycle a.a.s. For every $k \ge 4$, if $\log n = o(d)$ and $d = o(n^{1/2})$, then $H^{(k)}(n,d)$ contains a loose Hamilton cycle a.a.s.

Preliminaries
We say that a $k$-multigraph is simple if it is a $k$-graph, that is, if it contains neither multiple edges nor loops.

Given a sequence $x \in [n]^{ks}$, $s \in \{1, 2, \ldots\}$, let $H(x)$ stand for the $k$-multigraph with $s$ edges $x_{ki+1} \cdots x_{ki+k}$, $i = 0, \ldots, s-1$. In what follows it will be convenient to work directly with the sequence $x$ rather than with the $k$-multigraph $H(x)$. Recycling the notation, we still refer to the $k$-tuples of $x$ which correspond to the edges, loops, and proper edges of $H(x)$ as edges, loops, and proper edges of $x$, respectively. We say that $x$ contains multiple edges if $H(x)$ contains multiple edges, that is, some two edges of $x$ are identical as multisets. By $\lambda(x)$ we denote the number of loops in $x$.

Let $X = (X_1, \ldots, X_{nd})$ be a sequence of i.i.d. random variables, each distributed uniformly over $[n]$:
\[ P(X_i = j) = \frac{1}{n}, \quad 1 \le i \le nd, \ 1 \le j \le n. \]
Set $L := n^{1/4} d^{1/2}$.

Proposition 5. If $d \to \infty$ and $d = o(n^{1/2})$, then a.a.s. $X$ has no multiple edges and $\lambda(X) \le L$.

Proof. Both statements hold a.a.s. by Markov's inequality, because the expected number of pairs of multiple edges in $X$ is at most
\[ \binom{M}{2} \frac{k!}{n^k} = O(d^2 n^{2-k}) = o(1), \]
and the expected number of loops in $X$ is
\[ E\lambda(X) \le M \binom{k}{2} n^{-1} = O(d) = o(n^{1/4} d^{1/2}). \]

Let $\mathcal{S} \subset [n]^{nd}$ be the family of all sequences in which every value $i \in [n]$ occurs precisely $d$ times. Let $Y = (Y_1, \ldots, Y_{nd})$ be a sequence chosen from $\mathcal{S}$ uniformly at random. One can equivalently define $Y$ as a discrete time process determined by the conditional probabilities
\[ P(Y_{t+1} = v \mid Y_1, \ldots, Y_t) = \frac{d - \deg_t(v)}{nd - t}, \quad v = 1, \ldots, n, \ t = 0, \ldots, nd - 1, \tag{1} \]
where $\deg_t(v) := |\{1 \le s \le t : Y_s = v\}|$. Assuming $k \mid nd$, we define a random $d$-regular $k$-multigraph
\[ H^{(k)}_*(n,d) := H(Y). \]
Note that for every $H \in \mathcal{H}^{(k)}(n,d)$ the number of sequences giving $H$ is the same, namely $M!(k!)^M$. Therefore $H^{(k)}(n,d)$ can be obtained from $H^{(k)}_*(n,d)$ by conditioning on simplicity.

Probably a more popular way to define $H^{(k)}_*(n,d)$ is via the so-called configuration model, which, for $k = 2$, first appeared implicitly in Bender and Canfield [2] and was given in its explicit form by Bollobás [3] (its generalization to every $k$ is straightforward). A configuration is a partition of the set $[n] \times [d]$ into $M$ sets of size $k$, say $P_1, \ldots, P_M$. Then $H^{(k)}_*(n,d)$ is obtained by taking a configuration uniformly at random and mapping every set $P_i = \{(v_1, w_1), \ldots, (v_k, w_k)\}$ to an edge $v_1 \cdots v_k$. The idea of obtaining $H^{(k)}_*(n,d)$ from a random sequence for $k = 2$ was used independently by Bollobás and Frieze [5] and Chvátal [6].

What makes studying $d$-regular $k$-graphs a bit easier than graphs, at least for small $d$, is that a.a.s. $Y$ has no multiple edges. However, $Y$ usually has a few loops, but, as it turns out, not too many. Throughout the paper, for $r = 0, 1, \ldots$ and $x \in \mathbb{R}$, we use the standard notation $(x)_r := x(x-1)\cdots(x-r+1)$. Recall that $L = n^{1/4} d^{1/2}$.

Proposition 6.
If $d \to \infty$ and $d = o(n^{1/2})$, then each of the following statements holds a.a.s.:
(i) $Y$ has no multiple edges,
(ii) $Y$ has no loop with a vertex of multiplicity at least 3,
(iii) $Y$ has no loop with two vertices of multiplicity at least 2,
(iv) $\lambda(Y) \le L$.

Proof. The first three statements hold because the expected number of undesired objects tends to zero.

(i) The expected number of pairs of multiple edges in $Y$ is
\[ \binom{M}{2} \sum_{k_1 + \cdots + k_n = k} \binom{k}{k_1, \ldots, k_n}^2 \frac{\binom{nd - 2k}{d - 2k_1, \ldots, d - 2k_n}}{\binom{nd}{d, \ldots, d}} \le n^2 d^2 \cdot k!\, n^k \cdot \frac{d^{2k}}{(nd)_{2k}} = O(n^{2-k} d^2) = o(1). \]

(ii) The expected number of loops in $Y$ having a vertex of multiplicity at least 3 is at most
\[ M \binom{k}{3} n \frac{\binom{nd-3}{d-3, d, \ldots, d}}{\binom{nd}{d, \ldots, d}} \le nd \cdot k^3 \cdot n \cdot \frac{d^3}{(nd)_3} = O(n^{-1} d) = o(1). \]

(iii) The expected number of loops in $Y$ having at least two vertices of multiplicity at least 2 is at most
\[ M \binom{k}{2} \binom{k-2}{2} n^2 \frac{\binom{nd-4}{d-2, d-2, d, \ldots, d}}{\binom{nd}{d, \ldots, d}} \le nd \cdot k^4 \cdot n^2 \cdot \frac{d^4}{(nd)_4} = O(n^{-1} d) = o(1). \]

Statement (iv) follows by Markov's inequality, because
\[ E\lambda(Y) \le M \binom{k}{2} n \frac{\binom{nd-2}{d-2, d, \ldots, d}}{\binom{nd}{d, \ldots, d}} \le nd \cdot k^2 \cdot n \cdot \frac{d^2}{(nd)_2} = O(d) = o(n^{1/4} d^{1/2}). \]

In a couple of forthcoming proofs we will need the following concentration inequality (see, e.g., McDiarmid [17]). Let $S_N$ be the set of permutations of $[N]$ and let $Z$ be distributed uniformly over $S_N$. Suppose that a function $f : S_N \to \mathbb{R}$ satisfies the following Lipschitz property: for some $b > 0$, $|f(z) - f(z')| \le b$ whenever $z'$ can be obtained from $z$ by swapping two elements. Then
\[ P(|f(Z) - Ef(Z)| \ge t) \le 2\exp\left\{-\frac{2t^2}{b^2 N}\right\}, \quad t \ge 0. \tag{2} \]

We set $r := 2^k + 1$ and $c := 1/(2r+1)$. For the rest of the paper let $m := \lfloor cM \rfloor$. Color the first $rm$ edges of $Y$ red and the remaining $M - rm$ edges green. Define $Y_{\mathrm{red}} = (Y_1, \ldots, Y_{krm})$ and $Y_{\mathrm{green}} = (Y_{krm+1}, \ldots, Y_{nd})$.
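As a small numerical sanity check (our own, not part of the paper's argument), one can verify the swap-Lipschitz behavior of the green-degree statistic $\sum_v (\deg_{\mathrm{green}}(y; v))_2$ introduced next: swapping two coordinates of a sequence changes it by less than $4d$.

```python
import random
from collections import Counter

# Numerical sanity check (ours): swapping two coordinates of a sequence
# changes sum_v (green degree of v)_2 by less than 4d.

def phi(y, green_start):
    """Sum over vertices of (green degree)_2; the green part is y[green_start:]."""
    deg = Counter(y[green_start:])
    return sum(c * (c - 1) for c in deg.values())

n, d = 12, 4
seq = [v for v in range(1, n + 1) for _ in range(d)]  # d copies of each vertex
rng = random.Random(3)
rng.shuffle(seq)
gs = len(seq) // 2  # an arbitrary split into a red prefix and a green suffix
base = phi(seq, gs)
for _ in range(200):
    i, j = rng.randrange(len(seq)), rng.randrange(len(seq))
    swapped = list(seq)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    assert abs(base - phi(swapped, gs)) < 4 * d  # the Lipschitz bound below
```

The split point `gs` here is arbitrary; in the paper the red prefix has length $krm$.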
Consider a function $\varphi : \mathcal{S} \to \mathbb{Z}$ defined by
\[ \varphi(y) := \sum_{v=1}^{n} (\deg_{\mathrm{green}}(y; v))_2, \]
where $\deg_{\mathrm{green}}(y; v) := |\{i \in [rkm + 1, kM] : y_i = v\}|$ is the green degree of $v$. It can be easily checked that
\[ E\varphi(Y) = n (d)_2 \frac{(kM - rkm)_2}{(kM)_2} = \Theta(nd^2). \tag{3} \]
Suppose that sequences $y, z \in \mathcal{S}$ can be obtained from each other by swapping two coordinates. Since such a swapping affects the green degree of at most two vertices, and for every such vertex the green degree changes by at most one, we get
\[ |\varphi(y) - \varphi(z)| \le 2 \max_{1 \le r \le d} \{(r)_2 - (r-1)_2\} = 2\big((d)_2 - (d-1)_2\big) < 4d. \]
Thus, treating $Y$ as a permutation of $nd$ elements, (2) implies
\[ P(|\varphi(Y) - E\varphi(Y)| \ge x) \le 2\exp\left\{-\frac{x^2}{8nd^3}\right\}, \quad x > 0. \tag{4} \]

Embedding $H^{(k)}(n,m)$ into $H^{(k)}_*(n,d)$

A crucial step toward the embedding is to couple the processes $(X_t)$ and $(Y_t)$, $t = 1, \ldots, nd$, in such a way that a.a.s. $X$ and $Y$ have many edges in common. For this, let $I_1, \ldots, I_{nd}$ be an i.i.d. sequence of symmetric Bernoulli variables independent of $X$:
\[ P(I_t = 0) = P(I_t = 1) = 1/2, \quad t = 1, \ldots, nd. \]
We define $Y_1, Y_2, \ldots$ inductively. Fix $t \ge 0$ and suppose that we have already revealed the values $Y_1, \ldots, Y_t$. If
\[ \frac{2(d - \deg_t(v))}{nd - t} - \frac{1}{n} \ge 0 \quad \text{for every } v \in [n], \tag{5} \]
then generate an auxiliary random variable $Z_{t+1}$, independently of $I_{t+1}$, according to the following distribution (note that the left-hand side of (5) sums over $v \in [n]$ to 1):
\[ P(Z_{t+1} = v \mid Y_1, \ldots, Y_t) = \frac{2(d - \deg_t(v))}{nd - t} - \frac{1}{n}, \quad v = 1, \ldots, n. \]
If (5) holds, set $Y_{t+1} = I_{t+1} X_{t+1} + (1 - I_{t+1}) Z_{t+1}$. Otherwise generate $Y_{t+1}$ directly according to the conditional probabilities (1). The distribution of $Z_{t+1}$ is chosen precisely in such a way that (1) holds for any values of the variables $Y_1, \ldots, Y_t$, regardless of whether (5) is satisfied or not. This guarantees that $Y = (Y_1, \ldots, Y_{nd})$ is indeed uniformly distributed over $\mathcal{S}$.

The following lemma states that we can embed $H^{(k)}(n,m)$ in the red subgraph of $H^{(k)}_*(n,d)$.

Lemma 7.
For every $k \ge 3$, there is a constant $C > 0$ such that if $d \ge C \log n$ and $d = o(n^{1/2})$, then one can define a joint distribution of $H^{(k)}(n,m)$ and $Y$ in such a way that $H^{(k)}(n,m) \subset H(Y_{\mathrm{red}})$ a.a.s.

Proof. Let $W = \{0 \le i \le rm - 1 : I_{ki+1} = \cdots = I_{ki+k} = 1\}$ and let $X'$ be the subsequence of $X$ formed by the concatenation of the edges $(X_{ki+1}, \ldots, X_{ki+k})$, $i \in W$. Define the events
\[ A = \{X \text{ has no multiple edges}, \ \lambda(X) \le L, \ |W| \ge m + L\}, \]
\[ B = \{\text{inequality (5) holds for every } v \in [n] \text{ and } t < krm\}. \]
Suppose that $A$ holds. Then all edges of $X'$ are distinct and at least $m$ of them are proper. By symmetry, we can take, say, the first $m$ of these edges to form $H^{(k)}(n,m)$. If $A$ fails, we simply generate $H^{(k)}(n,m)$ independently of everything else.

Further, if $B$ holds, then for every $i \in W$ we have
\[ (Y_{ki+1}, \ldots, Y_{ki+k}) = (X_{ki+1}, \ldots, X_{ki+k}), \]
which is to say that $H(X')$ is a subgraph of $H(Y_{\mathrm{red}})$. Consequently,
\[ P\big(H^{(k)}(n,m) \subset H(Y_{\mathrm{red}})\big) \ge P(A \cap B), \]
so it is enough to show that each of the events $A$ and $B$ holds a.a.s.

By Proposition 5, the first two conditions defining $A$ hold a.a.s. As for the last one, note that $|W| \sim \mathrm{Bi}(rm, 2^{-k})$, therefore $E|W| = (1 + 2^{-k})m$ and $\mathrm{Var}|W| = O(m)$. Since $L = o(m)$, Chebyshev's inequality implies that for $n$ large enough
\[ P(|W| < m + L) \le \frac{\mathrm{Var}|W|}{(2^{-k}m - L)^2} = O(m^{-1}) = o(1). \]
Concerning the event $B$, if for some $t < krm$ and some $v \in [n]$ inequality (5) does not hold, then $\deg_t(v) > d/2$, and consequently $\deg_{krm}(v) > d/2$. Note that $\deg_{krm}(v)$, $v = 1, \ldots, n$, are identically distributed hypergeometric random variables. Let $X := \deg_{krm}(1)$. The probability that $B$ fails is thus at most
\[ P\big(\deg_{krm}(v) > d/2 \text{ for some } v \in [n]\big) \le n P(X > d/2). \]
We have $EX = krm/n \le rcd$. Since $c < 1/(2r)$, applying, say, Theorem 2.10 from [13], we obtain
\[ P(X > d/2) \le \exp\{-ad\} \le \exp\{-aC\log n\} \]
for some positive constant $a$. Choosing $C > a^{-1}$ we get $nP(X > d/2) = o(1)$, thus concluding the proof.

Let $\mathcal{E}$ be the family of sequences in $\mathcal{S}$ with no multiple edges, containing at most $L$ loops, and with no loops of any type other than $x_1 x_1 x_2 \cdots x_{k-1}$ (up to reordering of vertices), where $x_1, \ldots, x_{k-1}$ are distinct. By Proposition 6 we have that $Y \in \mathcal{E}$ a.a.s. Partition $\mathcal{E}$ according to the number of loops into sets
\[ \mathcal{E}_l := \{y \in \mathcal{E} : \lambda(y) = l\}, \quad l = 0, \ldots, L. \]
Let $\mathcal{G}_l$ be the family of those sequences in $\mathcal{E}_l$ which contain no red loops. Note that $\mathcal{G}_0 = \mathcal{E}_0$ consists precisely of those sequences $y \in \mathcal{S}$ for which $H(y)$ is simple.

Condition on $Y \in \mathcal{E}$ and let $Y'$ be a sequence obtained from $Y$ by swapping the red loops of $Y$ (if any) with a subset of green proper edges chosen uniformly at random. More formally, let $f_1, \ldots, f_r$ be the red loops and $e_1, \ldots, e_g$ be the green proper edges of $Y$, in the order they appear in $Y$. Pick a set of indices $1 \le i_1 < \cdots < i_r \le g$ uniformly at random, and swap $f_j$ with $e_{i_j}$ for $j = 1, \ldots, r$, preserving the order of the vertices inside the edges. Note that this does not change the underlying $k$-multigraph, that is, $H(Y) = H(Y')$.

Proposition 8. $Y'$ is uniform on each $\mathcal{G}_l$, $l = 0, \ldots, L$.

Proof. Fix $l$. Clearly $Y' \in \mathcal{G}_l$ if and only if $Y \in \mathcal{E}_l$. Also, $Y$ is uniform on $\mathcal{E}_l$. For every integer $r \in [0, l]$, every $z \in \mathcal{G}_l$ can be obtained from the same number (say, $b_r$) of $y$'s in $\mathcal{E}_l$ with exactly $r$ red loops. On the other hand, for every $y$ with exactly $r$ red loops there is the same number (say, $a_r$) of $z$'s in $\mathcal{G}_l$ that can be obtained from $y$. Hence for every $z \in \mathcal{G}_l$,
\[ P(Y' = z \mid Y \in \mathcal{E}_l) = \sum_{r=0}^{l} \frac{b_r}{a_r |\mathcal{E}_l|}, \]
which is the same for all $z \in \mathcal{G}_l$.

The following technical result will be used in the next section. Let
\[ \widetilde{\mathcal{S}} := \big\{y \in \mathcal{S} : |\varphi(y) - E\varphi(Y)| \le n^{1/2} d^2\big\}. \]

Proposition 9. If $d = o(n^{1/2})$, then $P(Y' \in \widetilde{\mathcal{S}}) = 1 - o(1)$.

Proof.
Suppose $z$ is obtained from $y$ by swapping a red loop with a green proper edge. This affects the green degree of at most $2k - 1$ vertices $v$, and for every such $v$ we have
\[ \big|(\deg_{\mathrm{green}}(y; v))_2 - (\deg_{\mathrm{green}}(z; v))_2\big| = O(d), \]
uniformly for all such $y, z$. Hence, uniformly,
\[ |\varphi(Y) - \varphi(Y')| = O(Ld), \quad Y \in \mathcal{E}. \]
By Proposition 6 we have that $Y \in \mathcal{E}$ a.a.s. Hence,
\[ P\big(Y' \notin \widetilde{\mathcal{S}}\big) \le P\big(|\varphi(Y) - E\varphi(Y)| > n^{1/2}d^2 - O(Ld) \,\big|\, Y \in \mathcal{E}\big) \sim P\big(|\varphi(Y) - E\varphi(Y)| > n^{1/2}d^2 - O(Ld)\big). \]
Finally, since $d = o(n^{1/2})$, the last probability tends to zero by (4).

In this section we complete the proof of Theorem 1, deferring the proofs of two technical results to the next section. By Lemma 7, which we proved in Section 3, the random $k$-multigraph $H(Y_{\mathrm{red}})$ contains $H^{(k)}(n,m)$ a.a.s. Since $H^{(k)}(n,m) \subset H(Y)$ implies that $H^{(k)}(n,m) \subset H(Y')$, it remains to define a procedure which (leaving the red edges of $Y'$ intact) a.a.s. transforms $Y'$ into a random $k$-graph distributed approximately as $H^{(k)}(n,d)$.

For this we define an operation which decreases the number of green loops one at a time. Two sequences $y \in \mathcal{G}_l$, $z \in \mathcal{G}_{l-1}$ are said to be switchable if $z$ can be obtained from $y$ by the following operation, called a switching, which is a generalization (to $k \ge 3$) of a switching defined by McKay and Wormald [18] for $k = 2$.

[Figure 1: Edges affected by a switching (a) before and (b) after.]

Among the edges of $y$, choose a loop $f$ and an ordered pair $(e_1, e_2)$ of green proper edges (see Figure 1a). Putting $s = |e_1 \cap e_2|$ and ignoring the order of the vertices inside the edges, one can write
\[ f = v v x_1 \cdots x_{k-2}, \quad e_1 = w_1 \cdots w_s y_1 \cdots y_{k-s}, \quad e_2 = w_1 \cdots w_s z_1 \cdots z_{k-s}. \]
The loop $f$ contains two copies of $v$, the left one and the right one (with respect to their order in the sequence $y$). Select vertices $y^* \in \{y_1, \ldots, y_{k-s}\}$ and $z^* \in \{z_1, \ldots, z_{k-s}\}$, and swap $y^*$ with the left copy of $v$ and $z^*$ with the right one. The effect of the switching is that $f, e_1$, and $e_2$ are replaced by three proper edges (see Figure 1b):
\[ e_1' = e_1 \cup \{v\} - \{y^*\}, \quad e_2' = e_2 \cup \{v\} - \{z^*\}, \quad e_3' = f \cup \{y^*, z^*\} - \{v, v\}. \]
A backward switching is the reverse operation, which reconstructs $y \in \mathcal{G}_l$ from $z \in \mathcal{G}_{l-1}$. It is performed by choosing a vertex $v$, an ordered pair of green proper edges $e_1', e_2'$ containing $v$, one more green proper edge $e_3'$, choosing a pair of vertices $y, z \in e_3'$, and swapping $y$ with the copy of $v$ in $e_1'$ and $z$ with the one in $e_2'$.

Note that, given $f, e_1, e_2$, not every choice of $y^*, z^*$ defines a forward switching, due to the possible creation of new loops or multiple edges. We say that the choices of $y^*, z^*$ which do are admissible. Similarly, a choice of $y, z$ is admissible with respect to $v, e_1', e_2'$, and $e_3'$ if it defines a backward switching.

Given $y \in \mathcal{G}_l$, let $F(y)$ and $B(y)$ be the number of ways to perform a forward switching and a backward switching, respectively.

Let $\mathrm{Sw}$ denote a (random) operation which, given $y \in \mathcal{G}_l$, applies to it a forward switching chosen uniformly at random from the $F(y)$ possibilities. Let $Y'' \in \mathcal{G}_0$ be the sequence obtained from $Y'$ by applying $\mathrm{Sw}$ until there are no loops left, namely, $\lambda(Y')$ times.
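The forward switching can be sketched in a few lines (our own toy implementation on unordered edges; the paper applies it to positions of the sequence $y$, which we elide here):

```python
# A sketch (ours) of one forward switching: a loop f = v v x_1 ... x_{k-2}
# and two green proper edges e1, e2 are replaced by three proper edges
#   e1' = e1 + {v} - {y*},  e2' = e2 + {v} - {z*},  e3' = f + {y*, z*} - {v, v}.

def switch(f, e1, e2, v, y_star, z_star):
    """Apply the switching; edges are represented as sorted tuples (multisets)."""
    def replace(edge, out, inn):
        e = list(edge)
        e.remove(out)
        e.append(inn)
        return tuple(sorted(e))
    e1p = replace(e1, y_star, v)
    e2p = replace(e2, z_star, v)
    fp = list(f)
    fp.remove(v)
    fp.remove(v)              # drop both copies of v from the loop
    fp += [y_star, z_star]
    return e1p, e2p, tuple(sorted(fp))

def is_proper(edge):
    return len(set(edge)) == len(edge)

# Example with k = 3: loop f = (v, v, x) and disjoint proper edges e1, e2
# distant from f, with admissible choices y* = 4, z* = 7.
f, e1, e2 = (1, 1, 2), (3, 4, 5), (6, 7, 8)
e1p, e2p, fp = switch(f, e1, e2, v=1, y_star=4, z_star=7)
assert all(is_proper(e) for e in (e1p, e2p, fp))
assert (e1p, e2p, fp) == ((1, 3, 5), (1, 6, 8), (2, 4, 7))
```

The final assertion also confirms that the multiset of vertex occurrences, and hence every degree, is preserved by the operation.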
Suppose for a moment that for every $l$ and $y \in \mathcal{G}_l$ the quantities $F(y)$ and $B(y)$ depend on $l$, but not on the actual choice of $y$. If this were true, then, as one could easily show, $Y''$ would be uniform over $\mathcal{G}_0$. As we will see, we are not far from this idealized setting, because Proposition 10(a) below implies that $F(y)$ is essentially proportional to $l = \lambda(y)$. On the other hand, Proposition 10(b) shows that $B(y)$ depends on a more complicated parameter of $y$, namely on $\varphi(y)$ defined in Section 2.

To make $B(y)$ essentially independent of $y$, we will apply switchings not to every element of $\mathcal{G}_0 \cup \cdots \cup \mathcal{G}_L$, but to a slightly smaller subfamily. Let
\[ \widetilde{\mathcal{G}}_l := \mathcal{G}_l \cap \widetilde{\mathcal{S}}, \quad l = 0, \ldots, L, \]
where $\widetilde{\mathcal{S}}$ has been defined in the previous section. We condition on $Y' \in \widetilde{\mathcal{S}}$ and deterministically map $Y''$ to a simple $k$-graph
\[ \widetilde{H}^{(k)}(n,d) := H(Y''). \]
Note that a switching does not affect the green degrees, and thus does not change the value of $\varphi$. Therefore, if one applies a forward or backward switching to a sequence $y \in \widetilde{\mathcal{S}}$, the resulting sequence is also in $\widetilde{\mathcal{S}}$. Moreover, Proposition 9 shows that by restricting $Y'$ to $\widetilde{\mathcal{S}}$ we do not exclude many sequences.

The following proposition quantifies the amount by which a single application of $\mathrm{Sw}$ distorts the uniformity of $Y'$.

Proposition 10. If $1 \le d = o(n^{1/2})$, then

(a) for $y \in \mathcal{G}_l$, $0 < l \le L$,
\[ k^2 l (M - rm)^2 \left(1 - O\left(\frac{L + d^2}{M}\right)\right) \le F(y) \le k^2 l (M - rm)^2, \]

(b) for $y \in \mathcal{G}_l$, $0 \le l < L$,
\[ \binom{k}{2} \big(\varphi(y) - 2kLd\big)(M - rm) \left(1 - O\left(\frac{L + d^2}{M}\right)\right) \le B(y) \le \binom{k}{2} \varphi(y)(M - rm), \]

(b') for $y \in \widetilde{\mathcal{G}}_l$, $0 \le l < L$,
\[ \binom{k}{2} E\varphi(Y)(M - rm) \left(1 - O\left(\frac{n^{1/2}d^2}{E\varphi(Y)} + \frac{L + d^2}{M}\right)\right) \le B(y) \le \binom{k}{2} E\varphi(Y)(M - rm) \left(1 + O\left(\frac{n^{1/2}d^2}{E\varphi(Y)}\right)\right). \]
Finally, we need to show that the final step of the procedure, that is, the mapping of $Y''$ to $H(Y'')$, has a negligible influence on the uniformity of the distribution. For this, set
\[ P_H := |H^{-1}(H) \cap \widetilde{\mathcal{G}}_0| = \big|\{y \in \widetilde{\mathcal{G}}_0 : H(y) = H\}\big|, \quad H \in \mathcal{H}^{(k)}(n,d). \]

Proposition 11. If $d = o(n^{1/2})$, then uniformly for every $H \in \mathcal{H}^{(k)}(n,d)$,
\[ (1 - o(1))\, M!(k!)^M \le P_H \le M!(k!)^M. \]
Proofs of Propositions 10 and 11 can be found in Section 6.
Lemma 12.
There is a sequence $\varepsilon_n = o(1)$ such that for every $H \in \mathcal{H}^{(k)}(n,d)$,
\[ P\big(\widetilde{H}^{(k)}(n,d) = H\big) = (1 \pm \varepsilon_n)\, |\mathcal{H}^{(k)}(n,d)|^{-1}. \]
Proof. Clearly it is enough to show that for some function $p = p(n, l)$ we have
\[ P\big(\widetilde{H}^{(k)}(n,d) = H \,\big|\, Y' \in \widetilde{\mathcal{G}}_l\big) = (1 + o(1))\, p(n, l) \tag{6} \]
uniformly for $l \le L$ and $H \in \mathcal{H}^{(k)}(n,d)$. Indeed,
\[ P\big(\widetilde{H}^{(k)}(n,d) = H\big) = \sum_{l=0}^{L} P\big(\widetilde{H}^{(k)}(n,d) = H \,\big|\, Y' \in \widetilde{\mathcal{G}}_l\big)\, P\big(Y' \in \widetilde{\mathcal{G}}_l\big) = (1 + o(1))\, p(n), \]
where $p(n) := \sum_l p(n, l)\, P(Y' \in \widetilde{\mathcal{G}}_l)$ is independent of $H$.

Let $F_l = k^2 l (M - rm)^2$ and $B = \binom{k}{2} E\varphi(Y)(M - rm)$ be the asymptotic values of the bounds in Proposition 10, (a) and (b'), respectively.

By Proposition 8, we can treat $Y'$ as a uniformly chosen element of $\widetilde{\mathcal{G}}_l = \mathcal{G}_l \cap \widetilde{\mathcal{S}}$. Every realization of the $l$ switchings that generate $Y''$ produces a trajectory
\[ (y^{(l)}, \ldots, y^{(0)}) \in \widetilde{\mathcal{G}}_l \times \cdots \times \widetilde{\mathcal{G}}_0, \]
where $y^{(i)}$ is switchable with $y^{(i-1)}$ for $i = 1, \ldots, l$. The probability that a particular such trajectory occurs is
\[ \frac{1}{|\widetilde{\mathcal{G}}_l|\, F(y^{(l)}) \cdots F(y^{(1)})} = \left(1 + O\left(\frac{L + d^2}{M}\right)\right)^l |\widetilde{\mathcal{G}}_l|^{-1} \prod_{i=1}^{l} F_i^{-1} = (1 + o(1))\, |\widetilde{\mathcal{G}}_l|^{-1} \prod_{i=1}^{l} F_i^{-1}, \tag{7} \]
the first equality following from Proposition 10.

On the other hand, by Propositions 10 and 11 the number of trajectories that lead to a particular $H \in \mathcal{H}^{(k)}(n,d)$ is
\[ P_H\, B^l \left(1 + O\left(\frac{n^{1/2}d^2}{E\varphi(Y)} + \frac{L + d^2}{M}\right)\right)^l = (1 + o(1))\, M!(k!)^M B^l, \tag{8} \]
because $E\varphi(Y) = \Theta(nd^2)$ by (3). Now the estimate (6) with
\[ p(n, l) = M!(k!)^M B^l\, |\widetilde{\mathcal{G}}_l|^{-1} \prod_{i=1}^{l} F_i^{-1} \]
follows by multiplication of (7) and (8).

Proof of Theorem 1.
Let $\mu$ be the uniform distribution over $\mathcal{H}^{(k)}(n,d)$ and $\nu$ be the distribution of $\widetilde{H}^{(k)}(n,d)$, that is,
\[ \mu(H) = |\mathcal{H}^{(k)}(n,d)|^{-1}, \quad \nu(H) = P\big(\widetilde{H}^{(k)}(n,d) = H\big), \quad H \in \mathcal{H}^{(k)}(n,d). \]
By Lemma 12 the total variation distance between the measures $\mu$ and $\nu$ is
\[ d_{TV}(\mu, \nu) := \frac{1}{2} \sum_{H \in \mathcal{H}^{(k)}(n,d)} |\mu(H) - \nu(H)| \le \frac{1}{2} \sum_{H} \varepsilon_n \mu(H) = o(1). \]
Therefore a standard fact from probability theory (see, e.g., [1, p. 254]) implies that there is a joint distribution of $\widetilde{H}^{(k)}(n,d)$ and $H^{(k)}(n,d)$ such that
\[ \widetilde{H}^{(k)}(n,d) = H^{(k)}(n,d) \quad \text{a.a.s.} \tag{9} \]
By the definition of $\widetilde{H}^{(k)}(n,d)$, if $H^{(k)}(n,m) \subset H(Y_{\mathrm{red}})$, then $H^{(k)}(n,m) \subset \widetilde{H}^{(k)}(n,d)$. Therefore, Theorem 1 follows by Lemma 7 and Proposition 9.

Proofs of Propositions 10 and 11

Proof of Proposition 10. (a) The upper bound follows from the fact that after we choose (in one of at most $l(M - rm)^2$ ways) a loop and an ordered pair of green edges, we have at most $k^2$ admissible choices of the vertices $y^*$ and $z^*$.

We say that two edges $e', e''$ of a $k$-graph are distant from each other if they do not intersect and there is no third edge $e'''$ that intersects both $e'$ and $e''$. Note that for any edge $e$ there are at most $k^2 d^2$ edges not distant from $e$.

For the lower bound, let us estimate the number of triples $(f, e_1, e_2)$ for which we have exactly $k^2$ admissible choices of $y^*, z^*$. For this it is sufficient that $e_1 \cap e_2 = \emptyset$ and both $e_1, e_2$ are distant from $f$ in $H(y)$. Given $f$, we can choose such an $e_1$ in at least
\[ M - rm - l - k^2 d^2 = (M - rm)\big(1 - O((L + d^2)/M)\big) \]
ways, and then choose such an $e_2$ in at least
\[ M - rm - l - k^2 d^2 - kd = (M - rm)\big(1 - O((L + d^2)/M)\big) \]
ways. Hence the lower bound.

(b) We can choose a vertex $v \in [n]$ and an ordered pair of edges $e_1', e_2'$ containing $v$ in at most $\varphi(y)$ ways, and then choose $e_3'$ in at most $M - rm$ ways.
The number of admissible choices of vertices $y, z \in e_3'$ is at most $\binom{k}{2}$, which gives the upper bound.

For the lower bound, we estimate the number of quadruples $(v, e_1', e_2', e_3')$ for which there are exactly $\binom{k}{2}$ admissible choices of $y, z$. For this it is sufficient that $e_3'$ is distant from both $e_1'$ and $e_2'$ in $H(y)$. The number of ways to choose $v, e_1', e_2'$ is exactly
\[ \sum_{v \in [n]} \big(\deg'_{\mathrm{green}}(y; v)\big)_2, \tag{10} \]
where $\deg'_{\mathrm{green}}(y; v)$ is the number of green proper edges containing the vertex $v$. It is obvious that (10) is at most $\varphi(y)$ and, as one can easily see, at least $\varphi(y) - 2kLd$. The lower bound now follows, since, given $v, e_1', e_2'$, we can choose $e_3'$ in at least
\[ M - rm - l - 2k^2 d^2 = (M - rm)\big(1 - O((L + d^2)/M)\big) \]
ways.

(b') Immediate from (b) and the definition of $\widetilde{\mathcal{G}}_l$.

Proof of Proposition 11.
The upper bound is just $|H^{-1}(H)| = M!(k!)^M$. For the lower bound, we let $Y|H$ be a sequence chosen uniformly at random from $H^{-1}(H)$ and show that the probabilities
\[ P\big(|\varphi(Y|H) - E\varphi(Y)| > n^{1/2}d^2\big), \quad H \in \mathcal{H}^{(k)}(n,d), \]
uniformly tend to zero. Since $\varphi$ does not depend on the order of the vertices inside the edges of $Y$, we can treat $Y|H$ as a random permutation of the $M$ edges of $H$, which we denote by $e_1, \ldots, e_M$. Since $H$ is simple, we have
\[ \varphi(Y|H) = \sum_{v \in [n]} \sum_{\substack{e_i, e_j \ni v \\ i \ne j}} \mathbb{1}\{e_i, e_j \text{ are green in } Y|H\}, \]
whence
\[ E\varphi(Y|H) = n(d)_2 \frac{(M - rm)_2}{(M)_2}. \]
Therefore (3) and simple calculations yield
\[ E\varphi(Y|H) - E\varphi(Y) = O(nd^2 M^{-1}) = O(d). \]
Further, if $y, z \in H^{-1}(H)$ and $z$ can be obtained from $y$ by swapping two edges, then $|\varphi(y) - \varphi(z)| = O(d)$, uniformly over $y$ and $z$. Therefore (2) applies to $f = \varphi$ with $N = M$ and $b = O(d)$. To sum up,
\[ P\big(|\varphi(Y|H) - E\varphi(Y)| > n^{1/2}d^2\big) \le P\big(|\varphi(Y|H) - E\varphi(Y|H)| > n^{1/2}d^2 - O(d)\big) \le 2\exp\left\{-\frac{2\big(n^{1/2}d^2 - O(d)\big)^2}{O(Md^2)}\right\} = o(1), \]
the equality following from the assumption $d = o(n^{1/2})$.

Remark 1. Theorem 1 is closely related to a result of Kim and Vu [14], who proved, for $d$ growing faster than $\log n$ but slower than $n^{1/3}/\log^2 n$, that there is a joint distribution of $H^{(2)}(n,p)$ and $H^{(2)}(n,d)$ with $p$ satisfying $p \sim d/n$ so that
\[ H^{(2)}(n,p) \subset H^{(2)}(n,d) \quad \text{a.a.s.} \tag{11} \]
It is known (see, e.g., [4]) that $H^{(2)}(n,p)$ is a.a.s. Hamiltonian when the expected degree $(n-1)p$ grows faster than $\log n$. Therefore (11) implies an analogue of Corollary 4 for graphs.

Remark 2. In [11] the authors used the same switching as in the present paper to count $d$-regular $k$-graphs approximately for $k \ge 3$ and $1 \le d = o(n^{1/2})$, as well as for $k \ge 4$ and $d = o(n)$. The application of the technique is somewhat easier there, because there is no need to preserve the red edges.
The restriction d = o ( n / )that appears in Theorem 1 has also a natural meaning in [11], since the countingformula there gives the asymptotics of the probability p n,d := P ( H ( k ) ∗ ( n, d ) is simple)for d = o ( n / ), while for k ≥ n / ≤ d = o ( n ) it just gives the asymptotics oflog p n,d . Remark . The lower bound on d in Theorem 1 is necessary because the secondmoment method applied to H ( k ) ( n, p ) (cf. Theorem 3.1(ii) in [3]) and asymptoticequivalence of H ( k ) ( n, p ) and H ( k ) ( n, m ) yields that for d = o (log n ) and m ∼ cM there is a sequence ∆ = ∆( n ) such that d = o (∆) and the maximum degree H ( k ) ( n, m )is at least ∆ a.a.s. Remark . For d greater than log n , however, the degree sequence of H ( k ) ( n, p ) isclosely concentrated around the expected degree. Therefore it is plausible that Theo-rem 1 can be extended to d greater than n / . However, n / seems to be an obstaclewhich cannot be overcome without a proper refinement of our proof. Remark . In view of Remark 3, our approach cannot be extended to d = O (log n ).Nevertheless, we believe that the following extension of Corollary 4 is valid.15 onjecture . For every k ≥ there is a constant d = d ( k ) such that for any d ≥ d , H ( k ) ( n, d ) contains a loose Hamilton cycle a.a.s. Recall that Robinson and Wormald [19, 20] proved for k = 2 that as far as fixed d is considered, it suffices to take d ≥
3. Their approach is based on a very careful analysis of the variance of a random variable counting the number of Hamilton cycles in the configuration model. Unfortunately, for k ≥ 3 such an analysis seems to be beyond reach.

Remark 6. In this paper we were concerned only with loose cycles. One can also consider a more general problem. Define an ℓ-overlapping cycle as a k-uniform hypergraph in which, for some cyclic ordering of its vertices, every edge consists of k consecutive vertices, and every two consecutive edges (in the natural ordering of the edges induced by the ordering of the vertices) share exactly ℓ vertices. (Clearly, ℓ = 1 corresponds to loose cycles.) The thresholds for the existence of ℓ-overlapping Hamilton cycles in H^{(k)}(n,p) have been recently obtained in [8]. However, proving similar results for H^{(k)}(n,d) and arbitrary ℓ ≥ 2 seems to be much harder.

Conjecture 2. For every k > ℓ ≥ 2, if d ≫ n^{ℓ−1}, then H^{(k)}(n,d) contains an ℓ-overlapping Hamilton cycle a.a.s.

References

[1] A. D. Barbour, L. Holst, and S. Janson.
Poisson approximation, volume 2 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York, 1992. Oxford Science Publications.

[2] E. A. Bender and E. R. Canfield. The asymptotic number of labeled graphs with given degree sequences. J. Combinatorial Theory Ser. A, 24(3):296–307, 1978.

[3] B. Bollobás. A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. European J. Combin., 1(4):311–316, 1980.

[4] B. Bollobás. Random graphs, volume 73 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, 2001.

[5] B. Bollobás and A. M. Frieze. On matchings and Hamiltonian cycles in random graphs. In Random graphs '83 (Poznań, 1983), volume 118 of North-Holland Math. Stud., pages 23–46. North-Holland, Amsterdam, 1985.

[6] V. Chvátal. Almost all graphs with 1.44n edges are 3-colorable. Random Structures Algorithms, 2(1):11–28, 1991.

[7] C. Cooper, A. Frieze, and B. Reed. Random regular graphs of non-constant degree: connectivity and Hamiltonicity. Combin. Probab. Comput., 11(3):249–261, 2002.

[8] A. Dudek and A. Frieze. Tight Hamilton cycles in random uniform hypergraphs. To appear in Random Structures Algorithms.

[9] A. Dudek and A. Frieze. Loose Hamilton cycles in random uniform hypergraphs. Electron. J. Combin., 18(1):Paper 48, 14 pp., 2011.

[10] A. Dudek, A. Frieze, P.-S. Loh, and S. Speiss. Optimal divisibility conditions for loose Hamilton cycles in random hypergraphs. Electron. J. Combin., 19(4):Paper 44, 17 pp., 2012.

[11] A. Dudek, A. Frieze, A. Ruciński, and M. Šileikis. Approximate counting of regular hypergraphs. http://arxiv.org/abs/1303.0400.

[12] A. Frieze. Loose Hamilton cycles in random 3-uniform hypergraphs. Electron. J. Combin., 17(1):Note 28, 4 pp., 2010.

[13] S. Janson, T. Łuczak, and A. Ruciński. Random graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley-Interscience, New York, 2000.

[14] J. H. Kim and V. H. Vu. Sandwiching random graphs: universality between random graph models. Adv. Math., 188(2):444–469, 2004.

[15] D. E. Knuth. Big omicron and big omega and big theta. SIGACT News, 8(2):18–24, Apr. 1976.

[16] M. Krivelevich, B. Sudakov, V. H. Vu, and N. C. Wormald. Random regular graphs of high degree. Random Structures Algorithms, 18(4):346–363, 2001.

[17] C. McDiarmid. Concentration. In Probabilistic methods for algorithmic discrete mathematics, volume 16 of Algorithms Combin., pages 195–248. Springer, Berlin, 1998.

[18] B. D. McKay and N. C. Wormald. Uniform generation of random regular graphs of moderate degree. J. Algorithms, 11(1):52–67, 1990.

[19] R. W. Robinson and N. C. Wormald. Almost all cubic graphs are Hamiltonian. Random Structures Algorithms, 3(2):117–125, 1992.

[20] R. W. Robinson and N. C. Wormald. Almost all regular graphs are Hamiltonian. Random Structures Algorithms, 5(2):363–374, 1994.

[21] N. C. Wormald. Models of random regular graphs. In Surveys in combinatorics, 1999 (Canterbury), volume 267 of