Packing perfect matchings in random hypergraphs
Asaf Ferber∗   Van Vu†

August 16, 2018

∗ Department of Mathematics, Yale University, and Department of Mathematics, MIT. Emails: [email protected] and [email protected].
† Department of Mathematics, Yale University. Email: [email protected]. Supported by NSF grant DMS-1307797 and AFOSR grant FA9550-12-1-0083.
Abstract
We introduce a new procedure for generating the binomial random graph/hypergraph models, referred to as online sprinkling. As an illustrative application of this method, we show that for any fixed integer $k \geq 3$, the binomial $k$-uniform random hypergraph $H^k_{n,p}$ contains $N := (1-o(1))\binom{n-1}{k-1}p$ edge-disjoint perfect matchings, provided $p \geq \frac{\log^C n}{n^{k-1}}$, where $C := C(k)$ is an integer depending only on $k$. Our result for $N$ is asymptotically optimal, and our result for $p$ is optimal up to the $\operatorname{polylog}(n)$ factor. This significantly improves a result of Frieze and Krivelevich.

1 Introduction

Since its introduction in 1960 [4], the Erdős–Rényi random graph/hypergraph model has been one of the main objects of study in probabilistic combinatorics. Given $p \in [0,1]$ and $k \in \mathbb{N}$, the random $k$-uniform hypergraph model $H^k_{n,p}$ is defined on a vertex set $[n] := \{1,\ldots,n\}$, obtained by picking each $k$-tuple $e \in \binom{[n]}{k}$ to be an edge independently with probability $p$. The case $k = 2$ reduces to the standard binomial graph model, denoted by $G_{n,p}$.

A useful technique in the theory of random graphs is the multiple exposure technique (also referred to as sprinkling). Given $p_1,\ldots,p_\ell \in [0,1]$ for which $\prod_{i=1}^{\ell}(1-p_i) = 1-p$, one can easily show that a hypergraph $H^k_{n,p}$ has the same distribution as a union of independently generated hypergraphs $H = H_1 \cup \ldots \cup H_\ell$, where for each $i$, $H_i = H^k_{n,p_i}$ (for more details, the reader is referred to [3] and [8], or to [16] for a more relevant approach). Indeed, note that the probability for a fixed $k$-tuple $e \in \binom{[n]}{k}$ not to appear in $\cup_i E(H_i)$ is exactly $\prod_{i=1}^{\ell}(1-p_i) = 1-p$, and clearly, all the choices are made independently.

The power of this technique comes from the ability to "keep some randomness" in cases where an iterative approach is convenient. A typical scenario in applications is to expose $H^k_{n,p}$ in stages, where in each stage a hypergraph $H_i = H^k_{n,p_i}$ is generated independently at random of all the previously exposed hypergraphs. Our goal is to show that in each stage $j$, the current hypergraph $\cup_{i \leq j} H_i$ gets closer to a target graph property $\mathcal{P}$, until in stage $\ell$ it satisfies it. This technique has become standard over the years and is used in almost every paper dealing with random graphs/hypergraphs (for a very nice and classical example, the reader is referred to [2] and [14]).
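To make the coupling concrete, the following small simulation (our illustration, not from the paper; the function names and parameter values are ours) generates the union $H_1 \cup \ldots \cup H_\ell$ and estimates the probability that a fixed $k$-tuple appears, which should match $p$ whenever $\prod_{i=1}^{\ell}(1-p_i) = 1-p$.

```python
import itertools
import random

def sprinkle_union(n, k, probs):
    """Union of independently generated hypergraphs H^k_{n,p_i},
    one round of exposure per probability in `probs`."""
    edges = set()
    for p_i in probs:
        for e in itertools.combinations(range(n), k):
            if random.random() < p_i:
                edges.add(e)
    return edges

# Equal rounds with prod(1 - p_i) = 1 - p, i.e. p_i = 1 - (1 - p)**(1/l).
p, l = 0.3, 4
p_i = 1 - (1 - p) ** (1 / l)
trials = 2000
hits = sum((0, 1, 2) in sprinkle_union(6, 3, [p_i] * l) for _ in range(trials))
print(f"empirical: {hits / trials:.3f}  vs  p = {p}")
```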
In this paper we want to consider a slightly different perspective of the sprinkling method, which gives it a bit more power. Before describing it, let us have a closer look at the way a hypergraph $H^k_{n,p}$ is generated. By definition, for every $k$-tuple $e \in \binom{[n]}{k}$, we query whether $e \in E(H)$ with probability $p$, independently at random. A natural question arises:

Question 1.1. Does the order of the edge queries matter?
Clearly, the answer is "no", as long as all the queries are made independently at random, and this observation serves as the basis for our technique.

Our goal is to create a randomized algorithm that whp (with high probability, that is, with probability tending to 1 as $n$ tends to infinity) finds a large structure $S$ in $H^k_{n,p}$. We aim to find the target structure as a subgraph of the "online generated" random hypergraph $H$. That is, during the execution of the algorithm, a random hypergraph is generated and the target structure is constructed alongside it, step by step. We refer to this technique as online sprinkling.

The algorithm works as follows: in each time step $i$, a subset $E_i \subseteq \binom{[n]}{k}$ is chosen according to some distribution. Then, we query every edge in $E_i$ (independently) with some probability $p_i$, which is also chosen according to some distribution. All the chosen edges will be part of the randomly generated hypergraph. For each $k$-tuple $e \in \binom{[n]}{k}$, let
$$\omega(e) = 1 - \prod_{i : e \in E_i}(1-p_i)$$
be the weight of $e$ at the end of the algorithm. Note that $\omega(e)$ is a random variable (as our algorithm is a randomized one), and corresponds to the probability for $e$ to appear in the hypergraph obtained at the end of the algorithm. Clearly, if $\omega(e) \leq p$ for each $k$-tuple $e$, then the resulting hypergraph can be coupled as a subgraph of $H^k_{n,p}$.
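As a minimal sketch of this bookkeeping (ours; `choose_batch` and `choose_prob` are hypothetical stand-ins for whatever adaptive rule an application uses), the loop below queries each chosen batch and tracks the weights $\omega(e)$, so that the coupling condition $\omega(e) \le p$ can be checked at the end of the run.

```python
import itertools
import random

def online_sprinkling(n, k, p, steps, choose_batch, choose_prob):
    """Generic online-sprinkling loop: at step i, query the k-tuples of a
    chosen batch E_i with a chosen probability p_i, and track each tuple's
    weight w(e) = 1 - prod_{i : e in E_i} (1 - p_i)."""
    survive = {e: 1.0 for e in itertools.combinations(range(n), k)}
    edges = set()
    for i in range(steps):
        batch, p_i = choose_batch(i, edges), choose_prob(i, edges)
        for e in batch:
            survive[e] *= 1 - p_i
            if random.random() < p_i:
                edges.add(e)
    weights = {e: 1 - s for e, s in survive.items()}
    # If w(e) <= p for every tuple, the output couples into H^k_{n,p}.
    assert all(w <= p + 1e-12 for w in weights.values())
    return edges, weights

# Toy instantiation: query every triple with probability p/steps at each step.
edges, w = online_sprinkling(
    6, 3, p=0.2, steps=5,
    choose_batch=lambda i, E: itertools.combinations(range(6), 3),
    choose_prob=lambda i, E: 0.2 / 5,
)
print(len(edges), max(w.values()))
```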
The power of this approach comes from the flexibility in defining the sets $E_i$ and the edge-probabilities $p_i$. By selecting these sets and probabilities properly, we can steer the process towards our goal.

As an illustrative example of this approach, we examine the problem of finding edge-disjoint copies of some given structure $S$ in $H^k_{n,p}$. In particular, in this paper we consider only the case where $S$ is a perfect matching and $k \geq 3$, as here many technical issues that may appear for other structures $S$, or for the case $k = 2$, become trivial (more complicated applications will appear in follow-up papers).

The problem of finding the threshold behavior for the appearance of a perfect matching in a random hypergraph is notoriously hard and is a central problem in probabilistic combinatorics, known as Shamir's Conjecture. The main difficulty is the lack of general tools, such as the classical theorem of Hall (see, e.g., [17]), for finding perfect matchings. This problem was solved by Johansson, Kahn and Vu [9], who showed that a perfect matching typically appears in $H^k_{n,p}$ as soon as $p \geq \frac{C\log n}{n^{k-1}}$ (note that $n$ must be divisible by $k$, as otherwise a perfect matching cannot exist). Once Shamir's Conjecture has been settled, it is natural to ask for edge-disjoint perfect matchings covering "most" of the edges. This problem was considered by Frieze and Krivelevich in [6], where they showed, among other things, that one can pack "most" of the edges of a typical $H^k_{n,p}$ with perfect matchings, as long as $p > \log n/n$. Moreover, they showed that their embedding can be applied to a pseudorandom model with the same density. Considering only the random model, in the following theorem we significantly improve their result to the optimal (up to a $\operatorname{polylog}(n)$ factor) edge-probability.

Theorem 1.2. Let $k \geq 3$ be a positive integer and let $p \geq \frac{\log^C n}{n^{k-1}}$, where $C = C(k)$ is a sufficiently large integer. Then, whp $H^k_{kn,p}$ contains $t := (1-o(1))\binom{kn-1}{k-1}p$ edge-disjoint perfect matchings.

Remark 1.3.
We would like to give the following remarks:

• The case $k = 2$ is a bit more complicated to handle using our technique, and in fact better tools are known for this case (generalizations of Hall's Theorem for finding "many" edge-disjoint perfect matchings). For a non-trivial example of applying the "online sprinkling" technique to graphs, the reader is referred to [5].

• Our $p$ is optimal up to a $\operatorname{polylog}(n)$ factor, and $t$ is asymptotically optimal. In fact, as we explain below, our proof strategy will always lose some $\log$ factors in $p$, and therefore we do not put any effort into optimizing the power $C$. Even so, we believe that the same conclusion should hold for every edge-probability $p$ which is asymptotically larger than the threshold.

• Our proof relies heavily on the ability to embed one perfect matching (that is, on the result from [9]), and does not provide an alternative proof of that result.
For technical reasons, it will be more convenient for us to work in a $k$-partite model. Let $H^k_{n\times k,p}$ be a random $k$-partite, $k$-uniform hypergraph with parts $V_1,\ldots,V_k$, each of size $n$, obtained by adding each possible $k$-tuple $e \in V_1\times V_2\times\ldots\times V_k$ as an edge with probability $p$, independently at random. We prove the following, seemingly weaker, statement about finding edge-disjoint perfect matchings in $H^k_{n\times k,p}$, and then show how to derive Theorem 1.2 from it in a quite straightforward way.

Theorem 1.4.
Let $k \geq 3$ be a positive integer and let $p \geq \frac{\log^C n}{n^{k-1}}$, where $C = C(k)$ is a sufficiently large integer. Then, whp a hypergraph $H^k_{n\times k,p}$ contains $(1-o(1))n^{k-1}p$ edge-disjoint perfect matchings.

Organization of the paper.
In Section 2 we provide a brief outline of the general strategy for proving Theorem 1.4, explaining the difficulties one may run into while using the "online sprinkling" technique. In Section 3 we show how to derive Theorem 1.2 from Theorem 1.4. In Section 4 we present some tools and auxiliary lemmas to be used in the proof of Theorem 1.4. Lastly, in Section 5 we prove our main result, namely Theorem 1.4.
2 Outline of the proof

Our proof, by and large, is divided into two main phases. In Phase 1 we wish to find the "correct number" of edge-disjoint matchings which are not complete, and in Phase 2 we wish to complete each of them into a perfect matching in an edge-disjoint way (this will be done using the result of Johansson, Kahn and Vu [9]). So far, our proof strategy is not new; in fact, the exact same strategy has been used in many papers over the years (perhaps the most impressive recent result obtained by a similar outline is that of Keevash [10], who solved a problem from the 19th century). The main idea behind it is that, usually, it is much easier to find "almost spanning" structures than "spanning" ones, and if one can embed the almost spanning substructure "nicely", then there is hope to complete it to the desired spanning structure. Below, we give a brief description of each of the two phases, and explain the difficulties we must overcome during the formal proof.
Phase 1.
The way we handle the "almost spanning structure" is more or less identical to the "nibbling" idea introduced by Ajtai, Komlós and Szemerédi [1] and Rödl [15]. Roughly speaking, we split Phase 1 into $N$ rounds, where each round is further divided into $\ell$ steps. In each round $i$, our goal is to find a "large" matching $M_i$. In order to do so, we start with an empty matching $M_{i0}$, and in each step $j$, we extend the current matching $M_{i(j-1)}$ by a "bit", while exposing edges which are vertex-disjoint from $M_{i(j-1)}$. Note that the rounds run independently, completely ignoring the history. We later show (Claim 5.3) that if $p$ is not too large, then this procedure gives us edge-disjoint matchings whp (we then show how to deal with the case where $p$ is large).

Let us focus on one round. The main observation here is that if we expose edges with "relatively small" probability, then one can easily show (Claim 5.4) that "most" of them form a matching (edges which overlap other edges will simply be ignored). It is worth mentioning that the nibbling approach is typically applied in a deterministic setting where a "nicely behaved" (hyper)graph is given. Then, by sampling "not too many" edges, one can easily show that most of them form a matching. Therefore, most of the work is focused on showing that the remaining set of edges is still "nicely behaved". In our setting, as we expose the hypergraph in an online fashion, we obtain this for free.

A crucial point during this phase is that, as we show, due to symmetry, each $M_i$ is actually a matching chosen uniformly at random. Letting $U_i := V(H)\setminus V(M_i)$, we obtain $N$ sets $(U_i)_{i=1}^{N}$, each of which is a random subset chosen according to a uniform distribution. This fact will be useful in Phase 2.

The main problem in Phase 1 is to show that no edge has accumulated "too much" weight. Namely, letting $p_1,\ldots,p_s$ denote all the edge-probabilities used during the algorithm to "expose" a particular $k$-tuple $e$, we wish to show that $1-\prod_{i=1}^{s}(1-p_i) \leq (1-\varepsilon/3)p$. Assuming this, we obtain a natural coupling between the hypergraph generated in this phase and a subhypergraph of $H^k_{n\times k,(1-\varepsilon/3)p}$.

Phase 2.
In this phase our goal is to complete the matchings into perfect matchings in an edge-disjoint way. To this end, for each $1 \leq i \leq N$, we expose all the $k$-tuples within $U_i$ with probability $q = \log^2 n/|U_i|^{k-1}$. Then, the main result of [9] ensures the existence of a perfect matching inside $U_i$ whp (for all $i$ simultaneously). Note that as $|U_i|$ is going to be relatively small (some natural restrictions apply during the proof), one cannot hope to get the "correct" edge-probability from our proof, and there will always be a loss of a few logs. Now, adding such a matching to $M_i$ yields a perfect matching of $H$. It thus remains to show that the matchings are disjoint (Claim 5.3) and that none of the newly added edges has accumulated more than a weight of (say) $\varepsilon p/3$. Assuming this, there is a natural coupling between the "new" hypergraph and a subhypergraph of $H^k_{n\times k,\varepsilon p/3}$, and therefore the union of the two hypergraphs generated in the two phases has the same distribution as a subgraph of $H^k_{n\times k,p}$. This will complete the proof.
3 Deriving Theorem 1.2 from Theorem 1.4

Proof. Let $t := \log^{1.1} n$, and take $t$ partitions $[kn] := V_1^{(i)}\cup\ldots\cup V_k^{(i)}$, $1 \le i \le t$, with parts of size precisely $n$, independently and uniformly at random. For each $k$-tuple $e \in \binom{[kn]}{k}$, let us define the set of relevant partitions for $e$ as
$$R_e := \left\{i \le t : e\cap V_j^{(i)} \ne \emptyset \text{ for all } j \le k\right\}.$$
Note that $\Pr[i \in R_e] = (1+o(1))k!/k^k = \Theta(1)$, and that for $i \ne j$, the events "$i \in R_e$" and "$j \in R_e$" are independent. Therefore, by Chernoff's bounds one obtains that with probability $1-e^{-\Theta(t)}$,
$$|R_e| = (1+o(1))(k!/k^k)t =: r$$
holds for every $e \in \binom{[kn]}{k}$ (by a union bound over the at most $(kn)^k$ tuples).

Now, expose all the $k$-tuples with probability $p$, independently at random, and for each exposed tuple $e \in E(H)$, let $f(e) \in R_e$ be a uniformly random element. For each $i \le t$, let $H_i$ be the $k$-partite hypergraph with parts $V_1^{(i)},\ldots,V_k^{(i)}$ obtained by taking all the edges $E_i := \{e \in E(H) : f(e) = i\}$, and note that $H_i = H^k_{n\times k,(1-o(1))p/r}$ (although for $i \ne j$, $H_i$ and $H_j$ are not independent!) and that for $i \ne j$, $E(H_i)\cap E(H_j) = \emptyset$.

Fixing an $i$, by Theorem 1.4 it follows that whp $H_i$ contains $m := (1-o(1))n^{k-1}p/r$ edge-disjoint perfect matchings. Therefore, by applying Markov's inequality, we obtain that for all but $o(t)$ many indices $1 \le i \le t$, $H_i$ contains $m$ edge-disjoint perfect matchings. All in all, we obtain that whp $H$ contains at least
$$(t-o(t))m = (1-o(1))\binom{kn-1}{k-1}p$$
edge-disjoint perfect matchings, as required. This completes the proof.
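For illustration, here is a compact simulation of this splitting step (our sketch, with toy parameter values): it draws $t$ uniformly random equipartitions, exposes every $k$-tuple with probability $p$, and routes each exposed edge to a uniformly random relevant partition, producing edge-disjoint $k$-partite colour classes.

```python
import itertools
import random

def split_into_kpartite(kn, k, p, t):
    """Sample t uniform partitions of [kn] into k parts of size kn/k, expose
    each k-tuple with probability p, and assign every exposed edge to a
    uniformly random partition in which it is relevant (one vertex per part)."""
    n = kn // k
    partitions = []
    for _ in range(t):
        verts = list(range(kn))
        random.shuffle(verts)
        partitions.append({v: pos // n for pos, v in enumerate(verts)})
    classes = [set() for _ in range(t)]
    for e in itertools.combinations(range(kn), k):
        if random.random() < p:
            relevant = [i for i, part_of in enumerate(partitions)
                        if len({part_of[v] for v in e}) == k]
            if relevant:  # whp |relevant| = (1 + o(1)) (k!/k^k) t =: r
                classes[random.choice(relevant)].add(e)
    return classes

print([len(c) for c in split_into_kpartite(kn=12, k=3, p=0.1, t=8)])
```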
4 Tools and auxiliary lemmas

In what follows, we present some tools that will be useful in our proofs. A key ingredient in our proof is the following $k$-partite version of the main result in [9], which is obtained by a straightforward modification of its proof (a full proof can be found in [7]).

Theorem 4.1.
Let $k$ be a positive integer and let $p = \omega\left(\frac{\log n}{n^{k-1}}\right)$. Then, with probability at least $1-n^{-\omega(1)}$, a hypergraph $H^k_{n\times k,p}$ contains a perfect matching.
Theorem 4.2.
Let $X_1,\ldots,X_t$ be independent random variables with $a_k \le X_k \le b_k$ for each $k$, for suitable constants $a_k$ and $b_k$. Let $S_t := \sum_k X_k$ and let $\mu := \mathbb{E}[S_t]$. Then, for each $\lambda \ge 0$,
$$\Pr\left[|S_t-\mu| \ge \lambda\right] \le 2e^{-2\lambda^2/\sum_k (b_k-a_k)^2}.$$
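As a quick sanity check (our toy experiment, not part of the paper), one can compare the bound of Theorem 4.2 with the empirical deviation probability for a sum of independent Bernoulli variables, where $(a_k,b_k) = (0,1)$:

```python
import math
import random

# Sum of t independent Bernoulli(q) variables: compare the empirical tail
# Pr[|S_t - mu| >= lam] with the bound 2 * exp(-2 * lam^2 / sum (b_k - a_k)^2).
t, q, lam, trials = 400, 0.3, 25.0, 20000
mu = t * q
bad = sum(abs(sum(random.random() < q for _ in range(t)) - mu) >= lam
          for _ in range(trials))
print(f"empirical: {bad / trials:.4f}  vs  bound: {2 * math.exp(-2 * lam**2 / t):.4f}")
```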
We also use the following version of Talagrand's inequality [13] (we remark that, in fact, stronger versions exist, see e.g. [12], with weaker assumptions on the constants in the bounds below, but the following version suffices for our needs).

Theorem 4.3.
Let $X$ be a non-negative random variable, not identically $0$, which is determined by $n$ independent trials $T_1,\ldots,T_n$, and satisfying the following for some $c,r > 0$: $(i)$ changing the outcome of any one trial can affect $X$ by at most $c$, and $(ii)$ for any $s$, if $X \ge s$ then there is a set of at most $rs$ trials whose outcomes certify that $X \ge s$. Then for any $0 \le t \le \mathbb{E}(X)$,
$$\Pr\left[|X-\mathbb{E}(X)| > t + 60c\sqrt{r\mathbb{E}(X)}\right] \le 4\exp\left(-\frac{t^2}{8c^2 r\mathbb{E}(X)}\right).$$

5 Proof of Theorem 1.4

In this section we prove the following, seemingly weaker, statement. Then, we show how to derive Theorem 1.4 from it by a simple application of Markov's inequality.
Theorem 5.1.
Let $k \ge 3$ be a positive integer and let $\frac{\log^C n}{n^{k-1}} \le p \le \frac{2\log^C n}{n^{k-1}}$, where $C = C(k)$ is a sufficiently large integer. Then, whp a hypergraph $H^k_{n\times k,p}$ contains $(1-o(1))n^{k-1}p$ edge-disjoint perfect matchings.

Proof of Theorem 5.1. Let $p$ be as in the assumption of the theorem. Let $\varepsilon > 0$, let $\delta > 0$ be a sufficiently small constant, and set $\beta := 10k\delta^2$ and $\alpha := 1/\log^3 n$, and let $\ell$ be an integer such that $(1-\delta+\beta)^{\ell} = \alpha$ (we omit floor and ceiling signs, as all of our estimates are asymptotic and this will not harm our calculations).

We describe a randomized algorithm for generating a subhypergraph $H'$ of $H^k_{n\times k,p}$ which consists of $N := (1-\varepsilon)n^{k-1}p$ edge-disjoint perfect matchings $M_0,\ldots,M_{N-1}$. Moreover, we show that the algorithm succeeds with a sufficiently high probability, as required in the statement. As described in the outline (Section 2), our algorithm is divided into the following two main phases.

Phase 1.
Building $N$ edge-disjoint matchings, each of size $(1-\alpha)n$.

Phase 2.
Completing each of the matchings into a perfect matching, keeping all of them edge-disjoint.
Phase 1 is in fact the heart of the proof and contains all the ideas that we need. We divide Phase 1 into $N$ rounds, where in each round $i$ we find a matching $M_i$ which does not use edges from $\bigcup_{j<i} M_j$. As described in Section 2, round $i$ is divided into $\ell$ steps, and for every $i$ and $j$ we refer to the $j$-th step of the $i$-th round as time step $ij$. In each time step $ij$ we form a matching $M_{ij}$ by adding a small "bite" to the previous matching $M_{i(j-1)}$, until a matching $M_i := M_{i\ell}$ of size $(1-\alpha)n$ is obtained. Initially we set $M_{i0} := \emptyset$ for every $0 \le i < N$. In order to build the $M_{ij}$'s we expose "relevant" edges with a carefully chosen probability $q_{ij}$ (to be determined throughout the algorithm).

Before giving a formal description of the algorithm we introduce some useful notation. An edge $e \in V_1\times\ldots\times V_k$ is called relevant at time step $ij$ if $e\cap V(M_{i(j-1)}) = \emptyset$; that is, if none of its vertices is incident with an edge of $M_{i(j-1)}$ (note that we completely ignore the fact that a few of those edges may belong to other matchings). In each time step $ij$, for every $1 \le m \le k$, let $U_{ijm} := V_m\setminus V(M_{i(j-1)})$ be the subset of uncovered vertices of $V_m$, and observe that all these sets have the exact same size $n_j := n-|M_{i(j-1)}|$. Let $R_{ij}$ denote the set of all relevant edges at time step $ij$, and note that $R_{ij}$ corresponds to a complete $k$-partite hypergraph with $U_{ij1},\ldots,U_{ijk}$ as its parts.

The algorithm.
For $i = 0,\ldots,N-1$ and $j = 0,\ldots,\ell-1$: color each edge of $R_{ij}$ with color $ij$, independently, with probability
$$q_{ij} := \delta n_j^{-(k-1)}$$
(note that an edge can be assigned more than one color). Among all the edges colored $ij$, choose a matching $M$ of size exactly $(\delta-\beta)n_j$, uniformly at random, and update $M_{ij} := M_{i(j-1)}\cup M$. If such a matching does not exist, then the algorithm reports an error and terminates.
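For concreteness, the following compact simulation of one round (our sketch; the parameters are toy values, and the uniform relabeling from Claim 5.4 below is omitted) colors relevant tuples with probability $q_{ij} = \delta n_j^{-(k-1)}$, keeps a greedy sub-matching of the colored tuples as the "bite", and reports an error exactly when the bite is smaller than $(\delta-\beta)n_j$. Drawing a $\mathrm{Bin}(n_j^k, q_{ij})$ number of uniform tuples is a shortcut for the tuple-by-tuple exposure (faithful up to repeated draws).

```python
import numpy as np

def one_round(n, k, delta, beta, ell, rng=np.random.default_rng(0)):
    """One round of Phase 1: grow a matching in ell nibble steps."""
    uncovered = [list(range(n)) for _ in range(k)]   # U_{ij1}, ..., U_{ijk}
    matching = []
    for _ in range(ell):
        n_j = len(uncovered[0])
        q = delta / n_j ** (k - 1)
        # Colour each of the n_j^k relevant tuples independently with prob q;
        # shortcut: draw a Binomial(n_j^k, q) number of uniform tuples.
        colored = {tuple(rng.choice(part) for part in uncovered)
                   for _ in range(rng.binomial(n_j ** k, q))}
        bite, used = [], [set() for _ in range(k)]
        for e in colored:                 # greedy sub-matching of the bite
            if all(v not in used[m] for m, v in enumerate(e)):
                bite.append(e)
                for m, v in enumerate(e):
                    used[m].add(v)
        target = int((delta - beta) * n_j)
        if len(bite) < target:
            raise RuntimeError("bite too small: report an error and terminate")
        for e in bite[:target]:
            matching.append(e)
            for part, v in zip(uncovered, e):
                part.remove(v)
    return matching

M = one_round(n=400, k=3, delta=0.1, beta=0.05, ell=3)
print(f"matching of size {len(M)} out of 400")
```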
Before we analyze the algorithm, let us make the following easy observation.

Observation 5.2. Throughout the algorithm, assuming that it does not terminate, for every $0 \le j \le \ell-1$ we have $n_j = (1-\delta+\beta)^j n$.

Proof. Since for every $j \le \ell-1$, in time step $ij$ we enlarge $M_{i(j-1)}$ by $(\delta-\beta)n_j$ edges, it follows that $n_{j+1} = n_j-(\delta-\beta)n_j = (1-\delta+\beta)n_j$. The observation now follows by a simple induction.
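Since the proof of Claim 5.3 below uses the bound $\ell = O(\log\log n)$, let us spell out the short computation behind it (our addition, combining the definition of $\ell$ with the choice $\alpha = 1/\log^3 n$):
$$(1-\delta+\beta)^{\ell} = \alpha \quad\Longrightarrow\quad \ell = \frac{\ln(1/\alpha)}{\ln\left(1/(1-\delta+\beta)\right)} = \frac{3\ln\log n}{\ln\left(1/(1-\delta+\beta)\right)} = O_{\delta}(\log\log n).$$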
First, we show that if the algorithm does not terminate, then whp all the obtained matchings are edge-disjoint.
Claim 5.3. All the $M_i$'s are edge-disjoint whp.

Proof. Recall that for each $i$, $M_i$ is formed by edges which are colored $ij$ for some $j \le \ell$, and that in each time step $ij$ an edge is colored $ij$ with probability $q_{ij} \le p$. Moreover, note that if at the end of the process each edge has been assigned at most one color, then clearly the matchings are disjoint. Therefore, it is enough to show that the probability for the existence of an edge $e \in V_1\times\ldots\times V_k$ which is assigned at least two colors is $o(1)$.

To this end, observe that since $(1-\delta+\beta)^{\ell} = \alpha$, and since $\delta$ and $\beta$ are constants and $\alpha = 1/\log^3 n$, it follows that $\ell = O(\log\log n)$. Moreover, there are $T := N\ell = O(n^{k-1}p\log\log n) = \operatorname{polylog}(n)$ many time steps (recall that we assume an upper bound on $p$), where in each time step $ij$ we color edges with probability $q_{ij} \le p$ (Observation 5.2), and the time steps are independent. Therefore, the probability that there exists an edge which is colored at least twice is at most
$$n^k T^2 p^2 = n^k\cdot\frac{\operatorname{polylog}(n)}{n^{2(k-1)}} = \frac{\operatorname{polylog}(n)}{n^{k-2}}.$$
Since $k \ge 3$, the result follows.

Note that this is the only place where we use the fact that $k \ge 3$. For $k = 2$ there are a few overlaps between the matchings, and this requires a bit more careful treatment. For an example illustrating how to deal with it, the reader is referred to [5].

Second, we show that whp the algorithm described above does not terminate, and that the sets $U_{ijm}$ as defined in the algorithm enjoy a uniform distribution.
Claim 5.4. For every $0 \le i \le N-1$ and $j < \ell$, at time step $ij$, with probability $1-n^{-\omega(1)}$ the color class $ij$ contains a matching of size at least $(\delta-\beta)n_j$. Moreover, by picking such a matching $M$ uniformly at random, for every $1 \le m \le k$ we have that $V(M)\cap U_{ijm}$ is a subset of $U_{ijm}$ of size $(\delta-\beta)n_j$, chosen according to a uniform distribution.

Proof. In order to prove the first part of Claim 5.4 we make use of Theorem 4.3. Note that in each time step $ij$, the color class $ij$ (that is, the set of all edges which have been colored $ij$ during the algorithm) is distributed as $H^k_{n_j\times k,q_{ij}}$, with $q_{ij} = \delta n_j^{-(k-1)}$ and $n_j \ge \alpha n$. Therefore, it is enough to show that the probability for $H' = H^k_{m\times k,\delta/m^{k-1}}$ not to have a matching of size $(\delta-\beta)m$ is $m^{-\omega(1)}$, for every $m \ge \alpha n$.

Let $X$ be the random variable corresponding to the size of a maximal matching in $H'$, and let $T_e$, $e \in R_{ij}$, be independent indicator random variables for the events "$e \in E(H')$". Note that $X$ is determined by the $T_e$'s and that it trivially satisfies $(i)$ and $(ii)$ of Theorem 4.3 with respect to $c = r = 1$. Therefore, we have
$$\Pr\left[|X-\mathbb{E}(X)| > t + 60c\sqrt{r\mathbb{E}(X)}\right] \le 4\exp\left(-\frac{t^2}{8c^2 r\mathbb{E}(X)}\right). \tag{1}$$
Now, for each $e \in R_{ij}$ we say that $e$ is an isolated edge in $H'$ if $e \in E(H')$ and all the vertices of $e$ have degree exactly $1$ in $H'$. We have
$$\Pr[e \text{ is isolated}] = \frac{\delta}{m^{k-1}}\left(1-\frac{\delta}{m^{k-1}}\right)^{m^k-(m-1)^k} \ge \frac{\delta}{m^{k-1}}\left(1-\frac{\delta}{m^{k-1}}\left(m^k-(m-1)^k\right)\right) = \frac{\delta}{m^{k-1}}\left(1-\frac{\delta}{m^{k-1}}\left(km^{k-1}+O(m^{k-2})\right)\right) \ge (\delta-\beta/2)m^{-(k-1)},$$
where here we made use of the facts that $(1-x)^n \ge 1-nx$ for all $x > 0$ and that $m^k-(m-1)^k = km^{k-1}+O(m^{k-2})$. Since the number of isolated edges is clearly a lower bound for $X$, all in all we obtain that
$$\mathbb{E}(X) \ge (\delta-\beta/2)m.$$
The result now easily follows by plugging this estimate into (1) with (say) $t = \beta\mathbb{E}(X)/10$, using the fact that $m \ge \alpha n = n/\log^3 n$.

For the second part of the claim, note that one can relabel the vertices of each $U_{ijm}$ according to a permutation $\pi_m : U_{ijm}\to U_{ijm}$, chosen uniformly and independently at random. Then, after picking the desired matching $M$, one can assign each vertex $v$ its "original" label by applying $\pi_m^{-1}(v)$. Clearly, this procedure gives us a uniformly chosen subset of $U_{ijm}$, for every $m$, as desired.
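The relabeling trick is worth seeing in isolation; the following two-step sketch (ours, with a deliberately deterministic `pick_rule`) relabels a set by a uniform permutation, lets an arbitrary rule choose labels, and pulls the choice back, yielding a uniformly distributed subset.

```python
import random

def uniform_matched_set(U, pick_rule, size):
    """Relabel U by a uniform permutation pi, let an arbitrary rule choose
    `size` new labels, and pull back via pi^{-1}: a uniform subset of U."""
    U = list(U)
    new_labels = U[:]
    random.shuffle(new_labels)
    pi = dict(zip(U, new_labels))          # pi[u] = new label of u
    chosen = set(pick_rule(sorted(new_labels), size))  # rule never sees pi
    return {u for u in U if pi[u] in chosen}

# Even the deterministic rule "take the smallest labels" gives a uniform subset:
print(uniform_matched_set(range(10), lambda labels, s: labels[:s], size=3))
```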
For every $i$, let us denote by $U_i := (V_1\cup\ldots\cup V_k)\setminus V(M_i)$ the set of all vertices which are uncovered by the matching $M_i$. Observe that $U_i = S_1\cup\ldots\cup S_k$ for some $S_j \subseteq V_j$, each of which is of size exactly $\alpha n$. The following claim follows almost immediately from Claim 5.4 and will serve us in Phase 2.

Claim 5.5. At the end of the algorithm, whp we have that for every $e \in V_1\times\ldots\times V_k$, the number of indices $i$ for which $e \subseteq U_i$ is at most $2\alpha^k N$.

Proof. Let $e \in V_1\times\ldots\times V_k$, and write $U_i = S_1\cup\ldots\cup S_k$ as described above. By Claim 5.4, we conclude that each $S_m$, $1 \le m \le k$, is a subset of $V_m$ of size precisely $\alpha n$, chosen uniformly at random and independently for distinct $m$. Therefore, the probability that $e \subseteq U_i$ is $\alpha^k$, and the expected number of indices $i$ for which $e \subseteq U_i$ is $\alpha^k N$. Since the rounds run independently, by Chernoff's bounds we obtain that the probability for $e$ to be contained in more than $2\alpha^k N$ such $U_i$'s is at most
$$e^{-\Theta(\alpha^k N)} = e^{-\Theta(\alpha^k n^{k-1}p)} = o(n^{-k}).$$
Note that here we make use of the fact that $p \ge \log^C n/n^{k-1}$, where $C$ depends on $k$. Therefore, by taking the union bound over all possible $e \in V_1\times\ldots\times V_k$, we obtain the desired result.

To conclude, let $H'$ be the hypergraph consisting of all the edges which have received any color during the algorithm. We show that indeed $H'$ can be coupled as a subhypergraph of $H = H^k_{n\times k,(1-\varepsilon/3)p}$. To this end, it will be convenient to introduce some notation. For every $e \in V_1\times\ldots\times V_k$, let us define $R(e) := \{ij : e \in R_{ij}\}$ (note that $R(e)$ is a random variable, and that for every $i,j$, at the beginning of time step $ij$ it is already known whether $ij \in R(e)$ or not). Observe that for each $ij \in R(e)$, at time step $ij$ we try to assign $e$ the color $ij$ with probability $q_{ij}$, independently at random. Let $\gamma > 0$ be a sufficiently small constant (depending on $\varepsilon$). Since the probability of $e$ not being colored with any color, conditioned on $R(e)$, is $1-q_e := \prod_{ij \in R(e)}(1-q_{ij})$, it follows that
$$\Pr[e \in E(H')] = q_e \le (1+\gamma)\sum_{ij \in R(e)} q_{ij}$$
(here we use the fact that $(N\ell)p = o(1)$, as $q_{ij} \le p \le 2\log^C n/n^{k-1}$). Therefore, in order to show that one can couple $H' \subseteq H$, all we need to show is that, following our algorithm, whp we have $q_e \le (1-\varepsilon/3)p$ for every $e$. This is done in the following (quite) technical claim.
Claim 5.6. With probability $1-n^{-\omega(1)}$ we have that $q_e \le (1-\varepsilon/3)p$ for every $e \in V_1\times\ldots\times V_k$.

Proof. For every $0 \le i \le N-1$ and $e \in V_1\times\ldots\times V_k$ consider the random variable
$$\omega_i(e) = \sum_{j : ij \in R(e)} q_{ij},$$
and observe that $q_e \le (1+\gamma)\sum_i\omega_i(e)$. Moreover, by the description of the algorithm it follows that $q_{ij} \le \delta/(\alpha n)^{k-1} = \delta\log^{3(k-1)} n/n^{k-1}$ for every $i$ and $j$, and therefore (deterministically) we have
$$\omega_i(e) \le \ell\delta\log^{3(k-1)} n/n^{k-1} \le \log^{3k-2} n/n^{k-1}.$$
In order to complete the proof we need to show two things. First, we show that $\mathbb{E}(\omega_i(e)) \le (1+\gamma)n^{-(k-1)}$ (and therefore we obtain $\mathbb{E}(q_e) \le (1+\gamma)^2 n^{-(k-1)}N \le (1-2\varepsilon/3)p$). Then, using standard concentration bounds, we show that with probability $1-n^{-\omega(1)}$ we have $q_e \le (1-\varepsilon/3)p$.

Estimating $\mathbb{E}(\omega_i(e))$. Note that $ij \in R(e)$ if and only if $e$ is relevant at time step $ij$, and by Claim 5.4 we observe that at each time step $ij$ of the algorithm, any vertex $v \in U_{ijm}$ is "matched" with probability $\delta-\beta$, where vertices from different $U_{ijm}$'s are matched independently. Therefore, at each time step, the probability for a relevant $e$ to stay relevant is $(1-\delta+\beta)^k$, and the probability of not staying relevant is $1-(1-\delta+\beta)^k$, which is roughly $k(\delta-\beta)$ (recall that $\delta$ is sufficiently small).
Now, for each $j \le \ell-1$, let us denote by $A_j$ the event "$j$ is the maximal index for which $e$ is relevant at time step $ij$", and observe that
$$\mathbb{E}(\omega_i(e)) = \sum_{j=0}^{\ell-1}\Pr[A_j]\sum_{s=0}^{j}q_{is} = \sum_{s=0}^{\ell-1}q_{is}\sum_{j=s}^{\ell-1}\Pr[A_j]. \tag{2}$$
Note that $\sum_{j=s}^{\ell-1}\Pr[A_j]$ is the probability for the edge $e$ to be relevant for at least $s$ steps, and is therefore equal to $(1-\delta+\beta)^{ks}$. Combining this with (2), we get that
$$\mathbb{E}(\omega_i(e)) = \sum_{s=0}^{\ell-1}q_{is}(1-\delta+\beta)^{ks}.$$
Recalling that $q_{is} = \delta/n_s^{k-1}$ and that $n_s = (1-\delta+\beta)^s n$ (Observation 5.2), we obtain that
$$\mathbb{E}(\omega_i(e)) = \sum_{s=0}^{\ell-1}\frac{\delta(1-\delta+\beta)^{sk}}{n^{k-1}(1-\delta+\beta)^{s(k-1)}} = \frac{\delta}{n^{k-1}}\sum_{s=0}^{\ell-1}(1-\delta+\beta)^s = \frac{\delta}{n^{k-1}}\cdot\frac{1-(1-\delta+\beta)^{\ell}}{\delta-\beta}.$$
(The second equality is just the sum of a geometric series.) Now, since $\delta$ is a sufficiently small constant and $\beta/\delta$ tends to zero with $\delta$, and by the way we chose $\ell$, we obtain that the right-hand side of the above equality is at most $(1+\gamma)n^{-(k-1)}$. Therefore, we obtain that
$$\mathbb{E}(q_e) \le (1+\gamma)\sum_i\mathbb{E}(\omega_i(e)) \le (1+\gamma)^2 n^{-(k-1)}N \le (1-2\varepsilon/3)p.$$

Showing that $q_e \le (1-\varepsilon/3)p$ with sufficient probability. Consider the random variables $\omega_i(e)$, $0 \le i \le N-1$, and recall that they are independent and satisfy $0 \le \omega_i(e) \le \log^{3k-2} n/n^{k-1}$ for every $i$. Now, let $\omega^* := \sum_{i=0}^{N-1}\omega_i(e)$ and recall that $(1+\gamma)\mathbb{E}(\omega^*) \le (1-2\varepsilon/3)p$. By applying Theorem 4.2 to $\omega^*$ we obtain
$$\Pr\left[|\omega^*-\mathbb{E}(\omega^*)| \ge \varepsilon p/4\right] \le 2\exp\left(-\frac{\varepsilon^2 p^2/8}{N\left(\log^{3k-2} n/n^{k-1}\right)^2}\right) \le 2\exp\left(-\frac{\varepsilon^2 p n^{k-1}}{8(1-\varepsilon)\log^{2(3k-2)} n}\right) = n^{-\omega(1)},$$
and on this event $q_e \le (1+\gamma)\left(\mathbb{E}(\omega^*)+\varepsilon p/4\right) \le (1-\varepsilon/3)p$, provided $\gamma$ is small enough with respect to $\varepsilon$. Taking the union bound over all possible $e \in V_1\times\ldots\times V_k$, we obtain the desired result. This completes the proof of the claim.

Phase 2.

In this phase we want to show that one can complete each of the $M_i$'s from Phase 1 into a perfect matching, in an edge-disjoint way. To this end, let $U_i = S_1\cup\ldots\cup S_k$, $0 \le i \le N-1$, be the sets of vertices uncovered by the $M_i$'s, and let $q := \log^2 n/(\alpha n)^{k-1}$. Observe that each part $S_m$ is of size $\alpha n = n/\log^3 n$, and that $q = \omega\left(\log(\alpha n)/(\alpha n)^{k-1}\right)$. For every $i$, let us expose all the $k$-tuples $e \in S_1\times\ldots\times S_k$ with probability $q$, independently at random, and denote the resulting hypergraph by $H_i$. Clearly, $H_i = H^k_{\alpha n\times k,q}$. Now, by applying Theorem 4.1 to $H_i$ and taking the union bound over all $i$, it follows that whp $H_i$ contains a perfect matching $Q_i$ for every $i$. Let $M_i := M_i\cup Q_i$, and observe that each of the $M_i$'s is a perfect matching of $H^k_{n\times k,p}$.

In order to complete the proof, we need to show that:

1. all the $M_i$'s are edge-disjoint, and
2. no edge has accumulated a weight of more than $\varepsilon p/3$ in this phase.

Item 1 follows in a similar way as in Claim 5.3. For item 2, note that by Claim 5.5, no edge belongs to more than $2\alpha^k N$ of the $U_i$'s. Therefore, since we expose the edges within each $U_i$ with probability $q$, every edge accumulates a weight of at most
$$2\alpha^k Nq \le 2\alpha^k n^{k-1}p\cdot\frac{\log^2 n}{(\alpha n)^{k-1}} = 2\alpha p\log^2 n = O(p/\log n) = o(p),$$
as desired. This completes the proof of Theorem 5.1.

It remains to show how to derive Theorem 1.4 from Theorem 5.1. Let $p \ge \log^C n/n^{k-1}$, and let $r \in \mathbb{N}$ be an integer for which
$$\frac{\log^C n}{n^{k-1}} \le p/r \le \frac{2\log^C n}{n^{k-1}}.$$
Now, expose the edges of $H^k_{n\times k,p}$, and for each exposed edge, immediately assign a color from $[r]$, independently, uniformly at random. Observe that for each color class $i \in [r]$, the corresponding hypergraph $H_i$ is distributed as $H^k_{n\times k,p/r}$, and that for every $i \ne j$, $E(H_i)\cap E(H_j) = \emptyset$. Therefore, by applying Theorem 5.1 to $H_i$, with probability $1-o(1)$ the hypergraph $H_i$ contains $(1-o(1))n^{k-1}p/r$ edge-disjoint perfect matchings.
Using Markov's inequality, we obtain that whp the above holds for $r-o(r)$ of the hypergraphs $H_i$; therefore, $H = \cup_i H_i$ contains $(1-o(1))r\cdot n^{k-1}p/r = (1-o(1))n^{k-1}p$ edge-disjoint perfect matchings, as desired.

Acknowledgment.
We would like to thank the referees for many valuable comments.
References

[1] M. Ajtai, J. Komlós and E. Szemerédi, A dense infinite Sidon sequence, European Journal of Combinatorics 2 (1981), 1–11.
[2] B. Bollobás, The evolution of random graphs, Transactions of the American Mathematical Society 286(1) (1984), 257–274.
[3] B. Bollobás, Random Graphs, Springer, 1998.
[4] P. Erdős and A. Rényi, On the evolution of random graphs, Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17–61.
[5] A. Ferber and W. Samotij, Packing trees of unbounded degrees in random graphs, arXiv preprint.
[6] A. Frieze and M. Krivelevich, Packing Hamilton cycles in random and pseudo-random hypergraphs, Random Structures and Algorithms 41(1) (2012), 1–22.
[7] S. Gerke and A. McDowell, Nonvertex-balanced factors in random graphs, Journal of Graph Theory 78(4) (2015), 269–286.
[8] S. Janson, T. Łuczak and A. Ruciński, Random Graphs, John Wiley & Sons, 2000.
[9] A. Johansson, J. Kahn and V. Vu, Factors in random graphs, Random Structures and Algorithms 33(1) (2008), 1–28.
[10] P. Keevash, The existence of designs, arXiv preprint arXiv:1401.3665 (2014).
[11] C. McDiarmid, Concentration, in: Probabilistic Methods for Algorithmic Discrete Mathematics, Springer, Berlin, Heidelberg, 1998, 195–248.
[12] C. McDiarmid and B. Reed, Concentration for self-bounding functions and an inequality of Talagrand, Random Structures and Algorithms 29(4) (2006), 549–557.
[13] M. Molloy and B. Reed, Graph Colouring and the Probabilistic Method, Springer, Berlin, 2002.
[14] L. Pósa, Hamiltonian circuits in random graphs, Discrete Mathematics 14(4) (1976), 359–364.
[15] V. Rödl, On a packing and covering problem, European Journal of Combinatorics 6(1) (1985), 69–78.
[16] W. Samotij, Stability results for random discrete structures, Random Structures and Algorithms 44(3) (2014), 269–289.
[17] D. B. West, Introduction to Graph Theory, Prentice Hall.