Thresholds versus fractional expectation-thresholds
Keith Frankston, Jeff Kahn, Bhargav Narayanan, Jinyoung Park
ABSTRACT. Proving a conjecture of Talagrand, a fractional version of the “expectation-threshold” conjecture of Kalai and the second author, we show that for any increasing family $\mathcal F$ on a finite set $X$, we have $p_c(\mathcal F) = O(q_f(\mathcal F)\log\ell(\mathcal F))$, where $p_c(\mathcal F)$ and $q_f(\mathcal F)$ are the threshold and “fractional expectation-threshold” of $\mathcal F$, and $\ell(\mathcal F)$ is the maximum size of a minimal member of $\mathcal F$. This easily implies several heretofore difficult results and conjectures in probabilistic combinatorics, including thresholds for perfect hypergraph matchings (Johansson–Kahn–Vu), bounded degree spanning trees (Montgomery), and bounded degree graphs (new). We also resolve (and vastly extend) the “axial” version of the random multi-dimensional assignment problem (earlier considered by Martin–Mézard–Rivoire and Frieze–Sorkin). Our approach builds on a recent breakthrough of Alweiss, Lovett, Wu and Zhang on the Erdős–Rado “Sunflower Conjecture.”
1. Introduction
Our most important contribution here is the proof of a conjecture of Talagrand [28] that is a fractional version of the “expectation-threshold” conjecture of Kalai and the second author [17]. For an increasing family $\mathcal F$ on a finite set $X$, we write (with definitions below) $p_c(\mathcal F)$, $q_f(\mathcal F)$ and $\ell(\mathcal F)$ for the threshold, fractional expectation-threshold, and size of a largest minimal element of $\mathcal F$. In this language, our main result is the following.

Theorem 1.1. There is a universal $K$ such that for every finite $X$ and increasing $\mathcal F \subseteq 2^X$,
$$p_c(\mathcal F) \le K q_f(\mathcal F) \log \ell(\mathcal F).$$

As observed below, $q_f(\mathcal F)$ is a more or less trivial lower bound on $p_c(\mathcal F)$, and Theorem 1.1 says this bound is never far from the truth. (Apart from the constant $K$, the upper bound is tight in many of the most interesting cases.)

Thresholds have been a—maybe the—central concern of the study of random discrete structures (random graphs and hypergraphs, for example) since its initiation by Erdős and Rényi [7], with much work around identifying thresholds for specific properties (see [4, 14]), though it was not observed until [3] that every increasing $\mathcal F$ admits a threshold (in the Erdős–Rényi sense; see below). See also [11] for developments, since [10], on the very interesting question of sharpness of thresholds.

Our second main result is Theorem 1.7 below, which was motivated by work of Frieze and Sorkin [12] on the “random multi-dimensional assignment problem.” The statement is postponed until we have filled in some background, to which we now turn. (See the beginning of Section 2 for notation not defined here.)

Thresholds.
For a given $X$ and $p \in [0,1]$, $\mu_p$ is the product measure on $2^X$ given by $\mu_p(S) = p^{|S|}(1-p)^{|X \setminus S|}$. An $\mathcal F \subseteq 2^X$ is increasing if $B \supseteq A \in \mathcal F \Rightarrow B \in \mathcal F$. If this is true (and $\mathcal F \ne 2^X, \emptyset$), then $\mu_p(\mathcal F)\ (:= \sum\{\mu_p(S) : S \in \mathcal F\})$ is strictly increasing in $p$, and the threshold, $p_c(\mathcal F)$, is the unique $p$ for which $\mu_p(\mathcal F) = 1/2$. This is finer than the original Erdős–Rényi notion, according to which $p^* = p^*(n)$ is a threshold for $\mathcal F = \mathcal F_n$ if $\mu_p(\mathcal F) \to 0$ when $p \ll p^*$ and $\mu_p(\mathcal F) \to 1$ when $p \gg p^*$. (That $p_c(\mathcal F)$ is always an Erdős–Rényi threshold follows from [3].)

Mathematics Subject Classification. Primary 05C80; Secondary 60C05, 82B26, 06E30.

Following [25, 26, 28], we say $\mathcal F$ is $p$-small if there is a $\mathcal G \subseteq 2^X$ such that
$$\mathcal F \subseteq \langle \mathcal G \rangle := \{T : \exists S \in \mathcal G,\ S \subseteq T\} \quad \text{and} \quad \sum_{S \in \mathcal G} p^{|S|} \le 1/2. \tag{1}$$
Then $q(\mathcal F) := \max\{p : \mathcal F \text{ is } p\text{-small}\}$, which we call the expectation-threshold of $\mathcal F$ (note the term is used slightly differently in [17]), is a trivial lower bound on $p_c(\mathcal F)$, since for $\mathcal G$ as above and $T$ drawn from $\mu_p$,
$$\mu_p(\mathcal F) \le \mu_p(\langle \mathcal G \rangle) \le \sum_{S \in \mathcal G} \mu_p(T \supseteq S) = \sum_{S \in \mathcal G} p^{|S|}\ \big(= \mathbb E\big[|\{S \in \mathcal G : S \subseteq T\}|\big]\big). \tag{2}$$
The following statement, the main conjecture (Conjecture 1) of [17], says that for any $\mathcal F$, this trivial lower bound on $p_c(\mathcal F)$ is close to the truth.

Conjecture 1.2.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal F \subseteq 2^X$, $p_c(\mathcal F) \le K q(\mathcal F) \log |X|$.

We should emphasize how strong this is (from [17]: “It would probably be more sensible to conjecture that it is not true”). For example, it easily implies—and was largely motivated by—Erdős–Rényi thresholds for (a) perfect matchings in random $r$-uniform hypergraphs, and (b) appearance of a given bounded degree spanning tree in a random graph. These have since been resolved: the first—Shamir’s Problem, circa 1980—in [15], and the second—a mid-90’s suggestion of the second author—in [23]. Both arguments are difficult and specific to the problems they address (e.g. they are utterly unrelated either to each other or to what we do here). See Section 7 for more on these and other consequences.

Talagrand [25, 28] suggests relaxing “$p$-small” by replacing the set system $\mathcal G$ above by what we may think of as a fractional set system, $g$: say $\mathcal F$ is weakly $p$-small if there is a $g : 2^X \to \mathbb R^+$ such that
$$\sum_{S \subseteq T} g(S) \ge 1 \quad \forall T \in \mathcal F \qquad \text{and} \qquad \sum_{S \subseteq X} g(S)\, p^{|S|} \le 1/2.$$
Then $q_f(\mathcal F) := \max\{p : \mathcal F \text{ is weakly } p\text{-small}\}$, the fractional expectation-threshold of $\mathcal F$, satisfies
$$q(\mathcal F) \le q_f(\mathcal F) \le p_c(\mathcal F) \tag{3}$$
(the first inequality is trivial and the second is similar to (2)), and Talagrand [28, Conjectures 8.3 and 8.5] proposes a sort of LP relaxation of Conjecture 1.2, and then a strengthening thereof. The first of these, the following, replaces $q$ by $q_f$ in Conjecture 1.2; the second, which adds replacement of $|X|$ by the smaller $\ell(\mathcal F)$, is our Theorem 1.1.

Conjecture 1.3.
There is a universal $K$ such that for every finite $X$ and increasing $\mathcal F \subseteq 2^X$, $p_c(\mathcal F) \le K q_f(\mathcal F) \log |X|$.

Talagrand further suggests the following “very nice problem of combinatorics,” which implies equivalence of Conjectures 1.2 and 1.3, as well as of Theorem 1.1 and the corresponding strengthening of Conjecture 1.2.
Conjecture 1.4.
There is a universal $K$ such that, for any increasing $\mathcal F$ on a finite set $X$, $q(\mathcal F) \ge q_f(\mathcal F)/K$. (That is, weakly $p$-small implies $(p/K)$-small.)

Note the interest here is in Conjecture 1.4 for its own sake and as the most likely route to Conjecture 1.2; all applications of the latter that we’re aware of follow just as easily from Theorem 1.1.

Spread hypergraphs and spread measures.
In this paper a hypergraph on the (vertex) set $X$ is a collection $\mathcal H$ of subsets of $X$ (edges of $\mathcal H$), with repeats allowed. For $S \subseteq X$, we use $\langle S \rangle$ for $\{T \subseteq X : T \supseteq S\}$, and for a hypergraph $\mathcal H$ on $X$, we write $\langle \mathcal H \rangle$ for $\cup_{S \in \mathcal H} \langle S \rangle$. We say $\mathcal H$ is $\ell$-bounded (resp. $\ell$-uniform or an $\ell$-graph) if each of its members has size at most (resp. exactly) $\ell$, and $\kappa$-spread if
$$|\mathcal H \cap \langle S \rangle| \le \kappa^{-|S|} |\mathcal H| \quad \forall S \subseteq X. \tag{4}$$
(Note that edges are counted with multiplicities on both sides of (4).)

A major advantage of the fractional versions (Conjecture 1.3 and Theorem 1.1) over Conjecture 1.2—and the source of the present relevance of [2]—is that they admit, via linear programming duality, reformulations in which the specification of $q_f(\mathcal F)$ gives a usable starting point. Following [28], we say a probability measure $\nu$ on $2^X$ is $q$-spread if
$$\nu(\langle S \rangle) \le q^{|S|} \quad \forall S \subseteq X.$$
Thus a hypergraph $\mathcal H$ is $\kappa$-spread iff uniform measure on $\mathcal H$ is $q$-spread with $q = \kappa^{-1}$. As observed by Talagrand [28], the following is an easy consequence of duality.

Proposition 1.5.
For an increasing family $\mathcal F$ on $X$, if $q_f(\mathcal F) \le q$, then there is a $(2q)$-spread probability measure on $2^X$ supported on $\mathcal F$. $\Box$

This allows us to reduce Theorem 1.1 to the following alternate (actually, equivalent) statement. In this paper with high probability (w.h.p.) means with probability tending to 1 as $\ell \to \infty$.

Theorem 1.6. There is a universal $K$ such that for any $\ell$-bounded, $\kappa$-spread hypergraph $\mathcal H$ on $X$, a uniformly random $((K\kappa^{-1}\log\ell)|X|)$-element subset of $X$ belongs to $\langle \mathcal H \rangle$ w.h.p.

The easy reduction is given in Section 2.
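The spread condition (4) is easy to check by machine on small examples. The following minimal sketch (our illustration, not the paper’s machinery; the helper `spread` and the toy hypergraph are ours) computes the largest $\kappa$ for which a given hypergraph is $\kappa$-spread; since $|\mathcal H \cap \langle S\rangle| = 0$ for any $S$ not contained in an edge, only subsets of edges need to be examined.

```python
from itertools import combinations

def spread(H):
    """Largest kappa such that H (a list of frozenset edges) is kappa-spread:
    |H ∩ <S>| <= kappa^(-|S|) |H| for all nonempty S.  Only S contained in
    some edge matter (otherwise the left-hand side is 0)."""
    best = float("inf")
    seen = set()
    for e in H:
        for k in range(1, len(e) + 1):
            for S in combinations(sorted(e), k):
                if S in seen:
                    continue
                seen.add(S)
                deg = sum(1 for f in H if set(S) <= f)  # |H ∩ <S>|
                # deg <= kappa^(-k)|H|  <=>  kappa <= (|H|/deg)^(1/k)
                best = min(best, (len(H) / deg) ** (1.0 / k))
    return best

# Example: H = all 3-subsets of a 9-element set, which is spread with
# kappa roughly n/r; the binding constraint is at single vertices.
n, r = 9, 3
H = [frozenset(e) for e in combinations(range(n), r)]
print(spread(H))  # prints 3.0 (= |H|/deg(vertex) = 84/28)
```

Here the singleton constraint is tight, matching the heuristic that a “generic” $r$-uniform hypergraph on $n$ vertices is about $(n/r)$-spread.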
Assignments.
Our second main result provides upper bounds on the minima of a large class of hypergraph-based stochastic processes, somewhat in the spirit of [27] (see also [26, 29]), saying that in “smoother” settings, the logarithmic corrections of Conjecture 1.3 and Theorem 1.1 are not needed. For a hypergraph $\mathcal H$ on $X$, let $\xi_x$ ($x \in X$) be independent random variables, each uniform from $[0,1]$, and set
$$\xi_{\mathcal H} = \min_{S \in \mathcal H} \sum_{x \in S} \xi_x \tag{5}$$
and $Z_{\mathcal H} = \mathbb E[\xi_{\mathcal H}]$.

Theorem 1.7. There is a universal $K$ such that for any $\ell$-bounded, $\kappa$-spread hypergraph $\mathcal H$, we have $Z_{\mathcal H} \le K\ell/\kappa$, and $\xi_{\mathcal H} \le K\ell/\kappa$ w.h.p.

These bounds are often tight (again up to the value of $K$). The distribution of the $\xi_x$’s is not very important; e.g. it’s easy to see that the same statement holds if they are Exp(1) random variables, as in the next example.

Theorem 1.7 was motivated by work of Frieze and Sorkin [12] on the “axial” version of the random $d$-dimensional assignment problem. This asks (for fixed $d$ and large $n$) for estimation of
$$Z_{A_d(n)} = \mathbb E\Big[\min_S \sum_{x \in S} \xi_x\Big], \tag{6}$$
where the $\xi_x$’s ($x \in X := [n]^d$) are independent Exp(1) weights and $S$ ranges over “axial assignments,” meaning $S \subseteq X$ meets each axis-parallel hyperplane ($\{x \in X : x_i = a\}$ for some $i \in [d]$ and $a \in [n]$) exactly once. For $d = 2$ this is classical; see [12] for its rather glorious history. For $d = 3$ the deterministic version was one of Karp’s [18] original NP-complete problems. Progress on the random version has been limited; see [12] for a guide to the literature.

Frieze and Sorkin show (regarding bounds; they are also interested in algorithms) that for suitable $c_1 > 0$ and $c_2$,
$$c_1 n^{-(d-1)} < Z_{A_d(n)} < c_2 n^{-(d-1)} \log n. \tag{7}$$
(The lower bound is easy and the upper bound follows from the Shamir bound of [15].)

In present language, $Z_{A_d(n)}$ is essentially (that is, apart from the difference in the distributions of the $\xi_x$’s) $Z_{\mathcal H}$, with $\mathcal H$ the set of perfect matchings of the complete, balanced $d$-uniform $d$-partite hypergraph on $dn$ vertices (that is, the collection of $d$-sets meeting each of the pairwise disjoint $n$-sets $V_1, \ldots, V_d$). This is easily seen to be $\kappa$-spread with $\kappa = (n/e)^{d-1}$ (apart from the nearly irrelevant $d$-partiteness, it is the $\mathcal H$ of Shamir’s Problem), so the correct bound is an instance of Theorem 1.7:

Corollary 1.8. $Z_{A_d(n)} = \Theta(n^{-(d-1)})$.
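The quantity in (6) is easy to experiment with for $d = 2$, where an axial assignment is just a permutation. The sketch below (our illustration; `axial_min` is a hypothetical helper, and the brute force over $n!$ permutations is only viable for small $n$) evaluates the minimum exactly and gives a seeded Monte Carlo estimate of $Z_{A_2(n)}$; part of the “glorious history” mentioned above is that for $d = 2$ the expected minimum with Exp(1) weights is known to converge to $\pi^2/6$ as $n \to \infty$.

```python
import random
from itertools import permutations

def axial_min(cost):
    """Minimum of sum_{x in S} xi_x over axial assignments S for d = 2:
    S picks one cell in each row and each column, i.e. a permutation."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Deterministic sanity check: min(1+0, 2+3) = 1.
print(axial_min([[1, 2], [3, 0]]))  # prints 1

# Seeded Monte Carlo estimate of Z_{A_2(n)} with Exp(1) weights.
random.seed(0)
n, trials = 6, 200
est = sum(
    axial_min([[random.expovariate(1.0) for _ in range(n)] for _ in range(n)])
    for _ in range(trials)
) / trials
print(round(est, 3))
```

Already at $n = 6$ the estimate is of constant order, consistent with $Z_{A_2(n)} = \Theta(n^{-(2-1)} \cdot n) = \Theta(1)$ total weight in the $d = 2$ normalization.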
Frieze and Sorkin also considered the “planar” version of the problem, in which $S$ in (6) meets each line ($\{x \in X : x_j = y_j\ \forall j \ne i\}$ for some $i \in [d]$ and $y \in X$) exactly once; and one may of course generalise from hyperplanes/lines to $k$-dimensional “subspaces” for a given $k \in [d-1]$. It’s easy to see what to expect here, and one may hope Theorem 1.7 will eventually apply, but we at present lack the technology to say the relevant $\mathcal H$’s are suitably spread (see Section 8).

Organisation.
Section 2 includes minor preliminaries and the derivation of Theorem 1.1 from Theorem 1.6. The heart of our argument, Lemma 3.1, is proved in Section 3; our approach here strengthens that of the recent breakthrough of Alweiss, Lovett, Wu and Zhang [2] on the Erdős–Rado “Sunflower Conjecture” [6]. Section 4 adds one small technical point (more or less repeated from [2]), and the proofs of Theorems 1.6 and 1.7 are given in Sections 5 and 6. Finally, Section 7 outlines a few applications and Section 8 discusses unresolved questions.

2. Little things
Usage.
As is usual, we use $[n]$ for $\{1, 2, \ldots, n\}$, $2^X$ for the power set of $X$, $\binom{X}{r}$ for the family of $r$-element subsets of $X$, and $[S,T]$ for $\{R : S \subseteq R \subseteq T\}$. Our default universe is $X$, with $|X| = n$.

In what follows we assume $\ell$ and $n$ are somewhat large (when there is an $\ell$ it will be at most $n$), as we may do since smaller values can be handled by adjusting the $K$’s in Theorems 1.6 and 1.7. Asymptotic notation referring to some parameter $\lambda$ (usually $\ell$) is used in the natural way: implied constants in $O(\cdot)$ and $\Omega(\cdot)$ are independent of $\lambda$, and $f = o(g)$ (also written $f \ll g$) means $f/g$ is smaller than any given $\varepsilon > 0$ for large enough values of $\lambda$. Following a standard abuse, we usually pretend large numbers are integers.
We may then replace G by H whose edges are copies of edges of G ,and ν by uniform measure on H .Setting m = ((2 Kq log (cid:96) ) n ) and p = 2 m/n (with n = | X | and K as in Theorem 1.6), we then have (usingTheorem 1.6 with κ = 1 / (2 q ) ) µ p ( F ) ≥ P ( X p ∈ (cid:104)H(cid:105) ) ≥ P ( | X p | ≥ m ) P ( X m ∈ (cid:104)H(cid:105) ) ≥ P ( | X p | ≥ m ) / > / , implying p c ( F ) < p = 4 Kq log (cid:96) . (Note H q -spread with ∅ (cid:54)∈ H implies q ≥ /n , so that m is somewhat largeand P ( | X p | ≥ m ) ≈ .) (cid:3) Remark 2.2.
This was done fussily to cover smaller $\ell$ in Theorem 1.1; if $\ell \to \infty$, then it gives $\mathbb P(X_p \in \langle \mathcal H \rangle) \to 1$.
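The coupling step in the derivation above, $\mu_p(\mathcal F) \ge \mathbb P(|X_p| \ge m)\,\mathbb P(X_m \in \langle\mathcal H\rangle)$, is easy to sanity-check numerically. A minimal sketch (the toy hypergraph and the helper `in_up_closure` are ours, chosen only for illustration):

```python
import random
from itertools import combinations

random.seed(1)
n, m = 12, 4
X = list(range(n))
p = 2 * m / n                      # as in the derivation: p = 2m/n
H = [frozenset(e) for e in combinations(X, 3)][:40]   # arbitrary toy hypergraph

def in_up_closure(T, H):
    """T ∈ <H>, i.e. T contains some edge of H."""
    return any(S <= T for S in H)

trials = 4000
hit_p = sum(                       # estimates mu_p(<H>) = P(X_p ∈ <H>)
    in_up_closure({x for x in X if random.random() < p}, H)
    for _ in range(trials)
) / trials
hit_m = sum(                       # estimates P(X_m ∈ <H>)
    in_up_closure(set(random.sample(X, m)), H)
    for _ in range(trials)
) / trials
print(round(hit_p, 3), round(hit_m, 3))
```

With these parameters $\mathbb P(|X_p| \ge m) \approx 1$ (a Bin$(12, 2/3)$ variable rarely falls below 4), and the $p$-random subset succeeds at least as often as the $m$-subset, as the inequality predicts.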
3. Main Lemma

Let $\gamma$ be a slightly small constant (any sufficiently small fixed value suffices), and let $C$ be a constant large enough to support the estimates that follow. Let $\mathcal H$ be an $r$-bounded, $\kappa$-spread hypergraph on a set $X$ of size $n$, with $r, \kappa \ge C$. Set $p = C_1/\kappa$ with $C \le C_1 \le \kappa/C$ (so $p \le 1/C$), $r' = (1-\gamma)r$ and $N = \binom{n}{np}$. Finally, fix $\psi : \langle \mathcal H \rangle \to \mathcal H$ satisfying $\psi(Z) \subseteq Z$ for all $Z \in \langle \mathcal H \rangle$; set, for $W \subseteq X$ and $S \in \mathcal H$, $\chi(S,W) = \psi(S \cup W) \setminus W$; and say the pair $(S,W)$ is bad if $|\chi(S,W)| > r'$ and good otherwise.

The heart of our argument is the following lemma (an improvement of [2, Lemma 5.7]), regarding which a little orientation may be helpful. We will (in Theorems 1.6 and 1.7) be choosing a random subset of $X$ in small increments and would like to say we are likely to be making good progress toward containing some $S \in \mathcal H$. Of course such progress is not to be expected for a typical $S$, but this is not the goal: having chosen a portion $W$ of our eventual set, we just need the remainder to contain some $S \setminus W$, and may focus on those that are more likely (basically meaning small). The key idea (introduced in [2] and refined here) is that a general $S \setminus W$, while not itself small, will, in consequence of the spread assumption, typically contain some small $S' \setminus W$. (In fact $\chi(S,W)$ will usually be one of these: an $S' \setminus W$ contained in $S \setminus W$ will typically be small, so we don’t need to steer this choice.) We then replace each “good” $S \setminus W$ by $\chi(S,W)$ and iterate, a second nice feature of the spread condition being that it is not much affected by this substitution.

Lemma 3.1.
For $\mathcal H$ as above, and $W$ chosen uniformly from $\binom{X}{np}$,
$$\mathbb E\big[|\{S \in \mathcal H : (S,W) \text{ is bad}\}|\big] \le |\mathcal H|\, C^{-r/3}.$$

Proof. It is enough to show, for $s \in (r', r]$,
$$\mathbb E\big[|\{S \in \mathcal H : (S,W) \text{ is bad and } |S| = s\}|\big] \le (\gamma r)^{-1} |\mathcal H|\, C^{-r/3}, \tag{8}$$
or, equivalently, that
$$|\{(S,W) : (S,W) \text{ is bad and } |S| = s\}| \le (\gamma r)^{-1} N |\mathcal H|\, C^{-r/3}. \tag{9}$$
(Note $\gamma r = r - r'$ bounds the number of $s$ for which the set in question can be nonempty, whence the negligible factors $(\gamma r)^{-1}$.)

We now use $\mathcal H_s = \{S \in \mathcal H : |S| = s\}$. Let $B = \sqrt C$ and for $Z \supseteq S \in \mathcal H_s$ say $(S,Z)$ is pathological if there is $T \subseteq S$ with $t := |T| > r'$ and
$$|\{S' \in \mathcal H_s : S' \in [T,Z]\}| > B^r |\mathcal H| \kappa^{-t} p^{s-t}. \tag{10}$$
From now on we will always take $Z = W \cup S$ (with $W$ as in Lemma 3.1); thus $|Z|$ is typically roughly $np$ and, since $\mathcal H$ is $\kappa$-spread, $|\mathcal H| \kappa^{-t} p^{s-t}$ is a natural upper bound on what one might expect for the l.h.s. of (10).

Note that in proving (9) we may assume $s \le n/2$: we may of course assume $|\mathcal H_s|$ is at least the r.h.s. of (8); but then for an $S \in \mathcal H_s$ of largest multiplicity, say $m$, we have
$$m \le \kappa^{-s} |\mathcal H| \le \kappa^{-s} \gamma r\, C^{r/3} |\mathcal H_s| \le \kappa^{-s} \gamma r\, C^{r/3}\, m\, 2^n,$$
which is less than $m$ if $s > n/2$ (since $\kappa \ge C$ and $C$ is large).

We bound the nonpathological and pathological parts of (9) separately; this (with the introduction of “pathological”) is the source of our improvement over [2].

Nonpathological contributions.
We first bound the number of $(S,W)$ in (9) with $(S,Z)$ nonpathological. This basically follows [2], but “nonpathological” allows us to bound the number of possibilities in Step 3 below by the r.h.s. of (10), where [2] settles for something like $|\mathcal H| \kappa^{-t}$.

Step 1.
There are at most
$$\sum_{i=0}^{s} \binom{n}{np+i} \le \binom{n+s}{np+s} \le N p^{-s} \tag{11}$$
choices for $Z = W \cup S$.

Step 2.
Given $Z$, let $S' = \psi(Z)$. Choose $T := S \cap S'$, for which there are at most $2^{|S'|} \le 2^r$ possibilities, and set $t = |T|$; we may assume $t > r'$, since if $t \le r'$ then $(S,W)$ cannot be bad, as $\chi(S,W) = S' \setminus W \subseteq T$.

Step 3.
Since we are only interested in nonpathological choices, the number of possibilities for $S$ is now at most $B^r |\mathcal H| \kappa^{-t} p^{s-t}$.

Step 4.
Complete the specification of $(S,W)$ by choosing $W \cap S$, the number of possibilities for which is at most $2^s$.

In sum, since $s \le r$ and $t > r' = (1-\gamma)r$, the number of nonpathological possibilities is at most
$$2^{r+s} N |\mathcal H| B^r (p\kappa)^{-t} \le N |\mathcal H| (4B)^r C^{-t} < N |\mathcal H| \big[4 B C^{-(1-\gamma)}\big]^r. \tag{12}$$

Pathological contributions. We next bound the number of $(S,W)$ as in (9) with $(S,Z)$ pathological. The main point here is Step 4.

Step 1.
There are at most |H| possibilities for S . Step 2.
Choose $T \subseteq S$ witnessing the pathology of $(S,Z)$ (i.e. for which (10) holds); there are at most $2^s$ possibilities for $T$.

Step 3.
Choose $U \in [T,S]$ for which
$$|\mathcal H_s \cap [U, (Z \setminus S) \cup U]| > 2^{-(s-t)} B^r |\mathcal H| \kappa^{-t} p^{s-t}. \tag{13}$$
(Here the left hand side counts members of $\mathcal H_s$ in $Z$ whose intersection with $S$ is precisely $U$. Of course, existence of $U$ as in (13) follows from (10).) The number of possibilities for this choice is at most $2^{s-t}$.

Step 4.
Choose $Z \setminus S$, the number of choices for which is less than $N(2/B)^r$. To see this, write $\Phi$ for the r.h.s. of (13). Noting that $Z \setminus S$ must belong to $\binom{X \setminus S}{np} \cup \binom{X \setminus S}{np-1} \cup \cdots \cup \binom{X \setminus S}{np-s}$, we consider, for $Y$ drawn uniformly from this set,
$$\mathbb P\big(|\mathcal H_s \cap [U, Y \cup U]| > \Phi\big). \tag{14}$$
Set $|U| = u$. We have $|\mathcal H_s \cap \langle U \rangle| \le |\mathcal H \cap \langle U \rangle| \le |\mathcal H| \kappa^{-u}$, while, for any $S' \in \mathcal H_s \cap \langle U \rangle$,
$$\mathbb P(Y \supseteq S' \setminus U) \le \Big(\frac{np}{n-s}\Big)^{s-u}$$
(of course if $S' \cap S \ne U$ the probability is zero); so
$$\vartheta := \mathbb E\big[|\mathcal H_s \cap [U, Y \cup U]|\big] \le |\mathcal H| \kappa^{-u} \Big(\frac{np}{n-s}\Big)^{s-u} \le |\mathcal H| \kappa^{-u} (2p)^{s-u}$$
(since $n - s \ge n/2$). Markov’s Inequality then bounds the probability in (14) by $\vartheta/\Phi$, and this bounds the number of possibilities for $Z \setminus S$ by $N(\vartheta/\Phi)$ (cf. (11)), which is easily seen to be less than $N(2/B)^r$.

Step 5.
Complete the specification of $(S,W)$ by choosing $S \cap W$, which can be done in at most $2^s$ ways.

Combining (and slightly simplifying), we find that the number of pathological possibilities is at most
$$|\mathcal H|\, N\, (16/B)^r. \tag{15}$$
Finally, the sum of the bounds in (12) and (15) is less than the $(\gamma r)^{-1} N |\mathcal H| C^{-r/3}$ of (9). $\Box$
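The compression $\chi(S,W) = \psi(S \cup W)\setminus W$ at the heart of the lemma is easy to watch in action. The sketch below is our illustration only (the choice of $\mathcal H$ as all $r$-sets, and $\psi$ as the lexicographically first contained edge, are ours): after a single random increment $W$, almost every sampled $S$ is “good,” i.e. compresses to a set of size at most $(1-\gamma)r$.

```python
import random

random.seed(0)
n, r, gamma = 30, 5, 0.2
X = list(range(n))
# Take H to be (implicitly) all r-subsets of X, a well-spread hypergraph,
# with psi(Z) = the lexicographically first edge inside Z, i.e. the r
# smallest elements of Z.  (Both choices are ours, purely for illustration.)

def psi(Z):
    return frozenset(sorted(Z)[:r])

W = set(random.sample(X, n // 2))           # one random increment
samples = [frozenset(random.sample(X, r)) for _ in range(500)]
chis = [psi(S | W) - W for S in samples]    # chi(S, W) = psi(S ∪ W) \ W
bad = sum(1 for c in chis if len(c) > (1 - gamma) * r)
print(bad / len(samples), max(len(c) for c in chis))
```

Since $W$ covers half the ground set, the first edge inside $S \cup W$ typically lies mostly in $W$, so $|\chi(S,W)|$ is usually far below $r$; this is the quantitative phenomenon that Lemma 3.1 controls in general.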
4. Small uniformities
As in [2, Lemma 5.9], very small set sizes are handled by a simple Janson bound:
Lemma 4.1.
For an $r$-bounded, $\kappa$-spread $\mathcal G$ on $Y$, and $\alpha \in (0,1)$,
$$\mathbb P(Y_\alpha \notin \langle \mathcal G \rangle) \le \exp\Big[-\Big(\sum_{t=1}^{r} \binom{r}{t} (\alpha\kappa)^{-t}\Big)^{-1}\Big]. \tag{16}$$

Proof. We may assume $\mathcal G$ is $r$-uniform, since modifying it according to Observation 2.1 doesn’t decrease the probability in (16). Denote members of $\mathcal G$ by $S_i$ and set $\zeta_i = \mathbf 1_{\{Y_\alpha \supseteq S_i\}}$. Then
$$\mu := \sum \mathbb E[\zeta_i] = |\mathcal G| \alpha^r$$
and
$$\Lambda := \sum\sum \{\mathbb E[\zeta_i \zeta_j] : S_i \cap S_j \ne \emptyset\} \le |\mathcal G| \sum_{t=1}^{r} \binom{r}{t} \kappa^{-t} |\mathcal G| \alpha^{2r-t} = \mu^2 \sum_{t=1}^{r} \binom{r}{t} (\alpha\kappa)^{-t}$$
(where the inequality holds because $\mathcal G$ is $\kappa$-spread), and Janson’s Inequality (e.g. [14, Thm. 2.18(ii)]) bounds the probability in (16) by $\exp[-\mu^2/\Lambda]$. $\Box$

Corollary 4.2.
Let $\mathcal G$ be as in Lemma 4.1, let $t = \alpha|Y|$ be an integer with $\alpha\kappa \ge 2r$, and let $W = Y_t$. Then $\mathbb P(W \notin \langle \mathcal G \rangle) \le 2\exp[-\alpha\kappa/(2r)]$.

Proof.
Lemma 4.1 gives
$$\exp[-\alpha\kappa/(2r)] \ge \mathbb P(Y_\alpha \notin \langle \mathcal G \rangle) \ge \mathbb P(|Y_\alpha| \le t)\, \mathbb P(W \notin \langle \mathcal G \rangle) \ge \mathbb P(W \notin \langle \mathcal G \rangle)/2,$$
where we use the fact that any binomial $\xi$ with $\mathbb E[\xi] \in \mathbb Z$ satisfies $\mathbb P(\xi \le \mathbb E[\xi]) \ge 1/2$; see e.g. [22]. $\Box$
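The bound (16) is simple enough to check numerically against simulation. The sketch below is our illustration (the toy $\mathcal G$ and the helper `spread_kappa` are ours): it computes the right-hand side of (16) for a small spread hypergraph and compares it with a seeded Monte Carlo estimate of $\mathbb P(Y_\alpha \notin \langle\mathcal G\rangle)$.

```python
import math, random
from itertools import combinations

def spread_kappa(G):
    """Largest kappa for which G (a list of frozenset edges) is kappa-spread."""
    best = float("inf")
    for e in G:
        for k in range(1, len(e) + 1):
            for S in combinations(sorted(e), k):
                deg = sum(1 for f in G if set(S) <= f)
                best = min(best, (len(G) / deg) ** (1.0 / k))
    return best

r, Y = 3, list(range(9))
G = [frozenset(e) for e in combinations(Y, r)]   # all triples of a 9-set
kappa = spread_kappa(G)                          # = 3.0 here
alpha = 0.8
inner = sum(math.comb(r, t) * (alpha * kappa) ** (-t) for t in range(1, r + 1))
bound = math.exp(-1.0 / inner)                   # right-hand side of (16)

random.seed(0)
trials, miss = 2000, 0
for _ in range(trials):
    Ya = {y for y in Y if random.random() < alpha}   # the alpha-random subset
    if not any(S <= Ya for S in G):                  # Y_alpha misses <G>
        miss += 1
print(miss / trials, round(bound, 4))
```

As expected for such generous parameters, the empirical miss probability is far below the bound; the bound only becomes informative (and is only needed) when $\alpha\kappa$ is of moderate size relative to $r$, as in Corollary 4.2.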
5. Proof of Theorem 1.6

It will be convenient to assume that $\mathcal H$ is $(2\kappa)$-spread; since this just amounts to replacing $\kappa$ by $\kappa/2$, Theorem 1.6 as stated follows with an adjusted $K$. Let $\gamma$ and $C$ be as in Section 3 and $\mathcal H$ as in the statement of Theorem 1.6, and recall that asymptotics refer to $\ell$. We may of course assume that $\kappa \ge 4\gamma^{-1} C \log \ell$ (or the result is trivial with a suitably adjusted $K$).

Fix an ordering “$\prec$” of $\mathcal H$. In what follows we will have a sequence $\mathcal H_i$, with $\mathcal H_0 = \mathcal H$ and $\mathcal H_i \subseteq \{\chi_i(S, W_i) : S \in \mathcal H_{i-1}\}$, where $W_i$ and $\chi_i$ will be defined below (with $\chi_i$ a version of the $\chi$ of Section 3). We then order $\mathcal H_i$ by setting
$$\chi_i(S, W_i) \prec_i \chi_i(S', W_i) \ \Leftrightarrow\ S \prec_{i-1} S'.$$
(So each member of $\mathcal H_i$ ultimately inherits its position in $\prec_i$ from some member of $\mathcal H$. This is not very important: we will be applying Lemma 3.1 repeatedly, and the present convention just provides a concrete $\psi$ for each stage of the iteration.)

Set $C_1 = C$ and $p = C/\kappa$, define $m$ by $(1-\gamma)^m = \sqrt{\log \ell}/\ell$, and set $q = \log \ell/\kappa$. Then $m \sim \log\ell/\log\frac{1}{1-\gamma}$, so in particular $m \le \gamma^{-1} \log \ell$, and Theorem 1.6 will follow from the next assertion.

Claim 5.1. If $W$ is a uniform $((mp+q)n)$-subset of $X$, then $W \in \langle \mathcal H \rangle$ w.h.p.

Proof. Set $\delta = 1/(2m)$. Let $r_0 = \ell$ and $r_i = (1-\gamma) r_{i-1} = (1-\gamma)^i r_0$ for $i \in [m]$. Let $X_0 = X$ and, for $i = 1, \ldots, m$, let $W_i$ be uniform from $\binom{X_{i-1}}{np}$ and set $X_i = X_{i-1} \setminus W_i$. (Note the assumption $\kappa \ge 4\gamma^{-1} C \log \ell$ ensures $|X_m| \ge n/2$.)

For $S \in \mathcal H_{i-1}$ let $\chi_i(S, W_i) = S' \setminus W_i$, where $S'$ is the first member of $\mathcal H_{i-1}$ contained in $W_i \cup S$ (with $\mathcal H_{i-1}$ ordered by $\prec_{i-1}$). Say $S$ is good if $|\chi_i(S, W_i)| \le r_i$ (and bad otherwise), and set
$$\mathcal H_i = \{\chi_i(S, W_i) : S \in \mathcal H_{i-1} \text{ is good}\}.$$
Thus $\mathcal H_i$ is an $r_i$-bounded collection of subsets of $X_i$ and inherits the ordering $\prec_i$ as described above.

Finally, choose $W_{m+1}$ uniformly from $\binom{X_m}{nq}$. Then $W := W_1 \cup \cdots \cup W_{m+1}$ is as in Claim 5.1. Note also that $W \in \langle \mathcal H \rangle$ whenever $W_{m+1} \in \langle \mathcal H_m \rangle$. (More generally, $W_1 \cup \cdots \cup W_i \cup Y \in \langle \mathcal H \rangle$ whenever $Y \subseteq X_i$ lies in $\langle \mathcal H_i \rangle$.) So to prove the claim, we just need to show
$$\mathbb P(W_{m+1} \in \langle \mathcal H_m \rangle) = 1 - o(1) \tag{17}$$
(where the $\mathbb P$ refers to the entire sequence $W_1, \ldots, W_{m+1}$).

For $i \in [m]$ call $W_i$ successful if $|\mathcal H_i| \ge (1-\delta)|\mathcal H_{i-1}|$, call $W_{m+1}$ successful if it lies in $\langle \mathcal H_m \rangle$, and say a sequence of $W_i$’s is successful if each of its entries is. We show a little more than (17):
$$\mathbb P(W_1, \ldots, W_{m+1} \text{ is successful}) = 1 - \exp\big[-\Omega(\sqrt{\log \ell})\big]. \tag{18}$$
For $i \in [m]$, according to Lemma 3.1 (and Markov’s Inequality),
$$\mathbb P(W_i \text{ is not successful} \mid W_1, \ldots, W_{i-1} \text{ is successful}) < \delta^{-1} C^{-r_{i-1}/3},$$
since $W_1, \ldots, W_{i-1}$ successful implies $|\mathcal H_{i-1}| > (1-\delta)^m |\mathcal H| > |\mathcal H|/2$, which (since $|\mathcal H_{i-1} \cap \langle I \rangle| \le |\mathcal H \cap \langle I \rangle|$ and we assume $\mathcal H$ is $(2\kappa)$-spread) gives the spread condition (4) for $\mathcal H_{i-1}$. Thus
$$\mathbb P(W_1, \ldots, W_m \text{ is successful}) > 1 - \delta^{-1} \sum_{i=1}^{m} C^{-r_{i-1}/3} > 1 - \exp\big[-\Omega(\sqrt{\log \ell})\big] \tag{19}$$
(using $r_m = \sqrt{\log \ell}$).

Finally, if $W_1, \ldots, W_m$ is successful, then Corollary 4.2 (applied with $\mathcal G = \mathcal H_m$, $Y = X_m$, $\alpha = nq/|Y| \ge q$, $r = r_m$, and $W = W_{m+1}$) gives
$$\mathbb P(W_{m+1} \notin \langle \mathcal H_m \rangle) \le 2\exp\big[-\sqrt{\log \ell}/2\big], \tag{20}$$
and we have (18) and the claim. $\Box$
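To get a feel for the parameters in the iteration above, the following arithmetic sketch tabulates $m$, the final round size $r_m$, and the total sampled fraction $mp+q$ for one choice of inputs. The constants $\gamma$, $C$ here are placeholders of ours, not optimized values from the paper.

```python
import math

gamma, C = 0.1, 100.0
ell = 10 ** 6
kappa = 40 * C * math.log(ell) / gamma   # comfortably above the assumed lower bound

p = C / kappa
# m is defined by (1-gamma)^m = sqrt(log ell)/ell:
m = math.ceil(math.log(ell / math.sqrt(math.log(ell))) / -math.log(1 - gamma))
r_m = (1 - gamma) ** m * ell             # shrinks geometrically to ~sqrt(log ell)
q = math.log(ell) / kappa
total_fraction = m * p + q               # |W|/n for W as in Claim 5.1
print(m, round(r_m, 2), round(total_fraction * kappa / math.log(ell), 2))
```

The last printed quantity, $(mp+q)\kappa/\log\ell$, is the effective constant $K$ in Theorem 1.6 for these inputs: of order $C/\gamma$, confirming that the sampled set has size $O(\kappa^{-1}\log\ell)\,n$.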
6. Proof of Theorem 1.7

Here we take $\gamma$ and $C$ as in Section 3 and assume $\kappa \ge C$ (or there is nothing to prove). We may assume $\mathcal H$ is $\ell$-uniform, since the construction of Observation 2.1 produces an $\ell$-uniform, $\kappa$-spread $\mathcal G$ with $\xi_{\mathcal G} \ge \xi_{\mathcal H}$. In particular this gives
$$|\mathcal H| \ell = \sum_{x \in X} |\mathcal H \cap \langle x \rangle| \le n \kappa^{-1} |\mathcal H|. \tag{21}$$
We first assume $\kappa$ is slightly large, precisely
$$\kappa \ge \log^3 \ell; \tag{22}$$
the similar but easier argument for smaller values will be given at the end. (The bound in (22) is convenient but there is nothing delicate about this choice.)

Claim 6.1. For $\kappa$ as in (22) and $C \le C_1 \le \gamma\kappa/(4\log\ell)$,
$$\mathbb P\big(\xi_{\mathcal H} > (3C_1/\gamma)\ell/\kappa\big) < \exp[-(\log\ell\,\log C_1)/4].$$

Proof of Theorem 1.7 in regime (22) given Claim 6.1. The “w.h.p.” statement is immediate (take $C_1 = C$). For the expectation, $Z_{\mathcal H}$, set $t = (3C/\gamma)\ell/\kappa$ and $T = 3\ell/(4\log\ell)$. By Claim 6.1 we have, for all $x \in [t, T]$,
$$\mathbb P(\xi_{\mathcal H} > x) \le f(x) := \exp\big[-\log\ell\,\log(\gamma\kappa x/(3\ell))/4\big] = (bx)^a,$$
where $a = -(\log\ell)/4$ and $b = \gamma\kappa/(3\ell)$. Noting that $\xi_{\mathcal H} \le \ell$, we then have
$$Z_{\mathcal H} \le t + \int_t^T \mathbb P(\xi_{\mathcal H} > x)\, dx + \ell\, \mathbb P(\xi_{\mathcal H} > T) \le t + \int_t^T f(x)\, dx + \ell f(T) = O(\ell/\kappa).$$
Here $t = O(\ell/\kappa)$ and the other terms are much smaller: the integral is less than $-1/(a+1)\, b^a t^{a+1} = O(1/\log\ell)\, C^a t$, while (22) easily implies that $f(T) = (\gamma\kappa/(4\log\ell))^a$ is $o(1/\kappa)$. $\Box$

Proof of Claim 6.1. Terms not defined here (beginning with $p = C_1/\kappa$ and $W_i$; note $C_1$ is now as in Claim 6.1, rather than set to $C$) are as in Section 5, but we (re)define $m$ by $(1-\gamma)^m = \log\ell/\ell$ and set $q = \log C_1 \log^2\ell/\kappa$, noting that (21) gives $p \ge C\ell/n$.

It’s now convenient to generate the $W_i$’s using the $\xi_x$’s in the natural way: let
$$a_i = \begin{cases} ipn & \text{if } i \in \{0\} \cup [m], \\ (mp+q)n & \text{if } i = m+1, \end{cases}$$
and let $W_i$ consist of the $x$’s in positions $a_{i-1}+1, \ldots, a_i$ when $X$ is ordered according to the $\xi_x$’s.

Proposition 6.2. With probability $1 - e^{-\Omega(C\ell)}$,
$$\xi_x \le \varepsilon_i := \begin{cases} 2ip & \text{if } i \in \{0\} \cup [m], \\ 2(mp+q) & \text{if } i = m+1 \end{cases} \qquad \text{for all } i \text{ and } x \in W_i. \tag{23}$$

Proof. Failure at $i \ge 1$ implies
$$|\xi^{-1}[0, \varepsilon_i]| < a_i. \tag{24}$$
But $|\xi^{-1}[0, \varepsilon_i]|$ is binomial with mean $\varepsilon_i n = 2a_i \ge 2C\ell$, so the probability that (24) occurs for some $i$ is less than $\exp[-\Omega(C\ell)]$ (see e.g. [14, Theorem 2.1]). $\Box$

We now write $\overline W_i$ for $W_1 \cup \cdots \cup W_i$.

Proposition 6.3. If $W_{m+1} \in \langle \mathcal H_m \rangle$, then $W$ contains some $S \in \mathcal H$ with $|S \setminus \overline W_i| \le r_i\ \forall i \in [m]$.

Proof. Suppose $W \supseteq S_m \in \mathcal H_m$. By construction (of the $\mathcal H_i$’s) there are $S_{m-1}, \ldots, S_1, S_0 =: S$ with $S_i \in \mathcal H_i$ and $S_i = S_{i-1} \setminus W_i$, whence $S_i = S \setminus \overline W_i$ for $i \in [m]$; and $S_i \in \mathcal H_i$ then gives the proposition. $\Box$

We now define “success” for $(\xi_x : x \in X)$ to mean that $W_1, \ldots, W_{m+1}$ is successful in our earlier sense and (23) holds. Notice that with our current values of $m$ and $q$ (and $r_m = \ell(1-\gamma)^m = \log\ell$), we can replace the error terms in (19) and (20) by essentially $\delta^{-1} C_1^{-\log\ell/3}$ and $e^{-\log C_1 \log\ell/2}$, which with Proposition 6.2 bounds the probability that $(\xi_x : x \in X)$ is not successful by (say) $\exp[-(\log\ell\,\log C_1)/4]$.

We finish with the following observation.

Proposition 6.4. If $(\xi_x : x \in X)$ is successful then $\xi_{\mathcal H} \le (3C_1/\gamma)\ell/\kappa$.

Proof. For $S$ as in Proposition 6.3, we have (with $\overline W_0 = \emptyset$ and $\varepsilon_0 = 0$)
$$\xi_{\mathcal H} \le \sum_{i=1}^{m+1} \varepsilon_i |S \cap W_i| = \sum_{i=1}^{m+1} (\varepsilon_i - \varepsilon_{i-1})\, |S \setminus \overline W_{i-1}| \le 2\Big[\sum_{i=1}^{m} (1-\gamma)^{i-1} p + (1-\gamma)^m q\Big] \ell \le 2\big[C_1/(\gamma\kappa) + (\log\ell/\ell)(\log C_1 \log^2\ell/\kappa)\big]\ell < (3C_1/\gamma)\ell/\kappa. \qquad \Box$$

This completes the proof of Claim 6.1 (and of Theorem 1.7 when $\kappa$ satisfies (22)). $\Box$

Finally, for $\kappa$ below the bound in (22) (actually, for $\kappa$ up to about $\ell/\log\ell$), a subset of the preceding argument suffices. We proceed as before, but now only with $C_1 = C$ (so $p = C/\kappa$), stopping at $m$ defined by $(1-\gamma)^m = 1/\kappa$ (so $m \approx \gamma^{-1}\log\kappa$). The main difference here is that there is no “Janson” phase: $W_1, \ldots, W_m$ is successful with probability $1 - \exp[-\Omega(\ell/\kappa)]$, and when it is successful we have (as in the proof of Proposition 6.4, now just taking $W_{m+1} = X \setminus \overline W_m$)
$$\xi_{\mathcal H} \le \sum_{i=1}^{m} (\varepsilon_i - \varepsilon_{i-1})\, |S \setminus \overline W_{i-1}| + |S \cap W_{m+1}| < (2C/(\gamma\kappa))\ell + \ell/\kappa$$
(so also $Z_{\mathcal H} \le O(\ell/\kappa) + \exp[-\Omega(\ell/\kappa)]\,\ell = O(\ell/\kappa)$).

7. Applications
Much of the significance of Theorem 1.1—and of the skepticism with which Conjecture 1.2 was viewed in [17]—derives from the strength of their consequences, a few of which we discuss (briefly) here.

For this discussion, $K_n^r = \binom{V}{r}$ is the complete $r$-graph on $V = [n]$, and $H^r_{n,p}$ is the $r$-uniform counterpart of the usual binomial random graph $G_{n,p}$. Given $r, n$ and an $r$-graph $H$, we use $\mathcal G_H$ for the collection of (unlabeled) copies of $H$ in $K_n^r$ and $\mathcal F_H$ for $\langle \mathcal G_H \rangle$. As usual, $\Delta$ is maximum degree.

As noted earlier, Conjecture 1.2 was motivated especially by Shamir’s Problem (since resolved in [15]), and the conjecture that became Montgomery’s theorem [23]. Very briefly: for $n$ running over multiples of a given (fixed) $r$, Shamir’s Problem asks for estimation of $p_c(\mathcal F_H)$ when $H$ is a perfect matching ($n/r$ disjoint edges), and [15] proves the natural conjecture that this threshold is $\Theta(n^{-(r-1)} \log n)$; and [23] shows that for fixed $d$, the threshold for $G_{n,p}$ to contain a given $n$-vertex tree with maximum degree $d$ is $\Theta(n^{-1} \log n)$, where the implied constant in the upper bound depends on $d$ (though it probably shouldn’t). See [15, 23] for some account of the history of these problems. In both cases—and in most of the other examples mentioned following Theorem 7.1 (all but the one from [20])—the lower bounds derive from the coupon-collectorish requirement that the (hyper)edges cover the vertices, and it is the upper bounds that are of interest.

In fact, Theorem 1.1 gives not just Montgomery’s theorem, but its natural extension to $r$-graphs and more. (Strictly speaking, Montgomery proves more than the original conjecture—see Section 8—and we are not so far recovering this stronger result.) Say an $r$-graph $F$ is a forest if it contains no cycle, meaning distinct vertices $v_1, \ldots, v_k$ and distinct edges $e_1, \ldots, e_k$ such that $v_{i-1}, v_i \in e_i\ \forall i$ (with subscripts mod $k$). A spanning tree is then a forest of size $(n-1)/(r-1)$.
For a (general) $r$-graph $F$, let $\rho(F)$ be the maximum size of a forest in $F$ and set $\varphi(F) = \max\{1 - \rho(F')/|F'| : \emptyset \ne F' \subseteq F\}$.

Theorem 7.1. For each $r$ and $c$ there is a $K$ such that if $H$ is an $r$-graph on $[n]$ with $\Delta(H) \le d$ and $\varphi(H) \le c/\log n$, then $p_c(\mathcal F_H) < K d n^{-(r-1)} \log |H|$.

This gives $p_c(\mathcal F_H) = \Theta(n^{-(r-1)} \log n)$ if $H$ is a perfect matching (as in Shamir’s Problem), or a “loose Hamiltonian cycle” (a result of [5], to which we refer for definitions and history of the problem), and $p_c(\mathcal F_H) < K d n^{-(r-1)} \log n$ if $H$ is a spanning tree with $\Delta(H) \le d$. For fixed $d$ the latter is the aforementioned $r$-graph generalization of [23] (or a slight improvement thereof in that the dependence on $d$—which, again, is probably unnecessary—is explicit), and for $d = n^{\Omega(1)}$ it is a result of Krivelevich [20, Theorem 1], which is again tight up to the value of $K$ (see [20, Theorem 2]).

The last application we discuss here was suggested to us by Simon Griffiths and Rob Morris. Set $c_d = (d!)^{2/(d(d+1))}$ and $p^*(d,n) = c_d\, n^{-2/(d+1)} (\log n)^{2/(d(d+1))}$.

Theorem 7.2.
For fixed $d$ and $H$ any graph on $[n]$ with $\Delta(H) \leq d$,
$p_c(\mathcal{F}_H) < (1 + o(1)) \, p^*(d,n)$. (25)
When $(d+1) \mid n$ and $H$ is a $K_{d+1}$-factor (that is, $n/(d+1)$ disjoint $K_{d+1}$'s), $p^*(d,n)$ is the asymptotic value of $p_c(\mathcal{F}_H)$.

Here (25) with $O(1)$ in place of $o(1)$ was proved in [15], while the asymptotics are given by the combination of [16] and [24, 13]; we state this in a form convenient for use below:

Theorem 7.3.
For fixed $d$ and $\varepsilon > 0$, and $n$ ranging over multiples of $d+1$, if $p > (1 + \varepsilon) p^*(d,n)$, then $G_{n,p}$ contains a $K_{d+1}$-factor w.h.p. □

Interest in $p_c(\mathcal{F}_H)$ for $H$ as in Theorem 7.2 dates to at least 1992, when Alon and Füredi [1] showed the upper bound $O(n^{-1/d} (\log n)^{1/d})$, and has intensified since [15], motivated by the idea that $K_{d+1}$-factors should be the worst case. See [8, 9] for history and the most recent results; with $O(1)$ in place of $o(1)$, Theorem 7.2 is conjectured in [9] and, in the stronger "universal" form (see Section 8), in [8].

Theorem 7.3 probably extends to $r$-graphs and $d$ of the form $\binom{s-1}{r-1}$. This just needs extension of Theorem 1 of [24] to $r$-graphs (suggested at the end of [24]), which (with [16]) would give asymptotics of the threshold for $H_{n,p}^r$ to contain a $K_s^r$-factor (where $K_s^r$, recall, is the complete $r$-graph on $s$ vertices).

Each of Theorems 7.1 and 7.2 begins with the following easy observations. (The first, an approximate converse of Proposition 1.5, is the trivial direction of LP duality.)

Observation 7.4.
If an increasing $\mathcal{F}$ supports a $q$-spread measure, then $q_f(\mathcal{F}) \leq q$. (More precisely, $q_f(\mathcal{F})$ is the least $q$ such that $\mathcal{F}$ supports a probability measure $\nu$ with $\nu(\langle S \rangle) \leq q^{|S|}$ for all $S$.)

Observation 7.5.
Uniform measure on $\mathcal{G}_H$ is $q$-spread if and only if: for $S \subseteq K_n^r$ isomorphic to a subhypergraph of $H$, $\sigma$ a uniformly random permutation of $V$, and $H \subseteq K_n^r$ a given copy of $H$,
$\mathbb{P}(\sigma(S) \subseteq H) \leq q^{|S|}$. (26)

Proving Theorem 7.1 is now just a matter of verifying (26) with $q = O(d n^{-(r-1)})$, which we leave to the reader. (It is similar to the proof of (28).)

Proof of Theorem 7.2.
The next assertion is the main thing we need to check here.

Lemma 7.6. There is $\varepsilon = \varepsilon_d > 0$ such that if $H$ is as in Theorem 7.2 and has no component isomorphic to $K_{d+1}$, then
$q_f(\mathcal{F}_H) \leq n^{-(2/(d+1)+\varepsilon)} =: q$. (27)

Proof.
We just need to show (26) for $q$ as in (27) and $S, H$ as in Observation 7.5, say with $W = V(S)$, $s = |S|$, and $f$ the size of a spanning forest of $S$. We may of course assume $S$ has no isolated vertices, so $w := |W| \leq 2f$. We show
$\mathbb{P}(\sigma(S) \subseteq H) < (e^2 d/n)^f$ (28)
and
$f/s \geq 2(d+1)/((d+2)d) = 2/(d+1) + 2\varepsilon_0$, (29)
where $\varepsilon_0 = 1/((d+2)(d+1)d)$, implying that for any $\varepsilon < 2\varepsilon_0$, (26) holds for large enough $n$.

Proof of (28). Let $\alpha, \beta : W \to V$ be, respectively, a uniform injection and a uniform map. Then
$(d/n)^f \geq \mathbb{P}(\beta(S) \subseteq H) \geq \mathbb{P}(\beta \text{ is injective}) \, \mathbb{P}(\beta(S) \subseteq H \mid \beta \text{ is injective}) = (n)_w n^{-w} \, \mathbb{P}(\alpha(S) \subseteq H) > e^{-2f} \, \mathbb{P}(\sigma(S) \subseteq H)$. □

Proof of (29). We may of course assume $S$ is connected, in which case we have $f = w - 1$ and upper bounds on $s$: $\binom{w}{2}$ if $w \leq d$; $\binom{d+1}{2} - 1$ if $w = d+1$; and $wd/2$ if $w \geq d+2$. The corresponding lower bounds on $f/s$ are $2/d$, $2d/((d+2)(d-1))$ and $2(d+1)/((d+2)d)$, the smallest of which is the last. □

This completes the proof of Lemma 7.6. □
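The case analysis in the proof of (29) is easy to double-check with exact arithmetic; the following sketch (an illustration of ours, not from the paper) verifies, for small $d$, that the last of the three case bounds on $f/s$ is the smallest and that it equals $2/(d+1) + 2\varepsilon_0$.

```python
from fractions import Fraction

def case_bounds(d):
    """Lower bounds on f/s (f = w - 1 for connected S) in the three cases."""
    b1 = Fraction(2, d)                      # w <= d:     s <= C(w,2), so f/s >= 2/w >= 2/d
    b2 = Fraction(2 * d, (d + 2) * (d - 1))  # w = d + 1:  s <= C(d+1,2) - 1
    b3 = Fraction(2 * (d + 1), (d + 2) * d)  # w >= d + 2: s <= wd/2, minimized at w = d + 2
    return b1, b2, b3

for d in range(2, 60):
    b1, b2, b3 = case_bounds(d)
    eps0 = Fraction(1, (d + 2) * (d + 1) * d)
    assert min(b1, b2, b3) == b3                 # the smallest bound is the last
    assert b3 == Fraction(2, d + 1) + 2 * eps0   # the identity in (29)
    # and b3 really is the minimum over w >= d + 2 of 2(w-1)/(wd):
    assert all(Fraction(2 * (w - 1), w * d) >= b3 for w in range(d + 2, 10 * d))
```

(The case $d = 1$ is vacuous here: a graph with maximum degree 1 and no component $K_2$ is empty.)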
We are now ready for Theorem 7.2. Let $\varsigma = \varsigma_n$ be some slow $o(1)$ (e.g. $1/\log n$). By Theorem 7.3 there is $p_1 \sim p^*(d,n)$ such that if $(d+1) \mid m > (1-\varsigma)n$ then $G_{m,p_1}$ contains a $K_{d+1}$-factor w.h.p., while by Lemma 7.6 and Theorem 1.1 (or, more precisely, Remark 2.2), there is $p_2$ with $p^*(d,n) \gg p_2 \gg n^{-(2/(d+1)+\varepsilon)}$ such that if $m \geq \varsigma n$ then for any given $m$-vertex $H'$ with $\Delta(H') \leq d$, $G_{m,p_2}$ contains (a copy of) $H'$ w.h.p.

Let $H_1$ be the union of the copies of $K_{d+1}$ in $H$ (each of which must be a component of $H$), $H_2 = H - H_1$, and $n_i = |V(H_i)|$ (so $n_1 + n_2 = n$). Let $G_1 \sim G_{n,p_1}$ and $G_2 \sim G_{n,p_2}$ be independent on the common vertex set $V = [n]$ and $G = G_1 \cup G_2$. Then $G \sim G_{n,p}$ with $p = 1 - (1-p_1)(1-p_2) \sim p^*(d,n)$, and we just need to show $G \supseteq H$ w.h.p. In fact we find each $H_i$ in the corresponding $G_i$, in order depending on $n_2$: if $n_2 \geq \varsigma n$, then w.h.p. $G_1$ contains $H_1$, say on vertex set $V_1$, and w.h.p. $G_2[V \setminus V_1]$ contains $H_2$; and if $n_2 < \varsigma n$, then w.h.p. $G_2$ contains $H_2$ on some $V_2$, and w.h.p. $G_1[V \setminus V_2]$ contains $H_1$. □
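As a numerical sanity check on the formula for $p^*(d,n)$ (this sketch is ours, not from the paper): $p^*(d,n)$ is exactly the density at which the expected number of copies of $K_{d+1}$ containing a fixed vertex of $G_{n,p}$ is asymptotically $\log n$, the coupon-collector prediction mentioned earlier.

```python
import math

def p_star(d, n):
    """p*(d, n) = c_d n^(-2/(d+1)) (log n)^(2/(d(d+1))), with c_d = (d!)^(2/(d(d+1)))."""
    c_d = math.factorial(d) ** (2 / (d * (d + 1)))
    return c_d * n ** (-2 / (d + 1)) * math.log(n) ** (2 / (d * (d + 1)))

def expected_cliques_at_vertex(d, n):
    """Expected number of copies of K_{d+1} containing a fixed vertex of G_{n,p}
    at p = p*(d, n): choose d further vertices, require all C(d+1,2) edges."""
    p = p_star(d, n)
    return math.comb(n - 1, d) * p ** (d * (d + 1) // 2)

# The constant c_d is chosen so the ratio to log n tends to 1 as n grows.
for d in (2, 3, 4):
    n = 10 ** 6
    ratio = expected_cliques_at_vertex(d, n) / math.log(n)
    assert abs(ratio - 1) < 0.01
```

Indeed $\binom{n-1}{d} p^{d(d+1)/2} = \binom{n-1}{d} \, d! \, n^{-d} \log n = (1 - O(1/n)) \log n$, since $c_d^{d(d+1)/2} = d!$.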
8. CONCLUDING REMARKS
In closing we briefly mention (or recall) a few unresolved issues related to the present work.

A. First, of course, it would be nice to prove Conjecture 1.4, which is now equivalent to Conjecture 1.2.

B. It would be interesting to understand whether, in Shamir's and related problems, the $\log \ell$ emerging from our argument somehow reflects the coupon-collector requirement (edges cover vertices) that drives the lower bounds. Partly as a way of testing this, one might try to see if the present machinery can be extended to apply directly (rather than via [24, 13]) to questions where coupon-collector considerations (correctly) predict a smaller gap, as in the fractional powers of $\log n$ in Theorem 7.3.

C. The arguments of [23] and [9] give stronger "universality" results; e.g. [23] says that the appropriate $G_{n,p}$ w.h.p. contains every tree respecting the degree bound. Whether this can be proved along present lines remains unclear; if so, it would seem to be more a question of managing some understanding of the class of universal graphs (with, of course, a view to the spread) than of extending Theorem 1.1.

D. As mentioned following Corollary 1.8, what prevents us from extending to other values of the dimension $k$ is inadequate control of the spread. (Here it doesn't really matter whether we think of "assignments" or of the threshold for containing a member of the $\mathcal{H}$ in (5).) The difficulty is the same for the related problem of thresholds for existence of designs. We don't have anything to suggest in the way of a remedy and just indicate one issue, for simplicity sticking to Steiner triple systems (STS's; see [30] for background); thus $X = K_n^3$ (with $n \equiv 1$ or $3 \pmod 6$), $\mathcal{H}$ is the hypergraph of STS's, and for the spread (which should be $\Theta(1/n)$) we may take $1/\kappa$, where
$\kappa = \min_{S \subseteq X} \left( |\mathcal{H}| / |\mathcal{H} \cap \langle S \rangle| \right)^{1/|S|}$. (30)
Results of Linial and Luria [21] (upper bound) and Keevash [19] (lower bound) give
$|\mathcal{H}| = ((1 + o(1)) \, n/e^2)^{n^2/6}$. (31)
Viewed enumeratively this is very satisfactory, having been an old conjecture of Wilson [31]. But for present purposes, even ignoring our weaker understanding of $|\mathcal{H} \cap \langle S \rangle|$ (the number of completions of a partial STS $S$), it is not enough: even if this quantity is, as one expects, roughly $(n/e^2)^{n^2/6 - |S|}$, the r.h.s. of (30) can be dominated by the "error" factor $(1 + o(1))^{n^2/(6|S|)}$ if $S$ is slightly small and the $o(1)$ in (31) is negative.

E. Finally, we recall a related conjecture from [17] (stated there only for graphs, but this shouldn't matter). For $\mathcal{F} = \mathcal{F}_H$ as in Section 7, let $p_E(\mathcal{F})$ be the least $p$ such that for every $H' \subseteq H$ the expected number of (unlabeled) copies of $H'$ in $H_{n,p}^r$ is at least 1. Then $p_E(\mathcal{F})/2$ is again a trivial lower bound on $p_c(\mathcal{F})$—and, where it makes sense, probably more intuitive than $q(\mathcal{F})$ or $q_f(\mathcal{F})$—and from [17, Conjecture 2] we have:

Conjecture 8.1.
There is a universal $K$ such that for every $\mathcal{F} = \mathcal{F}_H$ as above, $p_c(\mathcal{F}) \leq K p_E(\mathcal{F}) \log |X|$.

Again, we can presumably replace $\log |X|$ by $\log |H|$, as would now follow from a positive answer to the obvious question: do we always have $q_f(\mathcal{F}) = O(p_E(\mathcal{F}))$?

ACKNOWLEDGMENTS
The first, second and fourth authors were supported by NSF grant DMS-1501962 and BSF Grant 2014290. The third author was supported by NSF grant DMS-1800521.

REFERENCES
1. N. Alon and Z. Füredi, Spanning subgraphs of random graphs, Graphs Combin. (1992), 91–94.
2. R. Alweiss, S. Lovett, K. Wu, and J. Zhang, Improved bounds for the sunflower lemma, Preprint, arXiv:1908.08483v1.
3. B. Bollobás and A. Thomason, Threshold functions, Combinatorica (1987), 35–38.
4. B. Bollobás, Random graphs, second ed., Cambridge Studies in Advanced Mathematics, vol. 73, Cambridge University Press, Cambridge, 2001.
5. A. Dudek, A. Frieze, P. Loh, and S. Speiss, Optimal divisibility conditions for loose Hamilton cycles in random hypergraphs, Electron. J. Combin. (2012), Paper 44, 17 pp.
6. P. Erdős and R. Rado, Intersection theorems for systems of sets, J. London Math. Soc. (1960), 85–90.
7. P. Erdős and A. Rényi, On the evolution of random graphs, Magyar Tud. Akad. Mat. Kutató Int. Közl. (1960), 17–61.
8. A. Ferber, G. Kronenberg, and K. Luh, Optimal threshold for a random graph to be 2-universal, Trans. Amer. Math. Soc. (2019), 4239–4262.
9. A. Ferber, K. Luh, and O. Nguyen, Embedding large graphs into a random graph, Bull. Lond. Math. Soc. (2017), 784–797.
10. E. Friedgut, Sharp thresholds of graph properties, and the k-sat problem, J. Amer. Math. Soc. (1999), 1017–1054. With an appendix by J. Bourgain.
11. E. Friedgut, Hunting for sharp thresholds, Random Structures Algorithms (2005), 37–51.
12. A. Frieze and G. B. Sorkin, Efficient algorithms for three-dimensional axial and planar random assignment problems, Random Structures Algorithms (2015), 160–196.
13. A. Heckel, Random triangles in random graphs, Preprint, arXiv:1802.08472.
14. S. Janson, T. Łuczak, and A. Ruciński, Random graphs, Wiley-Interscience Series in Discrete Mathematics and Optimization, Wiley-Interscience, New York, 2000.
15. A. Johansson, J. Kahn, and V. Vu, Factors in random graphs, Random Structures Algorithms (2008), 1–28.
16. J. Kahn, Asymptotics for Shamir's problem, Preprint, arXiv:1909.06834.
17. J. Kahn and G. Kalai, Thresholds and expectation thresholds, Combin. Probab. Comput. (2007), 495–502.
18. R. M. Karp, Reducibility among combinatorial problems, Complexity of computer computations (Proc. Sympos., IBM Thomas J. Watson Res. Center, Yorktown Heights, New York), 1972, pp. 85–103.
19. P. Keevash, Counting designs, Preprint, arXiv:1504.02909.
20. M. Krivelevich, Embedding spanning trees in random graphs, SIAM J. Discrete Math. (2010), 1495–1500.
21. N. Linial and Z. Luria, An upper bound on the number of Steiner triple systems, Random Structures Algorithms (2013), 399–406.
22. N. Lord, Binomial averages when the mean is an integer, Math. Gaz. (2010), 331–332.
23. R. Montgomery, Spanning trees in random graphs, Adv. Math. (2019), 106793, 92 pp.
24. O. Riordan, Random cliques in random graphs, Preprint, arXiv:1802.01948.
25. M. Talagrand, Are all sets of positive measure essentially convex?, Geometric aspects of functional analysis (Israel, 1992–1994), Oper. Theory Adv. Appl., vol. 77, Birkhäuser, Basel, 1995, pp. 295–310.
26. M. Talagrand, The generic chaining, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2005.
27. M. Talagrand, Selector processes on classes of sets, Probab. Theory Related Fields (2006), 471–486.
28. M. Talagrand, Are many small sets explicitly small?, Proceedings of the 2010 ACM International Symposium on Theory of Computing, 2010, pp. 13–35.
29. M. Talagrand, Upper and lower bounds for stochastic processes, A Series of Modern Surveys in Mathematics, vol. 60, Springer, Heidelberg, 2014.
30. J. H. van Lint and R. M. Wilson, A course in combinatorics, second ed., Cambridge University Press, Cambridge, 2001.
31. R. M. Wilson, Nonisomorphic Steiner triple systems, Math. Z. (1973/74), 303–313.

Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA
E-mail address: [email protected]

Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA
E-mail address: [email protected]

Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA
E-mail address: [email protected]

Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA
E-mail address: [email protected]