A local lemma via entropy compression
Rogério G. Alves (Departamento de Matemática, UFOP, 35400-000 Ouro Preto - MG, Brazil), Aldo Procacci and Remy Sanchis (Departamento de Matemática, UFMG, 30161-970 Belo Horizonte - MG, Brazil)
October 8, 2018
Abstract
In the framework of the probabilistic method in combinatorics we present a new local lemma based on an entropy compression random algorithm, and we show through examples that it can efficiently replace the Moser-Tardos algorithmic version of the Lovász Local Lemma, as well as its Pegden extension, in a restricted setting where the underlying probability space of the bad events is generated by a finite family of i.i.d. uniform random variables. We then use this new lemma to improve the upper bound for the β-frugal vertex chromatic index of a bounded degree graph recently obtained by Ndreca et al. (Eur. J. Comb. 33: 592-609, 2012).

Keywords: Probabilistic Method in combinatorics; Lovász Local Lemma; Randomized algorithms.
MSC numbers : 05D40, 68W20.
The Lovász Local Lemma (LLL), originally formulated by Erdős and Lovász [9], is a powerful tool in the framework of the probabilistic method in combinatorics to prove the existence of combinatorial objects with certain desirable properties (such as a proper coloring of the edges of a graph). The basic idea of the LLL is that the existence of the combinatorial object under analysis is guaranteed once a certain event (the good event) in some probability space has a non-zero probability to occur. In turn, the occurrence of this good event is guaranteed if a family of (bad) events do not occur. The LLL then provides a sufficient condition on the probabilities of these bad events to ensure that, with non-zero probability, none of them occur.
In order to state the Lovász Local Lemma we need to introduce the concept of dependency graph of a family of events. Let G = (V, E) be a graph with vertex set V and edge set E. Two vertices v, v′ ∈ V are adjacent if {v, v′} ∈ E. Given v ∈ V, let Γ*_G(v) be the set of vertices of G adjacent to v (i.e. the neighborhood of v). We denote Γ_G(v) = Γ*_G(v) ∪ {v}.

Definition 1.1
Given a family of events A in some probability space, a dependency graph G for this family is a graph with vertex set A and edge set E such that each event A ∈ A is independent of the σ-algebra generated by the collection of events A \ Γ_G(A).

Given a family of events A in some probability space, let Ā denote the complement event of A ∈ A, so that ∩_{A∈A} Ā is the event that none of the events in the family A occur. Hereafter the product over the empty set is equal to one; if n is an integer then [n] denotes the set of integers {1, 2, ..., n}, and if X is a set then |X| denotes its cardinality.

Theorem 1.2 (Lovász Local Lemma)
Let A be a finite collection of events in a probability space. Let G be a dependency graph for the family A. Let μ = {μ_A}_{A∈A} be a collection of real numbers in [0, +∞). If, for each A ∈ A,

    Prob(A) ≤ μ_A / Φ_A(μ)     (1.1)

with

    Φ_A(μ) = ∏_{A′∈Γ_G(A)} (1 + μ_{A′})     (1.2)

then Prob(∩_{A∈A} Ā) > 0, i.e. the probability that none of the events in the family A occur is strictly positive.

The popularity of the Lovász Local Lemma is basically due to the fact that it can be implemented for a wide class of problems in combinatorics, in such a way that condition (1.1), once a few parameters have been suitably tuned, can be easily checked.
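To make condition (1.1) concrete, consider the symmetric situation in which every event has probability at most p and every neighborhood Γ_G(A) contains at most D events. Taking all the μ_A equal, (1.1) reduces to a one-line check; the following minimal sketch (the helper name and the toy values are ours, not the paper's) illustrates it:

```python
# Symmetric check of condition (1.1): with mu_A = mu for every A and
# |Gamma_G(A)| <= D, condition (1.1) reads p <= mu / (1 + mu)**D,
# and the right-hand side is maximized over mu at mu = 1/(D - 1).

def lll_condition_holds(p, D, mu):
    """Symmetric form of condition (1.1); hypothetical helper name."""
    return p <= mu / (1 + mu) ** D

D = 5                          # size of Gamma_G(A), the event A included
mu = 1.0 / (D - 1)             # optimal uniform choice of the parameter
p_max = mu / (1 + mu) ** D     # largest probability passing the check

assert lll_condition_holds(p_max, D, mu)
assert not lll_condition_holds(2 * p_max, D, mu)
```

With this choice of μ the largest admissible p behaves, for large D, like 1/(eD), the familiar symmetric LLL threshold.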
Recently a surprising connection between the LLL and the cluster expansion of the abstract polymer gas, pointed out by Scott and Sokal [22], has attracted the interest of several researchers in the area of statistical mechanics. More precisely, Scott and Sokal showed that the LLL can be viewed as a reformulation of the Dobrushin criterion [8] for the convergence of the cluster expansion of the hard-core lattice gas on a graph G. Two years after the work of Scott and Sokal, Fernández and Procacci [11] obtained a sensible improvement of the Dobrushin criterion via cluster expansion methods. This result has then been used by Bissacot et al. [4] to obtain an improved version of the LLL. To state this improved version of the LLL we first need the following definition. Given a graph G, a set of vertices Y of G is independent if no edge e = {v, v′} of G is such that v ∈ Y and v′ ∈ Y.

Theorem 1.3 (Cluster Expansion Local Lemma (CELL))
Under the same hypotheses of Theorem 1.2, if, for each A ∈ A,

    Prob(A) ≤ μ_A / Ψ_A(μ)     (1.3)

with

    Ψ_A(μ) = Σ_{Y⊆Γ_G(A), Y independent} ∏_{A′∈Y} μ_{A′}     (1.4)

then Prob(∩_{A∈A} Ā) > 0.

Theorem 1.3 improves on Theorem 1.2, since for any A ∈ A

    Φ_A(μ) = ∏_{A′∈Γ_G(A)} (1 + μ_{A′}) = Σ_{Y⊆Γ_G(A)} ∏_{A′∈Y} μ_{A′} ≥ Σ_{Y⊆Γ_G(A), Y indep. in G} ∏_{A′∈Y} μ_{A′} = Ψ_A(μ)

Remark. In general it might not be easy to compute the function Ψ_A(μ) defined in (1.4). However, when Γ_G(A) is the union of m cliques c_1(A), ..., c_m(A), then setting

    Ξ_A(μ) = ∏_{i=1}^{m} [1 + Σ_{A′∈c_i(A)} μ_{A′}]     (1.5)

we have

    Ψ_A(μ) = Σ_{Y⊆Γ_G(A), Y independent} ∏_{A′∈Y} μ_{A′} ≤ ∏_{i=1}^{m} [1 + Σ_{A′∈c_i(A)} μ_{A′}] = Ξ_A(μ)     (1.6)

The above remark allows us to state the so-called "clique approximation" of the CELL, which will be explicitly used in Section 2.1 ahead.

Theorem 1.4 (Clique approximation of CELL)
Under the same hypotheses of Theorem 1.2, suppose that the neighborhood Γ_G(A) in G is the union of m cliques c_1(A), ..., c_m(A). If, for each A ∈ A,

    Prob(A) ≤ μ_A / Ξ_A(μ)     (1.7)

where Ξ_A(μ) is defined in (1.5), then Prob(∩_{A∈A} Ā) > 0.

Theorem 1.4 is stronger than Theorem 1.2 and much easier to implement than Theorem 1.3. (Theorem 1.4 can, however, be weaker than the original LLL of Theorem 1.2 when the identified cliques c_1(A), ..., c_m(A) overlap too much; this happens for example in the case of perfect and separating hash families, see [20].) So much so that, as far as we know, Theorem 1.4 is until today the only tool that has been used to improve on previous bounds obtained via the LLL, with the exception of [20], where the full CELL has been used to improve bounds for hash families. Namely, via Theorem 1.4, better estimates for Latin transversals (see [4]) and for several chromatic indices of bounded degree graphs (see [18] and [5]) have been obtained.

Despite its popularity, the LLL has been the object of a recurring criticism concerning its inherently non-constructive character. Specifically, the LLL provides sufficient conditions for the probability that none of the undesirable events in some suitable probability space occur to be strictly positive, and this implies the existence of at least one outcome in the probability space which realizes the occurrence of the "good" event. However the LLL, in its original version of Theorem 1.2, as well as in the improved version of Theorem 1.3, does not provide any algorithm able to find such a configuration. The efforts to devise an algorithmic version of the LLL, going back to the work of Beck [3], Alon [1] and others (see e.g. the reference list in [16]), finally culminated with the breakthrough work by Moser and Tardos [15, 17], who presented a fully algorithmic version of the LLL in a specific "variable setting" which, however, covers basically all known applications of the LLL.
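Before moving to the variable setting, the gain of Ψ_A over Φ_A can be seen concretely by brute force on a tiny dependency graph. The sketch below (a toy instance of our own choosing, not from the paper) enumerates the independent subsets of Γ_G(A) on a 4-cycle with uniform μ_A = 1/2:

```python
import math
from itertools import combinations

# Toy comparison of Phi_A (1.2) and Psi_A (1.4) on a 4-cycle dependency
# graph with uniform mu_A = 1/2.  Psi_A sums only over the independent
# subsets of Gamma_G(A), so Psi_A <= Phi_A and condition (1.3) is easier
# to satisfy than (1.1).

nbrs = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # dependency graph C_4
mu = {A: 0.5 for A in nbrs}

def independent(Y):
    return all(b not in nbrs[a] for a, b in combinations(Y, 2))

A = 0
Gamma = sorted(nbrs[A] | {A})                  # Gamma_G(A) = Gamma* union {A}
Phi = math.prod(1 + mu[B] for B in Gamma)
Psi = sum(math.prod(mu[B] for B in Y)
          for r in range(len(Gamma) + 1)
          for Y in combinations(Gamma, r) if independent(Y))

assert Psi < Phi                               # here Psi = 2.75 < Phi = 3.375
```

Here Γ_G(0) = {0, 1, 3}, whose independent subsets are ∅, {0}, {1}, {3} and {1, 3}, giving Ψ = 2.75 against Φ = 1.5³ = 3.375.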
The variable setting. Let V be a finite family of mutually independent random variables and let Ω be the probability space determined by these variables. As usual, if X ⊂ V, then Ω|_X will denote the restriction of the sample space Ω to the variables in X and, if ω ∈ Ω, we denote by ω|_X the restriction (or projection) of the configuration ω to the restricted space Ω|_X. Let A be a finite family of events on Ω, so that each A ∈ A has a given probability Prob(A) to occur and depends in principle on all variables in V. Moser and Tardos made the critical assumption that each A ∈ A actually depends only on some subset of the random variables of the family V, and they denoted by vbl(A) the minimal (with respect to inclusion) subset of V on which A depends. In other words, to decide whether or not the event A occurs, it is sufficient to look only at the values of the variables in vbl(A). In particular, the event A is independent of the σ-algebra generated by the variables in V \ vbl(A). Since the variables in V are assumed to be mutually independent, any two events A, A′ ∈ A such that vbl(A) ∩ vbl(A′) = ∅ are necessarily independent. Therefore the family A has a natural dependency graph, i.e. the graph G with vertex set A and edge set E constituted by the pairs {A, A′} ⊂ A such that vbl(A) ∩ vbl(A′) ≠ ∅. In this framework Moser and Tardos defined the following algorithm.

MT-Algorithm.
- Step 0. Choose a random evaluation of all the variables in V.
- Step i.
If some of the events of the family A occur, select one of them (at random or according to some deterministic rule), say A, and take a new evaluation (resampling) only of the variables in vbl(A), keeping unchanged all the other variables in V \ vbl(A).

The algorithm stops when an evaluation of the variables in V is reached such that none of the events in the family A occur.

Within this "variable setting" Moser and Tardos proved [17] an algorithmic version of the LLL identical to Theorem 1.2. They also proved that their MT-algorithm finds an evaluation of the variables in V such that none of the bad events of the family A occur in an expected time proportional to the number of events in the family A. This algorithmic version of the LLL proposed by Moser and Tardos has been very recently improved by Pegden [19] by replacing condition (1.1) with condition (1.8), using once again the connection with cluster expansion (see also [2] and [13]). Pegden's result is as follows.

Theorem 1.5 (Algorithmic CELL)
Let V be a finite set of mutually independent random variables and let Ω be the probability space generated by these variables. Let A be a finite set of events in Ω with each A ∈ A depending on a subset vbl(A) of the variables V. Let G be the natural dependency graph for the family A. Let μ = {μ_A}_{A∈A} be a collection of real numbers in [0, +∞). If, for each A ∈ A,

    Prob(A) ≤ μ_A / Ψ_A(μ)     (1.8)

with

    Ψ_A(μ) = Σ_{Y⊆Γ_G(A), Y independent} ∏_{A′∈Y} μ_{A′}     (1.9)

then there is an evaluation of the variables V such that none of the events in the family A occur, and the MT-algorithm finds such an evaluation in an expected total number of steps equal to Σ_{A∈A} μ_A.

1.4 The objective of the present paper

Theorem 1.5 is a significant advance, but it is not the end of the story. Due to the simplicity of the MT-algorithm, the question has been raised as to whether it is possible to design some more sophisticated and possibly more efficient algorithm, eventually depending on the specific problem treated, able to improve the bounds given by the LLL or the CELL. These ideas have been very recently developed by several authors (see e.g. [6, 10, 21] and references therein), and new algorithms have been implemented for some specific graph coloring problems to obtain bounds which are sensibly better than those given by the LLL and/or the CELL. In particular, Esperet and Parreau [10] devised an algorithm able to obtain new bounds for the chromatic indices of the acyclic edge coloring and star coloring of a graph with maximum degree ∆.
Moreover, the same authors suggested that their algorithm, based on entropy compression, is sufficiently general to treat most of the applications in graph coloring problems covered by the LLL.

In this note we show that if one further restricts the Moser-Tardos variable setting by imposing that the random variables of the set V are not only independent but also uniformly identically distributed and taking values in a finite set, then, by developing the ideas illustrated in [10], it is possible to formulate a criterion, which we have called the "Entropy Compression Lemma", which is an alternative to either the LLL or the CELL, and whose implementation for a huge class of problems in combinatorics (including problems not related to graph coloring) is, we think, even more straightforward than that of the LLL and CELL. We also show through known examples that this new lemma generally improves on the LLL and the CELL. We finally apply the Entropy Compression Lemma to obtain an upper bound for the β-frugal chromatic index of a graph with maximum degree ∆ which improves the best known bound given recently in [18].

The rest of the paper is organized as follows. In Section 2 we introduce the notation and present our main result (Theorem 2.3). In Section 3 we give the proof of Theorem 2.3. Finally, in Section 4 we present an application about coloring graphs frugally.

The i.i.d. uniform variable setting. As in the Moser-Tardos framework, let V = {x_1, ..., x_N} be a set of N ∈ ℕ mutually independent random variables on a common probability space. We further require that each random variable x_i in V takes values in the same set of integers [k] according to the uniform distribution. In other words, the variables {x_1, ..., x_N} are independent and identically distributed (i.i.d.). With these assumptions the sample space generated by the variables in V is Ω = [k]^N, so that an outcome ω ∈ Ω is just an ordered N-tuple ω = (k_1, ..., k_N) with k_i ∈ [k] for all i ∈ [N].
The probability space determined by the variables V is the triple (Ω, F, μ) where F = 2^Ω is the σ-algebra and, for any event A ∈ F, μ(A) = |A|/k^N is its probability (|A| is the number of outcomes which realize A).

We define the set E_A ⊂ ℕ as

    E_A = {l ∈ ℕ : ∃ A ∈ A s.t. |vbl(A)| = l}     (2.1)

and, for l ∈ E_A, we let A_l = {A ∈ A : |vbl(A)| = l}. That is, A_l is the subfamily of A containing all the events of size l, so that the family A is the disjoint union of the subfamilies {A_l}_{l∈E_A}. For y ∈ V, we further denote by A(y) the subfamily of A given by A(y) = {A ∈ A : y ∈ vbl(A)} and, shortly, for y ∈ V and l ∈ E_A, A_l(y) = A_l ∩ A(y). (The expression "entropy compression" in reference to the Moser-Tardos algorithm and its variants was probably first used by Terence Tao in a 2009 blog note about the Moser-Tardos method [23].)

Definition 2.1 Let A ∈ A be an event depending on the variables vbl(A) ⊂ V and let X ⊊ vbl(A). We say that X is a "seed" of A if, for any two realizations ω, ω̃ ∈ Ω|_{vbl(A)} of A, ω|_X = ω̃|_X implies ω = ω̃, and no Y ⊊ X has this property. In other words, a proper subset X ⊂ vbl(A) is called a seed for A if, for any realization ω ∈ Ω|_{vbl(A)} of A, the knowledge of ω|_X is sufficient to reconstruct uniquely the full realization ω, and no Y ⊊ X has this property. Note also that if A is elementary, then X = ∅ is the unique seed of A.

Definition 2.2
An event A ∈ A is said to be recordable if for all y ∈ vbl(A) there exists X ⊂ vbl(A) \ {y} which is a seed of A.

Remark. Note that any elementary event A is always recordable, since one can choose, for any y ∈ vbl(A), X = ∅ as a seed of A contained in vbl(A) \ {y}. Moreover, any event which is not recordable can always be seen as the union of smaller (in the sense of inclusion) recordable events (ultimately as a union of elementary events). We will thus suppose hereafter, without loss of generality, that all events in the family A are recordable.

We now need to introduce some further definitions which will allow us to describe a general algorithm, called the EC-algorithm (from Entropy Compression), able to find an evaluation of the variables V such that no bad event in the family A occurs. Given y ∈ V and A ∈ A(y), we define

    κ_y(A) = min{|X| : X ⊂ vbl(A) \ {y} and X is a seed of A}     (2.2)

Note that, for any y ∈ V and A ∈ A_l(y), κ_y(A) ∈ {0, 1, ..., l−1}, and the case κ_y(A) = 0 only happens when the event A is elementary. Let y ∈ V. We define K_l(y) and K_l as the following sets of integers:

    K_l(y) = {κ ∈ {0, 1, ..., l−1} : κ_y(A) = κ for some A ∈ A_l(y)},   K_l = ∪_{y∈V} K_l(y)     (2.3)

Note that K_l ⊂ {0, 1, ..., l−1}. Once we have the set of integers E_A and, for each l ∈ E_A, the set of integers K_l, we define, for x ∈ ℝ,

    φ_{E_A}(x) = 1 + Σ_{l∈E_A} Σ_{κ∈K_l} x^{l−κ}     (2.4)

We finally define, for κ ∈ K_l, the set A_l^κ(y) = {A ∈ A_l(y) : κ_y(A) = κ} and the integer

    d_l^κ = max_{y∈V} |A_l^κ(y)|     (2.5)

Therefore, the number of events A ∈ A of size l sharing a fixed variable y ∈ V and such that κ_y(A) = κ is bounded above by d_l^κ, uniformly in y ∈ V. We are now ready to describe the EC-algorithm which, as we will see in the next section, is the key ingredient in the proof of the main result of this note, i.e. Theorem 2.3 below.
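Definitions 2.1 and 2.2 and the quantity κ_y(A) in (2.2) can be checked by brute force on a small event. The sketch below (our own illustration, not from the paper) does so for the monochromatic event A = {x_1 = x_2 = x_3} with k = 3 colors: any single variable of a realization determines the whole realization, so every seed has size 1 and κ_y(A) = 1 for each y ∈ vbl(A):

```python
from itertools import combinations

# Brute-force seed finder for the event A = "all three variables equal",
# with variables indexed 0, 1, 2 and k = 3 colors.

k, vbl = 3, [0, 1, 2]
realizations = [(c, c, c) for c in range(1, k + 1)]   # outcomes realizing A

def is_seed(X):
    # the projection onto X must be injective on the realizations of A,
    # and no proper subset of X may already have this property (minimality)
    inj = lambda S: len({tuple(w[i] for i in S) for w in realizations}) == len(realizations)
    return inj(X) and not any(inj(Y) for r in range(len(X))
                              for Y in combinations(X, r))

def kappa(y):
    # kappa_y(A) = smallest seed contained in vbl(A) \ {y}, as in (2.2)
    return min(len(X) for r in range(len(vbl))
               for X in combinations([v for v in vbl if v != y], r)
               if is_seed(X))

assert all(kappa(y) == 1 for y in vbl)    # A is recordable, with kappa = 1
```

The same brute force applies to any small event: list its realizations, then search for the minimal subsets whose projection is injective.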
Hereafter we will assume that a total order is fixed on the set of variables V as well as on the set of events A. We also choose, for each y ∈ V and A ∈ A(y), a unique subset G(A, y) ⊂ vbl(A) \ {y} such that G(A, y) is a seed of A and |G(A, y)| = κ_y(A) (if κ_y(A) = 0, then G(A, y) = ∅). We will often say that a variable x ∈ V is colored if a value in [k] has been assigned to x. In other words, to color a variable x ∈ V means to select for it a value s ∈ [k]. It is then clear that to uncolor a colored variable x means that the assigned value in [k] of the variable x is withdrawn and the variable is left undetermined.

EC-Algorithm.
- Step i. Let y be the smallest uncolored variable (in the total order chosen, i.e. with no assigned value) and take a random evaluation of the value of this variable y (i.e. color this variable).
  (i) If after the coloring of the variable y no bad event A ∈ A occurs, then go to step i + 1.
  (ii) If, on the contrary, after the coloring of the variable y some set of bad events, say S_i ⊂ A, occurs, then necessarily S_i ⊂ A(y). According to the total order previously fixed in the family A, select the event A which is the smallest in the set S_i. Such an event A will belong to the set A_l^κ(y) for some l ∈ E_A and some κ ∈ K_l. Now uncolor all variables in vbl(A) except the κ variables belonging to the set G(A, y) previously introduced (in other words, uncolor all the l − κ variables in vbl(A) \ G(A, y)). Then go to step i + 1.
- The algorithm stops when all variables have been evaluated (i.e. colored) and no event of the family A occurs.

We are now in a position to state the main theorem of this note.

Theorem 2.3 (Entropy Compression Lemma)
Let V be a finite set of i.i.d. uniform random variables taking values in [k]. Let A be a finite family of events depending on these variables. If

    k > [inf_{x>0} φ_{E_A}(x)/x] · max_{l∈E_A, κ∈K_l} {(d_l^κ)^{1/(l−κ)}}     (2.6)

then there is an evaluation of the variables V such that none of the events in the family A occur, and the EC-algorithm finds such an evaluation almost surely, in an expected running time linear in the number of variables.

Remark 1. Note that

    inf_{x>0} φ_{E_A}(x)/x = φ′_{E_A}(τ)     (2.7)

where τ is the unique positive root of the equation φ_{E_A}(x) − xφ′_{E_A}(x) = 0.

Remark 2. The EC-algorithm still finds an evaluation of the variables V such that none of the events in the family A occur even if we only demand k to be greater than or equal to (instead of strictly greater than) the r.h.s. of (2.6), but in this case we have no control on the expected running time.

2.1 Comparison and examples

It is instructive and illuminating, we think, to start by comparing Theorem 2.3 with the clique approximation of the algorithmic Cluster Expansion Local Lemma, namely Theorem 1.4. In fact, as we will see below, in the i.i.d. uniform variable setting the function Ξ_A(μ) defined in (1.5) can be computed explicitly, and it turns out to exhibit a structure such that the comparison between conditions (1.7) and (2.6) becomes manifest, pointing out very clearly how and when Theorem 2.3 beats Theorem 1.4 which, we repeat, has been the tool used in recent times to improve bounds previously obtained via the classical version of the LLL given by Theorem 1.2.

Entropy Compression Lemma versus Clique approximation of the CELL
For the benefit of the reader, we start by recalling that in the i.i.d. uniform variable setting we are given a finite set V of N i.i.d. uniform random variables taking values in [k] and a family of events A in the probability space determined by the variables V. Each event A ∈ A actually depends on a subset vbl(A) ⊂ V, and the dependency graph for the family A is the graph G with vertex set A and edge set E formed by those pairs {A, A′} such that vbl(A) ∩ vbl(A′) ≠ ∅. Let now E_A and K_l be the sets defined in (2.1) and (2.3) respectively.

Let l ∈ E_A. For any event A ∈ A_l let us define, recalling definition (2.2), the integer κ_A ∈ K_l as follows:

    κ_A = min_{y∈vbl(A)} κ_y(A)

Given (l, κ) ∈ E_A × K_l, let us refer to the events A ∈ A such that A ∈ A_l and κ_A = κ as "events of size l and type κ". If A is an event of size l and type κ then, by definition, there is a seed X of A such that |X| = κ and, recalling Definition 2.1, this immediately implies that

    Prob(A) ≤ 1/k^{l−κ}

Moreover, recalling definition (2.5), the integer d_l^κ is, for any (l, κ) ∈ E_A × K_l, an upper bound for the number of events of size l and of type κ containing a fixed variable. This implies that the neighborhood Γ_G(A) of an event A of size l and type κ is the union of l cliques (one clique for each variable in vbl(A)) and, for any m ∈ E_A and any λ ∈ K_m, each of these cliques contains at most d_m^λ events of size m and type λ.

In order to check condition (1.7), let {μ_{l,κ}}_{(l,κ)∈E_A×K_l} be a set of positive numbers and let us choose the set {μ_A}_{A∈A} in condition (1.7) as follows:

    μ_A = μ_{l,κ} whenever A ∈ A_l and κ_A = κ

In other words, we associate to all events of the same size l and of the same type κ the same parameter μ_{l,κ}. With these notations and definitions, the function Ξ_A(μ) defined in (1.5) satisfies

    Ξ_A(μ) ≤ [1 + Σ_{m∈E_A} Σ_{λ∈K_m} d_m^λ μ_{m,λ}]^l     (2.8)

Let us now set, for all (l, κ) ∈ E_A × K_l, μ_{l,κ} = x^{l−κ}/d_l^κ with x > 0. Then

    Ξ_A(μ) ≤ [1 + Σ_{m∈E_A} Σ_{λ∈K_m} d_m^λ μ_{m,λ}]^l = [1 + Σ_{m∈E_A} Σ_{λ∈K_m} x^{m−λ}]^l = [φ_{E_A}(x)]^l

where φ_{E_A}(x) is the very same function defined in the statement of Theorem 2.3. Therefore condition (1.8) of Theorem 1.5 is satisfied if the following inequality holds for all (l, κ) ∈ E_A × K_l:

    1/k^{l−κ} ≤ x^{l−κ} / (d_l^κ [φ_{E_A}(x)]^l)     (2.9)

Condition (2.9) above can be rewritten as follows:

    k ≥ max_{l,κ} [inf_{x>0} [φ_{E_A}(x)]^{l/(l−κ)}/x] (d_l^κ)^{1/(l−κ)}     (2.10)

Note that inequality (2.10) is notably similar to inequality (2.6) which, once satisfied, guarantees that the thesis of Theorem 2.3 is true. Since for all (l, κ, x) ∈ E_A × K_l × (0, ∞) we have that l/(l−κ) ≥ 1 and φ_{E_A}(x) >
1, it holds that

    [φ_{E_A}(x)]^{l/(l−κ)}/x ≥ φ_{E_A}(x)/x   for all (l, κ) ∈ E_A × K_l and all x > 0

Hence, for all (l, κ) ∈ E_A × K_l, we have

    inf_{x>0} [φ_{E_A}(x)]^{l/(l−κ)}/x ≥ inf_{x>0} φ_{E_A}(x)/x     (2.11)

and equality in (2.11) holds only if κ = 0. Therefore condition (2.6) is always better than condition (2.10), and these two conditions coincide only when K_l = {0} for all l. In other words, Theorem 2.3 is never beaten by Theorem 1.4, and Theorem 2.3 beats Theorem 1.4 every time the maximum of (d_l^κ)^{1/(l−κ)} is attained at a value (l, κ) with κ ≠ 0.

Let us conclude this section by illustrating how straightforwardly and how generally Theorem 2.3 can be applied, by considering three well known and, hopefully, pedagogical examples, two of which are not graph coloring problems.

Property B for m-uniform and n-regular hypergraphs

Consider a finite set V and let P(V) = {U ⊂ V : |U| ≥ 2}. A (loopless) hypergraph is a pair H = (V, E) where E ⊂ P(V); the elements of V are called vertices of the hypergraph and the elements of E are called edges of the hypergraph. A hypergraph H = (V, E) is m-uniform if every edge f ∈ E contains exactly m vertices, and it is n-regular if every vertex v ∈ V is contained in exactly n edges. A hypergraph H = (V, E) has property B if there is a coloring of V by two colors such that no edge f ∈ E is monochromatic; if H = (V, E) is m-uniform and n-regular and has property B, then we say that the pair (m, n) is good.

Using the LLL (i.e. Theorem 1.2), Erdős and Lovász [9] showed that an m-uniform and n-regular hypergraph has property B (i.e. the pair (m, n) is good) if

    2^{m−1} ≥ e[m(n−1) + 1]     (2.12)

which implies that the pair (9, 11) is good. (Actually the best result is due to McDiarmid [14], who proved that the pair (8, 12) is good by using in a very clever way a 'lopsided' variant of the Lovász Local Lemma.)

Let us now see what Theorem 2.3 gives. We color the vertices of the m-uniform and n-regular hypergraph H = (V, E) by choosing at random, independently and uniformly, from a set of two colors. We then have a set V = {x_v}_{v∈V} of i.i.d. uniform random variables, and each variable x_v takes two values corresponding to the two possible colors (i.e. k = 2). Given an edge f = {v_1, ..., v_m} ∈ E, let A_f be the event that all vertices in f receive the same color. In other words, A_f = {x_{v_1} = x_{v_2} = ··· = x_{v_m}}. We thus have a family of (bad) events A = {A_f}_{f∈E}. Theorem 2.3 tells us that when condition (2.6) is satisfied (with k = 2), then there is a two-coloring of the vertex set V of H such that none of the events in the family A occur, namely, there is a coloring such that no edge of H is monochromatic, i.e. H has property B.

Now observe that for any A_f ∈ A we have vbl(A_f) = {x_v}_{v∈f} and thus, recalling that H is m-uniform, |vbl(A_f)| = m. Therefore we simply have E_A = {m}. Moreover, for any v ∈ V and any A ∈ A_m(x_v) we have κ_{x_v}(A) = 1, and hence K_m = {1}. Therefore the function φ_{E_A}(x) defined in (2.4) is in the present case φ_{E_A}(x) = 1 + x^{m−1}, so that

    inf_{x>0} φ_{E_A}(x)/x = (m−1)/(m−2)^{(m−2)/(m−1)}     (2.13)

Finally, recalling that H is n-regular, the number d_m^1, which represents the maximum number of events of size m containing a fixed vertex (i.e. the number of edges of H containing a fixed vertex), is simply

    d_m^1 = n     (2.14)

We now have all the ingredients to check condition (2.6). Namely, there is a two-coloring of the vertex set V of H such that no edge is monochromatic as soon as

    2 ≥ [(m−1)/(m−2)^{(m−2)/(m−1)}] n^{1/(m−1)}     (2.15)

i.e., as soon as

    2^{m−1} ≥ [(m−1)^{m−1}/(m−2)^{m−2}] n     (2.16)

Condition (2.16) tells us that the pair (9, 12) is good. Finally, let us observe that using Theorem 1.4 judiciously one obtains a condition which is clearly worse than (2.16) but better than (2.12); in fact, it is sufficiently better than (2.12) to also allow the conclusion that the pair (9, 12) is good.
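The numbers appearing in this example can be double-checked numerically. The following sketch (ours, not part of the paper) verifies the closed form (2.13) against a direct minimization and tests conditions (2.12) and (2.16) at the pairs discussed above:

```python
import math

# Sanity check of the property-B computation: the closed form (2.13) for
# inf_{x>0} phi(x)/x with phi(x) = 1 + x**(m-1), and the comparison of
# conditions (2.12) and (2.16) at the pairs (9,11) and (9,12).

m = 9
phi = lambda x: 1 + x ** (m - 1)
numeric = min(phi(i / 10000) / (i / 10000) for i in range(1, 30000))
closed = (m - 1) / (m - 2) ** ((m - 2) / (m - 1))       # r.h.s. of (2.13)
assert abs(numeric - closed) < 1e-3

def lll_ok(n):   # condition (2.12) with m = 9
    return 2 ** (m - 1) >= math.e * (m * (n - 1) + 1)

def ec_ok(n):    # condition (2.16) with m = 9
    return 2 ** (m - 1) >= (m - 1) ** (m - 1) / (m - 2) ** (m - 2) * n

assert lll_ok(11) and not lll_ok(12)    # LLL: (9,11) good, but not (9,12)
assert ec_ok(12)                        # entropy compression: (9,12) good
```

For m = 9 and n = 12 the r.h.s. of (2.16) is about 244.5 ≤ 2⁸ = 256, while the r.h.s. of (2.12) is about 271.8 > 256.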
Independent sets. Let G be a graph with vertex set V, edge set E and maximum degree ∆. Consider a partition of the vertex set V of G as follows: V = ∪_{i=1}^n V_i with |V_i| ≥ k. We want to find the least k such that there exists an independent subset I of V with exactly one vertex in each V_i. Without loss of generality, we can assume that G is such that |V_i| = k for all sets V_i. The general case |V_i| ≥ k follows using the graph induced by G on a union of n subsets of cardinality k, each of them a subset of one V_i.

Let us explain the i.i.d. variable setting for the present problem. We start by choosing, for each i ∈ [n], a bijection f_i : [k] → V_i. Then consider the set of i.i.d. random variables V = {x_i}_{i∈[n]} taking values in [k], in such a way that when x_i takes the value j ∈ [k], the vertex v = f_i(j) is selected in the set V_i. Now let, for each {i, i′} ⊂ [n], A_{{i,i′}} be the following event in the probability space generated by the variables V:

    A_{{i,i′}} = {x_i = j, x_{i′} = j′ and {f_i(j), f_{i′}(j′)} ∈ E}

In other words, A_{{i,i′}} is the event that we select the vertex v in V_i and the vertex v′ in V_{i′} and {v, v′} is an edge of G. Consider now the family of events A = {A_{{i,i′}}}_{{i,i′}⊂[n]}. Clearly any outcome ω ∈ [k]^n such that none of the events in the family A occur gives an independent set of G.

In the present case we have vbl(A_{{i,i′}}) = {x_i, x_{i′}}, so any event in A has size l = 2. Moreover, since the A_{{i,i′}} are elementary events, we have κ_y(A) = 0 for any y ∈ V and any A ∈ A. This means that E_A = {2} and K_2 = {0}, so that

    inf_{x>0} φ_{E_A}(x)/x = 2

Finally, d_2^0 represents in the present case the number of events A_{{i,i′}} containing a fixed variable. Since a variable selects k vertices of G and each of these vertices is adjacent to at most ∆ other vertices of G, we get d_2^0 ≤ k∆.
Therefore condition (2.6) reads

    k ≥ 2(k∆)^{1/2}

which is to say k ≥ 4∆.

Note that in this example K_l = {0} for every l ∈ E_A. In the next example, where K_l ≠ {0} for some l ∈ E_A, Theorem 2.3 beats the CELL.

Van der Waerden numbers. Given two integers m and c, the van der Waerden number W(c, m) is the least integer N such that if the set {1, 2, ..., N} is colored using c colors then there is a monochromatic arithmetic progression with m terms. We have N i.i.d. random variables V = {x_1, ..., x_N}, each varying in the set [c]. Let P denote the set of all arithmetic progressions with m terms in [N] and, for p ∈ P, let A_p be the event that the m-term progression p is monochromatic. Consider now the family of bad events A = {A_p}_{p∈P}. All bad events are of size m, i.e. l = m, and clearly we have κ_y(A) = 1 for any y ∈ V and any A ∈ A; thus E_A = {m}, K_m = {1} and therefore

    inf_{x>0} φ_{E_A}(x)/x = (m−1)/(m−2)^{(m−2)/(m−1)}

Moreover we have d_m^1 ≤ m·⌊N/m⌋ ≤ N (this is a bound on the number of arithmetic progressions with m terms in [N] containing a fixed number). We now have all the ingredients to apply Theorem 2.3: there is a c-coloring of [N] with no monochromatic arithmetic progression with m terms as soon as

    c ≥ (m−1)/(m−2)^{(m−2)/(m−1)} · N^{1/(m−1)}

and hence

    W(c, m) > [(m−2)^{m−2}/(m−1)^{m−1}] · c^{m−1}

3 Proof of Theorem 2.3

We start by noting that |A_l^κ(y) ∩ S_i| ≤ d_l^κ (with d_l^κ the integer previously defined). Therefore the event A selected during procedure (ii) of the EC-algorithm described in the previous section is uniquely determined by a triple (l, κ, s) with l ∈ E_A, κ ∈ K_l and s ∈ [d_l^κ]. Observe also that at the end of procedure (ii) the variable y is always uncolored. The EC-algorithm at each step i selects at random a number in the list [k] and attributes this number to the smallest uncolored variable at step i.
Let us suppose that these values are selected sequentially from the entries of a large vector F_t ∈ [k]^t with t ∈ ℕ sufficiently large. Hence F_t = (f_1, ..., f_t) with f_j ∈ [k] for j = 1, ..., t, and at step i of the algorithm the variable y to be evaluated receives the value f_i ∈ [k], i.e. the entry i of F_t. We will show below that the vector F_t is uniquely determined by a "record" of the algorithm, namely a pair (R_t, C_t) where R_t is a string r_1 ··· r_i ··· r_t and C_t is a function from V to [k] ∪ {0}.

Definition of the string R_t = r_1 ··· r_i ··· r_t. Suppose we run the algorithm for t steps, selecting the values of the variables sequentially from the entries of a fixed vector F_t ∈ [k]^t. For each step i of the algorithm define r_i as follows:
- when at step i no bad event occurs, put r_i = 0;
- when at step i the bad event A ∈ S_i, uniquely determined by the triple (l, κ, s), is selected, then put r_i = (l, κ, s) (we recall: l ∈ E_A, κ ∈ K_l and s ∈ [d_l^κ]).

Definition of the function C_t. Denote by X_t ⊂ V the set of uncolored variables after step t (i.e. those whose values are still not assigned). If x ∈ V \ X_t, then its value is assigned after t steps and we call C_t(x) ∈ [k] its assigned value. Let C_t : V → [k] ∪ {0} be the function that assigns to each variable x ∈ V either the value C_t(x), if x ∈ V \ X_t, or 0, if x ∈ X_t. So the function C_t tells us which variables are fixed (and at which values) and which are not yet fixed after t steps of the algorithm.

Observe that both R_t and C_t are uniquely determined by F_t via the set of instructions of the algorithm. We call M the map F_t ↦ (R_t, C_t). The point is to prove that the converse is also true or, in other words, that M is an injection. This means that the knowledge of the record (R_t, C_t) permits to reconstruct uniquely F_t and thus all the t steps made by the algorithm.

Lemma 3.1
The set X_t is uniquely determined by the string R_t.

Proof: We prove the statement by induction on t. At step t = 1 the first variable x (in the total order chosen) is colored and we have R_1 = r_1 with either r_1 = 0, when no bad event happens at step 1, or r_1 = (1, 0, s), when a bad event A with vbl(A) = {x} happens at step 1 (thus A has size 1 and, being elementary, is such that κ_x(A) = 0, while s ∈ [k] indicates the value of the variable x which causes the occurrence of the event A). So X_1 = V \ {x} (when r_1 = 0) and X_1 = V (when r_1 = (1, 0, s)). Suppose now t ≥ 2. By the induction hypothesis X_{t−1} is uniquely determined by R_{t−1}. The smallest variable in the set X_{t−1}, call it y, is the variable that will be selected at step t. If r_t = 0, the selection of the variable y has produced no bad event and so X_t = X_{t−1} \ {y}. If r_t = (l, κ, s), then (l, κ, s) determines uniquely an event A among those of size l containing the variable y, and we know that all variables in vbl(A) have been uncolored except those in G(A, y) ⊂ vbl(A), so that we have X_t = X_{t−1} ∪ (vbl(A) \ G(A, y)). □ Lemma 3.2
For any t ∈ N, the map M that assigns to each vector F_t ∈ [k]^t the pair (R_t, C_t) is injective.

Proof: We prove by induction that the pair (R_t, C_t) uniquely determines F_t (in other words, we prove that |M^{−1}(R_t, C_t)| = 1). The claim is trivially true for t = 1. Indeed, given R_1 = r_1 with r_1 = 0, C_1(x) is the value attributed to the first variable x, so the (one-dimensional) vector F_1 = (f_1) has entry f_1 = C_1(x). On the other hand, given R_1 = r_1 with r_1 = (1, 0, s), we know that the event A = {x = s} has occurred and thus the (one-dimensional) vector F_1 = (f_1) has entry f_1 = s. Suppose now the claim true for t − 1. Namely, the knowledge of the pair (R_{t−1}, C_{t−1}) implies the knowledge of the vector F_{t−1}. Therefore we have to show that, given the pair (R_t, C_t), we are able to find the entry f_t of the vector F_t and the function C_{t−1} (so that (R_{t−1}, C_{t−1}) gives us all the remaining entries of the vector F_t). We know through Lemma 3.1 the set X_{t−1} and hence we know the smallest variable uncolored after t − 1 steps, call it y. We have to consider two cases: a) (R_t, C_t) is such that r_t = 0; b) (R_t, C_t) is such that r_t = (l, κ, s). Case a) is easy. Indeed, if r_t = 0 then the variable y is colored at step t, so f_t = C_t(y), while C_{t−1} is such that C_{t−1}(x) = C_t(x) for all x ∈ V \ {y} and C_{t−1}(y) = 0. Let us now consider case b), i.e. r_t = (l, κ, s). Recall that the triple (l, κ, s) uniquely determines an event A. For this event A, occurring after the coloring of the variable y at the beginning of step t, we know the subset G(A, y) ⊂ vbl(A) of the variables that will continue to stay colored after the conclusion of step t. Now we have C_t, so we know the values of the κ variables belonging to G(A, y). Since G(A, y) is a seed of A, the knowledge of the variables in G(A, y) (recall the definition given in (2.2)) allows us to deduce the values of all the other variables in vbl(A) \ G(A, y). Call k*_x the color of the variable x ∈ vbl(A) uniquely determined by the coloring of G(A, y). Then f_t = k*_y, while C_{t−1}(x) = k*_x when x ∈ vbl(A) \ (G(A, y) ∪ {y}) and C_{t−1}(x) = C_t(x) otherwise. □

Let us now denote by 𝓕_t the subset of [k]^t formed by all vectors F_t for which after t steps of the algorithm there still are uncolored variables (i.e. the algorithm does not stop at step t). In other words, let 𝓕_t be the set of vectors F_t such that C_t^{−1}(0) ≠ ∅.
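The injectivity of M can be checked concretely. Below is a small sketch for the illustrative edge-event instance (an occurring event is recorded as the edge itself rather than the triple (l, κ, s), and in this instance an already-colored variable is never uncolored again): running the algorithm while logging (F_t, R_t, C_t) and then decoding the record as in the proofs of Lemmas 3.1 and 3.2 recovers F_t exactly.

```python
import random

def run_and_record(n, k, edges, rng):
    """Run the EC coloring on the bad events "edge e is monochromatic"
    (l = 2, kappa = 1), logging the evaluations F_t together with the
    record (R_t, C_t).  For readability an occurring event is recorded
    as the edge itself instead of the triple (l, kappa, s)."""
    color = [0] * n                      # plays the role of C_t
    F, R = [], []
    while 0 in color:
        y = color.index(0)               # smallest uncolored variable
        f = rng.randint(1, k)            # next entry of F_t
        color[y] = f
        F.append(f)
        hit = next(((u, v) for (u, v) in edges
                    if y in (u, v) and 0 != color[u] == color[v]), None)
        R.append(hit)
        if hit is not None:
            color[y] = 0                 # uncolor vbl(A) \ G(A, y) = {y}
    return F, R, color

def decode(n, R, C):
    """Recover F_t from the record (R_t, C_t), following the inductive
    reconstruction of Lemmas 3.1 and 3.2 (here a colored variable is
    never uncolored again, so the colors read off C_t are also the
    colors present at every earlier step)."""
    X, F = set(range(n)), []             # X_0 = V
    for r in R:
        y = min(X)                       # variable selected at this step
        if r is None:                    # r_i = 0: y stays colored
            F.append(C[y])
            X.discard(y)
        else:                            # r_i = (l, kappa, s): edge event
            u, v = r
            F.append(C[v] if u == y else C[u])  # seed color determines f_i
    return F
```

The decoder never looks at the random values themselves, only at the record, which is exactly the content of Lemma 3.2.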
Clearly we have |𝓕_t| ≤ k^t and if we are able to prove that this inequality is strict, then the set [k]^t \ 𝓕_t is non-empty and for each vector F_t ∈ [k]^t \ 𝓕_t the algorithm stops. Let r ∈ [N]. We further consider the set 𝓕_t^r formed by all vectors F_t for which after t steps of the algorithm there are exactly r uncolored variables. In other words, let 𝓕_t^r be the set of vectors F_t such that |C_t^{−1}(0)| = r. Clearly 𝓕_t is the disjoint union of the family {𝓕_t^r}_{r∈[N]}. Let (𝓡_t, 𝓒_t) (resp. (𝓡_t^r, 𝓒_t^r)) be the set of all records produced with vectors in 𝓕_t (resp. 𝓕_t^r); in other words (𝓡_t, 𝓒_t) = M(𝓕_t) (resp. (𝓡_t^r, 𝓒_t^r) = M(𝓕_t^r)) is the image of 𝓕_t (resp. 𝓕_t^r) through the map M. We have, as a consequence of Lemma 3.2, the following proposition.

Proposition 3.3

|𝓕_t| ≤ (k + 1)^N ∑_{r=1}^N |𝓡_t^r|   (3.1)

Proof. Since 𝓕_t is the disjoint union of the family {𝓕_t^r}_{r∈[N]} we have that

|𝓕_t| = ∑_{r=1}^N |𝓕_t^r|   (3.2)

From Lemma 3.2 it follows that

|𝓕_t^r| ≤ |𝓡_t^r||𝓒_t^r| ≤ |𝓡_t^r||𝓒_t| ≤ |𝓡_t^r|(k + 1)^N   (3.3)

since 𝓒_t is contained in the set of all functions from V to [k] ∪ {0}, which has cardinality (k + 1)^{|V|} = (k + 1)^N. Putting (3.3) into (3.2), inequality (3.1) follows. □

The estimate of |𝓡_t^r|. A string w_1 . . . w_n with w_i ∈ {0, 1} is usually called a word on the alphabet {0, 1}. An initial segment of the word w_1 . . . w_n is a sub-string of w_1 . . . w_n of the form w_1 . . . w_i with 1 ≤ i ≤ n. A partial Dyck word is a word on the alphabet {0, 1} such that in any initial segment of the word the number of 0's is greater than or equal to the number of 1's. A Dyck word on the alphabet {0, 1} is a partial Dyck word with an equal number of 0's and 1's (hence a Dyck word always has an even number of letters). A partial Dyck word can be viewed as a path in Z² starting at the origin made by steps either (1, 1) or (1, −1) in such a way that the path stays in the first quadrant (i.e. the path never goes below the x axis). A Dyck word (of size 2n) is then a Dyck path which starts at the origin and ends at the point (2n, 0) of the x axis. A descent in a partial Dyck word is a maximal sequence of consecutive 1's. For S ⊂ N, let us denote by D_{t,r,S} (resp. D_{t,S}) the set of partial Dyck words with t 0's and t − r 1's (resp. Dyck words with t 0's and t 1's) whose descents have sizes in S. Let

F = {n ∈ N : there exist l ∈ E_A and κ ∈ K_l such that n = l − κ}   (3.4)

We now construct a (non-injective) map M : 𝓡_t^r → D_{t,r,F}. Definition 3.4
Let R_t = (r_1, . . . , r_t) ∈ 𝓡_t^r. We define the map M which associates to R_t = (r_1, . . . , r_t) the word R*_t = r*_1 . . . r*_t on the alphabet {0, 1} as follows. If r_i = 0 then put r*_i = 0. If r_i = (l, κ, s) then put r*_i = 0 1 · · · 1, i.e. a 0 followed by l − κ 1's. Lemma 3.5
For any R_t ∈ 𝓡_t^r, the word R*_t defined above is a partial Dyck word with t 0's and t − r 1's and descents with sizes in the set F defined in (3.4) above. In other words, if R_t ∈ 𝓡_t^r, then R*_t ∈ D_{t,r,F}.

Proof. Reading R*_t from left to right, every 0 corresponds to a variable that has been colored and every 1 corresponds to a variable that has been uncolored because some bad event occurred. So the number of 0's is t. Let x be the number of 1's. This number represents by construction the total number of variables that have been uncolored during the run of the algorithm. Thus t − x = r, i.e. x = t − r. The string is by construction a partial Dyck word, since we cannot uncolor more variables than the number of colored variables. Finally, again by construction, the descents have sizes l − κ ∈ F.   □

Lemma 3.6 The pre-image of a string R*_t under the map M has cardinality less than or equal to

[max_{l,κ} {(d_l^κ)^{1/(l−κ)}}]^{t−r} ∏_{j∈F} m_j^{D_j(R*_t)}

where D_j(R*_t) is the number of descents in R*_t of size l − κ = j and m_j is the multiplicity of j = l − κ, i.e. m_j = |{(l, κ) : l ∈ E_A, κ ∈ K_l, l − κ = j}|.

Proof. We have to consider the information we lose in the map M. For each descent of size j in R*_t, we have to specify the kind of event that generated this descent; for this we have m_j possibilities. This accounts for the ∏_{j∈F} m_j^{D_j(R*_t)} factor. Once the kind of event of a given descent of size j is specified, i.e. the pair (l, κ), we have d_l^κ possible events that generated this descent. This accounts for a factor at most

∏_{j∈F} [max_{l,κ: l−κ=j} d_l^κ]^{D_j(R*_t)} = ∏_{j∈F} [max_{l,κ: l−κ=j} (d_l^κ)^{1/(l−κ)}]^{jD_j(R*_t)} ≤ [max_{l,κ} {(d_l^κ)^{1/(l−κ)}}]^{t−r}

since ∑_{j∈F} jD_j(R*_t) = t − r.   □

Remark. Note that

φ_E(x) = 1 + ∑_{l∈E} ∑_{κ∈K_l} x^{l−κ} = 1 + ∑_{j∈F} m_j x^j   (3.5)

We now have the following lemmas (Lemma 6 and Lemma 7 in [10]). Lemma 3.7
The number of partial Dyck words in D_{t,r,F} is less than the number of Dyck words in D_{t+r(s−1),F}, where s = min{F \ {0}}.

Lemma 3.8 There is a bijection between Dyck words in D_{t+r(s−1),F} and plane rooted trees on t + r(s − 1) + 1 vertices whose degrees (numbers of children of the vertices) belong to F ∪ {0}.

Using these last three lemmas we arrive at the following proposition.
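The encoding of Definition 3.4 and the prefix condition of Lemma 3.5 are easy to make concrete. In the following minimal sketch a record entry is simplified to None for r_i = 0 or to a pair (l, κ) for r_i = (l, κ, s); the index s does not affect the resulting word.

```python
def record_to_word(record):
    """Definition 3.4 (sketch): each step contributes one 0, and a step
    at which an event of size l with seed size kappa was selected
    contributes l - kappa subsequent 1's."""
    word = []
    for r in record:
        word.append(0)
        if r is not None:
            l, kappa = r
            word.extend([1] * (l - kappa))
    return word

def is_partial_dyck(word):
    """Lemma 3.5: every initial segment of the word has at least as many
    0's as 1's (the walk taking step +1 on a 0 and -1 on a 1 never goes
    below the x axis)."""
    height = 0
    for w in word:
        height += 1 if w == 0 else -1
        if height < 0:
            return False
    return True
```

For instance, the record (0, (2, 1), 0, (3, 1), 0) is encoded as the word 00100110, which has t = 5 zeros and t − r = 3 ones, so r = 2 variables are left uncolored after these five steps.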
Proposition 3.9
Let 𝓣_n denote the set of all plane rooted trees with n vertices. Then we have

|𝓡_t^r| ≤ [max_{l,κ} {(d_l^κ)^{1/(l−κ)}}]^{t−r} ∑_{T∈𝓣_{t+r(s−1)+1}} ∏_{j≥0} ϕ_j^{D_j(T)}   (3.6)

where D_j(T) denotes the number of vertices with j children in the tree T and

ϕ_j = 1 if j = 0, ϕ_j = m_j if j ∈ F, ϕ_j = 0 otherwise   (3.7)

Proof. The proof follows from the three lemmas above and the observation that

∑_{T∈𝓣_{t+r(s−1)+1}} ∏_{j≥0} ϕ_j^{D_j(T)} = ∑_{R*∈D_{t+r(s−1),F}} ∏_{j∈F} m_j^{D_j(R*)}   □

We finally use the following theorem (Theorem 5 in [7]). Theorem 3.10
Let ϕ(x) = ∑_{j≥0} ϕ_j x^j, let R be the convergence radius of ϕ(x) and let τ be the first root in [0, R) of the equation xϕ′(x) − ϕ(x) = 0. Set d = gcd{j > 0 : ϕ_j > 0}. Then

∑_{T∈𝓣_n} ∏_{j≥0} ϕ_j^{D_j(T)} = d [ϕ(τ)/(2πϕ″(τ))]^{1/2} [ϕ′(τ)]^n n^{−3/2} (1 + O(n^{−1})) ≤ C_ϕ [ϕ′(τ)]^n n^{−3/2}   (3.8)

with C_ϕ a constant.

Inserting now inequality (3.8) into (3.6) and recalling (3.5) we get

|𝓡_t^r| ≤ [max_{l,κ} {(d_l^κ)^{1/(l−κ)}}]^{t−r} C_ϕ [φ′_E(τ)]^{t+r(s−1)} (t + r(s − 1) + 1)^{−3/2}   (3.9)

where C_ϕ is the constant appearing in Theorem 3.10 and ϕ_j is defined as in (3.7). We are now in the position to end the proof of Theorem 2.3. Indeed, inserting (3.9) into (3.1) we get

|𝓕_t| ≤ C_ϕ (k + 1)^N ∑_{r=1}^N [max_{l,κ} {(d_l^κ)^{1/(l−κ)}}]^{t−r} [φ′_E(τ)]^{t+r(s−1)} (t + r(s − 1) + 1)^{−3/2} ≤ C(E, N, k) [max_{l,κ} {(d_l^κ)^{1/(l−κ)}} · φ′_E(τ)]^t t^{−3/2}

with C(E, N, k) ≤ C_ϕ N (k + 1)^N [φ′_E(τ)]^{N(s−1)}. Hence if

k ≥ max_{l,κ} {(d_l^κ)^{1/(l−κ)}} · φ′_E(τ)   (3.10)

we have, for t sufficiently large,

|𝓕_t| < k^t

i.e. the algorithm stops. Note that the condition (3.10) which ensures that the algorithm stops is exactly (2.6) with "greater than or equal to" replacing "strictly greater than". Now note that |𝓕_t|/k^t is the probability that after t steps the algorithm is still running. Let us suppose that inequality (3.10) holds strictly, i.e. let us suppose that there exists ε > 0 such that (φ′_E(τ)/k) · max_{l,κ} (d_l^κ)^{1/(l−κ)} ≤ 1 − ε. Then the probability P(t) that the algorithm is still running after t steps can be bounded above as follows:

P(t) ≤ C_ϕ (k + 1)^N [φ′_E(τ)]^{N(s−1)} (1 − ε)^t

Let t_0 be the solution of the equation

C_ϕ (k + 1)^N [φ′_E(τ)]^{N(s−1)} (1 − ε)^{t_0} = 1

Remark that t_0 is linear in N and observe that P(t_0 + t) ≤ (1 − ε)^t = e^{−t|ln(1−ε)|}. Therefore, if inequality (3.10) holds strictly, the expected running time T of the EC-algorithm is bounded by

T ≤ t_0 + 1/|ln(1 − ε)|   (3.11)

In conclusion, if

max_{l,κ} {(d_l^κ)^{1/(l−κ)}} · φ′_E(τ) ≤ (1 − ε)k

for some ε >
0, then the expected running time of the algorithm is linear in N.

Given a graph G = (V, E), a coloring of the vertices of G is proper if no two adjacent vertices receive the same color, and a proper vertex coloring of G is β-frugal if any vertex has at most β members of any color class in its neighborhood. The minimum number of colors required so that a graph G has at least one β-frugal proper vertex coloring is called the β-frugal chromatic number of G and will be denoted by χ^β(G). Theorem 4.1
Let G = (V, E) be a graph with maximum degree ∆. Then, for any β ∈ [∆ − 1], we have

χ^β(G) ≤ q(β) ∆ max{1, (∆/β!)^{1/β}}   (4.1)

with

q(β) = inf_{x>0} [(1 + x + x^β)/x]

Proof. Let G = (V, E) be a graph with vertex set V, edge set E and maximum degree ∆. Suppose that we color the vertices of G by choosing colors at random, independently and uniformly, from a set C of k colors, and choose a bijection f : [k] → C. We are thus in the i.i.d. variable setting: the set of i.i.d. random variables V = {x_v}_{v∈V} is indexed by the set of the vertices V of G and each variable x_v takes values in [k] in such a way that, for j ∈ [k], x_v = j means that the vertex v is colored with the color f(j) ∈ C. Given v ∈ V, we denote by H_v^β the set of all subsets of the neighborhood Γ*(v) of v with exactly β + 1 vertices. In other words,

H_v^β = {Y ⊂ Γ*(v) : |Y| = β + 1}

So an element h_v^β ∈ H_v^β is a set of β + 1 vertices of Γ*_G(v) (i.e. h_v^β = {v_1, v_2, . . . , v_{β+1}} ⊂ Γ*_G(v)).
- Given e = {u, v} ∈ E, let A_e be the event that u and v receive the same color (i.e. e is monochromatic).
- Given v ∈ V and h_v^β ∈ H_v^β, let A_{h_v^β} be the event that all the vertices of the set h_v^β receive the same color (i.e. h_v^β is monochromatic).
We thus have a family of (bad) events A = {A_e}_{e∈E} ∪ {A_{h_v^β}}_{v∈V, h_v^β∈H_v^β}. Theorem 2.3 tells us that when condition (2.6) is satisfied it is possible to find a coloring (i.e. an outcome in the sample space generated by the variables V) avoiding all bad events of the family A, i.e. it is possible to find a β-frugal coloring of the vertex set V of G. Therefore, if k is greater than or equal to the r.h.s. of (2.6), then G admits a β-frugal coloring and hence χ^β(G) ≤ k. To check condition (2.6) let us observe that:
- If e = {v, w} ∈ E is an edge of G, then vbl(A_e) = {x_v, x_w} and thus |vbl(A_e)| = 2;
- If h_v^β = {v_1, . . . , v_{β+1}} is a subset of Γ*(v) for some v ∈ V, then vbl(A_{h_v^β}) = {x_{v_1}, . . . , x_{v_{β+1}}} and thus |vbl(A_{h_v^β})| = β + 1.
Therefore E_A = {2, β + 1}, with A_2 = {A_e}_{e∈E} and A_{β+1} = {A_{h_v^β}}_{v∈V, h_v^β∈H_v^β}. Moreover:
- for any v ∈ V and for any A ∈ A_2(x_v) we have that κ_{x_v}(A) = 1 and hence K_2 = {1};
- for any v ∈ V and for any A ∈ A_{β+1}(x_v) we have that κ_{x_v}(A) = 1 and hence K_{β+1} = {1}.
Therefore the function φ_{E_A}(x) defined in (2.4) is in the present case

φ_{E_A}(x) = 1 + x + x^β

Let us pose

q(β) = inf_{x>0} [(1 + x + x^β)/x]   (4.2)

Let us now estimate the numbers d_2 and d_{β+1}. Since d_2 represents the maximum number of edges of G containing a fixed vertex, we have

d_2 ≤ ∆   (4.3)

Concerning the estimate of d_{β+1}, observe that a vertex w ∈ V has at most ∆ neighbors {v_1, . . . , v_∆} and therefore w belongs to at most ∆ distinct neighborhoods Γ*(v_1), . . . , Γ*(v_∆), and in each of these neighborhoods Γ*(v_i) the vertex w can be contained in at most (∆ choose β) sets of type h_{v_i}^β = {v_{i_1}, v_{i_2}, . . . , v_{i_{β+1}}}. Therefore

d_{β+1} ≤ ∆ (∆ choose β) ≤ ∆^{β+1}/β!   (4.4)

We can now check condition (2.6). Namely, there is a β-frugal coloring of G as soon as

k ≥ q(β) max{(d_2)^{1/(2−1)}, (d_{β+1})^{1/(β+1−1)}}   (4.5)

Due to inequalities (4.3) and (4.4), condition (4.5) is surely satisfied if

k ≥ q(β) max{∆, ∆^{(β+1)/β}/(β!)^{1/β}}   (4.6)

and hence

χ^β(G) ≤ q(β) max{∆, ∆^{(β+1)/β}/(β!)^{1/β}}   (4.7)

which is the bound (4.1). □

The bound (4.1) represents an improvement on the previous best bound for the β-frugal chromatic number of a graph with maximum degree ∆ given in [18] (see Sec. 2.6, formula (2.18) in [18], and compare q(β) defined in (4.2) above with the functions k_1(β) and k_2(β) there defined: one can check that q(β) ≤ min{k_1(β), k_2(β)}). In particular, for the case of 2-frugal coloring (a.k.a.
linear coloring) we get

χ²(G) ≤ (3/√2) ∆^{3/2}

In [24] it is proved that ∆^{3/2} is the correct order: by an explicit example the author proves that χ²(G) ≥ ∆^{3/2}/(6√3), while his upper bound (obtained via the LLL) and the improved upper bound given in [18] both have larger constants in front of ∆^{3/2}.

Acknowledgments
A.P. and R.S. have been partially supported by the Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG - Programa de Pesquisador Mineiro).
References [1] Alon, N.:
A parallel algorithmic version of the local lemma, Random Structures and Algorithms 2, n. 4, 367-378 (1991).
[2] Alves, R. G.; Procacci, A.: Witness trees in the Moser-Tardos algorithmic Lovász Local Lemma and Penrose trees in the hard-core lattice gas, Journal of Statistical Physics, 877-895 (2014).
[3] Beck, J.: An Algorithmic Approach to the Lovász Local Lemma, Random Structures and Algorithms 2, n. 4, 343-365 (1991).
[4] Bissacot, R.; Fernández, R.; Procacci, A.; Scoppola, B.: An Improvement of the Lovász Local Lemma via Cluster Expansion, Combinatorics, Probability and Computing 20, n. 5, 709-719 (2011).
[5] Böttcher, J.; Kohayakawa, Y.; Procacci, A.: Properly coloured copies and rainbow copies of large graphs with small maximum degree, Random Structures and Algorithms, n. 4, 425-436 (2012).
[6] Dujmović, V.; Joret, G.; Kozik, J.; Wood, D. R.: Nonrepetitive colouring via entropy compression, to appear in Combinatorica, DOI: 10.1007/s00493-015-3070-6.
[7] Drmota, M.: Combinatorics and asymptotics on trees, Cubo J. (2) (2004).
[8] Dobrushin, R. L.: Perturbation methods of the theory of Gibbsian fields, in P. Bernard (editor), Lectures on Probability Theory and Statistics, Lecture Notes in Mathematics Volume 1648, pp. 1-66, Springer-Verlag, Berlin (1996).
[9] Erdős, P.; Lovász, L.:
Lower Bounds on van der Waerden Numbers: Randomized and Deterministic-Constructive, The Electronic J. Combinatorics 18 (2011).
[13] Kolipaka, K. B. R.; Szegedy, M.: Moser and Tardos meet Lovász, Proceedings of the 43rd annual ACM Symposium on Theory of Computing (STOC), pages 235-244, ACM, New York, NY, USA (2011).
[14] McDiarmid, C.: Hypergraph coloring and the Lovász Local Lemma, Discrete Math. 167/168, 481-486 (1997).
[15] Moser, R. A.: A constructive proof of the Lovász Local Lemma, in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing (STOC), ACM, New York (2009).
[16] Moser, R. A.: Exact algorithms for constraint satisfaction problems, Diss. ETH Zürich, Nr. 20668, Logos Verlag, Berlin (2013).
[17] Moser, R. A.; Tardos, G.: A constructive proof of the general Lovász Local Lemma, J. ACM 57, Article 11, 15 pages (2010).
[18] Ndreca, S.; Procacci, A.; Scoppola, B.:
Improved bounds on coloring of graphs, European Journal of Combinatorics 33, n. 4, 592-609 (2012).
[19] Pegden, W.: An extension of the Moser-Tardos algorithmic local lemma, SIAM J. Discrete Math. 27, 911-917 (2013).
[20] Procacci, A.; Sanchis, R.: Perfect and separating hash families: new bounds via the algorithmic cluster expansion local lemma, Annales de l'Institut Henri Poincaré D: Combinatorics, Physics and their Interactions, to appear (2016).
[21] Przybyło, J.: On the Facial Thue Choice Index via Entropy Compression, Journal of Graph Theory, Issue 3, 180-189 (2014).
[22] Scott, A.; Sokal, A. D.: The repulsive lattice gas, the independent-set polynomial, and the Lovász local lemma, J. Stat. Phys. 118, no. 5-6, 1151-1261 (2005).
[23] Tao, T.: Moser's entropy compression argument, Terence Tao Blog post (2009).
[24] Yuster, R.: Linear coloring of graphs, Discrete Math. 185, 293-297 (1998).