Superfast Coloring in CONGEST via Efficient Color Sampling
Magnús M. Halldórsson∗   Alexandre Nolin∗

March 4, 2021
Abstract
We present a procedure for efficiently sampling colors in the CONGEST model. It allows nodes whose number of colors exceeds their number of neighbors by a constant fraction to sample up to Θ(log n) semi-random colors unused by their neighbors in O(1) rounds, even in the distance-2 setting. This yields algorithms with O(log∗ ∆) complexity for different edge-coloring, vertex coloring, and distance-2 coloring problems, matching the best possible. In particular, we obtain an O(log∗ ∆)-round CONGEST algorithm for (1 + ǫ)∆-edge coloring when ∆ ≥ log^{1+1/log∗ n} n, and a poly(log log n)-round algorithm for (2∆ − 1)-edge coloring in general.

Introduction

The two primary models of locality,
LOCAL and
CONGEST, share most of the same features: the nodes are connected in the form of an undirected graph, time proceeds in synchronous rounds, and in each round, each node can exchange different messages with each of its neighbors. The difference is that the messages can be of arbitrary size in
LOCAL, but only logarithmic in
CONGEST. A question of major current interest is to what extent message sizes matter in order to achieve fast execution.

Random sampling is an important and powerful principle with extensive applications to distributed algorithms. In its basic form, the nodes of the network compute their random samples and share them with their neighbors in order to reach collaborative decisions. When the samples are too large to fit in a single
CONGEST message, then the
LOCAL model seems to have a clear advantage. The goal of this work is to overcome this handicap and derive equally efficient
CONGEST algorithms, particularly in the context of coloring problems.

Graph coloring is one of the most fundamental topics in distributed computing. In fact, it was the subject of the first work on distributed graph algorithms by Linial [18]. The task is to either color the vertices or the edges of the underlying communication graph G so that adjacent vertices/edges receive different colors. The most basic distributed coloring question is to match what is achieved by a simple centralized algorithm that colors the vertices/edges in an arbitrary order. Thus, our primary focus is on the (∆ + 1)-vertex coloring and the (2∆ − 1)-edge coloring problems, where ∆ is the maximum degree of G.

∗ ICE-TCS & Department of Computer Science, Reykjavik University, Iceland. Partially supported by Icelandic Research Fund grant 174484-051.

Randomized distributed coloring algorithms are generally based on sampling colors from the appropriate domain. The classical and early algorithms for vertex coloring, e.g. [16, 1], involve sampling individual colors and operate therefore equally well in CONGEST. The more recent fast coloring algorithms, both for vertex [24, 6, 13, 2] and edge coloring [6], all involve a technique
of Schneider and Wattenhofer [24] that uses samples of up to a logarithmic number of colors. In fact, there are no published sublogarithmic algorithms (in n or ∆) for these coloring problems in CONGEST, while there are now poly(log log n)-round algorithms [6, 2, 8] in LOCAL. A case in point is the (2∆ − 1)-edge coloring problem for ∆ ≥ log^{1+1/log∗ n} n, which can be solved in only O(log∗ n) LOCAL rounds [6]. The bottleneck in
CONGEST is the sampling size of the Schneider-Wattenhofer protocol. We present here a technique for sampling a logarithmic number of colors and communicating them in only O(1) CONGEST rounds. We apply the technique to a number of coloring problems, allowing us to match in
CONGEST the best complexity known in
LOCAL.

The sampling technique is best viewed as making random choices with a limited amount of randomness. This is achieved by showing that sampling within an appropriate subfamily of all color samples can retain some of the useful statistical properties of a fully random sample. It is inspired by Newman's theorem in communication complexity [19], where dependence on shared randomness is removed through a similar argument.

We apply the sampling technique to a number of coloring problems where the nodes/edges to be colored have a large slack: the number of colors available exceeds by a constant fraction the number of neighbors. We particularly apply the technique to settings where the maximum degree ∆ is superlogarithmic (we shall assume ∆ = Ω(log^{1+1/log∗ n} n)).

We obtain a superfast O(log∗ ∆)-round algorithm for (2∆ − 1)-edge coloring when ∆ = Ω(log^{1+1/log∗ n} n). Independent of ∆, we obtain a poly(log log n)-round algorithm. This shows that coloring need not be any slower in CONGEST than in
LOCAL.

We obtain similar results for vertex coloring, for the same values of ∆ (∆ = Ω(log^{1+1/log∗ n} n)). We obtain an O(log∗ ∆)-round algorithm for (1 + ǫ)∆-coloring, for any ǫ >
0. For graphs that are locally sparse (see Sec. 2 for the definition), this gives a (∆ + 1)-coloring in the same time complexity. Matching results also hold for the distance-2 coloring problem, where nodes within distance 2 must receive different colors.
Related work

The literature on distributed coloring is vast and we limit this discussion to work that is directly relevant to ours, primarily randomized algorithms.

An edge coloring of a graph G corresponds to a vertex coloring of its line graph, whose maximum degree is 2∆(G) −
2. Therefore,
LOCAL algorithms for (∆ + 1)-vertex coloring yield (2∆ − 1)-edge coloring algorithms. In CONGEST, the situation is different: because of capacity restrictions, no single node can expect to learn the colors of all edges adjacent to a given edge. In fact, there are no published results on efficient edge-coloring algorithms in
CONGEST, to the best of our knowledge. (Fischer, Ghaffari and Kuhn [7] suggest in a footnote that their edge coloring algorithms, described and proven in LOCAL, actually work in CONGEST; this does not hold for their randomized edge-coloring result, which applies the algorithm of [6].)

A classical simple (probably folklore) algorithm for vertex coloring is for each vertex to pick in each round a color uniformly at random from its current palette, the colors that are not used on neighbors. Each node can be shown to become colored in each round with constant probability, and thus this procedure completes in O(log n) rounds, w.h.p. [16]. In fact, each round of this procedure reduces w.h.p. the uncolored degree of each vertex by a constant factor, as long as the degree is Ω(log n) [1]. Within O(log ∆) rounds we are then in the setting where the maximum uncolored degree of each node is logarithmic. This algorithm works in CONGEST as well as LOCAL, but does not immediately work for edge coloring in CONGEST, since it is not clear how to select a color uniformly at random from the palette of an edge.

Color sampling algorithms along a similar vein have also been studied for edge coloring [20, 10, 3], all running in O(log n) LOCAL rounds in general. Panconesi and Srinivasan [20] showed that one of the most basic algorithms finds a (1.
6∆ + log n)-edge coloring. Grable and Panconesi [10] showed that O(log log n) rounds suffice when ∆ = n^{Ω(1/log log n)}. Dubhashi, Grable and Panconesi [3] proposed an algorithm based on the Rödl nibble technique, where only a subset of the edges try a color in each round, and showed that it finds a (1 + ǫ)∆-edge coloring when ∆ = ω(log n).

Sublogarithmic round vertex coloring algorithms have two phases, where the first phase is completed once the uncolored degree of the nodes is low (logarithmic or polylogarithmic). Barenboim et al. [1] showed that within O(log log n) additional rounds, the graph is shattered: each connected component (induced by the uncolored nodes) is of polylogarithmic size. The default approach is then to apply fast deterministic algorithms. With recent progress on network decomposition [23, 8], as well as fast deterministic coloring algorithms [9], the low degree case can now be solved in poly(log log n) rounds.

Recent years have seen fast LOCAL coloring algorithms that run in sublogarithmic time. These methods depend crucially on a random sampling method of Schneider and Wattenhofer [24] where each node picks as many as log n colors at a time. The method works when each node has large slack; i.e., when the number of colors in the node's palette is a constant fraction larger than the number of neighbors (competing for those colors). This holds in particular when computing a (1 + ǫ)∆-coloring, for some ǫ >
0, which they achieve in O(log∗ ∆) rounds when ∆ ≥ log n.

In the (∆ + 1)-node coloring and (2∆ − 1)-edge coloring problems, nodes have no slack a priori. It turns out that such slack can sometimes be generated by a single round of color guessing. Suppose the graph is triangle-free, or more generally, locally sparse, meaning that the induced subgraph of each node has many non-adjacent pairs of nodes. Then, when each node tries a random color, each pair of non-adjacent common neighbors of v has a fair chance of being colored with the same color, which leads to an increase in the slack of v. As shown by Elkin, Pettie and Su [6] (with a longer history in graph theory, tracing back at least to Reed [22]), locally sparse graphs will have slack Ω(∆) after this single color sampling round. Line graphs are locally sparse graphs, and thus we obtain this way an O(log∗ ∆)-round algorithm for (2∆ − 1)-edge coloring in LOCAL for sufficiently large ∆. They further obtain a (1 + ǫ)∆-edge list coloring in the same time frame, using the nibble technique of [20].

This fast coloring of locally sparse graphs is also useful in (∆ + 1)-vertex coloring. Both the first sublogarithmic round algorithm of Harris, Schneider, Su [13] and the current fastest algorithm of Chang, Li, and Pettie [2] partition the graph into a sparse and a dense part, color the sparse part with a variation of the method of [24], and synchronize the communication within each cluster of the dense part to achieve fast coloring.

A distance-2 coloring is a vertex coloring such that nodes within distance at most 2 receive different colors. This problem in CONGEST shares a key property with edge coloring: nodes cannot obtain a full knowledge of their available palette, but they can try a color by asking their neighbors. A recent (∆ + 1)-distance-2 coloring algorithm of [11] that runs in O(log ∆) + poly(log log n) CONGEST rounds can be used to compute (2∆ − 1)-edge colorings within the same round complexity.

Intuition and preliminaries
Existing O(log∗ ∆) algorithms for the different coloring problems in LOCAL, such as those by Schneider and Wattenhofer [24], all involve sampling several colors in a single round. In such algorithms, the nodes try colors in a way that guarantees each color an independent, Ω(1) probability of success. While this probability of success is a given when all nodes try a single color, having each node try several colors in any given round could create more conflicts between colors and reduce the probability of success of any given one.

This issue is usually solved using slack, the difference between the number of colors unused by the neighbors of a node and how many of its neighbors are still uncolored. Put another way, slack is the number of colors that is guaranteed to be left untouched by your neighbors for all possible choices of your currently uncolored neighbors. Slack is a given when we allow more colors than each node has neighbors, and is otherwise easily generated in a locally sparse graph.

If the nodes are all able to try Θ(log n) colors in O(1) rounds, and all colors have an independent, Ω(1) probability of success, O(1) rounds suffice to color all nodes w.h.p. However, this is usually not immediately possible, unless all nodes have a large amount of slack from the beginning. The O(log∗ n) algorithms work through increasing the ratio of slack to uncolored degree, trying more and more colors as this ratio increases, allowing nodes to try Θ(log n) colors each with constant probability over the course of O(log∗ n) rounds. The speed comes from the fact that slack never decreases, while the uncolored degree of the nodes decreases with exponentially increasing speed as the nodes try more and more colors.

However, all these algorithms have nodes send Θ(log n) colors during the algorithm's execution, which requires Θ(log n · log ∆) bits, i.e., a minimum of Θ(log ∆) CONGEST rounds.
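As a concrete anchor for these quantities, here is a minimal sketch, in Python, of the palette ψ_v, uncolored degree d∗(v), and slack s(v) of a node under a partial coloring; the adjacency-list representation and function name are illustrative, not from the paper.

```python
def palette_and_slack(neighbors, colors, num_colors, v):
    """Compute psi_v (colors unused by v's neighbors), d*(v) (number of
    uncolored neighbors), and the slack s(v) = |psi_v| - d*(v) of node v.
    `colors` maps each node to its color, or None if still uncolored."""
    used = {colors[u] for u in neighbors[v] if colors[u] is not None}
    psi = set(range(num_colors)) - used
    d_star = sum(1 for u in neighbors[v] if colors[u] is None)
    return psi, d_star, len(psi) - d_star
```

Note that with (1 + ǫ)∆ available colors, every node has slack at least ǫ∆ from the start: each colored neighbor removes at most one color from ψ_v, so |ψ_v| − d∗(v) ≥ (1 + ǫ)∆ − d(v) ≥ ǫ∆.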
Our algorithms will also involve having each node try up to Θ(log n) colors, but without transmitting Θ(log n) arbitrary colors. While Θ(log n · log ∆) bits are needed to describe an arbitrary choice of Θ(log n) colors in a color space of size Θ(∆), being able to describe any choice of Θ(log n) colors can be unnecessary. To get intuition about this, consider the setting where all nodes have access to a shared source of randomness. When trying random colors, the nodes do not care about which specific set of colors they are trying; all that matters is that the colors they try are random and independent of what other nodes are trying.

With a shared source of randomness, instead of sending log ∆ bits to specify a color, a node can use the shared random source as a source of random colors and send indices of colors in the random source. If each random color has a chance ≥ p of having the properties needed to be tried, the index of the first satisfactory color will be of expected value O(1/p) and only take O(log(1/p)) bits to communicate. The nodes can also use O(log n) bits to indicate which of the first O(log n) colors in the random source they find satisfactory and decide to try. This technique allows the nodes to sample Θ(p log n) colors in a single round of CONGEST. The choices made by nodes are made independent by having the nodes use disjoint parts of the shared randomness (for example, each node might only use the bits at indices equal to its ID modulo n). This type of saving in the communication based on a shared source of randomness appears in several places in communication complexity, in particular in [14] where it is used with the Disjointness problem, and in the folklore protocol for Equality (e.g., Example 3.13 in [17]).

It is crucial in the above argument that all nodes have access to a shared source of randomness, as messages making references to the shared randomness lose their meaning without it.
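The index-into-shared-randomness trick can be sketched as follows; this is a toy Python model, where the seeding scheme (mixing the node ID into a shared seed) stands in for giving nodes disjoint views of the shared random stream, and all parameters are illustrative.

```python
import random

def shared_random_colors(shared_seed, node_id, count, color_space):
    """The first `count` colors that node `node_id` reads off the shared
    random source. Mixing the node ID into the seed models each node
    using its own disjoint portion of the shared stream."""
    rng = random.Random(shared_seed * 1_000_003 + node_id)
    return [rng.randrange(color_space) for _ in range(count)]

def encode_tried(colors, satisfactory):
    """One bit per index: which of the first `count` shared colors the
    node finds satisfactory and decides to try. O(log n) bits total,
    instead of log(color_space) bits per color."""
    return [1 if c in satisfactory else 0 for c in colors]

def decode_tried(shared_seed, node_id, count, color_space, bitmask):
    """A neighbor regenerates the same stream and recovers the tried
    colors from the bitmask alone."""
    colors = shared_random_colors(shared_seed, node_id, count, color_space)
    return [c for c, bit in zip(colors, bitmask) if bit]
```

The message carries no colors at all, only positions in a stream both parties can regenerate, which is exactly why it is meaningless without the shared source.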
Our goal will now be to remove this need for a shared source of randomness, taking inspiration from Newman's Theorem in communication complexity [19] (Theorem 3.14 in [17], Theorem 3.5 in [21]). It is not an application of it, however: contrary to the 2-party communication complexity setting, distributing a common random seed to all parties would require many rounds in our context, and the success of any node trying one or more colors is interrelated with the random choices of up to ∆ + 1 parties. Our contribution is best understood as replacing a fully random sample of colors by a pseudorandom one with appropriate statistical guarantees, whose proof of existence resembles the proof of Newman's Theorem. We do so in Section 3, and give multiple applications of this result in subsequent sections.

Our results rely heavily on the existence of a family of sets with the right properties, whose existence we prove by a probabilistic argument. We make frequent use of the Chernoff-Hoeffding bounds in this proof, as well as in other parts of the paper. We use a version of the bounds that holds for negatively associated random variables.
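A standard example of negative association is sampling without replacement: the membership indicators of a uniformly random k-subset of [n] are negatively associated rather than independent, yet their sum still concentrates as the Chernoff-Hoeffding bounds predict. A quick empirical check (all parameters arbitrary):

```python
import random

def intersection_sizes(n, k, T_size, trials, rng):
    """Sizes |S ∩ T| for `trials` uniformly random k-subsets S of [n],
    with T a fixed subset of size T_size. The k membership indicators
    are negatively associated, so Chernoff-Hoeffding applies to their
    sum even though they are not independent."""
    T = set(range(T_size))
    return [sum(1 for x in rng.sample(range(n), k) if x in T)
            for _ in range(trials)]

rng = random.Random(0)
sizes = intersection_sizes(1000, 100, 500, 2000, rng)
mean = sum(sizes) / len(sizes)   # expectation is k * |T| / n = 50
frac_close = sum(1 for x in sizes if 35 <= x <= 65) / len(sizes)
```

Here nearly all samples land within a constant factor of the expectation, which is the shape of guarantee used throughout the paper.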
Definition 2.1 (Negative association). The random variables X_1, . . . , X_n are said to be negatively associated if for all disjoint subsets I, J ⊆ [n] and all non-decreasing functions f and g,

E[f(X_i, i ∈ I) · g(X_j, j ∈ J)] ≤ E[f(X_i, i ∈ I)] · E[g(X_j, j ∈ J)].

Lemma 2.2 (Chernoff-Hoeffding bounds). Let X_1, . . . , X_n be n negatively associated random variables in [0, 1], X := Σ_{i=1}^n X_i their sum, and let the expectation of X satisfy µ_L ≤ E[X] ≤ µ_H. For 0 < ǫ < 1:

Pr[X > (1 + ǫ)µ_H] ≤ exp(−ǫ²µ_H/3),   (1)
Pr[X < (1 − ǫ)µ_L] ≤ exp(−ǫ²µ_L/2).   (2)

Negative association is a somewhat complicated-looking property, but it holds in simple scenarios. In particular it holds for balls and bins experiments [5, 4], such as when the random variables X_1, . . . , X_n correspond to sampling k elements out of n (i.e., when the random variables satisfy Pr[X_i = v_i, ∀i ∈ [n]] = 1/(n choose k) for all v ∈ {0, 1}^n with ‖v‖_1 = k). It also encompasses the usual setting where X_1, . . . , X_n are independent.

For ease of notation, we will use the shorthand [a, b]k to denote the interval [a · k, b · k], [a..b] to denote the set {a, . . . , b}, and [k] to denote the set {1, . . . , k}.

Throughout the paper we describe algorithms that try an increasing number of colors in a single round. This increase is much faster than exponential, and we use Knuth's up-arrow notation to denote it. In fact, the increase is as fast as the inverse of log∗, which already gives a sense of why our algorithms run in O(log∗ n) rounds.

Definition 2.3 (Knuth's up-arrow notation for tetration). For a ∈ R, b ∈ N, a ↑↑ b represents the tetration or iterated exponentiation of a by b, defined as: a ↑↑ b = 1 if b = 0, and a ↑↑ b = a^(a ↑↑ (b−1)) otherwise.

Throughout the paper, as we work on a graph G(V, E) of vertices V and edges E, we denote by n the number of vertices and by ∆ the maximum degree of the graph. The degree of a vertex v is denoted by d(v), its uncolored degree (how many of its neighbors are uncolored) by d∗(v).
The sparsity of v (Definition 2.4) is denoted by ζ(v), the palette of v (the set of colors not yet used by one of v's neighbors) by ψ_v, and its slack s(v) is defined as s(v) = |ψ_v| − d∗(v). Whenever we consider an edge-coloring problem, we will often work on the line graph and add an L subscript to indicate that we consider the same quantities but on L(G): the maximum degree of this graph is ∆_L = 2(∆ −
1), the degree of an edge is denoted by d_L(e), and so on.

Definition 2.4 (Sparsity). Let v be a node in the graph G(V, E) of maximum degree ∆, and let E[N(v)] be the set of edges between nodes of v's neighborhood N(v). The sparsity of v is defined as:

ζ(v) = (1/∆) · ( (∆ choose 2) − |E[N(v)]| ).

The sparsity is a measure of how many edges are missing out of all the edges that could exist in the neighborhood of a node. As an immediate property, ζ(v) is a rational number in the range [0, (∆ − 1)/2]: a value of 0 means that v's neighbors form a clique of ∆ nodes, while a value close to (∆ − 1)/2 means that v's neighborhood is sparse (a value of exactly (∆ − 1)/2 means that no two neighbors of v are connected to one another). A graph is said to be (1 − ǫ)-locally sparse iff its vertices are all of sparsity at least ǫ∆. A vertex v of sparsity ζ is equivalently said to be ζ-sparse.

Sparsity is of interest here for two reasons: first, because we know from a result of [6] that nodes receive slack proportional to their sparsity w.h.p. in just one round of all nodes trying a random color if ζ(v) ∈ Ω(log n) (Proposition 2.5), and second because the line graph is sparse by construction (Proposition 2.6), and therefore generating slack in it follows directly from Proposition 2.5.

Proposition 2.5 ([6], Lemma 3.1). Let v be a vertex of sparsity ζ and let Z be the slack of v after trying a single random color. Then, Pr[Z ≤ ζ/(4e)] ≤ e^{−Ω(ζ)}.

Proposition 2.6.
A node e of the line graph L(G) (i.e., an edge of G) has degree d_L(e) at most ∆_L = 2(∆ − 1), and the number of edges in its neighborhood E_{L(G)}[N(e) \ {e}] is at most (∆ − 1)², meaning e is (∆ − 2)/2-sparse, i.e., (∆_L − 2)/4-sparse.

We now introduce the tool that will allow us to sample and communicate Θ(log n) colors in O(1) CONGEST rounds with the right probabilistic guarantees. Let s be the number of elements we sample and k the size of the universe to be sampled from. If our goal was to be able to sample all random subsets of [k] of size s, we would need log (k choose s) bits to communicate our choice of subset. But our goal is to communicate less than this amount, so we instead consider a family of s-sized subsets of [k] such that picking one of those subsets at random has some of the probabilistic properties of sampling an s-sized subset of [k] uniformly at random. The family is much smaller than the set of all possible s-sized subsets of [k], which allows us to communicate a member of it in much less than log (k choose s) bits. We call the family of subsets a representative family, made of representative sets, and the probabilistic properties we maintain are essentially that:

• Every element of [k] is present in about the same number of sets.

• For any large enough subset T of [k], a random representative set intersects T in about the same number of elements as a fully random s-sized set would.

Crucially, the second property holds for a large enough arbitrary T, so we will be able to apply it even as T is dependent on the choices of other nodes in the graph, as long as the representative set is picked independently from T. T will typically be the palette of a node or edge, or the set of colors not tried by any neighbors of a node or edge. Being able to just maintain the two properties above is enough to efficiently adapt many LOCAL algorithms that rely on communicating large subsets of colors to the
CONGEST setting.
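Before the formal definition, here is a toy Python version of such a family, together with an empirical check of the second property; the parameters are arbitrary, and drawing independent uniformly random s-subsets is exactly how existence is argued via the probabilistic method.

```python
import random

def random_family(k, s, t, rng):
    """A candidate representative family: t independent, uniformly
    random s-subsets of a universe of size k."""
    return [frozenset(rng.sample(range(k), s)) for _ in range(t)]

def fraction_typical(family, T, k, alpha):
    """Fraction of sets S in the family whose intersection with T is
    'typical', i.e. |S ∩ T|/|S| within (1 ± alpha) * |T|/k."""
    lo, hi = (1 - alpha) * len(T) / k, (1 + alpha) * len(T) / k
    good = sum(1 for S in family if lo <= len(S & T) / len(S) <= hi)
    return good / len(family)

rng = random.Random(1)
k, s, t = 512, 64, 400          # hypothetical parameters
family = random_family(k, s, t, rng)
T = frozenset(range(k // 2))    # an arbitrary large subset, |T| = k/2
```

For these parameters, essentially all sets in a random family intersect T typically; the point of the formal lemma below is that one fixed family can achieve this simultaneously for every large enough T.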
Definition 3.1 (Representative sets). Let U be a universe of size k. A family F = {S_1, . . . , S_t} of s-sized sets is said to be an (α, δ, ν)-representative family iff:

∀T ⊆ U, |T| ≥ δk : Pr_{i∈[t]} [ |S_i ∩ T|/|S_i| ∈ [1 − α, 1 + α] · |T|/k ] ≥ 1 − ν,   (3)
∀T ⊆ U, |T| < δk : Pr_{i∈[t]} [ |S_i ∩ T|/|S_i| ≤ (1 + α)δ ] ≥ 1 − ν,   (4)
∀u ∈ U : Pr_{i∈[t]} [ u ∈ S_i ] ∈ [1 − α, 1 + α] · s/k.   (5)

We show in Lemma 3.2 that such families exist for some appropriate choices of parameters. The proof of this result, which relies on the probabilistic method, takes direct inspiration from Newman's Theorem [19].

Lemma 3.2 (Representative sets exist). Let U be a universe of size k. For any α, δ, ν > 0, there exists an (α, δ, ν)-representative family (S_i)_{i∈[t]} of t ∈ O(k/ν + k log k) subsets, each of size s ∈ O(α^{−2} δ^{−1} log(1/ν)).

Proof. Our proof is probabilistic: we show that Equations 3, 4 and 5 all hold with non-zero probability when picking sets at random. We first study the probability that Equations 3 and 4 hold, and then the probability that Equation 5 holds.

Consider any set T ⊆ U of size ≥ δk. Pick a random set S ⊆ U of size s. The intersection of S and T has expected size E_S[|S ∩ T|] = (|T|/k) · s. Let us say that S has an unusual intersection with T if its size is outside the [1 − α, 1 + α] · (|T|/k) · s range. By Chernoff with negative dependence,

Pr_S[ |S ∩ T| ∉ [1 − α, 1 + α] · (|T|/k) · s ] ≤ 2e^{−α²(|T|/k)s/3} ≤ 2e^{−α²δs/3}.

This last quantity also bounds the probability that |S ∩ T| > (1 + α)δs when |T| < δk, which we also consider as an unusual intersection.

Pick t sets S_1, . . . , S_t of size s at random, independently from each other, and let X_i be the event that the i-th set S_i unusually intersects T. By Chernoff, the probability that more than 4t · exp(−α²δs/3) of the sets unusually intersect T is:

Pr_{S_1...S_t}[ Σ_i X_i > 4t · e^{−α²δs/3} ] ≤ e^{−t·exp(−α²δs/3)}.

There are less than 2^k subsets of U. Therefore, the probability that there exists a set T such that, out of the t sampled sets S_1 . . . S_t, more than 4t · exp(−α²δs/3) have an unusual intersection with T, is at most:

2^k · e^{−t·exp(−α²δs/3)} = exp( k · ln(2) − t · exp(−α²δs/3) ).

This last quantity is an upper bound on the probability that one of Equations 3 and 4 does not hold. Let us now similarly bound the probability that Equation 5 does not hold. For any u ∈ U, the probability that a random s-sized subset of U contains u is s/k. Let X_i be the event that our i-th random set S_i contains u; we have:

Pr_{S_1...S_t}[ Σ_i X_i ∉ [1 − α, 1 + α] · st/k ] ≤ 2e^{−α²st/(3k)}.

Therefore the probability that Equation 5 does not hold, i.e., that there exists an under- or over-represented element u ∈ U in our t randomly picked sets, is less than 2k · e^{−α²st/(3k)}. The probability that one of Equations 3, 4, and 5 does not hold is at most:

exp( k · ln(2) − t · exp(−α²δs/3) ) + exp( ln(2k) − α²st/(3k) ).

We now pick the right values for s and t such that: first, this last probability is less than 1 and, therefore, a family with all the above properties exists; second, the fraction of sets S_i with the wrong intersection is less than ν for all T.

The fraction of bad sets is guaranteed to be less than ν if 4 · e^{−α²δs/3} ≤ ν, which is achieved with s ≥ 3 ln(4/ν)/(α²δ). We take s to be this last value rounded up, i.e., we have s ∈ O(α^{−2} δ^{−1} log(1/ν)). For t, we pick it satisfying t > (k · ln(2) + 1) · exp(α²δs/3) and t > 3k(ln(2k) + 1)/(α²s); that is, we can pick t of order Θ(k/ν + k log k) and satisfy all properties with non-zero probability, implying the existence of the desired representative family.

(1 + ǫ)∆-vertex coloring

For ease of exposition, we start by applying our techniques in a relatively simple setting before moving on to more complex ones. As many elements are similar between the different settings, we only need to gradually make minor adjustments as we deal with more difficult problems. The first setting we consider is the (1 + ǫ)∆-vertex coloring problem. Our main result in this section is Theorem 4.1:

Theorem 4.1.
Suppose ∆ ∈ Ω(log^{1+1/log∗ n} n). There is a CONGEST algorithm that solves the (1 + ǫ)∆-vertex coloring problem w.h.p. in O(log∗ n) rounds.

Throughout this section, let us assume that all nodes know a common representative family (S_i)_{i∈[t]} with parameters α = 1/2, δ = ǫ/(2(1 + ǫ)), and ν = n^{−c} for a large enough constant c, over the color space U = [(1 + ǫ)∆]. The nodes may, for example, all compute the lexicographically first (α, δ, ν)-representative family over U guaranteed by Lemma 3.2, with t ∈ O(∆ · n^c) and s ∈ O(log n), at the very beginning of the algorithm.

We leverage this representative family in a procedure we call MultiTrials, where nodes can try up to Θ(log n) colors in a round. The trade-off is that the colors they try are not fully random but picked from a representative set. We show that this does not matter in this application. Using MultiTrials with an increasing number of colors, we immediately get an O(log∗ n) algorithm for the (1 + ǫ)∆-coloring problem (Algorithm 2).

Algorithm 1 Procedure
MultiTrials(x) (vertex coloring version)

1. v picks i_v ∈ [t] uniformly at random and chooses a subset X_v of x colors uniformly at random in S_{i_v} ∩ ψ_v. These are the colors v tries. v describes X_v to its neighbors in O(1) rounds by sending i_v and (1[c ∈ X_v])_{c∈S_{i_v}}, i.e., log(t) + s ∈ O(log n) bits.

2. If v tried a color that none of its neighbors tried, v adopts one such color and informs its neighbors of it.

Algorithm 2
Algorithm for (1 + ǫ)∆-vertex coloring (large ∆)

1. Nodes compute a common (α, δ, ν)-representative family over [(1 + ǫ)∆] guaranteed by Lemma 3.2.

2. For i ∈ [0.. log∗ n], for O(1) rounds, each uncolored node runs MultiTrials(2↑↑i).

3. For i ∈ [0.. log∗ n], each uncolored node runs MultiTrials( ǫ∆ · log^{i/log∗ n} n / (2(1 + ǫ)C_c log n) ) O(1) times.

To show that Algorithm 2 works, we first show that MultiTrials, under the right circumstances, is very efficient at coloring nodes (Lemma 4.2). In fact, given the right ratio between slack and uncolored degree, as the nodes try multiple colors, they get colored as if each color tried succeeded independently with constant probability.
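The message format behind MultiTrials can be sketched as follows; the point is that the pair (i_v, bitmask over S_{i_v}) costs log t + s = O(log n) bits regardless of how many colors are tried. The Python data structures and function names here are illustrative, not the paper's pseudocode.

```python
import random

def multi_trials_message(family, palette, x, rng):
    """One trial step: pick a random index i_v into the shared
    representative family, try x random colors of S_{i_v} ∩ psi_v, and
    encode the choice as (i_v, bitmask over the sorted colors of S_{i_v})."""
    i_v = rng.randrange(len(family))
    S = sorted(family[i_v])
    candidates = [c for c in S if c in palette]
    tried = set(rng.sample(candidates, min(x, len(candidates))))
    return i_v, [1 if c in tried else 0 for c in S]

def decode_tried_colors(family, i_v, bitmask):
    """Any neighbor holding the same family recovers the tried colors."""
    return {c for c, bit in zip(sorted(family[i_v]), bitmask) if bit}
```

A neighbor never needs to see the colors themselves: the index i_v identifies the representative set, and the bitmask identifies the tried subset within it.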
Lemma 4.2.
Suppose a node v has slack s(v) ≥ ǫ∆ and d∗(v) uncolored neighbors. Suppose x ≤ ǫ/(2(1 + ǫ)) · ∆. If x ≤ s(v)/(2 d∗(v)), then conditioned on an event of high probability ≥ 1 − 2ν, an execution of MultiTrials(x) colors v with probability at least 1 − 2^{−x/4}, even conditioned on any particular combination of random choices from the other nodes.

Proof. Consider the representative set S_{i_v} randomly picked by v in the commonly known representative family of parameters α = 1/2, δ = ǫ/(2(1 + ǫ)), and ν = n^{−c}. We know that S_{i_v} intersects any set of colors T ⊆ [(1 + ǫ)∆] of size at least δ(1 + ǫ)∆ in [1/2, 3/2] · (|T|/((1 + ǫ)∆)) · |S_{i_v}| ≥ (δ/2)|S_{i_v}| positions w.h.p.

Let us apply this with ψ_v, the set of colors not currently used by neighbors of v, and T_good, the set of colors that are neither already used nor tried in this round by nodes adjacent to v. Clearly, T_good ⊆ ψ_v, |ψ_v| = s(v) + d∗(v), and |T_good| ≥ s(v) + d∗(v) − x · d∗(v) ≥ (s(v) + d∗(v))/2 = |ψ_v|/
2. Both sets are of size at least δ(1 + ǫ)∆, therefore w.h.p. |S_{i_v} ∩ T_good| ≥ (1/2) · |S_{i_v}| · |T_good|/((1 + ǫ)∆) ≥ (1/4) · |S_{i_v}| · |ψ_v|/((1 + ǫ)∆) ≥ (1/6) · |S_{i_v} ∩ ψ_v|.

Therefore, assuming that the above holds and that there are at least x colors in S_{i_v} ∩ ψ_v, when v picks x random colors in S_{i_v} ∩ ψ_v, the colors picked each have a chance at least 1/6 of being in T_good. The probability that none of them succeeds is at most (5/6)^x ≤ 2^{−x/4}. The event that S_{i_v} does not have an intersection of unusual size with either ψ_v or T_good has probability at least 1 − 2ν.

The second part of the argument consists of showing that the ratio of slack to uncolored degree increases as Algorithm 2 uses MultiTrials with an increasing number of colors. Lemma 4.3 helps guarantee that the repeated use of
MultiTrials leaves all uncolored nodes with an uncolored degree at most C_c log n for some constant C_c.

Lemma 4.3. Suppose the nodes all satisfy d∗(v) ≤ s(v)/(2 · 2↑↑i), with s(v)/(2 · 2↑↑i) ≥ C_c log n. Then after O(1) rounds of MultiTrials(2↑↑i), w.h.p., they all satisfy d∗(v) ≤ max( s(v)/(2 · 2↑↑(i+1)), C_c log n ).

Proof. Let v be a node of uncolored degree at least C_c log n (if not, it already satisfies the desired end property). By Lemma 4.2, each uncolored neighbor of v stays uncolored with probability at most 2^{−(2↑↑i)/4}. By a Chernoff bound, C_c being large enough, at most 2 · 2^{−(2↑↑i)/4} · d∗(v) neighbors of v stay uncolored w.h.p.

Let us repeat this process for 4 rounds. If at any point the uncolored degree drops below C_c log n, we reached the desired property, and the argument is over. Otherwise, we can apply the Chernoff bound for all 4 rounds and get that at most 2^4 · 2^{−(2↑↑i)} · d∗(v) = (2^4/(2↑↑(i+1))) · d∗(v) neighbors of v stay uncolored, so the new uncolored degree of v satisfies:

d∗(v) ≤ (2^4/(2↑↑(i+1))) · s(v)/(2 · 2↑↑i) ≤ s(v)/(2 · 2↑↑(i+1))

(for small i, a larger constant number of rounds suffices), which completes the proof.

Lemma 4.4.
Suppose the nodes all satisfy d∗(v) ≤ C_c log^{1−i/log∗ n} n. Then after O(1) rounds of MultiTrials( ǫ∆ · log^{i/log∗ n} n / (2(1 + ǫ)C_c log n) ), w.h.p., they all satisfy d∗(v) ≤ C_c log^{1−(i+1)/log∗ n} n.

Proof. Let x = ǫ∆ · log^{i/log∗ n} n / (2(1 + ǫ)C_c log n) denote the number of colors tried in our application of MultiTrials. For each uncolored node v we have x ≤ s(v)/(2 d∗(v)). By Lemma 4.2, conditioned on a high probability event, each uncolored node stays uncolored with probability at most 2^{−x/4}, regardless of the random choices of other nodes. We set q = C_c log^{1−(i+1)/log∗ n} n. Since ∆ ≥ log^{1+1/log∗ n} n, and thus x ≥ ǫ/(2(1 + ǫ)C_c) · log^{(i+1)/log∗ n} n, we have q · x ∈ Ω(log n).

Consider q neighbors of a node v: Θ(1) runs of MultiTrials(x) leave them all uncolored with probability at most 2^{−Ω(q·x)}. The probability that some set of q neighbors stays uncolored is bounded by d∗(v)^q · 2^{−Ω(q·x)} = 2^{−Ω(q·(x−log log n))} = 2^{−Ω(log n)}. So, w.h.p., fewer than q neighbors of v stay uncolored.

With Lemmas 4.2 to 4.4 proved, we only need a few additional arguments to complete the proof of Theorem 4.1.

Proof of Theorem 4.1.
Step 2 of Algorithm 2 with i = 0 creates a situation where the hypotheses of Lemma 4.3 hold for i = 1. The repeated application of Lemma 4.3 guarantees that, w.h.p., all nodes are either colored or have uncolored degree ≤ C_c log n.

In Step 3, all nodes start with uncolored degree at most C_c log n and slack at least ǫ∆, thus fitting the hypotheses of Lemma 4.4. Its repeated application yields that after the first log∗ n − 1 iterations, the remaining uncolored nodes try Θ(log n) colors in each run of MultiTrials, which colors all remaining nodes w.h.p.
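The growth schedule of Step 2 can be made concrete; a minimal sketch of Definition 2.3, and of why only O(log∗ n) phases pass before 2↑↑i exceeds any reasonable target:

```python
def tetrate(a, b):
    """Knuth's a ↑↑ b (Definition 2.3): a↑↑0 = 1, a↑↑b = a^(a↑↑(b-1))."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

def phases_needed(target):
    """Smallest i with 2↑↑i >= target, i.e., how many MultiTrials(2↑↑i)
    phases the schedule runs before the tried-colors count saturates;
    this is log∗(target) up to an additive constant."""
    i = 0
    while tetrate(2, i) < target:
        i += 1
    return i
```

For example, phases_needed(10**9) is 5, since 2↑↑4 = 65536 while 2↑↑5 = 2^65536: the schedule reaches any polynomial target in a handful of phases.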
Lower ∆ and concluding remarks. When ∆ ∈ O(log^{1+1/log* n} n), a simple use of the shattering technique [1] together with the recent deterministic algorithm of [9] (using O(log^2 C · log n) rounds with O(log C)-bit messages to compute a degree+1 list-coloring of an n-vertex graph whose lists are subsets of [C]) is enough to solve the problem in poly(log log n) CONGEST rounds. Combined with our previous O(log* n) algorithm for ∆ ∈ Ω(log^{1+1/log* n} n), this means there exists an algorithm for all ∆ that solves the (1+ε)∆ coloring problem in poly(log log n) CONGEST rounds w.h.p.

Theorem 4.5.
There is a CONGEST algorithm that solves the (1+ε)∆-vertex coloring problem in poly(log log n) rounds w.h.p.

Theorems 4.1 and 4.5 also hold if, instead of having a palette with ε∆ more colors than the vertices have neighbors, and thus slack from the start, we are trying to color a (1−ε)-locally sparse graph with ∆+1 colors. In this case, nodes try a single random color at the very start of the algorithm to generate slack through Proposition 2.5.

Moving on to the more complicated setting of edge-coloring, we will see that most of what we proved in the previous section is easily adapted to it. We first convert the (1+ε)∆-vertex coloring result to a (2+ε)∆-edge coloring, and then indicate how the number of colors can be reduced to 2∆−1, as well as to (1+ε)∆ for ∆ ∈ Ω(log^{1+1/log* n} n).

5.1 (2+ε)∆-edge coloring

Theorem 5.1.
Suppose ∆ ∈ Ω(log^{1+1/log* n} n). There is a CONGEST algorithm that solves the (2+ε)∆-edge coloring problem w.h.p. in O(log* n) rounds.

To prove Theorem 5.1, the crucial observation is that the elements of the graph trying to color themselves no longer know their palette. In the edge-coloring setting, each of the two endpoints of an edge e only has a partial view of which colors are used by e's neighbors. Communicating the list of colors used at one endpoint of e to the other endpoint is impractical, as it could require up to Θ(∆ log ∆) bits. To circumvent this, we introduce a procedure (PaletteSampling) for the two endpoints of an edge e to efficiently sample colors in ψ_e, the palette of e, again using representative sets. The MultiTrials procedure is then easily adapted to the edge setting by making it use
PaletteSampling, and the same algorithm as the one we had in the node setting works here, simply swapping its basic building block procedure for an edge-adapted variant.

As before (but with a different color space), let us assume throughout this section that all nodes know a common representative family (S_i)_{i ∈ [t]} with suitable parameters α ∈ (0, 1), δ = ε/(2+ε), and ν = n^{−Θ(1)} over the color space U = [(2+ε)∆].

For each edge e, let us denote by v_e and v'_e its two endpoints, with v_e the one with the higher ID of the two. Let us denote by ψ_e the palette of e, the set of colors unused by e's neighboring edges, and for a node u let ψ_u be the set of colors unused by edges incident to u. For an uncolored edge e, ψ_e = ψ_{v_e} ∩ ψ_{v'_e}.

Algorithm 3
Procedure PaletteSampling (edge-coloring version)

1. v_e picks i_e ∈ [t] uniformly at random and sends i_e to v'_e, in O(log(t)/log(n)) = O(1) rounds.
2. v'_e replies with s bits describing S_{i_e} ∩ ψ_{v'_e}, in O(1) rounds.
3. v_e sends s bits to v'_e describing S_{i_e} ∩ ψ_{v_e}, in O(1) rounds.

Proposition 5.2.
Suppose e's palette ψ_e satisfies |ψ_e| ≥ δ · (2+ε)∆. Then v_e and v'_e find [1−α, 1+α] · s · |ψ_e|/((2+ε)∆) colors in e's palette in an execution of PaletteSampling, w.h.p.

Proof. The result follows directly from Equation 3 in the definition of representative sets (Definition 3.1).
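To make the message pattern concrete, here is a minimal Python simulation of the exchange (the function and the encoding are our illustrative assumptions, not the paper's pseudocode): each reply is s bits, one per element of the chosen representative set.

```python
import random

def palette_sampling(family, psi_ve, psi_vpe):
    """Simulate the three-message PaletteSampling exchange on an edge
    e = (v_e, v'_e). `family` is the common representative family (S_i);
    psi_ve / psi_vpe are the colors each endpoint sees as unused.
    Returns S_{i_e} ∩ ψ_e, where ψ_e = ψ_{v_e} ∩ ψ_{v'_e}."""
    # Step 1: v_e picks a random index i_e (O(log t) bits) and sends it.
    i_e = random.randrange(len(family))
    S = family[i_e]
    # Step 2: v'_e replies with s bits, one per element of S_{i_e},
    # marking membership in ψ_{v'_e}.
    reply = [c in psi_vpe for c in S]
    # Step 3: v_e intersects with its own view (and can report back likewise).
    return {c for c, keep in zip(S, reply) if keep and c in psi_ve}

# With a single-set family the outcome is deterministic:
print(palette_sampling([[1, 2, 3, 4]], {1, 2, 3}, {2, 3, 4}))  # {2, 3}
```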
PaletteSampling leverages the fact that while it requires quite a bit of communication for an endpoint of an edge to learn which colors are used at the other endpoint, sending a random color for the other endpoint to reject or approve is quite communication-efficient. The representative sets and the slack at the edges' disposal further allow us to sample in O(1) rounds not just Θ(log n/log ∆) colors (represented in log ∆ bits each) but Θ(log n) pseudo-independent colors.

Algorithm 4
Procedure MultiTrials(x) (edge-coloring version)

1. v_e and v'_e execute PaletteSampling. Let S_{i_e} be the randomly picked representative set.
2. v_e picks a subset X_e of x colors uniformly at random in S_{i_e} ∩ ψ_e and sends s bits to v'_e to describe it. These are the colors e tries. At this point, each node u knows which colors are tried by all its incident edges.
3. Each v_e describes to v'_e which of the x colors tried by e were not tried by any other edge adjacent to v_e, in O(1) rounds, and reciprocally.
4. If e tried a color that no edge adjacent to e tried, v_e picks an arbitrary such color, sends it to v'_e, and e adopts this color.

An execution of MultiTrials maintains the invariant that each node knows which colors are used by edges incident to it. As before, the representative sets guarantee that for any uncolored edge e, whatever colors other edges adjacent to e are trying, the chosen representative set S_{i_e} has a large intersection with the set of unused and untried colors, as long as this set represents a constant fraction of the color space (which slack and a good choice of x guarantee).

Algorithm 5
Algorithm for (2+ε)∆-edge coloring (large ∆)

1. Nodes send their ID to their neighbors.
2. Nodes compute a common (α, δ, ν)-representative family over [(2+ε)∆].
3. For i ∈ [0 .. log* n], each uncolored edge runs MultiTrials(2↑↑i) for O(1) rounds.
4. For i ∈ [0 .. log* n], each uncolored edge runs MultiTrials(ε∆ · log^{i/log* n} n / ((1+ε) C_c log n)) for O(1) rounds.

Algorithm 5 is exactly Algorithm 2 with the node version of MultiTrials swapped for its edge variant, which makes for a straightforward proof.
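The conflict-resolution logic at the heart of the edge variant of MultiTrials (steps 3 and 4) can be sketched centrally as follows; this Python toy ignores the CONGEST message encoding and simply checks which tried colors survive (all names are illustrative):

```python
def multitrials_round(tried, adjacency):
    """One conflict-resolution step of MultiTrials on the line graph:
    an edge adopts one of its tried colors if no adjacent edge tried it.

    tried:     dict edge -> set of colors the edge tries this round
    adjacency: dict edge -> set of adjacent (conflicting) edges
    Returns dict edge -> adopted color, or None if every try collided."""
    adopted = {}
    for e, colors in tried.items():
        blocked = set()
        for f in adjacency.get(e, ()):
            blocked |= tried.get(f, set())
        free = colors - blocked
        adopted[e] = min(free) if free else None  # any free color works
    return adopted

tried = {"e1": {1, 2}, "e2": {2, 3}}
adj = {"e1": {"e2"}, "e2": {"e1"}}
print(multitrials_round(tried, adj))  # {'e1': 1, 'e2': 3}
```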
Proof of Theorem 5.1.
The procedure MultiTrials adapted to the edge setting has the same properties as the MultiTrials procedure we analyzed in the vertex coloring setting. More precisely, Lemmas 4.2 and 4.3 still hold (with the line graph L(G) instead of G and edges instead of nodes), and we can simply refer to the proof of Theorem 4.1 for the details of how all edges get colored w.h.p. by Algorithm 5.

5.2 (2∆−1)-edge coloring

The algorithms we just gave for (2+ε)∆-edge coloring are easily adapted to the (2∆−1)-edge coloring setting.

Theorem 5.3.
There is a CONGEST algorithm that solves the (2∆−1)-edge coloring problem w.h.p. in poly(log log n) rounds. When ∆ = Ω(log^{1+1/log* n} n), the time complexity is O(log* n).

What remains is to handle the small-degree case.
Algorithm for small ∆. Obtaining a poly(log log n)-round CONGEST algorithm when ∆ ∈ O(log^{1+1/log* n} n) requires some care in the edge setting. We sketch how to get a poly(log log n)-round algorithm here, by first running our O(log* n) algorithm to reduce the uncolored degree to O(log n), and then shattering the graph and simulating the deterministic algorithm of [9] for completing a vertex coloring of the line graph (Lemma 5.4).

Standard shattering usually assumes that the nodes of the graph (edges, i.e., nodes of the line graph, in our case) try random colors from their palettes. In the edge setting, this is clearly problematic as, again, each of the endpoints of an edge only partially knows the palette of said edge. Fortunately, shattering is still possible if the nodes can repeatedly use a procedure that colors them with an Ω(1) probability of success that is independent of the random events occurring at distance at least d, for some constant d (see, e.g., Lemma 3.13 in [11], where the shattering technique is similarly adapted to the distance-2 setting, in which, as in the edge setting, the nodes do not know their palette).

Lemma 5.4.
The deterministic algorithm of [9] for completing a vertex coloring can be simulated with an O(log log n) overhead in the edge setting on O(poly log n)-sized components of O(poly log n) maximum degree and O(log n) live degree.

Proof. Two key properties hold here:

• since ∆ ∈ O(poly(log n)), a color fits in only O(log log n) bits;
• since the live degree after shattering is at most O(log n), an edge only needs to receive O(log n · log log n) bits to receive one color from each of its neighbors, which only takes O(log log n) rounds.

Before actually running the algorithm of [9], the edges need to learn more colors in their palette than they have neighbors. This is possible in O(log log n) rounds. Indeed:

• if the edges have maximum degree ∆ at most C' log n, they may simply learn all the colors used by their neighbors in O(log log n) rounds by simple transmission;
• if the edges have maximum degree greater than C' log n, with C' a large enough universal constant, they all have sufficient slack to learn more than C log n > d*_L(e) colors of their palette in O(1) executions of PaletteSampling.

The algorithm of [9] consists of O(log N) iterations of an O(log C)-round procedure using messages of size O(log C), where N is the number of nodes in the graph (O(poly log n) in the case of our connected components) and C is the size of the color space (2∆−1 ∈ O(log^{1+1/log* n} n) in our case). We simulate this algorithm on the line graph using the fact that O(log n) messages of size O(log C) can be sent over an edge in O(log C) ⊆ O(log log n) CONGEST rounds.

5.3 (1+ε)∆-edge coloring

Dubhashi, Grable, and Panconesi [3] gave an algorithm for (1+ε)∆-edge coloring running in O(log n) rounds of LOCAL. Their algorithm has two phases. In the first phase, subsets of edges try random colors from their palette. After the first phase, the maximum (uncolored) degree of a node is at most ε∆/2. The second phase then applies a (2∆'−1)-edge coloring algorithm to the graph of uncolored edges, whose maximum degree ∆' is at most ε∆/2, so that the total number of colors used is at most (1+ε)∆.

The first phase runs in O(1) rounds. In [3], the algorithm used in the second phase runs in O(log n) rounds, hence the time bound of their full algorithm. By using the O(log* ∆)-round algorithm explained earlier, the total time complexity is reduced to O(log* ∆). What remains is to explain how to implement the first phase in the CONGEST model.

The first phase consists of t_ε = O(1) iterations, where iteration i consists of the following steps. Each vertex u randomly selects a Θ(ε) fraction of its uncolored incident edges; each selected edge e chooses independently at random a tentative color t(e) from its palette (of currently available colors). An edge is assigned its tentative color if no adjacent edge also chose the same tentative color, and the palettes of the edges are updated accordingly.

The only difference in the random selection of the first phase is that only a subset of the edges pick tentative colors. This is easily performed identically in CONGEST. The only issue is then how to pick a random color from within the current palette of the edge. We show here how to achieve this approximately using representative sets.
Proposition 5.5.
Let an edge e have a palette ψ_e of size |ψ_e| ≥ δ(1+ε)∆. Then, in an execution of PaletteSampling followed by the edge trying a single color in the sampled palette, conditioned on an event occurring w.h.p., each color of ψ_e gets sampled with a probability in [(1−α)/(1+α), (1+α)/(1−α)] · 1/|ψ_e|.

Proof. Let us consider c ∈ ψ_e, a color in e's palette. By Equation 5, the probability that c is in the random representative set S_{i_e} used in PaletteSampling is between (1−α) · s/((1+ε)∆) and (1+α) · s/((1+ε)∆). Conditioned on c ∈ S_{i_e}, the probability that c is the color that gets picked among the sampled palette colors is 1/|S_{i_e} ∩ ψ_e|. When |ψ_e| ≥ δ(1+ε)∆, with probability ≥ 1−ν, |S_{i_e} ∩ ψ_e| ∈ [1−α, 1+α] · |ψ_e| · s/((1+ε)∆). So, conditioned on |S_{i_e} ∩ ψ_e| being of the expected order of magnitude, c gets sampled with probability between (1−α)/((1+α)|ψ_e|) and (1+α)/((1−α)|ψ_e|).

Theorem 5 of [3] shows that the palette sizes and degrees of nodes are highly concentrated after each iteration. In particular, each edge has a palette of size ∼ (1−p_ε)^i · ∆, where p_ε is a function of ε alone. Thus, in each iteration of phase I, the current palette of each edge is a constant fraction of the color space, and hence Proposition 5.5 applies.
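The tentative-color selection behind Proposition 5.5 can be sketched as follows (Python, with illustrative names; in CONGEST, the two endpoints realize this jointly via the message pattern of PaletteSampling):

```python
import random

def sample_one_color(family, psi_e):
    """Sketch of the near-uniform single-color pick: run PaletteSampling
    (choose a random representative set), then try one uniform color from
    the sampled part of the palette. Each color of ψ_e is then hit with
    probability within a [(1-α)/(1+α), (1+α)/(1-α)] factor of uniform."""
    i_e = random.randrange(len(family))
    sampled = [c for c in family[i_e] if c in psi_e]  # S_{i_e} ∩ ψ_e
    return random.choice(sampled) if sampled else None

# With a family containing the whole color space, the pick is exactly uniform:
print(sample_one_color([list(range(10))], {3, 7}) in {3, 7})  # True
```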
Our sampling technique yields a few other interesting results.

The first results are based on the observations that with slack ∆ · log^{(c)} n (where log^{(c)} n is the c-iterated logarithm), it suffices to run MultiTrials for O(c) rounds to reduce the uncolored degree to O(log n), and that when ∆ = Ω(log^{1/c'} n), it suffices to run MultiTrials for O(c') rounds to color all remaining nodes in the last phase of our algorithms.

Degree | Tasks | Complexity in CONGEST
∆ = Ω(log^{1+1/log* n} n) | (1+ε)∆-vertex coloring; (1+ε)∆-edge coloring | O(log* n)
∆ = O(log^{1+1/log* n} n) | (1+ε)∆-vertex coloring; (2∆−1)-edge coloring | poly(log log n)
∆ = Ω(√(log^{1+1/log* n} n)) | (1+ε)∆²-vertex distance-2 coloring | O(log* n)
∆ = O(√(log^{1+1/log* n} n)) | (1+ε)∆²-vertex distance-2 coloring | poly(log log n)
∆ = Ω(log^{1/c'} n) | ∆ · log^{(c)} n-vertex coloring; ∆ · log^{(c)} n-edge coloring | O(1)
∆ = Ω(√(log^{1/c'} n)) | ∆² · log^{(c)} n-vertex distance-2 coloring | O(1)

Table 1: Summary of our results. log^{(c)} is the c-iterated logarithm; c and c' are constants. Note that an algorithm using (1+ε)∆ colors implies one using 2∆−1 colors. The vertex coloring results for large ∆ imply equivalent results with fewer colors on locally sparse graphs through Proposition 2.5.

Theorem 6.1. O(∆ log n)-vertex coloring can be done in a single CONGEST round, w.h.p. (∆ · log^{(c)} n)-vertex coloring can be done in O(1) CONGEST rounds, w.h.p., for any constants c and c', when ∆ = Ω(log^{1/c'} n).

The
PaletteSampling and
MultiTrials procedures are easily adapted to the distance-2 setting, which immediately yields analogous results in this setting.
Theorem 6.2.
Distance-2 coloring with ∆² · log^{(c)} n colors can be done in O(1) CONGEST rounds, w.h.p., for any constants c and c', when ∆ = Ω(√(log^{1/c'} n)). Distance-2 coloring a graph G with ∆² + 1 colors, when G is (1−ε)-locally sparse, can be achieved in O(log* n) rounds, w.h.p., for ∆ = Ω(√(log^{1+1/log* n} n)).

Indeed, let ψ^k_v denote the set of colors unused in v's distance-k neighborhood. PaletteSampling can be done by having each node v send its chosen index to, and receive S_{i_v} ∩ ψ¹_u from, each direct neighbor u, from which v computes S_{i_v} ∩ ψ²_v = ⋂_{u ∈ N(v)} (S_{i_v} ∩ ψ¹_u). MultiTrials is similarly easily adapted following the same principle.
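The distance-2 adaptation described above amounts to an AND of s-bit masks across the neighborhood; a minimal Python sketch (our illustrative names) is:

```python
def dist2_palette_sample(S_iv, psi1, v, neighbors):
    """Distance-2 PaletteSampling sketch: v broadcasts its chosen index,
    each neighbor u replies with the s-bit mask of S_{i_v} ∩ ψ¹_u, and
    v intersects the replies to obtain S_{i_v} ∩ ψ²_v.

    S_iv:      v's randomly chosen representative set (a set of colors)
    psi1:      dict node -> ψ¹, colors unused in that node's neighborhood
    neighbors: v's direct neighbors
    """
    result = set(S_iv) & psi1[v]
    for u in neighbors:
        result &= psi1[u]  # one s-bit reply per neighbor
    return result

psi1 = {"v": {1, 2, 3}, "u1": {2, 3}, "u2": {3, 9}}
print(dist2_palette_sample({1, 2, 3, 4}, psi1, "v", ["u1", "u2"]))  # {3}
```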
Our proof of Lemma 3.2 – the existence of representative sets with appropriate parameters – is non-constructive, and a natural question is whether an explicit construction with similar parameters exists. We partially answer this question by remarking that an averaging sampler essentially has all the properties we want, bar one, and known explicit constructions based on expander graphs give the right guarantees (see Theorem 1.3 in [15]). The output of an averaging sampler is not a set but a multiset (i.e., some elements might appear more than once), but it satisfies properties 3 and 4 of our definition of representative sets (Definition 3.1), which implies that most of our results may be obtained with an explicit construction. The notable exception is our result for (1+ε)∆-edge coloring, which relies on the uniformity of sampling single elements through representative sets (property 5), which does not seem to immediately hold for the explicit construction mentioned here. A weaker analogue of this property may be proved using the hitting property of expander walks (Theorem 4.17 in [25]), but how to construct an explicit family of representative sets with the exact properties of Definition 3.1 and Lemma 3.2 is an open question.

We have presented a new technique, inspired by communication complexity, for speeding up
CONGEST algorithms. We have applied it to a range of coloring problems (see Table 1 for a summary of our results), but it would be interesting to see it used more widely, possibly with extensions.

We obtained a superfast algorithm in CONGEST for (1+ε)∆-edge coloring that holds when ∆ = Ω(log^{1+1/log* n} n). It remains to be examined how to deal with smaller values of ∆, which in LOCAL has been tackled via the Lovász Local Lemma [6].
References

[1] L. Barenboim, M. Elkin, S. Pettie, and J. Schneider. The locality of distributed symmetry breaking. J. ACM, 63(3):20:1–20:45, 2016.
[2] Y.-J. Chang, W. Li, and S. Pettie. Distributed (∆+1)-coloring via ultrafast graph shattering. SIAM Journal on Computing, 49(3):497–539, 2020.
[3] D. Dubhashi, D. A. Grable, and A. Panconesi. Near-optimal, distributed edge colouring via the nibble method. Theoretical Computer Science, 203(2):225–252, 1998.
[4] D. P. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[5] D. P. Dubhashi and D. Ranjan. Balls and bins: A study in negative dependence. Random Struct. Algorithms, 13(2):99–124, 1998.
[6] M. Elkin, S. Pettie, and H.-H. Su. (2∆−1)-edge-coloring is much easier than maximal matching in the distributed setting. In Proc. ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 355–370, 2015.
[7] M. Fischer, M. Ghaffari, and F. Kuhn. Deterministic distributed edge-coloring via hypergraph maximal matching. In Proc. IEEE Symp. on Foundations of Computer Science (FOCS), pages 180–191, 2017.
[8] M. Ghaffari, C. Grunau, and V. Rozhoň. Improved deterministic network decomposition. In Proc. ACM-SIAM Symp. on Discrete Algorithms (SODA), 2021.
[9] M. Ghaffari and F. Kuhn. Deterministic distributed vertex coloring: Simpler, faster, and without network decomposition. CoRR, abs/2011.04511, 2020.
[10] D. A. Grable and A. Panconesi. Nearly optimal distributed edge coloring in O(log log n) rounds. Random Structures & Algorithms, 10(3):385–405, 1997.
[11] M. M. Halldórsson, F. Kuhn, Y. Maus, and A. Nolin. Coloring fast without learning your neighbors' colors. CoRR, abs/2008.04303, 2020. (Full version of [12].)
[12] M. M. Halldórsson, F. Kuhn, Y. Maus, and A. Nolin. Coloring fast without learning your neighbors' colors. In Proc. Int. Symp. on Distributed Computing (DISC), pages 39:1–39:17, 2020.
[13] D. G. Harris, J. Schneider, and H.-H. Su. Distributed (∆+1)-coloring in sublogarithmic rounds. In Proc. ACM SIGACT Symp. on Theory of Computing (STOC), pages 465–478, 2016.
[14] J. Håstad and A. Wigderson. The randomized communication complexity of set disjointness. Theory of Computing, 3(11):211–219, 2007.
[15] A. Healy. Randomness-efficient sampling within NC1. Computational Complexity, 17:3–37, 2008.
[16] Ö. Johansson. Simple distributed ∆+1-coloring of graphs. Inf. Process. Lett., 70(5):229–232, 1999.
[17] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[18] N. Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.
[19] I. Newman. Private vs. common random bits in communication complexity. Inf. Process. Lett., 39(2):67–71, 1991.
[20] A. Panconesi and A. Srinivasan. Randomized distributed edge coloring via an extension of the Chernoff–Hoeffding bounds. SIAM Journal on Computing, 26(2):350–368, 1997.
[21] A. Rao and A. Yehudayoff. Communication Complexity and Applications. Cambridge University Press, 2020.
[22] B. A. Reed. ω, ∆, and χ. J. Graph Theory, 27(4):177–212, 1998.
[23] V. Rozhoň and M. Ghaffari. Polylogarithmic-time deterministic network decomposition and distributed derandomization. In Proc. ACM SIGACT Symp. on Theory of Computing (STOC), pages 350–363, 2020.
[24] J. Schneider and R. Wattenhofer. A new technique for distributed symmetry breaking. In Proc. ACM Symp. Principles of Distributed Computing (PODC), pages 257–266, 2010.
[25] S. P. Vadhan. Pseudorandomness. Foundations and Trends in Theoretical Computer Science, 7(1–3):1–336, 2012.