An Improved Distributed Algorithm for Maximal Independent Set
Mohsen Ghaffari
MIT
ghaff[email protected]
Abstract
The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents an extremely simple randomized algorithm providing a near-optimal local complexity for this problem, which incidentally, when combined with some known techniques, also leads to a near-optimal global complexity.

Classical MIS algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability, all nodes terminate after O(log n) rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node v terminates after O(log deg(v) + log 1/ε) rounds, with probability at least 1 − ε. The guarantee holds even if the randomness outside the 2-hops neighborhood of v is determined adversarially. This degree-dependency is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04].

Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12; arXiv:1202.1983v3], we get a randomized MIS algorithm with a high probability global complexity of O(log ∆) + 2^{O(√(log log n))}, where ∆ denotes the maximum degree. This improves over the O(log² ∆) + 2^{O(√(log log n))} result of Barenboim et al., and gets close to the Ω(min{log ∆, √(log n)}) lower bound of Kuhn et al.

Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lovász Local Lemma.

As standard, we use the phrase "with high probability" to indicate that an event has probability at least 1 − 1/n.

quasi nanos, gigantium humeris insidentes (like dwarfs standing on the shoulders of giants)

1 Introduction and Related Work
Locality sits at the heart of distributed computing theory and is studied in the medium of problems such as Maximal Independent Set (MIS), Maximal Matching (MM), and Coloring. Over time, MIS has been of special interest, as the others reduce to it. The story can be traced back to the surveys of Valiant [Val82] and Cook [Coo83] in the early 80s, which mentioned MIS as an interesting problem in non-centralized computation, shortly followed by the (poly-)logarithmic algorithms of Karp and Wigderson [KW84], Luby [Lub85], and Alon, Babai, and Itai [ABI86]. Since then, this problem has been studied extensively. We refer the interested reader to [BEPSv3, Section 1.1], which provides a thorough and up-to-date review of the state of the art.

In this article, we work with the standard distributed computation model called
LOCAL [Pel00]: the network is abstracted as a graph G = (V, E) where |V| = n; initially, each node only knows its neighbors; communication occurs in synchronous rounds, where in each round nodes can exchange information only with their graph neighbors.

In the LOCAL model, besides its practical relevance, the distributed computation time-bound has an intriguing, purely graph-theoretic meaning: it identifies the radius up to which one needs to look in order to determine the output of each node, e.g., its color in a coloring. For instance, the results of [Lub85, ABI86] imply that looking only at the O(log n)-hop neighborhood suffices, w.h.p.

Despite the local nature of the problem, classically the main focus has been on the global complexity, i.e., the time till all nodes terminate. Moreover, somewhat strikingly, the majority of the standard analyses also take a non-local approach: often one considers the whole graph and shows guarantees on how the algorithm makes global progress towards its local objectives. A prominent example is the results of [Lub85, ABI86], where the analysis shows that per round, in expectation, half of the edges of the whole network get removed, hence leading to the global complexity guarantee that after O(log n) rounds, with high probability, the algorithm terminates everywhere. See Section 2.

This issue seemingly suggests a gap in our understanding of locality. The starting point in this paper is to question whether this global mentality is necessary for obtaining the tight bound. That is, can we instead provide a tight bound using local analysis, i.e., an analysis that only looks at a node and some small neighborhood of it?
To make the difference more sensible, let us imagine n → ∞ and seek time-guarantees independent of n. Of course, this brings to mind locality-based lower bounds, which at first glance can seem to imply a negative answer: Linial [Lin92] shows that even in a simple cycle graph, MIS needs Ω(log* n) rounds, and Kuhn, Moscibroda and Wattenhofer [KMWv1] prove that it requires Ω(√(log n)) rounds in some well-crafted graphs. But there is a catch: these lower bounds state that the time till all nodes terminate is at least so much. One can still ask: what if we want a time-guarantee for each single node instead of all nodes? While in the deterministic case these time-guarantees, called respectively local and global complexities, are equivalent, they can differ when the guarantee that is to be given is probabilistic, as is usual in randomized algorithms. Note that the local complexity is quite a useful guarantee, even on its own. For instance, the fact that in a cycle, despite Linial's beautiful Ω(log* n) lower bound, the vast majority of nodes are done within O(1) rounds is a meaningful property and should not be ignored. To be concrete, our starting question now is:

Local Complexity Question: How long does it take till each particular node v terminates, and knows whether it is in the (computed) MIS or not, with probability at least 1 − ε?

(As a side note, the global analyses mentioned above do not provide any uniformity guarantee for the removed edges.)

Without insisting on tightness, many straightforward (but weak) answers can be given using local analysis: for example, O(log² ∆ + log 1/ε) rounds for Luby's algorithm, or O(log ∆ log log ∆ + log ∆ log 1/ε) rounds for the variant of Luby's used by Barenboim, Elkin, Pettie, and Schneider [BEPSv3] and Chung, Pettie, and Su [CPS14]. However, both of these bounds seem to be off from the right answer; e.g., one cannot recover from these the standard O(log n) high probability global complexity bound.
In the first bound, the first term is troublesome, and in the latter, the second term becomes the bottleneck. In both, the high probability bound becomes O(log² n) when one sets ∆ = n^δ for a constant δ > 0. We prove that the true answer to the question above is O(log ∆ + log 1/ε) rounds. More formally, we prove that:

Theorem 1.1.
There is a randomized distributed MIS algorithm for which, for each node v, the probability that v has not made its decision after the first O(log deg(v) + log 1/ε) rounds is at most ε. Furthermore, this holds even if the bits of randomness outside the 2-hops neighborhood of v are determined adversarially.

The perhaps surprising fact that the bound only depends on the degree of node v, even allowing its neighbors to have infinite degree, demonstrates the truly local nature of this algorithm. The logarithmic degree-dependency in the bound is optimal, following a lower bound of Kuhn, Moscibroda and Wattenhofer [KMWv1]: as indicated by [Kuh15], with minor changes in the arguments of [KMWv1], one can prove that there are graphs in which the time till each node v can know if it is in the MIS or not, with constant probability, is at least Ω(log ∆) rounds.

Finally, we note that the fact that the proof has a locality of 2-hops, meaning that the analysis only looks at the 2-hops neighborhood and, particularly, that the guarantee relies only on the coin tosses within the 2-hops neighborhood of node v, will prove vital as we move to global complexity. This might be interesting for practical purposes as well.

Notice that Theorem 1.1 easily recovers the standard result that after O(log n) rounds, w.h.p., all nodes have terminated, but now with a local analysis. In light of the Ω(min{log ∆, √(log n)}) lower bound of Kuhn et al. [KMWv1], it is interesting to find the best possible upper bound, especially when log ∆ = o(log n). The best known bound prior to this work was O(log² ∆) + 2^{O(√(log log n))} rounds, due to Barenboim et al. [BEPSv3].

The overall plan is based on the following nice and natural intuition, which was used in the MIS results of Alon et al. [ARVX12] and Barenboim et al. [BEPSv3]. We note that this general strategy is often attributed to Beck, as he used it first in his breakthrough algorithmic version of the Lovász Local Lemma [Bec91].
Applied to MIS, the intuition is that, when we run any of the usual randomized MIS algorithms, nodes get removed probabilistically more and more over time. If we run this base algorithm for a certain number of rounds, a graph shattering type of phenomenon occurs. That is, after a certain time, what remains of the graph is a number of "small" components, where small might be in regard to size, (weak) diameter, the maximum size of some specially defined independent sets, or some other measure. Once the graph is shattered, one switches to a deterministic algorithm to finish off the problem in these remaining small components.

Since we are considering graphs with max degree ∆, even ignoring the troubling probabilistic dependencies (which are actually rather important), a simplistic intuition based on Galton-Watson branching processes tells us that the graph shattering phenomenon starts to show up around the time that the probability ε of each node being left falls below 1/∆. (In truth, the probability threshold is 1/poly(∆), because of some unavoidable dependencies. But due to the exponential concentration, the time to reach the 1/poly(∆) threshold is within a constant factor of that of the 1/∆ threshold. We will also need to establish some independence, which is not discussed here. See Section 4.) Alon et al. [ARVX12] used
an argument of Parnas and Ron [PR07], showing that Luby's algorithm reaches this threshold after O(∆ log ∆) rounds. Barenboim et al. [BEPSv3] used a variant of Luby's, with a small but clever modification, and showed that it reaches the threshold after O(log² ∆) rounds. As Barenboim et al. [BEPSv3] show, after the shattering, the remaining pieces can be solved deterministically, via the help of known deterministic MIS algorithms (and some other ideas), in log ∆ · 2^{O(√(log log n))} rounds. Thus, the overall complexity of [BEPSv3] is O(log² ∆) + log ∆ · 2^{O(√(log log n))} = O(log² ∆) + 2^{O(√(log log n))}.

To improve this, instead of Luby's, we use our new MIS algorithm as the base, which, as Theorem 1.1 suggests, reaches the shattering threshold after only O(log ∆) rounds. This will be formalized in Section 4. We will also use some minor modifications for the post-shattering phase to reduce its complexity from log ∆ · 2^{O(√(log log n))} to 2^{O(√(log log n))}. The overall result thus becomes:

Theorem 1.2.
There is a randomized distributed MIS algorithm that terminates after O(log ∆) + 2^{O(√(log log n))} rounds, with probability at least 1 − 1/n.

This improves the best-known bound for MIS and gets close to the Ω(min{log ∆, √(log n)}) lower bound of Kuhn et al. [KMWv1], which, at the very least, shows that the upper bound is provably optimal when log ∆ ∈ [2^{√(log log n)}, √(log n)]. Besides that, the new result matches the lower bound in a stronger and much more instructive sense: as we will discuss in point (C2) below, it perfectly pinpoints why the current lower bound techniques cannot prove a lower bound better than Ω(min{log ∆, √(log n)}).

Despite its extreme simplicity, the new algorithm turns out to lead to several implications, when combined with some known results and/or techniques:

(C1) Combined with the finish-off phase results of Barenboim et al. [BEPSv3], we get MIS algorithms with complexity O(log ∆) + O(min{λ^{1+ε} + log λ log log n, λ + λ^ε log log n, λ + (log log n)^{1+ε}}) for graphs with arboricity λ. Moreover, combined with the low-arboricity to low-degree reduction of Barenboim et al. [BEPSv3], we get an MIS algorithm with complexity O(log λ + √(log n)). These bounds improve over some results of [BEPSv3], Barenboim and Elkin [BE10], and Lenzen and Wattenhofer [LW11].

(C2) The new results highlight the barrier of the current lower bound techniques. In the known locality-based lower bound arguments, including that of [KMWv1], to establish a T-round lower bound, it is necessary that within T rounds, each node sees only a tree. That is, each T-hops neighborhood must induce a tree, which implies that the girth must be at least 2T + 1. Since any g-girth graph has arboricity λ ≤ O(n^{2/(g−2)}), from (C1), we get an O(√(log n))-round MIS algorithm when g = Ω(√(log n)). More precisely, for any graph with girth g = Ω(min{log ∆, √(log n)}), we get an O(min{log ∆ + 2^{O(√(log log n))}, √(log n)})-round algorithm.
Hence, the Ω(min{log ∆, √(log n)}) lower bound of [KMWv1] is essentially the best possible when the topology seen by each node within the allowed time must be a tree. This means, to prove a better lower bound, one has to part with these "tree local-views" topologies. However, that gives rise to intricate challenges; actually, to the best of our knowledge, there is no distributed locality-based lower bound, in fact for any (local) problem, that does not rely on tree local-views.

(C3) We get an O(√(log n))-round MIS algorithm for Erdős-Rényi random graphs G(n, p). This is because, if p = Ω(2^{√(log n)}/n), then with high probability the graph has diameter O(√(log n)) hops (see e.g. [CL01]), and when p = O(2^{√(log n)}/n), with high probability, ∆ = O(2^{√(log n)}) and thus the algorithm of Theorem 1.2 runs in at most O(√(log n)) rounds.

(C4) Combined with a recursive sparsification method of Bisht et al. [BKP14], we get a (2, β)-ruling-set algorithm with complexity O(β log^{1/β} ∆) + 2^{O(√(log log n))}, improving on the complexities of [BEPSv3] and [BKP14]. An (α, β)-ruling set S is a set where each two nodes in S are at distance at least α, and each node v ∈ V \ S has a node in S within its β-hops; so, a (2, 1)-ruling set is simply an MIS. The O(β log^{1/β} ∆) term is arguably (and even provably, as [Kuh15] indicated) best-possible for the current method, which roughly speaking works by computing the ruling set iteratively, using β successive reductions of the degree.

(C5) In the Local Computation Algorithms (LCA) model of Rubinfeld et al. [RTVX11] and Alon et al. [ARVX12], we get improved bounds for computing MIS.
Namely, in both the best-known time and the best-known space complexity bounds for MIS, due to Levi, Rubinfeld and Yodpinyanee [LRY15], the exponential-in-∆ factor improves, as only O(log ∆) rounds of the base algorithm, rather than the O(log² ∆) rounds inherited from [BEPSv3], now need to be simulated.

(C6) We get a Weak-MIS algorithm with complexity O(log ∆), which thus improves the round complexity of the distributed algorithmic version of the Lovász Local Lemma presented by Chung, Pettie, and Su [CPS14] from O(log_{1/ep(∆+1)} n · log² ∆) to O(log_{1/ep(∆+1)} n · log ∆). Roughly speaking, a Weak-MIS computation should produce an independent set S such that, for each node v, with probability at least 1 − 1/poly(∆), v is either in S or has a neighbor in S.

(C7) We get an O(log ∆ + log log log n)-round MIS algorithm for the CONGESTED-CLIQUE model, where per round each node can send O(log n) bits to each of the other nodes (even those non-adjacent to it): after running the MIS algorithm of Theorem 1.1 for O(log ∆) rounds, w.h.p., if ∆ ≥ n^{0.1}, we are already done, and otherwise, as Lemma 4.2 shows, all leftover components have size o(n^{0.5}). In the latter case, using the algorithm of [HPP+], the nodes of each leftover component can learn a leader of their component in O(log log log n) rounds, and using Lenzen's routing [Len13], we can make each leader learn the topology of its whole component, solve the related MIS problem locally, and send back the answers, all in O(1) rounds.

2 Warm Up: Luby's Algorithm

As a warm up for the MIS algorithm of the next section, here we briefly review Luby's algorithm and present some local analysis for it. The main purpose is to point out the challenge in (tightly) analyzing the local complexity of Luby's, which the algorithm of the next section tries to bypass.
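Before the formal description below, here is a hedged Python sketch of one round of the algorithm reviewed in this section: every remaining node draws a uniform random number, strict local minima join the MIS, and MIS nodes are removed together with their neighbors. The adjacency-dict representation and the function name are our illustrative assumptions, not from the paper.

```python
import random

def luby_round(adj, rng):
    """One round of Luby's algorithm.

    adj: dict mapping each remaining node to the set of its remaining
    neighbors; mutated in place. Returns the nodes that joined the MIS
    in this round.
    """
    # Each node picks a uniform random number in [0, 1).
    r = {v: rng.random() for v in adj}
    # Strict local minima join the MIS.
    mis = {v for v in adj if all(r[v] < r[u] for u in adj[v])}
    # MIS nodes are removed from the graph along with their neighbors.
    removed = set(mis)
    for v in mis:
        removed |= adj[v]
    for v in removed:
        for u in adj.pop(v):
            if u not in removed:
                adj[u].discard(v)
    return mis
```

Repeatedly calling `luby_round` until `adj` is empty yields a maximal independent set of the original graph; the global analysis discussed in this section shows that this takes O(log n) rounds with high probability.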
Luby's Algorithm: The algorithm of [Lub85, ABI86] is as simple and clean as this:
"In each round, each node picks a random number uniformly from [0, 1]; strict local minima join the MIS, and get removed from the graph along with their neighbors."

Note that each round of the algorithm can be easily implemented in 2 communication rounds on G, one for exchanging the random numbers and the other for informing neighbors of newly joined MIS nodes. (One can easily see that a precision of O(log ∆) bits suffices for the random numbers.) Ignoring this factor of 2, in the sequel, by round we mean one round of the algorithm.

Global Analysis: The standard method for analyzing Luby's algorithm goes via looking at the whole graph, i.e., using a global view. See [MR10, Section 12.3], [Pel00, Section 8.4], and [Lyn96, Section 4.5] for textbook treatments. We note that this is the only known way of proving that this algorithm terminates everywhere in O(log n) rounds with high probability. The base of the analysis is to show that per iteration, in expectation, at least half of the edges (of the whole remaining graph) get removed. Although the initial arguments in [Lub85, ABI86] were more lengthy, Yves et al. [YRSDZ10] gave a simple and clean argument for this edge-removal guarantee. Given it, after O(log n) rounds, the algorithm terminates everywhere, with high probability.

2.1 Local Analysis: Take 1

To analyze the algorithm in a local way, and to bound its local complexity, the natural idea is to say that over time, each local neighborhood gets "simplified". Particularly, the first-order realization of this intuition would be to look at the degrees and argue that they shrink with time. The following standard observation is the base tool in this argument:
Claim 2.1.
Consider a node u at a particular round, let d(u) be its degree, and let d_max be the maximum degree among the nodes in the inclusive neighborhood N+(u) of u. The probability that u is removed in this round is at least (d(u)+1)/(d(u)+d_max).

Proof. Let u* be the node in N+(u) that draws the smallest random number. If u* actually has the smallest number in its own neighborhood, then it will join the MIS, which means that u gets removed. Since all numbers are i.i.d. random variables, and as u* draws the smallest among the d(u)+1 numbers in N+(u), the probability that it is the smallest both in its own neighborhood and in the neighborhood of u is at least (d(u)+1)/(d(u)+d_max). This is because the latter is a set of size at most d(u)+d_max.

From the claim, we get that if the degree of a node u is at least half of the maximum degree of its neighbors, then in one round, with probability at least 1/3, u gets removed. Thus, in α = O(1) rounds from the start, either u is removed or its degree falls below ∆/2, with probability at least 1/2. We would like to continue this argument and say that every O(1) rounds, u's degree shrinks by another 2-factor, thus getting a bound of O(log ∆). However, this is not straightforward, as u's degree drops might get delayed because of delays in the degree drops of u's neighbors. The issue seems rather severe, as the degree drops of different nodes can be positively correlated.

Next, we explain a simple argument giving a weak but still local complexity of O(log² ∆ + log ∆ log 1/ε) rounds. For the purpose of this paragraph, let us say a removed node has degree 0. From the above, we get that after 10α log ∆ rounds, the probability that u still has degree at least ∆/2 is at most 2^{−10 log ∆}. Thus, using a union bound, we can say that with probability at least 1 − (∆+1)·2^{−10 log ∆}, after 10α log ∆ rounds, u and all its neighbors have degree at most ∆/2. Hence, with probability at least 1 − 2(∆+2)·2^{−10 log ∆}, after 20α log ∆ rounds, node u has had another drop and its degree is at most ∆/4. Continuing this argument pattern recursively for log ∆ + 1 iterations, we get that, with probability at least 1 − (∆+2)·(log ∆ + 1)·2^{−10 log ∆} ≥ 1 − 2^{−5 log ∆}, after O(log² ∆) rounds, node u's degree has dropped below 1, i.e., to 0. Since a node of degree 0 is removed, we get that u is removed after at most O(log² ∆) rounds with probability at least 1 − 2^{−Ω(log ∆)}. A simple repetition argument proves that this generalizes to show that after O(log² ∆ + log ∆ log 1/ε) rounds, node u is removed with probability at least 1 − ε.

In the full version of this paper, we will present a stronger (but also much more complex) argument which proves a local complexity of O(log² ∆ + log 1/ε) for the same algorithm. This bound has the desirable additive log 1/ε dependency on ε, but it is still far from the best possible bound, due to the first term.

2.2 Local Analysis: Take 2

Here, we briefly explain the modification of Luby's algorithm that Barenboim et al. [BEPSv3] use. The key is the following clever idea: they manually circumvent the problem of nodes having a lag in their degree drops; that is, they kick the nodes whose degree drops are lagging significantly out of the algorithm, as these nodes can create trouble for other nodes in their vicinity.

Formally, they divide time into phases of Θ(log log ∆ + log 1/ε) rounds and require that, by the end of phase k, each node has degree at most ∆/2^k. At the end of each phase, each node that has a degree higher than the allowed threshold is kicked out. The algorithm is run for log ∆ phases. From Claim 2.1, we can see that the probability that a node that has survived up to phase i − 1 gets kicked out in phase i is at most 2^{−Θ(log log ∆ + log 1/ε)} = ε/log ∆. Hence, the probability that a given node v gets kicked out in one of the log ∆ phases is at most ε. This means that, by the end of Θ(log ∆ log log ∆ + log ∆ log 1/ε) rounds, with probability 1 − ε, node v is not kicked out and is thus removed because of having degree 0; that is, it joined, or has a neighbor in, the MIS.

This Θ(log ∆ log log ∆ + log ∆ log 1/ε) local complexity has an improved ∆-dependency (and the guarantee has some nice independence type of properties).
However, as mentioned in Section 1.1, its ε-dependency is not desirable, due to the log ∆ factor. Note that this is exactly the reason that the shattering threshold in the result of Barenboim et al. [BEPSv3] is O(log² ∆) rounds.

3 The Algorithm

Here we present a very simple and clean algorithm that guarantees, for each node v, that after O(log ∆ + log 1/ε) rounds, with probability at least 1 − ε, node v has terminated and it knows whether it is in the (computed) MIS or it has a neighbor in the (computed) MIS.

The Intuition: Recall that the difficulty in locally analyzing Luby's algorithm was the fact that the degree-dropping progress of a node can be delayed by that of its neighbors, which in turn can be delayed by their own neighbors, and so on (up to log ∆ hops). To bypass this issue, the algorithm presented here tries to completely disentangle the "progress" of node v from that of nodes that are far away, say those at distance above 3.

The intuitive base of the algorithm is as follows. There are two scenarios in which a node v has a good chance of being removed: either (1) v is trying to join the MIS and it does not have too many competing neighbors, in which case v has a shot at joining the MIS, or (2) a large enough number of neighbors of v are trying to join the MIS and each of them does not have too much competition, in which case it is likely that one of these neighbors of v joins the MIS and thus v gets removed. These two cases also depend only on v's 2-neighborhood. Our key idea is to create an essentially deterministic dynamic which has these two scenarios as its (more) stable points and makes each node v spend a significant amount of time in these two scenarios, unless it has been removed already.

The Algorithm: In each round t, each node v has a desire-level p_t(v) for joining the MIS, which initially is set to p_0(v) = 1/
2. We call the total sum of the desire-levels of the neighbors of v its effective-degree d_t(v), i.e., d_t(v) = Σ_{u ∈ N(v)} p_t(u). The desire-levels change over time as follows:

p_{t+1}(v) = p_t(v)/2 if d_t(v) ≥ 2, and p_{t+1}(v) = min{2·p_t(v), 1/2} if d_t(v) < 2.

The desire-levels are used as follows: in each round, node v gets marked with probability p_t(v), and if no neighbor of v is marked, v joins the MIS and gets removed along with its neighbors.

Again, each round of the algorithm can be implemented in 2 communication rounds on G, one for exchanging the desire-levels and the marks, and the other for informing neighbors of newly joined MIS nodes. Ignoring this factor of 2, in the sequel, each round means a round of the algorithm.

The Analysis: The algorithm is clearly correct, meaning that the set of nodes that join the MIS is indeed an independent set, and that the algorithm terminates at a node only if the node is either in the MIS or adjacent to a node in the MIS. We next argue that each node v is likely to terminate quickly.

Theorem 3.1.
For each node v, the probability that v has not made its decision within the first β(log deg + log 1/ε) rounds, for a large enough constant β and where deg denotes v's degree at the start of the algorithm, is at most ε. Furthermore, this holds even if the outcome of the coin tosses outside N^{+2}(v) is determined adversarially.

Let us say that a node u is low-degree if d_t(u) < 2, and high-degree otherwise. Considering the intuition discussed above, we define two types of golden rounds for a node v: (1) rounds in which d_t(v) < 2 and p_t(v) = 1/2, and (2) rounds in which d_t(v) ≥ 1 and at least d_t(v)/10 of it is contributed by low-degree neighbors. These are called golden rounds because, as we will see, in the first type v has a constant chance of joining the MIS, and in the second type there is a constant chance that one of those low-degree neighbors of v joins the MIS and thus v gets removed. For the sake of analysis, let us imagine that node v keeps track of the number of golden rounds of each type it has been in.

Lemma 3.2.
By the end of round β(log deg + log 1/ε), either v has joined, or has a neighbor in, the (computed) MIS, or at least one of its golden round counts has reached (β/13)(log deg + log 1/ε).

Proof. We focus only on the first β(log deg + log 1/ε) rounds. Let g_1 and g_2 respectively be the number of golden rounds of types 1 and 2 for v during this period. We assume that, by the end of round β(log deg + log 1/ε), node v is not removed and g_1 ≤ (β/13)(log deg + log 1/ε), and we conclude that then it must have been the case that g_2 > (β/13)(log deg + log 1/ε).

Let h be the number of rounds during which d_t(v) ≥ 2. Notice that the changes in p_t(v) are governed by the condition d_t(v) ≥ 2: in each round with d_t(v) ≥ 2, p_t(v) decreases by a 2-factor, and in each other round, it increases by a 2-factor, unless it is already at 1/2. Since the number of 2-factor increases in p_t(v) can be at most equal to the number of 2-factor decreases in it, we get that there are at least β(log deg + log 1/ε) − 2h rounds in which p_t(v) = 1/2. Now, out of these rounds, at most h of them can have d_t(v) ≥ 2. Hence, g_1 ≥ β(log deg + log 1/ε) − 3h. As we have assumed g_1 ≤ (β/13)(log deg + log 1/ε), we get that β(log deg + log 1/ε) − 3h ≤ (β/13)(log deg + log 1/ε), and thus h ≥ (4β/13)(log deg + log 1/ε).

Let us now consider the changes in the effective-degree d_t(v) of v over time. If d_t(v) ≥ 1 and this is not a type-2 golden round, then less than d_t(v)/10 of it is contributed by low-degree neighbors. In that case, the desire-level of each high-degree neighbor gets halved while the desire-level of each low-degree neighbor at most doubles, so d_{t+1}(v) ≤ (1/10)·2·d_t(v) + (9/10)·(1/2)·d_t(v) = (13/20)·d_t(v) < (2/3)·d_t(v). There are g_2 golden rounds of type-2, and in each of them d_t(v) can grow by at most a 2-factor; since (2/3)·(2/3)·2 = 8/9 < 1, each type-2 golden round cancels the shrinkage of at most two other rounds. Thus, ignoring the total of at most 3g_2 rounds lost due to type-2 golden rounds and their cancellation effects, whenever d_t(v) ≥ 1, the effective-degree d_t(v) shrinks by at least a 2/3-factor. This cannot continue to happen for more than log_{3/2} deg rounds, as that would lead the effective degree to exit the d_t(v) ≥ 1 region; moreover, as the footnote explains, for d_t(v) to go or stay above 2, it takes increases that start when d_t(v) > 1, whose number is bounded by g_2. That is, h ≤ log_{3/2} deg + 3g_2. Since h ≥ (4β/13)(log deg + log 1/ε), for a large enough constant β, we get g_2 > (β/13)(log deg + log 1/ε).

(Footnote, regarding the marking process of the algorithm: There is a version of Luby's algorithm which also uses a similar marking process. However, at each round, letting deg(v) denote the number of the neighbors of v remaining at that time, Luby's sets the marking probability of each node v to be 1/(deg(v)+1), which, by the way, is the same as the probability of v being a local minimum in the variant described in Section 2. Notice that this is a very strict fixing of the marking probability, whereas in our algorithm, we change the probability dynamically and flexibly over time, trying to push towards the two desirable scenarios mentioned in the intuition; in fact, this simple dynamic is the key ingredient of the new algorithm.)

Lemma 3.3.
In each type-1 golden round, with probability at least 1/32, v joins the MIS. Moreover, in each type-2 golden round, with probability at least 1/200, a neighbor of v joins the MIS. Hence, the probability that v has not been removed (due to joining, or having a neighbor in, the MIS) during the first β(log deg + log 1/ε) rounds is at most ε. These statements hold even if the coin tosses outside N^{+2}(v) are determined adversarially.

Proof. In each type-1 golden round, node v gets marked with probability 1/2. The probability that no neighbor of v is marked is ∏_{u ∈ N(v)} (1 − p_t(u)) ≥ 4^{−Σ_{u ∈ N(v)} p_t(u)} = 4^{−d_t(v)} > 4^{−2} = 1/16, using the inequality 1 − x ≥ 4^{−x}, which holds for x ∈ [0, 1/2]. Hence, v joins the MIS with probability at least (1/2)·(1/16) = 1/32.

For a type-2 golden round, we walk over the set L of low-degree neighbors of v one by one and expose their randomness until we reach a node that is marked. We will find a marked node with probability at least 1 − ∏_{u ∈ L} (1 − p_t(u)) ≥ 1 − e^{−Σ_{u ∈ L} p_t(u)} ≥ 1 − e^{−d_t(v)/10} ≥ 1 − e^{−1/10} > 0.09. When we reach the first low-degree neighbor u that is marked, the probability that no neighbor of u gets marked is at least ∏_{w ∈ N(u)} (1 − p_t(w)) ≥ 4^{−Σ_{w ∈ N(u)} p_t(w)} = 4^{−d_t(u)} > 4^{−2} = 1/16. Hence, with probability at least 0.09/16 > 1/200, node u, and thus some neighbor of v, joins the MIS.

We now know that in each golden round, v gets removed with probability at least 1/200. Hence, using Lemma 3.2, the probability that v does not get removed during the first β(log deg + log 1/ε) rounds is at most (1 − 1/200)^{(β/13)(log deg + log 1/ε)} ≤ ε/deg ≤ ε, for a large enough constant β.

4 Global Complexity

In this section, we explain how combining the algorithm of the previous section with some known techniques leads to a randomized MIS algorithm with a high probability global complexity of O(log ∆) + 2^{O(√(log log n))} rounds.

As explained in Section 1.2, the starting point is to run the algorithm of the previous section for Θ(log ∆) rounds. Thanks to the local complexity of this base algorithm, as we will show, we reach the shattering threshold after O(log ∆) rounds. The 2-hops randomness locality of Theorem 3.1, i.e., the fact that it only relies on the randomness bits within the 2-hops neighborhood, plays a vital role in establishing this shattering phenomenon. The precise statement of the shattering property achieved is given in Lemma 4.2, but we first need to establish a helping lemma:

Lemma 4.1.
Let c > 0 be an arbitrary constant. For any 5-independent set of nodes S, that is, a set in which the pairwise distances are at least 5, the probability that all nodes of S remain undecided after Θ(c log ∆) rounds of the MIS algorithm of the previous section is at most ∆^{−c|S|}.

Proof. We walk over the nodes of S one by one: when considering node v ∈ S, we know from Theorem 3.1 that the probability that v stays undecided after Θ(c log ∆) rounds is at most ∆^{−c}, and, more importantly, this only relies on the coin tosses within distance 2 of v. Because of the 5-independence of set S, the coin tosses we rely on for different nodes of S are non-overlapping and hence, the probability that the whole set S stays undecided is at most ∆^{−c|S|}.

(Footnote to the proof of Lemma 3.2: Notice the switch to the threshold d_t(v) ≥ 2, instead of d_t(v) > 1. We need to allow a small slack here, as done by switching to the threshold d_t(v) ≥ 2, in order to avoid possible zigzag behaviors on the boundary. This is because the above argument does not bound the number of 2-factor increases in d_t(v) that start when d_t(v) ∈ (1/2, 1), but these would lead d_t(v) to go above 1. This can continue to happen even for an unlimited time if d_t(v) keeps zigzagging around 1 (unless we give further arguments of the same flavor showing that this is not possible). However, for d_t(v) to go or stay above 2, it takes increases that start when d_t(v) > 1, and the number of these is upper bounded by g_2.)

From this lemma, we can get the following shattering guarantee. Since the proof is similar to that of [BEPSv3, Lemma 3.3], or those of [Bec91, Main Lemma], [ARVX12, Lemma 4.6], and [LRY15, Theorem 3], we only provide a brief sketch:

Lemma 4.2.
Let c be a large enough constant and B be the set of nodes remaining undecided after Θ(c log ∆) rounds of the MIS algorithm of the previous section on a graph G. Then, with probability at least 1 − 1/n^c, we have the following two properties:

(P1) There is no G^4-independent, G^9-connected subset S ⊆ B such that |S| ≥ log_∆ n. Here G^x denotes the graph where we put edges between each two nodes with G-distance at most x.

(P2) All connected components of G[B], that is, the subgraph of G induced by the nodes in B, have at most O(log_∆ n · ∆^4) nodes each.

Proof Sketch. Let H = G^9 \ G^4, i.e., the result of removing the G^4 edges from G^9. For (P1), note that the existence of any such set S would mean that H[B] contains a (log_∆ n)-node tree subgraph. There are at most 4^{log_∆ n} different (log_∆ n)-node tree topologies, and for each of them, less than n · ∆^{9 log_∆ n} ways to embed it in H, as each H-degree is less than ∆^9. For each of these trees, by Lemma 4.1, the probability that all of its nodes stay undecided is at most ∆^{−c log_∆ n}. By a union bound over all the trees, we conclude that with probability at least 1 − n · (4∆^9)^{log_∆ n} · ∆^{−c log_∆ n} ≥ 1 − 1/n^{c′}, for a constant c′ that can be made arbitrarily large by choosing c large enough, no such set S exists. For (P2), note that if G[B] had a component with more than Θ(log_∆ n · ∆^4) nodes, then we could find a set S violating (P1): greedily add nodes to the candidate set S one by one, each time discarding all nodes within 4 hops of the newly added node, which are at most O(∆^4) many.

From property (P2) of Lemma 4.2, it follows that running the deterministic MIS algorithm of Panconesi and Srinivasan [PS92], which works in 2^{O(√log n′)} rounds in graphs of size n′, in each of the remaining components finishes our MIS problem in 2^{O(√(log ∆ + log log n))} rounds. However, the appearance of the log ∆ in the exponent is undesirable, as we seek a complexity of O(log ∆) + 2^{O(√log log n)}.
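To make the greedy argument behind (P2) concrete, here is a small centralized sketch (an illustration only; the function names, the smallest-ID pick, and the adjacency-dictionary representation are our own, not from the paper): repeatedly pick a node of the component and discard its 4-hop ball. Any two picked nodes end up at distance at least 5, and since each discarded ball has O(∆^4) nodes, a component with more than Θ(log_∆ n · ∆^4) nodes yields a 5-independent set of size more than log_∆ n.

```python
from collections import deque

def ball(adj, v, r):
    """All vertices within distance r of v, by BFS on the adjacency dict."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # do not expand beyond radius r
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def greedy_5_independent(adj, component):
    """Greedily extract a 5-independent set from a component: pick a node
    (smallest ID, for determinism), discard its 4-hop ball, repeat."""
    remaining = set(component)
    chosen = []
    while remaining:
        v = min(remaining)
        chosen.append(v)
        remaining -= ball(adj, v, 4)
    return chosen
```

On a 20-node path, for instance, this picks nodes 0, 5, 10, 15, whose pairwise distances are exactly 5.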
To remedy this problem, we use an idea similar to [BEPSv3, Section 3.2], which tries to leverage the (P1) property. In a very rough sense, the (P1) property of Lemma 4.2 tells us that if we "contract nodes that are closer than 5 hops" (this is to be made precise), the leftover components would have size at most log_∆ n, which would thus avoid the undesirable log ∆ term in the exponent. We will see that, while running the deterministic MIS algorithm, we will be able to expand back these contractions and solve their local problems. We next formalize this intuition.

The finish-off algorithm is as follows. We consider each connected component C of the remaining nodes separately; the algorithm runs in parallel for all the components. First, compute a (5, h)-ruling set R_C in each connected component C of the set B of the remaining nodes, for an h = Θ(log log n). Recall that a (5, h)-ruling set R_C means that each two nodes of R_C have distance at least 5, while for each node in C, there is at least one node of R_C within h hops of it. This (5, h)-ruling set R_C can be computed in O(log log n) rounds using the algorithm of Schneider, Elkin and Wattenhofer [SEW13]. This is different than what Barenboim et al. did: they could afford to use the more standard ruling-set algorithm, particularly computing a (5,
32 log ∆ + O(1))-ruling set for their purposes, because the fact that this 32 log ∆ ends up multiplying the complexity of their finish-off phase did not change (the asymptotics of) their overall complexity. Then, we cluster the nodes of C around the R_C-nodes by letting each node v ∈ C join the cluster of the nearest R_C-node, breaking ties arbitrarily by IDs. Next, contract each cluster to a new node. Thus, we get a new graph G′_C on these new nodes, where in reality, each of these new nodes has radius h = O(log log n), and thus a communication round on G′_C can be simulated by O(h) communication rounds on G.

From (P1) of Lemma 4.2, we can infer that G′_C has at most log_∆ n nodes, w.h.p., as follows: even though R_C might be disconnected in G^9, by greedily adding more nodes of C to it, one by one, we can make it connected in G^9 while still keeping it 5-independent. We note that this is done only for the analysis. See also [BEPSv3, Page 19, Steps 3 and 4] for a more precise description. Since, by (P1) of Lemma 4.2, the end result should have size at most log_∆ n, with high probability, we conclude that G′_C has at most log_∆ n nodes, with high probability.

We can now compute an MIS of C via the almost standard deterministic way of using network decompositions. We run the network decomposition algorithm of Panconesi and Srinivasan [PS92] on G′_C. This takes 2^{O(√(log log_∆ n))} rounds and gives G′_C-clusters of radius at most 2^{O(√(log log_∆ n))}, colored with 2^{O(√(log log_∆ n))} colors such that adjacent clusters do not have the same color. We will walk over the colors one by one and compute the MIS of the clusters of each color, given the solutions of the previous colors.
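The clustering-and-contraction step above can be sketched centrally as follows (a sketch under our own conventions: multi-source BFS from the ruling set, with ties broken toward the smaller center ID as one concrete instance of the arbitrary ID tie-breaking; in the distributed setting, this is h = O(log log n) rounds of parallel flooding):

```python
def cluster_and_contract(adj, ruling_set):
    """Each node joins the cluster of its nearest ruling-set node
    (ties: smaller center ID). Returns the node->center map and the
    edge set of the contracted graph G'_C. Assumes every node is
    within h hops of the ruling set, as the (5, h)-ruling set grants."""
    center = {r: r for r in ruling_set}
    frontier = set(ruling_set)
    while frontier:
        claims = {}  # node -> smallest center ID claiming it at this BFS level
        for u in frontier:
            for w in adj[u]:
                if w not in center:
                    c = center[u]
                    if w not in claims or c < claims[w]:
                        claims[w] = c
        center.update(claims)
        frontier = set(claims)
    contracted_edges = {tuple(sorted((center[u], center[w])))
                        for u in adj for w in adj[u]
                        if center[u] != center[w]}
    return center, contracted_edges
```

For example, on a 6-node path with ruling set {0, 5}, nodes 0, 1, 2 join center 0, nodes 3, 4, 5 join center 5, and the contracted graph G′_C is the single edge (0, 5).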
Each time, we can (mentally) expand each of these G′_C-clusters to all the C-nodes of the related cluster, which means these C-clusters have radius at most O(log log n) · 2^{O(√(log log_∆ n))}. While solving the problem of the color-j clusters, we make a node in each of these clusters gather the whole topology of its cluster and also the adjacent MIS nodes of the previous colors. Then, this cluster-center solves the MIS problem locally and reports it back. Since each cluster has radius O(log log n) · 2^{O(√(log log_∆ n))}, this takes O(log log n) · 2^{O(√(log log_∆ n))} rounds per color. Thus, over all the colors, the complexity becomes 2^{O(√(log log_∆ n))} · O(log log n) · 2^{O(√(log log_∆ n))} = 2^{O(√log log n)} rounds. Including the O(log log n) ruling-set computation rounds and the O(log ∆) pre-shattering rounds, this gives the promised global complexity of O(log ∆) + 2^{O(√log log n)}, hence proving Theorem 1.2.

Conclusion

This paper presented an extremely simple randomized distributed MIS algorithm, which exhibits many interesting local characteristics, including a local complexity guarantee of each node v terminating in O(log deg(v) + log 1/ε) rounds, with probability at least 1 − ε. We also showed that, combined with known techniques, this leads to an improved high-probability global complexity of O(log ∆) + 2^{O(√log log n)} rounds, and several other important implications, as described in Section 1.3.

For open questions, the gap between the upper and lower bounds, which shows up when log ∆ = ω(√log n), is perhaps the most interesting. We saw in (C2) of Section 1.3 that if the lower bound is the one that should be improved, we need to go away from "tree local-views" topologies. Another longstanding open problem is to find a poly(log n) deterministic distributed MIS algorithm. Combined with the results of this paper, that can potentially get us to an O(log ∆) + poly(log log n) randomized algorithm.
Acknowledgment: I thank Eli Gafni, Bernhard Haeupler, Stephan Holzer, Fabian Kuhn, Nancy Lynch, and Seth Pettie for valuable discussions. I am also grateful to Fabian Kuhn and Nancy Lynch for carefully reading the paper and many helpful comments. The point (C2) in Section 1.3 was brought to my attention by Fabian Kuhn. The idea of highlighting the local complexity is rooted in conversations with Stephan Holzer and Nancy Lynch, and also in Eli Gafni's serious insistence that the (true) complexity of a local problem should not depend on n. (This is a paraphrased version of his comment during a lecture on Linial's Ω(log* n) lower bound, in the Fall 2014 Distributed Graph Algorithms (DGA) course at MIT.)

References

[ABI86] Noga Alon, László Babai, and Alon Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of Algorithms, 7(4):567–583, 1986.

[ARVX12] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In
Proc. of ACM-SIAM Symp. on Disc. Alg. (SODA), pages 1132–1139, 2012.

[BE10] Leonid Barenboim and Michael Elkin. Sublogarithmic distributed MIS algorithm for sparse graphs using Nash-Williams decomposition. Distributed Computing, 22(5-6):363–379, 2010.

[Bec91] József Beck. An algorithmic approach to the Lovász Local Lemma. I. Random Structures & Algorithms, 2(4):343–365, 1991.

[BEPSv3] Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. In Foundations of Computer Science (FOCS) 2012, pages 321–330. IEEE, 2012; also CoRR abs/1202.1983v3.

[BKP14] Tushar Bisht, Kishore Kothapalli, and Sriram Pemmaraju. Brief announcement: Super-fast t-ruling sets. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 379–381. ACM, 2014.

[CL01] Fan Chung and Linyuan Lu. The diameter of random sparse graphs. Advances in Applied Math, 26(4):257–279, 2001.

[Coo83] Stephen A. Cook. An overview of computational complexity. Communications of the ACM, 26(6):400–408, 1983.

[CPS14] Kai-Min Chung, Seth Pettie, and Hsin-Hao Su. Distributed algorithms for the Lovász Local Lemma and graph coloring. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 134–143. ACM, 2014.

[HPP+15] James Hegeman, Gopal Pandurangan, Sriram Pemmaraju, Vivek Sardeshmukh, and Michele Scquizzato. Toward optimal bounds in the congested clique: Graph connectivity and MST. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), to appear, 2015.

[KMWv1] Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. What cannot be computed locally! In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 300–309. ACM, 2004; also CoRR abs/1011.5470v1.

[Kuh15] Fabian Kuhn. Personal communication, 06-15-2015.

[KW84] Richard M. Karp and Avi Wigderson. A fast parallel algorithm for the maximal independent set problem. In Proc. of the Symp. on Theory of Comp. (STOC), pages 266–272. ACM, 1984.

[Len13] Christoph Lenzen. Optimal deterministic routing and sorting on the congested clique. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 42–50, 2013.

[Lin92] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.

[LRY15] Reut Levi, Ronitt Rubinfeld, and Anak Yodpinyanee. Local computation algorithms for graphs of non-constant degrees. Preprint arXiv:1502.04022, 2015.

[Lub85] Michael Luby. A simple parallel algorithm for the maximal independent set problem. In Proc. of the Symp. on Theory of Comp. (STOC), pages 1–10. ACM, 1985.

[LW11] Christoph Lenzen and Roger Wattenhofer. MIS on trees. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 41–48. ACM, 2011.

[Lyn96] Nancy A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996.

[MR10] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Chapman & Hall/CRC, 2010.

[Pel00] David Peleg. Distributed Computing: A Locality-Sensitive Approach. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[PR07] Michal Parnas and Dana Ron. Approximating the minimum vertex cover in sublinear time and a connection to distributed algorithms. Theoretical Computer Science, 381(1):183–196, 2007.

[PS92] Alessandro Panconesi and Aravind Srinivasan. Improved distributed algorithms for coloring and network decomposition problems. In Proc. of the Symp. on Theory of Comp. (STOC), pages 581–592. ACM, 1992.

[RTVX11] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Symposium on Innovations in Computer Science, pages 223–238, 2011.

[SEW13] Johannes Schneider, Michael Elkin, and Roger Wattenhofer. Symmetry breaking depending on the chromatic number or the neighborhood growth. Theoretical Computer Science, 509:40–50, 2013.

[Val82] Leslie G. Valiant. Parallel computation. In Proc. 7th IBM Symposium on Mathematical Foundations of Computer Science, 1982.

[YRSDZ10] Métivier Yves, John Michael Robson, Nasser Saheb-Djahromi, and Akka Zemmari. An optimal bit complexity randomized distributed MIS algorithm. In Structural Information and Communication Complexity, pages 323–337. Springer, 2010.
A Simplified Global Analysis of Luby's Algorithm, due to Yves et al.
We here explain (a slightly paraphrased version of) the clever approach of Yves et al. [YRSDZ10]for bounding Luby’s global time complexity:
Lemma A.1.
Let G[V_t] be the graph induced by the nodes that are alive in round t, and let m_t denote the number of edges of G[V_t]. For each round t, we have E[m_{t+1}] ≤ m_t/2, where the expectation is over the randomness of round t.

Proof. Consider an edge e = (u, v) that is alive at the start of a round t, i.e., that is in G[V_t]. Note that edge e will not be in G[V_{t+1}], in which case we say e died, if among the random numbers drawn in round t, there is a node w that is adjacent to v or u (or both) and w has the strict local minimum of its own neighborhood. In this case, we say node w killed edge e.

Note that the probability that w kills e is 1/(d(w)+1), where d(w) denotes the degree of w. The difficulty comes when we want to compute the probability that there exists a w that kills e. This is mainly because the events of different nodes w killing e are not disjoint, and hence we cannot easily sum over them. Fortunately, there is a simple and elegant change in the definition, due to Yves et al. [YRSDZ10], which saves us from tedious calculations.

Without loss of generality, suppose that w is adjacent to v. We say w strongly kills e from the side of node v, and use the notation w →_v e to denote it, if w has the (strictly) minimum random number in Γ_w ∪ Γ_v. Note that this is a stronger requirement, and thanks to this definition, at most one node w can strongly kill e from the side of v. Thus, in a sense, we now have events that are disjoint, which means the probability that any of them happens is the summation of the probabilities of each of them happening. The only catch is, we might double-count an edge dying because it gets (strongly) killed from both endpoints, but that is easy to handle; we just lose a 2-factor. In the following, with a slight abuse of notation, by E we mean the alive edges, i.e., those of G[V_t], and by Γ(v), we mean the neighbors of v in G[V_t].
We have

E[number of edges that die]
  ≥ ∑_{e=(u,v)∈E} Pr[e gets strongly killed]
  ≥ ∑_{e=(u,v)∈E} ( ∑_{w∈Γ(v)} Pr[w →_v e] + ∑_{w′∈Γ(u)} Pr[w′ →_u e] ) / 2
  ≥ ∑_{e=(u,v)∈E} ( ∑_{w∈Γ(v)} 1/(d(w)+d(v)) + ∑_{w′∈Γ(u)} 1/(d(w′)+d(u)) ) / 2
  = ∑_{v∈V} ∑_{u∈Γ(v), e=(u,v)} ( ∑_{w∈Γ(v)} 1/(d(w)+d(v)) + ∑_{w′∈Γ(u)} 1/(d(w′)+d(u)) ) / 4
  = ( ∑_{v∈V} ∑_{w∈Γ(v)} ∑_{u∈Γ(v), e=(u,v)} 1/(d(w)+d(v)) + ∑_{u∈V} ∑_{w′∈Γ(u)} ∑_{v∈Γ(u), e=(u,v)} 1/(d(w′)+d(u)) ) / 4
  = ( ∑_{v∈V} ∑_{w∈Γ(v)} d(v)/(d(w)+d(v)) + ∑_{u∈V} ∑_{w′∈Γ(u)} d(u)/(d(w′)+d(u)) ) / 4
  = ( ∑_{v∈V} ∑_{w∈Γ(v)} d(v)/(d(w)+d(v)) + ∑_{v∈V} ∑_{w∈Γ(v)} d(w)/(d(w)+d(v)) ) / 4
  = ( ∑_{v∈V} ∑_{w∈Γ(v)} 1 ) / 4 = 2m_t/4 = m_t/2.

Hence, E[m_{t+1}] ≤ m_t − m_t/2 = m_t/2.

It follows from Lemma A.1 that the expected number of the edges that are alive after 4 log n rounds is at most m_0/2^{4 log n} ≤ n²/n⁴ = 1/n². Therefore, using Markov's inequality, we conclude that the probability that at least one edge is left alive is at most 1/n². Hence, with probability at least 1 − 1/n², all nodes have terminated by the end of round 4 log n.
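The kill rule analyzed above is easy to simulate. The sketch below (our own minimal implementation of this permutation variant of Luby's algorithm, not code from the paper) plays one round: each alive node draws a random number, joins the MIS if it holds the strict minimum of its alive closed neighborhood, and MIS nodes are removed together with their neighbors. Averaging alive_edges over many trials illustrates the at-least-halving of Lemma A.1 empirically.

```python
import random

def luby_round(adj, alive, rng):
    """One round: each alive node draws a number; a node joins the MIS
    iff its number is strictly smaller than those of all its alive
    neighbors; MIS nodes and their neighbors are then removed."""
    r = {v: rng.random() for v in alive}
    mis = {v for v in alive
           if all(r[v] < r[u] for u in adj[v] if u in alive)}
    removed = set(mis)
    for v in mis:
        removed.update(u for u in adj[v] if u in alive)
    return alive - removed

def alive_edges(adj, alive):
    """Number of edges of G[V_t], counting each edge once."""
    return sum(1 for v in alive for u in adj[v] if u in alive and u < v)

if __name__ == "__main__":
    # Demo on a G(n, p) random graph: surviving edge counts drop
    # geometrically, so the process ends after O(log n) rounds w.h.p.
    rng = random.Random(0)
    n, p = 200, 0.05
    adj = {v: [] for v in range(n)}
    for v in range(n):
        for u in range(v + 1, n):
            if rng.random() < p:
                adj[v].append(u)
                adj[u].append(v)
    alive = set(range(n))
    while alive_edges(adj, alive) > 0:
        alive = luby_round(adj, alive, rng)
```

On a triangle, a single round always empties the graph: the globally minimum node joins the MIS and both others are its neighbors.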