Breathe before Speaking: Efficient Information Dissemination Despite Noisy, Limited and Anonymous Communication
Ofer Feinerman∗  Bernhard Haeupler†  Amos Korman‡

Abstract
Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we take a first step towards reducing this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages.

We focus on the broadcast problem and the majority-consensus problem. Both are fundamental information dissemination problems in distributed computing, in which the goal of agents is to converge to some prescribed desired opinion. We initiate the study of these problems in the presence of communication noise. Our model for communication is extremely weak and follows the push gossip communication paradigm: in each round each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1/2 − ǫ, for some small ǫ > 0.

Even in this severely restricted, stochastic and noisy setting we give natural protocols that solve the noisy broadcast and the noisy majority-consensus problems efficiently. Our protocols run in O(log n/ǫ²) rounds and use O(n log n/ǫ²) messages/bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message efficient as if each agent would have been simultaneously informed directly by an agent that knows the prescribed desired opinion.
Our efficient, robust, and simple algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective communication schemes in anonymous and noisy populations.

∗ The Shlomo and Michla Tomarin Career Development Chair, The Weizmann Institute of Science, Rehovot, Israel. E-mail: [email protected]. Supported in part by the Clore Foundation, the Israel Science Foundation (FIRST grant no. 1694/10) and the Minerva Foundation.
† Carnegie Mellon University. E-mail: [email protected]. Supported in part by the NSF grant XXXX.
‡ Contact author. CNRS and Univ. Paris Diderot, Paris, 75013, France. E-mail: [email protected].
Supported in part by the ANR project DISPLEXITY, and by the INRIA project GANG. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 648032).
1 Introduction
Information theory originated as a search for methods to manage communication noise in engineered systems [57]. In many ways, this search has reached its goals. Coding methods that reduce error rates to practically zero were proven to exist [57]. No less important, such codes have been realized in a myriad of real-world systems [51]. In other words, given a large enough bandwidth, one can encode a message with a large number of error correcting bits in a way that makes communication noise essentially a non-issue. It is perhaps for this reason that fault-tolerance studies in distributed computing have somewhat neglected the issue of noise in communication. Indeed, such studies focus either on weak faults such as node crashes and message failures, or on very strong faults modeled as adversarial (Byzantine) interventions, but messages that are transmitted from one processor are, typically, assumed to reach their destination without distortion.

In contrast, communication in the natural world is inherently noisy. Biology, for one, is replete with communicating ensembles on all levels of organization: from molecules (e.g., the immune complement system [15]) and cells (e.g., bacterial populations [8]) to societies (e.g., a superorganism of social insects [61]). Whereas it is unrealistic to assume adversarial interventions, biological signals are extremely vulnerable to random distortion as they are being generated (e.g., probabilistic vesicle release in neuronal synapses [3]), transmitted over noisy media (e.g., acoustic communication in noisy environments [14]) and received (e.g., non-reliable measurements taken by immune cells [29]). Nevertheless, many studies show that, in practice, biological ensembles function reliably despite communication noise [26, 55].

How biological systems overcome communication noise is a very basic and intriguing question. Indeed, for systems composed of simple and restricted individuals, as is often the case in biology, it may not be reasonable to assume sophisticated error-correction at the level of an individual channel. Furthermore, when message size is highly restricted, redundancy drastically reduces the available alphabet and hence could not be used extensively. On the other hand, with only little redundancy, a random fault in the content of a transmitted message may lead to the reception of a meaningful message that is inconsistent with the original one [45].

Our work is a first attempt to rigorously study the impact of communication noise on performing distributed information dissemination tasks. We consider a basic and simple model of interaction between agents.
In the absence of noise in communication, the information dissemination problems discussed here are well understood, and in particular, the broadcast problem can be trivially solved. It turns out, however, that adding noise to the communication, even in a very simple form (e.g., noise is chosen from some given simple distribution and is independent between messages), significantly complicates the situation. Indeed, our main efforts in this paper are devoted to understanding the difficulties incurred by adding the noise.

At this point, we would like to stress that although our model is inspired by biological systems, we do not claim that it fully represents any particular biological system. Rather, the model we consider is highly abstract, aiming to capture a fundamental phenomenon that (very loosely) relates to many biological systems. We believe, however, that the results of this preliminary paper can be useful for further research that will focus on more concrete biological settings.

(Network information theory [32] discusses the problem of disseminating information from one or more sources to a large number of recipients over noisy information channels. The settings there are, however, different from those that interest us, as they are non-distributed in nature and allow for complex coding schemes that may be computationally complex for simple agents [44].)

1.2 Context and related work

Our paper falls within the scope of natural algorithms, a recent attempt to investigate biological phenomena from an algorithmic perspective [1, 12, 17, 27, 28, 46]. Within this framework, many works in the computer science discipline have studied different computational aspects of abstract systems composed of simple and restricted individuals. This includes, in particular, the study of population protocols [4, 5, 7, 10, 47], which considers individuals with constant memory size interacting in pairs (using constant size messages) in a communication pattern which is either uniformly at random or adversarial, and the beeping model [1, 2, 25], which assumes a fixed network with extremely restricted communication. However, despite interesting results obtained in such models, the understanding of their fault-tolerance aspects is still lacking [5, 10]. Here, we study basic distributed tasks in a model that includes highly restricted and noisy communication.
Broadcast and majority-consensus problems.
Disseminating information to all the nodes of a network is one of the most fundamental communication primitives. In particular, the broadcast problem, where a single piece of information initially residing at some source node is to be disseminated, and variants of it have received a lot of attention in the literature, see, e.g., [16, 19, 23, 30, 33–37, 39, 43]. Much of this research was devoted to bounding measures such as the number of rounds and the total number of messages. Fault tolerant broadcast algorithms have also been studied extensively, especially in complete networks and in synchronous environments, where the focus has been on weak types of failures such as (probabilistic) message failures and initial node crashes. Essentially, it has been shown that there exist broadcast protocols that can overcome such faults with relatively little penalty [21, 23, 24, 35, 38, 39, 43, 60].

In the majority-consensus problem processors are required to agree on a common output value which is the majority initial input value [6, 9]. While we look at a generalized version of this problem where only a subset A may hold an opinion initially, most previous works considered the case that all nodes have an initial opinion. Furthermore, similarly to this current work, many previous papers also considered clique networks, where agents contact other agents uniformly at random. For example, the task of majority-consensus was studied in a clique network by Angluin et al. [6]. The authors therein gave an algorithm that uses only three states and converges in O(log n) rounds. That algorithm is robust under a very small fraction of agents being Byzantine, but is not robust under communication noise. We note that for our purposes, we could not use variants of the algorithm in [6] because it inherently uses three symbols in the communication, while we are restricted to only two symbols (a single opinion).
On the other hand, similarly to the method we use in Stage II of our algorithm, several other papers have solved the majority-consensus problem based on repeatedly sampling the opinions of few other agents and re-setting the opinion of the observing agent according to the majority of these samples [11, 18, 22]. For example, Doerr et al. [22] considered the algorithm where each agent repeatedly samples the opinions of two other agents uniformly at random and then takes the majority over its own and the two sampled opinions (three opinions in total). They show that this algorithm converges with high probability to the majority initial opinion in O(log n) rounds, provided that at least a 1/2 + Ω(√(log n/n)) fraction of the agents agree initially.

It is important to stress that in the theoretical distributed computing discipline, none of the works on broadcast and consensus related problems have considered noise in the communication.

Related work in engineering and physics.

Broadcast related problems were studied in other contexts as well, often with settings where communication noise is inherent. Engineers have studied the related problem of sensor network consensus formation in the presence of communication noise and have demonstrated, for example, tradeoffs between consensus quality and running time [42]. Physicists have studied the spreading of epidemics [48] and the formation of consensus around a zealot in voter models [49, 50] within probabilistic settings that include communication noise. These physically inspired studies often assume very simple algorithms and analyze their performance; this is different from a computer science approach which focuses on identifying the most efficient algorithms. Indeed, broadcast within a noisy voter model setting is expected to yield long convergence times, polynomial in the number of agents.
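The sampling-based majority rule of Doerr et al. [22] discussed above is easy to simulate. The sketch below is a toy sanity check, not the analysis of [22]: the population size, initial fraction, round limit, and seed are illustrative choices, the two samples may (for simplicity) include the agent itself, and the communication here is noiseless, which is exactly the assumption our Flip model removes.

```python
import random

def three_majority_round(opinions, rng):
    """One synchronous round: every agent samples two agents uniformly at
    random (possibly itself, for simplicity) and adopts the majority of its
    own opinion and the two samples."""
    n = len(opinions)
    new = []
    for i in range(n):
        a, b = rng.randrange(n), rng.randrange(n)
        new.append(1 if opinions[i] + opinions[a] + opinions[b] >= 2 else 0)
    return new

def run_three_majority(n=1500, initial_ones=0.65, rounds=50, seed=1):
    """Run the dynamics until consensus (or for `rounds` rounds) and return
    the final number of 1-opinions."""
    rng = random.Random(seed)
    ones = int(n * initial_ones)
    opinions = [1] * ones + [0] * (n - ones)
    rng.shuffle(opinions)
    for _ in range(rounds):
        opinions = three_majority_round(opinions, rng)
        if sum(opinions) in (0, n):   # consensus reached
            break
    return sum(opinions)
```

With a clear initial majority the dynamics drift to the majority opinion within a logarithmic number of rounds, consistent with the O(log n) convergence discussed above.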
Examples in biology.
In the biological world, broadcast is a common phenomenon which allows, for example, a single receptor to activate an entire cell [59], a small number of cells to trigger large population responses [26], or a small number of vigilant individuals to alert their herd [56]. There have been several direct experimental demonstrations of reliable broadcast using unreliable messaging in biological systems. Examples include knowledgeable ants informing their nestmates regarding available food [55] and precise temporal codes achieved by coordinated neuronal populations [40]. Such examples serve as motivation for a more thorough theoretical understanding of how rumors spread through groups of simple individuals that communicate by noisy messages. Majority-consensus problems have also been shown to be relevant for several biological systems: ants choosing between two alternative nesting sites and reaching consensus on the nest that attracts a larger number of scouts [31], and a group of fish reaching consensus around the larger group of leaders [58], being two examples.
As a first step into the study of noisy information dissemination, we study a very simple scenario in which there are only two possible states (or opinions) for the environment, namely, 0 and 1, one of which is the correct opinion, denoted by B. We study two information dissemination problems, both of which involve n anonymous agents.

The noisy broadcast problem.
In this problem we start the execution with one designated agent, called the source (representing the environment), that holds the correct opinion B, while all other n − 1 agents initially have no information regarding B. Agents can propagate information and update their knowledge by using (noisy) interactions as specified below. The goal is that eventually, with high probability, all agents adopt B as their final opinion. Throughout, we denote with high probability any probability of at least 1 − 1/n^c, for some sufficiently large constant c > 0.

The noisy majority-consensus problem.
In this problem we consider that initially we have a subset A of agents, each of which has an opinion in {0, 1} (all other agents do not have an opinion), where B is the majority opinion among the agents in A. The problem is parameterized by the extent to which B is more common. That is, the majority-bias of A is defined as (A_B − A_B̄)/|A|, where A_i is the number of agents in the initial opinionated group A with opinion i, for i ∈ {0, 1}. As in the noisy broadcast problem, the goal of the agents is to guarantee that with high probability, at the end of the execution, all agents hold the opinion B.

Flip model of communication
We assume a synchronous setting, in which all agents start the execution simultaneously and communication takes place in discrete rounds [53]. As mentioned, agents can use their (noisy) interactions to inform and update their opinion. In each round, each agent can choose to wait, i.e., do nothing, or to send a message. The interaction pattern we study follows the standard push gossip model [19, 43, 54], where in each round each agent that chooses to send a message sends this message to another agent, chosen uniformly at random, without sender or receiver learning about each other's identity. If an agent receives several messages at the same round, it can only accept one of them (chosen uniformly at random), and all other messages are dropped. The message size is extremely restricted; specifically, each message sent consists of a single bit, essentially encoding an opinion. Let ǫ > 0 be a parameter of the Flip model. All messages are subject to noise; specifically, for each message sent by an agent, upon receiving it, the bit in the message is flipped independently with probability at most 1/2 − ǫ.

Each agent is equipped with a clock that enables it to count rounds. In the standard model, the clock at an agent is initialized to 0 when the agent is activated (an agent is activated when it receives a message for the first time). We also consider the fully-synchronous setting in which all agents start the execution simultaneously at the same time, or in other words, they all use the same global clock, initialized to 0 at the beginning of the execution.
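A single round of the Flip model can be sketched as follows. This is a minimal illustration of the model, not part of the paper: the integer agent indices are bookkeeping only (senders and receivers never learn them), and the choice of data structures is arbitrary.

```python
import random

def flip_round(senders, opinions, n, eps, rng):
    """One synchronous round of the Flip model (a sketch).

    `senders`  : agents that choose to send this round
    `opinions` : dict agent -> bit that the agent transmits
    Each sent bit goes to a uniformly random agent and is flipped with
    probability 1/2 - eps; a receiver that gets several messages keeps one
    of them uniformly at random and drops the rest.
    Returns a dict receiver -> accepted bit."""
    inbox = {}
    for s in senders:
        dest = rng.randrange(n)          # anonymous, uniformly random push
        bit = opinions[s]
        if rng.random() < 0.5 - eps:     # independent noise on each message
            bit = 1 - bit
        inbox.setdefault(dest, []).append(bit)
    return {recv: rng.choice(msgs) for recv, msgs in inbox.items()}
```

Setting eps = 1/2 recovers a noiseless push round, while small eps makes each received bit only marginally more likely to be correct than a coin flip.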
We view the two possible opinions {0, 1} as abstract symmetric opinions that cannot affect any decision made by individual agents, except for which message to transmit. Accordingly, we consider only symmetric algorithms, in which the choices of individuals of whether or not to send a message at a given time are oblivious to the value of B. That is, when fixing all random bits involved in an execution, the message-pattern (i.e., who sends to whom and at what time) in symmetric algorithms is the same regardless of whether B equals 1 or 0. (One could view this trait as a consequence of a symmetry of the world, in which an agent can decide if two opinions are the same or not but has no access to their actual values. For example, a flock of birds following a source (e.g., a bird that has spotted a predator) that travels either north or south can do this even in an environment where there is complete symmetry between these two directions. The only demand is that the escape direction of all birds agree with that of the source.)

The restriction of the symmetric noisy broadcast problem (or the majority-consensus problem) to two parties is, in some sense, classical for the area of information theory. Here, a source agent a wishes to deliver its bit opinion B to the second agent b through a binary symmetric channel with crossover probability p = 1/2 − ǫ. The seminal result by Shannon [57] implies that using the channel Θ(1/ǫ²) times is both necessary and sufficient for allowing b to possess the opinion B with sufficiently high constant probability. This immediately implies a Θ(1/ǫ²) bound for the number of rounds needed for the same confidence guarantee in the two-party noisy broadcast problem, since each message here contains precisely one bit. When it comes to a population of n agents, the goal is to have each agent possess the opinion B with high probability (at least 1 − 1/n^c). In this case, each agent would individually need to obtain Ω(ǫ⁻² log n) messages, even if all messages would come directly from the source node. These bounds immediately imply a lower bound of Ω(ǫ⁻² n log n) on the total bit complexity and hence also on the total number of messages sent. Moreover, since we assume that an agent can handle at most one message at a time, we get that Ω(ǫ⁻² log n) is also a lower bound on the number of rounds. All these bounds apply even if all messages would be as informative as those originated directly by the source agent. Hence, they apply in much stronger models of communication, such as ones that allow an agent to send messages to multiple destinations at the same round, and ones that consider non-anonymous populations, where an agent could direct a message to a desired destination. Note that the same arguments hold also for the noisy majority-consensus problem if the initial subset A of agents is small. On the other hand, without interacting with other agents and simply waiting to receive sufficiently many samples from the source agent, the noisy broadcast problem could only be solved in O(ǫ⁻² n log n) rounds.

Our main result, presented in Section 2, considers the fully-synchronous setting, where it is assumed that agents start their operation simultaneously at the same time. For this setting we present a randomized symmetric algorithm that solves the noisy broadcast problem in O(ǫ⁻² log n) rounds and uses a total of O(ǫ⁻² n log n) messages (or bits). These bounds are both asymptotically optimal and, in fact, are as fast and message efficient as if each agent would have been simultaneously informed by the source directly. We also show that the same asymptotically tight bounds (for the running time and message complexity) hold also for solving the noisy majority-consensus problem with any initial subset A of agents of size |A| = Ω(ǫ⁻² log n) and whose majority-bias is Ω(√(log n/|A|)).

In Section 3 we show how to remove the global-clock assumption. This modification applies to both algorithms and comes at an additive cost of O(log n) to the running time, while the message complexity remains the same.

Our results imply that even in severely restricted, stochastic and noisy settings one can still solve the noisy broadcast and the noisy majority-consensus problems efficiently by applying simple protocols.
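The Ω(ǫ⁻² log n) messages-per-agent figure discussed above is the usual repetition-coding sample count for a binary symmetric channel, and it is easy to check empirically. The sketch below is illustrative, not from the paper; in particular the constant 2 in samples_needed is an arbitrary choice that happens to be large enough for this experiment.

```python
import math
import random

def majority_decode(b, eps, k, rng):
    """Send bit b over a binary symmetric channel with crossover probability
    1/2 - eps, k independent times, and decode by majority vote."""
    kept = sum(1 for _ in range(k) if rng.random() >= 0.5 - eps)  # unflipped copies
    return b if 2 * kept > k else 1 - b

def samples_needed(eps, n):
    """A Theta(eps^-2 log n) sample count of the kind suggested by Chernoff
    bounds (the constant 2 is an illustrative choice)."""
    return int(2 * math.log(n) / eps ** 2)
```

With eps = 0.1 and n = 1000 this prescribes well over a thousand channel uses per bit, and majority decoding then essentially never errs, matching the 1 − 1/n^c confidence target.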
Indeed, our basic algorithms employ very simple rules that can be implemented using restricted memory, specifically, using O(log log n + log(1/ǫ)) memory bits. Essentially, each agent has some waiting period (in which it does not send any message), after which it starts sending its current opinion at each round until the protocol terminates. Furthermore, its opinion is occasionally updated following a majority-type procedure based on its recently received messages.

Before we describe our algorithms, let us first highlight some of the complex features of the noisy broadcast problem (the same difficulties arise also in the noisy majority-consensus problem). Consider an agent a that receives its first message. This agent now has several options for its actions. One option is to keep silent (wait) until receiving another message. This strategy would result in an algorithm that requires a huge amount of time. Indeed, the first agent that hears two messages must hear both of them from the source (since all other agents are silent), and this would require waiting for Ω(√n) rounds, by the birthday paradox. Another possible action for such an agent is to immediately forward the message it just received to others. This strategy would result in the typical agent hearing a very unreliable message for the first time. That is, the number of intermediate agents on the path between the source and the typical agent would be roughly log n. Now, each time the message passes from agent to agent, the probability of preserving the original opinion drastically reduces. Specifically, it is not difficult to show that a message following a path of length c is correct with probability at most 1/2 + (2ǫ)^c. This means that if ǫ is small, the probability that a typical agent receives the correct opinion in the first message it hears is at most 1/2 + 1/n.
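The per-hop deterioration just described can be computed exactly with a one-line recursion (a small sketch; it assumes exactly one independent flip opportunity per hop). One hop maps a correctness probability p to p(1/2 + ǫ) + (1 − p)(1/2 − ǫ), so the bias above 1/2 shrinks by a factor of 2ǫ per hop, and after c hops the correctness probability is exactly 1/2 + (2ǫ)^c/2, matching the 1/2 + (2ǫ)^c bound up to a constant.

```python
def relay_correctness(eps, hops):
    """Probability that a bit survives a chain of `hops` relays, each of
    which flips it independently with probability 1/2 - eps."""
    p = 1.0  # the source itself holds the correct bit
    for _ in range(hops):
        p = p * (0.5 + eps) + (1 - p) * (0.5 - eps)
    return p
```

For instance, with eps = 0.1 a single hop delivers the correct bit with probability 0.6, while ten hops already leave almost no usable signal.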
If this is the case with all agents, it seems, again, almost impossible to recover and reconstruct the correct opinion B.

Another difficulty in the strategy of immediately forwarding messages is that the execution seems to be dependent on the quality of the first messages to be received directly from the source, and these messages can be corrupted with non-negligible probability. Indeed, in the beginning of the execution, the pattern of meeting looks like a tree, rooted at the source agent. Moreover, the subtrees hanging down from the children of the root (the agents directly informed by the source agent) do not have the same size, as the subtrees hanging down from the first informed children of the root grow much faster and dominate the population. Hence, the initial opinions of agents could not be more reliable than the initial opinions of the roots of the corresponding subtrees. At this point, with non-negligible probability, the majority of agents would have obtained the wrong opinion, from which it seems, again, almost impossible to recover.

To overcome these difficulties, we use a third option for the behavior of an agent, allowing it to wait for a prescribed number of rounds before sending a message. For doing so, we rely on synchronization, which we use to balance the sizes of the aforementioned subtrees and, therefore, constrain the deterioration of reliability.

The analysis of our algorithms relies on an extensive use of Chernoff's bounds. For completeness, we remind the reader of these inequalities. Let X_1, ..., X_n be independent random variables taking values in {0, 1}. Let X = Σ_{i=1}^n X_i denote their sum, and let E(X) denote the expected value of X. Then, for any 0 < δ < 1, we have the following bounds:

Pr(X ≥ (1 + δ)E(X)) ≤ e^{−δ²E(X)/3}   (1)

Pr(X ≤ (1 − δ)E(X)) ≤ e^{−δ²E(X)/2}   (2)

Negatively-correlated random variables.
In some cases, the aforementioned Chernoff inequalities hold also if the random variables are negatively associated. In particular, sampling from a larger set without replacement leads to negatively associated random variables for which Chernoff's bounds continue to hold. For this and related basic results on negative association see [20, 41]. Since we will only be dealing with Bernoulli variables, we can alternatively use a slightly weaker but simpler notion from [52] which defines random Bernoulli variables X_1, ..., X_k as negatively-correlated if for every subset I ⊆ {1, 2, ..., k}, we have:

Pr(∧_{i∈I} X_i = 1) ≤ Π_{i∈I} Pr(X_i = 1),   Pr(∧_{i∈I} X_i = 0) ≤ Π_{i∈I} Pr(X_i = 0).

Panconesi and Srinivasan showed in [52] that this condition holds when sampling without replacement and furthermore proved that the Chernoff inequalities mentioned in Equations 1 and 2 continue to hold for negatively-correlated Bernoulli variables.
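The intuition that sampling without replacement concentrates at least as well as independent sampling can be illustrated with a quick empirical comparison (an illustration of the phenomenon, not from the paper; all parameters are arbitrary small-experiment choices): the count of 1s drawn without replacement has strictly smaller variance than the corresponding independent count.

```python
import random

def empirical_variance(samples):
    m = sum(samples) / len(samples)
    return sum((x - m) ** 2 for x in samples) / len(samples)

def compare_sampling(pop_size=100, ones=50, k=50, trials=4000, seed=3):
    """Variance of the number of 1s seen when drawing k items from a
    half-1s population, without vs. with replacement."""
    rng = random.Random(seed)
    pop = [1] * ones + [0] * (pop_size - ones)
    without = [sum(rng.sample(pop, k)) for _ in range(trials)]
    withr = [sum(rng.choice(pop) for _ in range(k)) for _ in range(trials)]
    return empirical_variance(without), empirical_variance(withr)
```

For these parameters the exact variances are k·p(1−p)·(N−k)/(N−1) ≈ 6.3 (hypergeometric) versus k·p(1−p) = 12.5 (binomial), and the empirical values land close to both.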
2 Algorithms for the fully-synchronous setting
In this section we assume that all agents start the algorithm with their clocks set to zero. In Section 3 we show how to remove this global-clock assumption at some additive cost in the running time. The interesting cases are when ǫ is a small constant, but we allow it to be much smaller. Specifically, let ǫ > 1/n^{1/2−η}, for some arbitrarily small constant η > 0. We present symmetric and simple randomized algorithms that solve the noisy broadcast and the majority-consensus problems. The running times and message complexities of both algorithms are asymptotically optimal, that is, they both terminate after O(ǫ⁻² log n) rounds and use a total of O(ǫ⁻² n log n) messages.

Although our algorithms are simple, their analysis is quite involved. Most of the technical ideas in this paper are used for the analysis of our noisy broadcast algorithm, hence we focus on this algorithm. The algorithm consists of two stages. The first stage of the algorithm is intended to activate all agents (an agent is considered activated upon receiving its first message), and to make sure that overall, the average initial opinion of activated agents has some non-negligible bias towards the correct opinion. Stage II of the algorithm is meant to boost the bias using repeated samplings until consensus is reached.

Our goal in the first stage of the algorithm is to quickly allow each agent to set an opinion, so that the fraction of correct agents is at least 1/2 + Ω(√(log n/n)). Then the second stage will be employed to boost this bias using more standard techniques of repeatedly taking majority.

In order to spread the correct opinion B while controlling the deterioration of the average bias of informed agents towards B, the first idea we employ is to delay the propagation of messages, and synchronize them, by grouping the time slots into phases. That is, we propagate the information in layers, forming a tree whose root is the source agent S (layer 0).
To control the reliability deterioration of the messages, we synchronize the phases so that all activated agents broadcast in a phase at the same time. In particular, in the first phase, called phase 0, only the source agent transmits messages (all non-source agents are waiting). Recall that every such message is correct with probability at least 1/2 + ǫ. Phase 0 lasts for β_s := Θ(ǫ⁻² log n) rounds, and is meant to allow the source agent to directly inform sufficiently many agents, and to guarantee that with high probability the bias towards B of the opinions that these agents have heard is bounded away from zero; specifically, the bias is at least ǫ/2. Note that at this point, we are left with solving the noisy majority-consensus problem with an initial set A of agents of size Θ(ǫ⁻² log n) whose majority-bias is Ω(√(log n/|A|)).

The general description of our algorithm in Stage I is as follows: any agent receiving a message in some phase i (including the case i = 0) keeps silent (waits, and does not send messages) until phase i is completed and, at the end of the phase, it chooses uniformly at random a message among the messages it has received, and sets its initial opinion as the value of this message. Only after phase i is completed will such an agent send messages. That is, when the next phase i + 1 starts, each such agent will start to send its initial opinion repeatedly in every round until the whole of Stage I is completed. Hence, phase i is responsible for passing information from all the already activated agents (these are the agents in layers 0, 1, ..., i − 1) to the newly activated agents in phase i (forming layer i).

Because of the noise in the messages, the quality of the information that propagates between layers deteriorates exponentially fast in ǫ. Specifically, if the fraction of correct agents at layer i is 1/2 + δ_i, then the expected fraction of correct messages reaching agents at layer i + 1 is 1/2 + 2ǫδ_i.
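The layered schedule just described can be condensed into a toy end-to-end simulation of Stage I. This is a sketch under illustrative assumptions, not the paper's construction: the constants (the 6s), the small population, and the seed are arbitrary, and the number of phases is derived from the same kind of formula as the schedule defined below. The function returns (number of activated agents, number of initially correct agents).

```python
import math
import random

def stage_one_simulation(n=1000, eps=0.25, seed=7):
    """Toy simulation of Stage I: phased, layered spreading of one bit."""
    rng = random.Random(seed)
    B = 1                                    # the correct opinion
    beta_s = int(6 * math.log(n) / eps**2)   # length of phase 0 (toy constant)
    beta = int(6 / eps**2)                   # length of phases 1..T (toy constant)
    T = (int(math.log(n / (2 * beta_s)) / math.log(beta + 1))
         if n > 2 * beta_s else 0)
    opinion = [None] * n                     # None = dormant
    level = [None] * n
    opinion[0], level[0] = B, 0              # agent 0 is the source
    for i in range(T + 2):
        length = beta_s if i == 0 else beta
        heard = [[] for _ in range(n)]       # messages heard by dormant agents
        for _ in range(length):
            incoming = [[] for _ in range(n)]
            for a in range(n):
                # the source sends from phase 0 on; a level-j agent sends in
                # phases j+1, j+2, ... and stays silent during its own phase
                if a == 0 or (level[a] is not None and level[a] < i):
                    bit = opinion[a]
                    if rng.random() < 0.5 - eps:      # noise flip
                        bit = 1 - bit
                    incoming[rng.randrange(n)].append(bit)
            for a in range(n):
                # a receiver accepts one message per round, chosen at random
                if incoming[a] and level[a] is None:
                    heard[a].append(rng.choice(incoming[a]))
        for a in range(n):
            if level[a] is None and heard[a]:
                level[a] = i                          # newly activated: layer i
                opinion[a] = rng.choice(heard[a])     # random heard message
    activated = sum(1 for a in range(n) if opinion[a] is not None)
    correct = sum(1 for a in range(n) if opinion[a] == B)
    return activated, correct
```

With these toy parameters all agents end up activated and roughly two thirds of them are initially correct, i.e., the population carries a bias well above the Ω(√(log n/n)) threshold that Stage II needs.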
To guarantee that this controlled level of deterioration holds w.h.p., as well as to account for this already problematic phenomenon, our phasing process makes sure that the number of agents informed in the next layer increases quadratically faster than the deterioration factor. That is, the number of newly informed agents increases by a factor larger than 1/ǫ². Maintaining this property throughout all phases allows us to guarantee that when x agents are activated (where x is sufficiently large), then, w.h.p., the bias towards the correct opinion is Ω(√(log n/x)). In particular, this implies that when all n agents are activated, the bias towards the correct opinion is Ω(√(log n/n)).

Choose parameters f, β, s = Θ(1/ǫ²) such that f > c₁β > c₂s > c₃/ǫ², for sufficiently large constants c₁, c₂, c₃ > 0. Let β_s = s log n, and β_f = f log n. In addition, let T = ⌊log(n/(2β_s))/log(β + 1)⌋. Note that β_s(β + 1)^T ≤ n/2 and that T = O(log n / log(1/ǫ)).

We group the rounds of Stage I into T + 2 phases, such that for each 0 ≤ i ≤ T, phase i + 1 immediately follows phase i. Phase 0 takes β_s rounds, phase i, for 1 ≤ i ≤ T, takes β rounds, and phase T + 1 takes β_f rounds. Formally, letting [x, y) denote the time period from round x until round y − 1, we have: phase 0 = [0, β_s); for 1 ≤ i ≤ T, phase i = [β_s + (i − 1)β, β_s + iβ); and phase T + 1 = [β_s + Tβ, β_s + Tβ + β_f).

At a given time, a non-source agent is called activated if it already heard a message by that time (the source agent is always considered activated). A non-activated agent is called dormant. For an agent a, let t_a denote the first time a was activated, and let i_a be the integer i for which t_a belongs to phase i. An agent a is at level i if i_a = i. In particular, the source is at level 0.

The rule of Stage I:
Consider an activated agent a of level i_a. Agent a waits until phase i_a + 1 starts before sending any message. During phase i_a it collects all messages it heard in the phase, chooses one of them uniformly at random, and sets its initial opinion B(a) to be the opinion it heard in that message. The agent then sends its initial opinion B(a) in each round during the phases i_a + 1, i_a + 2, ..., T + 1. (In other words, agent a waits until phase i_a is completed and then starts sending its initial opinion repeatedly in every round until the end of Stage I.) An agent a is called initially correct if the message it heard for the first time is correct, i.e., if B(a) = B.

Remark 2.1.
It may be the case that an agent activated in some phase i (especially for large i) receives several messages throughout that phase. We have chosen to let the agent set its initial opinion according to a message chosen uniformly at random among these messages. For the purposes of this current section, where a global clock is assumed, all proofs would have carried out in the same manner had we chosen, instead, to let the agent set its initial opinion according to the first message it received. The reason for choosing a random message is to guarantee that the order in which the agent receives its messages during any phase does not influence the actions of this agent. This property will be more important in Section 3, which relaxes the synchronization requirement.

Note first that, in particular, in phase 0, the source S is the only agent sending any messages. Let X_0 be the number of agents activated at phase 0. More generally, for a non-negative integer i, define X_i as the random variable indicating the number of agents that were activated at some time before the end of phase i. Let Y_i denote the random variable indicating the number of agents that were activated during phase i. Hence, we have X_i = Σ_{j=0}^i Y_j. Let Z_i denote the number of initially correct agents among the Y_i agents that were activated during phase i and let ǫ_i be such that Z_i = (1/2 + ǫ_i)Y_i. We call ǫ_i the bias of phase i.

Claim 2.2. By choosing s > c/ǫ² for a large enough constant c, it is guaranteed that at the end of phase 0, w.h.p., we have β_s/3 ≤ X_0 ≤ β_s activated agents whose bias towards the correct opinion B is at least ǫ/2, that is, ǫ_0 ≥ ǫ/2.

Proof. Recall that Z_0 denotes the number of initially correct agents among the X_0 = Y_0 agents that were activated during phase 0, and let ǫ_0 be such that Z_0 = (1/2 + ǫ_0)Y_0. Our goal is to show that ǫ_0 ≥ ǫ/2. Recall that phase 0 lasts for β_s = s log n rounds, and that until the phase is completed only the source agent S is sending messages.
Hence, during phase 0 there are always at most β_s activated agents and, in particular, at least n/2 dormant agents. Each message sent during phase 0 therefore has probability at least 1/2 to activate an agent. The number of agents activated by the end of phase 0 thus dominates a sum of β_s independent Bernoulli(1/2) random variables, and by Chernoff's inequality we can choose the parameter s (in the definition of β_s) to be a sufficiently large constant so that, w.h.p., at the end of phase 0 we have at least β_s/3 activated agents, that is, X_0 = Y_0 ≥ β_s/3.

Let us now focus on the random faults occurring in the messages sent during phase 0. Each of the Y_0 activated agents chooses one message uniformly at random among the messages it heard (typically it only heard one message anyway). The opinion carried by this chosen message (and, in fact, by any message) is correct with probability at least 1/2 + ǫ. Hence, each such agent is initially correct with probability at least 1/2 + ǫ. It follows that the expected number of initially correct agents in phase 0 satisfies E(Z_0) ≥ (1/2 + ǫ) Y_0, in the terminology of Chernoff's inequality (see Equation 2). Taking δ = ǫ/2, we get (1 − δ) E(Z_0) > (1/2 + ǫ/2) Y_0. By Chernoff's inequality, the probability that the number of initially correct agents in phase 0 is less than this amount is at most e^{−δ² E(Z_0)/2} = e^{−O(ǫ² Y_0)}. Since Y_0 ≥ β_s/3 = (s/3) log n, for sufficiently large s ≫ 1/ǫ² this probability is polynomially small. In other words, w.h.p., the number Z_0 of initially correct agents in phase 0 is at least (1/2 + ǫ/2) Y_0. This establishes ǫ_0 ≥ ǫ/2 and completes the proof of the claim.

Observe that, by Claim 2.2, phase 0 essentially reduces the noisy broadcast problem to an instance of the noisy majority-consensus problem with an initial set of size X_0 = Θ(β_s) = Θ(ǫ^{−2} log n) and majority-bias at least ǫ/2 = Ω(√(log n / X_0)). What we shall show is that, in general, phases 0, 1, ..., i, where i ≤ T, take us to an instance of the noisy majority-consensus problem with an initial set A_i of size |A_i| = Θ(ǫ^{−2(i+1)} log n) and majority-bias at least (ǫ/2)^{i+1} = Ω(√(log n / |A_i|)). For T = ⌊log(n/(2β_s)) / log(β+1)⌋ this leads to showing that, w.h.p., after T phases the number of activated agents is Ω(ǫ²n) and the fraction of initially correct agents is at least 1/2 + Ω(√(log n / (ǫ²n))). The last phase of the stage, taking β_f = f log n rounds for f ≫ 1/ǫ², then leads to the following lemma summarizing the performance of Stage I.

Lemma 2.3.
Stage I takes O(ǫ^{−2} log n) rounds. At the end of the stage, the following event E holds w.h.p.:
1. All agents are activated.
2. The fraction of initially correct agents is at least 1/2 + Ω(√(log n/n)).

The remainder of this subsection is devoted to the proof of Lemma 2.3. It is easy to verify that the number of rounds in Stage I is β_s + βT + β_f = O(ǫ^{−2} log n). Our goal is thus to show that the event E mentioned in the lemma holds with high probability. The proof considers a sequence of events E_1, E_2, ..., E_τ, for some τ = O(log n), where E_τ = E. We will show that each event E_i occurs w.h.p. given E_{i−1}. This implies that E occurs w.h.p., by repeatedly invoking the standard argument |Pr(E_{i+1} | E_i) − Pr(E_{i+1})| ≤ Pr(¬E_i).

Recall that Claim 2.2 asserts that, w.h.p., we have β_s/3 ≤ X_0 ≤ β_s and ǫ_0 ≥ ǫ/2. In what follows, we assume that this highly likely event holds (see the paragraph above).

Analysis for phase i, where 1 ≤ i ≤ T: It is easy to see that X_i, the number of activated agents at the end of phase i, is at most X_i ≤ (β+1)^i X_0 = O(ǫ^{−2(i+1)} log n). This follows trivially from the fact that X_i = X_{i−1} + Y_i and the fact that Y_i ≤ βX_{i−1} (because, for i ≥ 1, phase i is composed of β rounds, and in each such round precisely X_{i−1} messages are sent). The following claim states that, w.h.p., the value of X_i is in fact very close to (β+1)^i X_0. Establishing this claim will enable us to show that up to phase T the values Y_i increase exponentially, and that by the end of phase T we already have Ω(ǫ²n) activated agents. The proof extensively uses the concentration properties given by Chernoff's inequality.

Claim 2.4.
W.h.p., for every i, 0 ≤ i ≤ T, we have: (β+1)^i X_0 / 16 ≤ X_i ≤ (β+1)^i X_0.

Proof.
As mentioned, with probability 1 we have:

X_i ≤ (β+1)^i X_0.   (3)

Hence, our goal is to prove the other part of the claim, namely the lower bound (β+1)^i X_0/16 ≤ X_i. We prove the statement by induction on i, where the basis of the induction is the trivial case i = 0. Fix an integer i ≥ 1 and assume by induction that the claim holds for i − 1. Consider a round r in phase i (where 1 ≤ r ≤ β). Equation 3 implies that the number of dormant agents at round r of phase i is always at least n − X_i ≥ n − (β+1)^i X_0. Therefore, the probability that a given message sent in round r activates an agent is at least 1 − (β+1)^i X_0/n. Note that in round r of phase i (in fact, in any round of phase i), precisely X_{i−1} messages are sent. Letting A_{i,r} denote the number of agents activated in round r of phase i, we thus have that A_{i,r} dominates a sum of X_{i−1} independent Bernoulli(1 − (β+1)^i X_0/n) variables, with expected value:

E(A_{i,r}) ≥ (1 − (β+1)^i X_0/n) X_{i−1}.   (4)

In particular, since i ≤ T and β_s(β+1)^T ≤ n/2, we have E(A_{i,r}) ≥ X_{i−1}/2. Furthermore, applying Chernoff's inequality, for any δ > 0 we have:

Pr((1 − δ) E(A_{i,r}) ≤ A_{i,r}) ≥ 1 − e^{−δ² E(A_{i,r})/2} = 1 − e^{−Ω(δ² X_{i−1})}.

By the induction hypothesis, w.h.p., X_{i−1} ≥ (β+1)^{i−1} X_0/16 = Ω((β+1)^{i−1} log n). Taking δ = 1/2^i, we thus get:

Pr((1 − 1/2^i) E(A_{i,r}) ≤ A_{i,r}) ≥ 1 − e^{−Ω((β+1)^{i−1} log n / 4^i)}.

Taking β to be sufficiently large thus implies that, w.h.p., we have (1 − 1/2^i) E(A_{i,r}) ≤ A_{i,r}. A union bound over all rounds r of phase i then guarantees that, w.h.p.:

(1 − 1/2^i) Σ_{r=1}^{β} E(A_{i,r}) ≤ Σ_{r=1}^{β} A_{i,r}.

Using the bound from Equation 4 and observing that Y_i = Σ_{r=1}^{β} A_{i,r}, we get that, w.h.p.:

(1 − 1/2^i)(1 − (β+1)^i X_0/n) · βX_{i−1} ≤ Y_i.   (5)

Since X_i = Y_i + X_{i−1}, we get that, w.h.p.:

(1 − 1/2^i)(1 − (β+1)^i X_0/n) · (β+1) X_{i−1} ≤ X_i.

Hence,

(β+1)^i X_0 · Π_{j=1}^{i} (1 − 1/2^j) · Π_{j=1}^{i} (1 − (β+1)^j X_0/n) ≤ X_i.   (6)

Observe that, using log₂(1 − x) ≥ −2x for 0 ≤ x ≤ 1/2:

Π_{j=1}^{i} (1 − 1/2^j) = 2^{Σ_{j=1}^{i} log₂(1 − 1/2^j)} ≥ 2^{−Σ_{j=1}^{∞} 1/2^{j−1}} = 1/4.

Similarly, since Σ_{j=1}^{i} (β+1)^j ≤ 2(β+1)^i:

Π_{j=1}^{i} (1 − (β+1)^j X_0/n) ≥ 2^{−2 Σ_{j=1}^{i} (β+1)^j X_0/n} ≥ 2^{−4(β+1)^i X_0/n} ≥ 2^{−4 β_s (β+1)^i / n}.

Now, i ≤ T, and T is chosen so that β_s(β+1)^T ≤ n/2; hence β_s(β+1)^i/n ≤ 1/2, implying that Π_{j=1}^{i} (1 − (β+1)^j X_0/n) ≥ 1/4. Finally, by Equation 6 we get (β+1)^i X_0/16 ≤ X_i, which establishes the proof of Claim 2.4.

Relying on the definition of T, the fact that X_0 ≥ β_s/3 holds w.h.p., and taking β = Θ(1/ǫ²) such that β > s, we ensure that, w.h.p., we have (β+1)^{T+1} X_0 ≥ n/6. Hence, Claim 2.4 implies the following lower bound on X_T, the number of activated agents at the beginning of the last phase of Stage I.

Corollary 2.5.
W.h.p., we have X_T = Ω((β+1)^T X_0) = Ω(ǫ²n). This also guarantees that setting f > c/ǫ² for a large enough constant c suffices for the β_f = f log n rounds of phase T+1 to activate all agents:

Corollary 2.6.
W.h.p., at the end of Stage I, all agents are activated.

Proof.
Recall that phase T+1 consists of β_f = f log n rounds, in which all X_T agents that were activated before the beginning of the phase send their initial opinion in each round. According to Corollary 2.5 we have, w.h.p., X_T > c′ǫ²n for some constant c′ > 0. Setting f > c/ǫ² for a large enough constant c guarantees that the number of messages sent over the course of phase T+1 is, w.h.p., β_f X_T > c′c · n log n. Note that each agent has probability 1/n of being the recipient of any such message, independently across messages. The probability that a given agent is not activated by the receipt of any message by the end of phase T+1 is thus at most (1 − 1/n)^{c′cn log n} = n^{−Θ(c′c)}. A union bound over all agents completes the proof.

The next corollary gives a lower bound on the growth of Y_i, the number of newly activated agents in phase i. This lower bound will be used to bound the bias from below (see Claim 2.8). Note that the duration of the last phase, T+1, is taken to be longer than that of phases i = 1, ..., T in order to guarantee a large number of newly activated agents even in this last phase. Indeed, continuing with phases of duration β would activate all agents relatively early, but would also restrict the number of newly activated agents in later phases.

Corollary 2.7.
W.h.p., for every phase i, where 1 ≤ i ≤ T, we have Y_i ≥ β^i ǫ^{−2} log n; moreover, Y_{T+1} ≥ n/2 ≥ β^T ǫ^{−2} log n.

Proof. Note that Equation 5 in the proof of Claim 2.4 implies that for any integer 1 ≤ i ≤ T we have βX_{i−1}/4 ≤ Y_i. Together with the lower bound on X_{i−1} given in Claim 2.4 (i.e., (β+1)^{i−1} X_0/16 ≤ X_{i−1}), and taking sufficiently large β and s, we get that, w.h.p., β^i ǫ^{−2} log n ≤ Y_i, which establishes the claim for every i with 1 ≤ i ≤ T. By the definition of T, and the fact that (with probability 1) X_i ≤ (β+1)^i X_0 for every i, we get X_T ≤ n/2. Hence, Corollary 2.6 implies that, w.h.p., Y_{T+1} ≥ n/2 ≥ β^T ǫ^{−2} log n.

Recall that 1/2 + ǫ_i is the fraction of initially correct agents among the Y_i agents that were activated in phase i, i.e., ǫ_i is the bias towards B among these Y_i agents. Corollary 2.7 will be useful for obtaining the following claim.

Claim 2.8.
W.h.p., for every phase i, where 0 ≤ i ≤ T+1, we have ǫ_i ≥ (ǫ/2)^{i+1}.

Proof. We prove the claim by induction on i. The basis of the induction, i = 0, was already established in Claim 2.2. Consider now phase i, where 1 ≤ i ≤ T+1. By the induction hypothesis we may assume that, w.h.p., ǫ_{i−1} ≥ (ǫ/2)^i. Fix a configuration at the end of phase i−1 for which ǫ_{i−1} ≥ (ǫ/2)^i, and let φ = ǫ_{i−1}. Thus, the fraction of initially correct agents among the X_{i−1} agents activated before phase i is 1/2 + φ ≥ 1/2 + (ǫ/2)^i. For any agent a newly activated in phase i, the probability that the initial opinion of a is correct is at least:

(1/2 + φ)(1/2 + ǫ) + (1/2 − φ)(1/2 − ǫ) = 1/2 + 2ǫφ.

By linearity of expectation, this implies E(Z_i) ≥ (1/2 + 2ǫφ) Y_i ≥ (1/2 + 4(ǫ/2)^{i+1}) Y_i. Taking δ = (ǫ/2)^{i+1} gives (1 − δ) E(Z_i) > Y_i (1/2 + (ǫ/2)^{i+1}).

For any given round j of phase i, let Y_{i,j} denote the set of agents that received a message in round j and, furthermore, decided to set their initial opinion according to a message received in that round. The random variables indicating which of the agents in Y_{i,j} have a correct initial opinion are negatively correlated, since the corresponding samples are taken without replacement (see Section 1.7). Between different rounds of the phase, these random variables are furthermore independent. Hence, overall, the random variables indicating which of the agents in Y_i = ∪_j Y_{i,j} have a correct initial opinion are negatively correlated. This allows us to apply Chernoff's inequality, which, together with the lower bound on Y_i from Corollary 2.7, gives:

Pr[Z_i < Y_i (1/2 + (ǫ/2)^{i+1})] ≤ e^{−δ² E(Z_i)/3} ≤ e^{−Ω((ǫ/2)^{2(i+1)} Y_i)} = e^{−Ω((ǫ²β/4)^i log n)}.
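The single computation driving this step is the adoption identity (1/2 + φ)(1/2 + ǫ) + (1/2 − φ)(1/2 − ǫ) = 1/2 + 2ǫφ. The following sketch (our own illustration, not code from the paper; all parameter values are arbitrary) verifies the identity numerically and confirms it with a small seeded simulation of one noisy adoption:

```python
import random

def adoption_correct_prob(phi, eps):
    """Probability that a newly activated agent adopts the correct opinion:
    it copies a uniformly random activated agent (correct with probability
    1/2 + phi) over a channel that flips the bit with probability 1/2 - eps."""
    return (0.5 + phi) * (0.5 + eps) + (0.5 - phi) * (0.5 - eps)

# The identity used above: the adoption probability equals 1/2 + 2*eps*phi.
for phi in (0.01, 0.1, 0.25):
    for eps in (0.05, 0.2, 0.4):
        assert abs(adoption_correct_prob(phi, eps) - (0.5 + 2 * eps * phi)) < 1e-12

# Seeded Monte Carlo of one adoption round: empirical bias matches 2*eps*phi.
rng = random.Random(0)
phi, eps, trials = 0.2, 0.3, 200_000
correct = 0
for _ in range(trials):
    opinion = rng.random() < 0.5 + phi      # sampled sender holds B?
    if rng.random() < 0.5 - eps:            # the channel flips the bit
        opinion = not opinion
    correct += opinion
assert abs(correct / trials - (0.5 + 2 * eps * phi)) < 0.01
```

The simulation also illustrates why the bias shrinks by a factor of roughly 2ǫ in every phase: the adopted bias 2ǫφ is smaller than φ whenever ǫ < 1/2.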
Taking β ≥ c/ǫ² for a sufficiently large constant c therefore implies that, w.h.p., we have Z_i ≥ Y_i (1/2 + (ǫ/2)^{i+1}), or in other words, ǫ_i ≥ (ǫ/2)^{i+1}.

Claim 2.8, together with the definition of T and the fact that β ≥ c/ǫ², implies that, w.h.p., the fraction of initially correct agents at the end of Stage I is at least 1/2 + (ǫ/2)^{T+2} = 1/2 + Ω(√(log n/n)), completing the proof of Lemma 2.3.

2.2 Stage II: Boosting the bias

We have proved that, w.h.p., at the end of Stage I all agents are activated and the bias towards the correct opinion is at least δ_0, where δ_0 = Ω(√(log n/n)). Stage II is meant to gradually boost this bias so that, w.h.p., the fraction of correct agents equals 1 (that is, all agents are correct) at the end of the stage. For that purpose we use standard techniques of repeatedly taking majority; see, e.g., [11, 22]. We note, however, that our setting differs from those of previous papers, mainly because we assume noise in communication. The difficulties resulting from noise required an analysis that uses somewhat different arguments than the ones used in previous majority-based papers.

Stage II is executed in k+1 phases, where k = Θ(log(1/δ_0)) = O(log n). Informally, phase i, for 1 ≤ i ≤ k, is associated with a parameter δ_i such that it is guaranteed, w.h.p., that when the phase starts the fraction of correct agents is at least 1/2 + δ_i. (Note that a sample from such a population is correct with strictly smaller probability than 1/2 + δ_i, because of noise.) Essentially, in phase i each agent takes γ = O(1/ǫ²) samples of the population (during O(1/ǫ²) rounds) and then sets its opinion to the majority opinion of these samples. Despite the noise in the samples, we will prove that, as long as δ_i is sufficiently small, this majority process increases the fraction of correct agents, w.h.p., from 1/2 + δ_i to at least 1/2 + 1.1δ_i. Moreover, we shall prove that if δ_i is large, then the majority process does not decrease δ_i too much.
Hence, for the next phase we can safely assume that either δ_{i+1} = 1.1δ_i or that δ_{i+1} is already sufficiently large.

To establish the required boosting, the fact that δ_i may be very small prevented us from directly applying Chernoff's inequality. To see why, let us consider the simpler noiseless case (ǫ = 1/2). In this case, each agent receives γ = O(1) samples, each of which is correct with probability 1/2 + δ_i. We want the majority of these samples to be correct; that is, we want the number X of correct samples to be larger than γ/2. Note that if δ_i is very small, then the expected number of correct samples is only slightly larger than γ/2; specifically, E(X) = γ(1/2 + δ_i). Now recall that Chernoff's inequality states that Pr(X > (1 − δ)E(X)) ≥ 1 − exp(−δ² E(X)/2). Since we aim to bound Pr(X > γ/2) using this inequality, we need to take δ such that γ/2 ≤ (1 − δ)E(X) = γ(1 − δ)(1/2 + δ_i), which amounts to choosing δ = O(δ_i). But with this choice of δ, Chernoff's inequality only tells us that Pr(X > γ/2) ≥ 1 − exp(−O(δ_i²)), which is meaningless when δ_i is very small (since this lower bound is then even smaller than 1/2).

The aforementioned reasoning required us to come up with more involved arguments. To lower bound the probability that the majority opinion in the γ samples is correct, we perceive the samples as obtained by an imaginary process composed of two steps taken over γ players. In the first step, we flip a fair coin for each player, which determines its opinion (i.e., probability 1/2 for each opinion). Then, in the second step, each of the players holding the wrong opinion (independently) has a small probability (close to 2ǫδ_i) of flipping its opinion to the correct one. The parameters are chosen such that, at the end of this imaginary process, the probability that the majority opinion among the γ players is correct lower-bounds the probability that the majority opinion among the original γ samples is correct. To bound the latter probability, we thus analyze the imaginary two-step process.

Informally, the imaginary process allows us to understand the situation in a more modular manner. Indeed, the probability that the first step is successful (yielding a correct majority) is precisely 1/2, and once the first step is successful, the second step cannot harm the situation (because in the latter step only wrong players can change their opinion). The probability of a correct majority after the two-step process is thus 1/2 plus the probability of obtaining a wrong configuration in the first step and fixing it in the second step.

Let us dwell a bit on this latter probability. If the first step turns out to be unsuccessful, then before the second step starts there are roughly γ/2 + x wrong players and γ/2 − x correct ones, for some positive integer x. When x is small, Stirling's formula comes in handy for bounding from below the probability that such a situation occurs after the first step. Specifically, this probability is Ω(x/√γ).
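To make the discussion above concrete: the exact majority probability over γ samples exceeds 1/2 by roughly δ√γ, even in regimes where the Chernoff-based lower bound 1 − exp(−O(δ²)) is vacuous. The following check is our own illustration (function and parameters are not from the paper):

```python
from math import comb

def majority_correct(gamma, p):
    """Exact probability that more than half of gamma independent
    samples, each correct with probability p, are correct (gamma odd)."""
    return sum(comb(gamma, k) * p**k * (1 - p)**(gamma - k)
               for k in range(gamma // 2 + 1, gamma + 1))

gamma = 101
for delta in (1e-4, 1e-3, 1e-2):
    q = majority_correct(gamma, 0.5 + delta)
    # The majority's advantage over 1/2 is of order delta * sqrt(gamma):
    # strictly positive, although 1 - exp(-O(delta^2)) tells us nothing.
    assert 0.5 + 0.5 * delta * gamma**0.5 < q < 0.5 + delta * gamma**0.5
print(majority_correct(gamma, 0.51))
```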
For such a situation to be fixed, we need at least x wrong players to flip their opinion in the second step. Depending on the particular value of δ_i, we choose a different value for x and carefully analyze the probability of a corrective event in the second step. For example, as mentioned, the probability that the second step starts with a bias of one player towards the wrong opinion is Ω(1/√γ) = Ω(ǫ). In this case (x = 1), the corrective event amounts to having at least one of the roughly γ/2 wrong players change its opinion in the second step. If δ_i is very small, this happens with probability roughly γǫδ_i = Ω(δ_i/ǫ). Furthermore, for sufficiently small δ_i, the constant factors hidden in the aforementioned Ω notations turn out to be such that the probability of having both a bias of one player towards the wrong opinion in the first step and a corrective event in the second step is at least 2δ_i. Together with the probability (exactly 1/2) that the first step yields the correct majority opinion to begin with, we get that the probability of a correct majority after the second step is at least 1/2 + 2δ_i. Recall that this example concerned a very small δ_i. In general, regardless of the value of δ_i, our analysis ensures that the majority is correct with probability at least min{1/2 + 2δ_i, 1/2 + 1/64}.

A direct application of Chernoff's inequality, relying on the fact that δ_i = Ω(√(log n/n)), then shows that, w.h.p., the bias increases from δ_i in phase i to at least min{1.1δ_i, 1/256} in phase i+1. Hence, after invoking k = O(log n) phases, the fraction of correct agents becomes bounded away from 1/2 by an additive constant. Hence, to achieve high probability that all agents are correct, it suffices that in the last phase, namely phase k+1, each agent takes O(ǫ^{−2} log n) samples of the population and sets its opinion to the majority opinion among these samples.
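The boosting dynamics sketched above can be illustrated in simulation. The following toy model (our own sketch with illustrative parameters, not the paper's exact protocol) repeatedly replaces every agent's opinion by the majority of γ noisy samples of the population and tracks the fraction of correct agents:

```python
import random

def boost_phase(opinions, eps, gamma, rng):
    """One boosting phase (a sketch): every agent redraws its opinion as
    the majority of gamma noisy samples of the current population. A
    sample is a uniformly random agent's opinion, flipped w.p. 1/2 - eps."""
    n = len(opinions)
    frac = sum(opinions) / n                             # fraction correct
    p = frac * (0.5 + eps) + (1 - frac) * (0.5 - eps)    # P(sample correct)
    return [sum(rng.random() < p for _ in range(gamma)) > gamma // 2
            for _ in range(n)]

rng = random.Random(1)
n, eps = 3000, 0.25
gamma = 129                                    # Theta(1/eps^2) samples/agent
ops = [rng.random() < 0.53 for _ in range(n)]  # initial bias ~0.03
start = sum(ops) / n
for _ in range(6):
    ops = boost_phase(ops, eps, gamma, rng)
final = sum(ops) / n
print(start, final)
assert final > start and final > 0.95
```

Even with a heavily noisy channel (flip probability 0.25 here), a handful of majority phases drives a ~3% bias to near-unanimity, matching the multiplicative growth claimed above.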
As guaranteed by Lemma 2.3, at the end of Stage I, w.h.p., all agents are activated and the bias of their initial opinions towards B is Ω(√(log n/n)). Hence, Stage I brings us to an instance of the majority-consensus problem in which the set A contains the whole population and the majority-bias is Ω(√(log n/|A|)). Stage II is meant to solve this problem.

Let r = ⌈c₀/ǫ²⌉ for a sufficiently large constant c₀, and let γ = 2r+1 (no attempt has been made to minimize the constant factors). We define k = O(log n) and take Stage II to be composed of k+1 phases. Each of the first k phases consists of 4γ = O(1/ǫ²) rounds, while phase k+1 consists of O(ǫ^{−2} log n) rounds. Essentially, in each phase agents repeatedly send their current opinion, and at the end of the phase agents may choose to update their opinion. Since the opinion of an agent may be updated only at the end of a phase, all messages sent by an agent during any given phase are the same. For a phase i, let m_i denote the number of rounds in the phase (i.e., m_i = 4γ for i = 1, ..., k, and m_{k+1} = O(ǫ^{−2} log n)). During phase i, an agent that received at least m_i/4 messages is called successful, and the messages it received are called samples. Only the successful agents update their opinion at the end of the phase, while the rest keep their previous opinion.

Claim 2.9.
The number of successful agents in each phase is, w.h.p., at least 2n/3.

Proof. In a given round, the probability that a given agent a does not receive any message is (1 − 1/n)^{n−1} ≤ 1/2. Thus, the expected number of messages received by agent a in a given phase i is E_i ≥ m_i/2. By choosing the constant in the definition of m_i large enough, Chernoff's inequality guarantees that the probability that agent a is unsuccessful, i.e., receives fewer than m_i/4 messages, is smaller than c, where c is an arbitrarily small constant. The expected number of unsuccessful agents is therefore at most cn. As the random variables indicating whether agents are unsuccessful or successful are negatively correlated, we can employ Chernoff's inequality (see Section 1.7) to deduce that, w.h.p., the number of successful agents in a phase is at least 2n/3.

The rule of Stage II:
In each round of each phase i, where 1 ≤ i ≤ k+1, each agent repeatedly sends out its current opinion. The opinion of an agent in phase 1 of Stage II is its initial opinion. At the end of each phase, each agent a that was successful in the phase considers its set of samples S_a, selects uniformly at random a subset S′_a ⊆ S_a containing precisely m_i/4 samples, and updates its opinion to the majority opinion among the samples in S′_a. An unsuccessful agent does not change its opinion during the phase.

Remark 2.10.
We have chosen to let a successful agent select a uniformly random subset of size m_i/4 among its samples and update its opinion according to the majority opinion in this subset. For the purposes of the current section, where a global clock is assumed, all proofs would have carried through in the same manner had we instead taken this subset to be the first m_i/4 samples received, similarly to Remark 2.1. The reason for choosing a random subset of this size is to guarantee that the order in which the agent receives its samples during the phase does not influence its actions. This property will become important in Section 3, which relaxes the synchronization requirement.

Lemma 2.11.
Consider taking γ = 2r+1 (noisy) samples from a population whose bias towards the correct opinion is at least δ. Then the probability that the majority of these γ samples is correct is at least min{1/2 + 2δ, 1/2 + 1/64}.

Proof.
Consider the γ = 2r+1 samples. We say that a sample is correct if it carries the correct opinion B. The γ samples are chosen independently and uniformly at random from the population, whose bias towards the correct opinion is at least δ. Let b = 2ǫδ. Accounting for the noise in the samples, each sample is correct with probability at least:

(1/2 + δ)(1/2 + ǫ) + (1/2 − δ)(1/2 − ǫ) = 1/2 + 2ǫδ = 1/2 + b.

Note that b may be very small, so directly employing Chernoff's inequality over the γ samples would not imply the desired bound. Instead, let us look at the following imaginary two-step process, which forms an equivalent view of the γ samplings.

The imaginary two-step process:
The imaginary process is performed over a set S consisting of γ Boolean players, namely S = {σ_1, σ_2, ..., σ_γ}.
• First step: each player σ_j flips a fair coin to form an initial opinion (i.e., a bit in {0, 1}).
• Second step: independently, with probability b, each player σ_j gets to see the correct opinion B and corrects its opinion if it was initially wrong (otherwise it keeps its correct opinion).
Note that after this two-step process the probability that a player is correct is precisely 1 − (1/2)(1 − b) = 1/2 + b/2 ≤ 1/2 + b. Thus, the probability that the majority opinion among the γ players is B bounds from below the probability that the majority of the original γ samples gathered by an agent is B. To lower bound this latter probability, in what follows we focus on the γ players of the two-step process. Let x be a positive integer and define the following events.
• C = at the end of the first step, the majority of players in S is correct.
• U_x = after the first step, the number w of wrong players in S satisfies r+1 ≤ w ≤ r+x.
• F_x = in the second step, the number of opinion flips is at least x.
• F = the majority opinion at the end of the two steps is correct.
Our goal is to lower bound the probability that F occurs. Note first that Pr(C) = 1/2, and that if C occurs then so does F. Assume now that C did not occur; hence U_x occurred for some x, that is, the first step results in a set W of wrong players whose size w satisfies r+1 ≤ w ≤ r+x. In this case, for F to occur it is sufficient that event F_x occurs in the second step. That is, for every positive integer x we have:

Pr(F) ≥ Pr(C) + Pr(F_x | U_x) · Pr(U_x).   (7)

Stirling's formula can be used to lower bound the probability that U_x occurs when x is a small integer. The bound is given by the following claim:

Claim 2.12.
For every integer x with 1 ≤ x ≤ √r, we have Pr(U_x) > x/(8√r).

Proof.
For each j, let P(j) denote the probability that precisely j players in S hold the wrong opinion after the first step. Since the coins tossed in the first step are fair, P(r+i) = 2^{−(2r+1)} C(2r+1, r+i). We show that P(r+i) > 1/(8√r) for every 1 ≤ i ≤ √r; this establishes the claim, since for x ≤ √r the probability that event U_x occurs is Pr(U_x) = Σ_{i=1}^{x} P(r+i) > x/(8√r).

First, by Stirling's formula, the central binomial coefficient satisfies C(2r+2, r+1) ≥ (1 − o(1)) · 4^{r+1}/√(π(r+1)), and since C(2r+1, r+1) = C(2r+2, r+1)/2 we get:

P(r+1) = 2^{−(2r+1)} C(2r+1, r+1) ≥ (1 − o(1)) / √(π(r+1)).

Second, for every 1 ≤ i ≤ √r:

C(2r+1, r+i) / C(2r+1, r+1) = Π_{j=1}^{i−1} (r+1−j)/(r+1+j) ≥ exp(−(1 + o(1)) Σ_{j=1}^{i−1} 2j/(r+1)) ≥ (1 − o(1)) · e^{−1},

using 1 − t ≥ e^{−t−t²} and Σ_{j=1}^{i−1} 2j ≤ i² ≤ r. Combining the two bounds, for r larger than a suitable constant (guaranteed by taking the constant c₀ in the definition of r large enough):

P(r+i) ≥ (1 − o(1)) / (e√(π(r+1))) > 1/(8√r),

since 1/(e√π) ≈ 0.207 > 1/8.
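Assuming the reconstructed form of the bound, Pr(U_x) > x/(8√r), the claim can be verified exactly for moderate r by summing binomial probabilities (an illustration of ours, not part of the paper):

```python
from math import comb, sqrt

def P(r, j):
    """Probability that exactly j of 2r+1 fair coins land 'wrong'."""
    return comb(2 * r + 1, j) / 2 ** (2 * r + 1)

for r in (25, 100, 400):
    s = int(sqrt(r))
    # Per-term: every near-tie count r+i, 1 <= i <= sqrt(r), has
    # probability above 1/(8*sqrt(r)) ...
    assert all(P(r, r + i) > 1 / (8 * sqrt(r)) for i in range(1, s + 1))
    # ... hence the cumulative version Pr(U_x) > x/(8*sqrt(r)) follows.
    for x in range(1, s + 1):
        assert sum(P(r, r + i) for i in range(1, x + 1)) > x / (8 * sqrt(r))
print("ok")
```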
To successfully use Equation 7, we also need to bound from below the value of Pr(F_x | U_x), that is, the probability that, given U_x, at least x players (in W) flip their opinion in the second step.

Claim 2.13. (1) If rb ≤ 1, then Pr(F_1 | U_1) ≥ rb/2. (2) If rb > 1, then for every integer x ≤ ⌈rb⌉, Pr(F_x | U_x) ≥ 1/4.

Proof. Recall that in the second step each of the wrong players flips its opinion independently with probability b. For the first part, observe that Pr(F_1 | U_1) is bounded from below by the probability that at least one of the r+1 wrong players in W flips its opinion in the second step (note that |W| = r+1 since U_1 occurred). This latter probability is 1 − (1−b)^{r+1} ≥ 1 − e^{−rb} ≥ rb − (rb)²/2 ≥ rb/2, where the last inequality uses rb ≤ 1. This establishes the first part of the claim. Let us now turn to the second part and assume rb > 1. Given U_x, we have |W| ≥ r+1, so the number of flips in W dominates a binomial random variable with mean (r+1)b > rb ≥ 1, and a standard binomial tail estimate (e.g., via Chernoff's inequality) shows that such a variable is at least ⌈rb⌉ with probability at least 1/4. Hence, for every integer x ≤ ⌈rb⌉, we have Pr(F_x | U_x) ≥ Pr(F_{⌈rb⌉} | U_x) ≥ 1/4.

Finally, to establish Lemma 2.11, we combine Equation 7 with Claims 2.12 and 2.13 for different values of δ. Recall that b = 2ǫδ and r = ⌈c₀/ǫ²⌉, so that √r · b ≥ 2√(c₀) · δ.

The case of small δ: consider the case rb ≤ 1. The first part of Claim 2.13 gives Pr(F_1 | U_1) ≥ rb/2. Hence, by Claim 2.12 and Equation 7:

Pr(F) ≥ Pr(C) + Pr(U_1) Pr(F_1 | U_1) > 1/2 + (1/(8√r)) · (rb/2) = 1/2 + √r·b/16 ≥ 1/2 + 2δ,

where the last inequality holds once c₀ ≥ 256.

The case of medium δ: consider the case 1 < rb ≤ √r − 1. Set x := ⌈rb⌉; then 1 ≤ x ≤ √r, and we can employ Claim 2.12, yielding Pr(U_x) > x/(8√r). By the second part of Claim 2.13:

Pr(F) ≥ Pr(C) + Pr(U_x) · Pr(F_x | U_x) ≥ 1/2 + (x/(8√r)) · (1/4) ≥ 1/2 + √r·b/32 ≥ 1/2 + 2δ,

again once c₀ is large enough (c₀ ≥ 1024).

The case of large δ: consider the case rb > √r − 1. Set x := ⌈√r/2⌉. Since ⌈√r/2⌉ ≤ ⌈rb⌉ (for r ≥ 4), the second part of Claim 2.13 gives Pr(F_x | U_x) ≥ 1/4. Hence, we get:

Pr(F) ≥ Pr(C) + Pr(U_x) · Pr(F_x | U_x) ≥ 1/2 + (1/16) · (1/4) = 1/2 + 1/64.

In all three cases Pr(F) ≥ min{1/2 + 2δ, 1/2 + 1/64}, which completes the proof of Lemma 2.11.

Lemma 2.11 provides a lower bound on the probability that a successful agent is correct at the end of a phase. We are now ready to bound from below the increase in bias that a phase guarantees.
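The reduction underlying Lemma 2.11 can be checked numerically: after the two steps each player is correct with probability exactly 1/2 + b/2, so the majority probability of the imaginary process is dominated by that of the real samples (each correct with probability at least 1/2 + b). A small exact computation (our own illustration; parameters are arbitrary):

```python
from math import comb

def maj_tail(gamma, p):
    """P(strict majority of gamma i.i.d. Bernoulli(p) samples is correct)."""
    return sum(comb(gamma, k) * p**k * (1 - p)**(gamma - k)
               for k in range(gamma // 2 + 1, gamma + 1))

r = 60
gamma = 2 * r + 1
for eps, delta in ((0.1, 0.01), (0.3, 0.05), (0.45, 0.002)):
    b = 2 * eps * delta
    # Per-player correctness after the two steps: 1 - (1/2)(1 - b) = 1/2 + b/2.
    assert abs((1 - 0.5 * (1 - b)) - (0.5 + b / 2)) < 1e-15
    direct = maj_tail(gamma, 0.5 + b)        # real noisy samples
    two_step = maj_tail(gamma, 0.5 + b / 2)  # imaginary two-step players
    assert 0.5 < two_step <= direct          # valid lower bound on the real majority
print("ok")
```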
Lemma 2.14.
Consider phase i ≤ k, and assume that the fraction of correct agents at the beginning of the phase is at least 1/2 + δ_i, where δ_i > c√(log n/n) for a sufficiently large constant c. Then, w.h.p., the fraction of correct agents at the end of the phase is at least min{1/2 + 1.1δ_i, 1/2 + 1/256}.

Proof.
Fix a phase i, for 1 ≤ i ≤ k, and assume that when phase i starts the fraction of agents holding the correct opinion is at least 1/2 + δ_i. Note that being successful in the phase is independent of holding the correct opinion at the beginning of the phase. Since an unsuccessful agent does not change its opinion during the phase, its probability of being correct at the end of the phase is at least 1/2 + δ_i. On the other hand, Lemma 2.11 shows that each successful agent is correct at the end of the phase with probability at least 1/2 + min{2δ_i, 1/64}. Moreover, in both cases the random variables indicating which agents are correct are negatively correlated. We can thus first bound the relevant expectations and then apply Chernoff's inequality to dominating negatively-correlated variables.

We first consider the case δ_i ≥ 1/128. In this case, each agent (whether successful or unsuccessful) is correct at the end of the phase with probability at least 1/2 + 1/128, and the corresponding indicators are dominated by negatively-correlated Bernoulli variables with this expectation. Let I be the number of correct agents; then E(I) ≥ n(1/2 + 1/128). Taking δ = 1/256, we get (1 − δ)E(I) > n(1/2 + 1/256). Applying Chernoff's inequality to the dominating negatively-correlated variables, we obtain Pr(I ≤ n(1/2 + 1/256)) ≤ e^{−Ω(n)}. Hence, w.h.p., the fraction of correct agents at the end of the phase is at least 1/2 + 1/256, as required by the lemma.

Next, we consider the case δ_i < 1/128, in which min{2δ_i, 1/64} = 2δ_i. Recall from Claim 2.9 that the number of successful agents in the phase is, w.h.p., at least 2n/3; condition on this event. Recall also that each unsuccessful agent is correct with probability at least p_u = 1/2 + δ_i and each successful agent with probability at least p_s = 1/2 + 2δ_i. Let u denote the number of unsuccessful agents; we condition on the highly likely event u ≤ n/3. Let U be the set containing all u unsuccessful agents together with n/3 − u arbitrary successful agents, so that U contains precisely n/3 agents, and let S be the set of the remaining 2n/3 agents (all of which are successful).

Next, consider the number I_u of incorrect agents in U. Whether or not a given agent in U is successful, its probability of being incorrect is at most 1/2 − δ_i. Hence, E(I_u) ≤ (n/3)(1/2 − δ_i). Taking δ = δ_i/2, we get (1 + δ)E(I_u) ≤ (n/3)(1/2 − 0.7δ_i). With the dominating random variables again being negatively correlated, Chernoff's inequality gives Pr(I_u ≥ (n/3)(1/2 − 0.7δ_i)) ≤ e^{−δ² E(I_u)/3} = e^{−Ω(nδ_i²)}. Therefore, w.h.p., the number I_u of incorrect agents in U is at most (n/3)(1/2 − 0.7δ_i). We similarly bound the number I_s of incorrect agents in S. In particular, E(I_s) ≤ (2n/3)(1/2 − 2δ_i); taking δ = δ_i/2, we have (1 + δ)E(I_s) ≤ (2n/3)(1/2 − 1.7δ_i), and applying Chernoff's inequality to the dominating negatively-correlated variables gives Pr(I_s ≥ (2n/3)(1/2 − 1.7δ_i)) ≤ e^{−δ² E(I_s)/3} = e^{−Ω(nδ_i²)}. Hence, w.h.p., I_s ≤ (2n/3)(1/2 − 1.7δ_i). It follows that the total number of incorrect agents (both successful and unsuccessful) is, w.h.p., at most

(n/3)(1/2 − 0.7δ_i) + (2n/3)(1/2 − 1.7δ_i) < n(1/2 − 1.1δ_i).

In other words, the fraction of correct agents at the end of the phase is, w.h.p., at least 1/2 + 1.1δ_i, as desired.

Since δ_0 = Ω(√(log n/n)), where the constant factor hidden in the Ω notation is as large as we want, Lemma 2.14 implies the following corollary.
Corollary 2.15.
After the first k = Θ(log(√(n / log n))) phases, w.h.p., the fraction of correct agents is at least 1/2 + 1/20. In the final phase, namely phase k+1, each agent collects O(log n/ε²) independent samples, uniformly at random, from a population whose bias towards the correct opinion is at least 1/20. Assuming the constant hiding behind the O-notation is sufficiently large, Chernoff's inequality guarantees that, w.h.p., the majority opinion of such samples is correct. Hence, a union bound argument guarantees that, w.h.p., all agents are correct at the end of Stage II. Let us now analyze the running time of Stage II. Each of the first k phases takes γ = O(1/ε²) rounds. Since k = O(log n), the number of rounds required to perform the first k phases is O(log n/ε²). The running time of phase k+1 is also O(log n/ε²). Altogether, we obtain the following. Lemma 2.16.
Stage II takes O(log n/ε²) rounds and at the end of the stage all agents are correct, with high probability. Lemmas 2.3 and 2.16 yield that our algorithm solves the noisy broadcast problem in O(log n/ε²) rounds. Since each message is composed of a single bit, and since in each round each agent can send at most one message, we get the bound O(n log n/ε²) on the total number of messages and bits sent. Altogether, we obtain our main result. Theorem 2.17.
Consider the fully-synchronous setting and let ε be such that 1/n^(1/2−η) < ε, for some arbitrarily small constant η > 0. The noisy broadcast problem can be solved using O(log n/ε²) rounds, and a total of O(n log n/ε²) messages (or bits). Corollary 2.18.
Consider the fully-synchronous setting and let ε be such that 1/n^(1/2−η) < ε, for some arbitrarily small constant η > 0. Consider the noisy majority-consensus problem with an initial set A of at least Ω(log n/ε²) agents and majority-bias Ω(√(log n / |A|)). This problem can be solved in O(log n/ε²) rounds, using a total of O(n log n/ε²) messages (or bits).

Proof. Recall that Claim 2.2 implies that after phase 0 is completed, we are left with solving the noisy majority-consensus problem with an initial set A of agents of size |A| = Θ(log n/ε²) whose majority-bias is Ω(√(log n / |A|)). As we saw, this problem is solved by applying the remaining phases i = 1, ..., T+1 of Stage I, and then applying Stage II. Specifically, as given by Claims 2.8 and 2.4, phase i of Stage I, for each i ∈ {1, ..., T+1}, reduces the problem to the noisy majority-consensus problem with an initial set A_i of size |A_i| = Θ(log n/ε^(2i+2)) and majority-bias Ω(√(log n / |A_i|)). Hence, after applying Stage I, we are left with the noisy majority-consensus problem with an initial set X composed of all n agents and majority-bias Ω(√(log n / n)). Solving this latter problem is precisely the objective of Stage II.

In light of this, the general case of the noisy majority-consensus problem can be solved as follows. Recall that in this problem we consider an initial subset A of agents of size |A| = Ω(log n/ε²) and majority-bias Ω(√(log n / |A|)). To solve it, we first set i_A := log(|A| / log n) / (2 log(1/ε)) − 1, and then execute phases i_A, i_A + 1, ..., T+1 of Stage I, and subsequently execute Stage II.

In the previous section we considered the fully-synchronous setting where all clocks are set to zero at the beginning of the execution.
In this section we show how to remove this global-clock assumption, considering the more standard synchronous setting in which the clock of an agent is set to zero when it receives a message for the first time (the clock of the initiator is set to zero when the execution starts). The removal of this assumption will yield an additive increase of O(log² n) in the running time, while preserving the optimal O(n log n/ε²) message complexity.

Before completely removing the assumption of a global clock, let us first consider a relaxed version of it, where it is guaranteed that at the beginning of the execution each clock is initialized to some integer in the range [0, D), for a given D. (In particular, any two clocks are at most D apart.) Recall that the algorithm mentioned in Section 2 for the fully-synchronous setting consists of O(log n) consecutive phases, where phase i takes place during the time period [r_i, r_i + x_i), for some integers r_i and x_i (here x_i is the length of phase i, and we have r_{i+1} = r_i + x_i). In Section 2, under the global-clock assumption, it is guaranteed that all agents execute the same phase at the same time. We now modify that algorithm to fit the relaxed setting where all clocks are initialized to a value in the range [0, D).

In the modified algorithm, each agent will execute phase i as described in Section 2, except that instead of starting it at time r_i it will start it when its own clock shows r_i + iD. That is, agent a will execute phase i during the time interval in which its own clock shows [r_i + iD, r_i + iD + x_i). Let s (respectively, ℓ) be the smallest (respectively, the largest) value in [0, D) such that some agent (active in phase i) started the execution with this time on its clock. For the sake of the analysis, assume the execution starts at global time 0 (at this time, all local clocks are in the range [0, D)).
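The staggered schedule just described is easy to sketch concretely. The helper below is ours and purely illustrative (the phase lengths and D are arbitrary): it computes, for each phase, the window of global times during which some agent may be executing that phase, under the convention that an agent whose execution is delayed by an offset t ∈ [0, D) runs phase i during global times [t + r_i + iD, t + r_i + iD + x_i).

```python
def phase_windows(x, D):
    """For phase lengths x[0..k-1] and per-agent offsets in [0, D): an agent
    with offset t runs phase i during global times
    [t + r_i + i*D, t + r_i + i*D + x_i), where r_{i+1} = r_i + x_i.
    Returns, per phase, the union of these intervals over all offsets."""
    r, windows = 0, []
    for i, x_i in enumerate(x):
        # earliest start has offset 0; latest end has offset < D
        windows.append((r + i * D, r + x_i + (i + 1) * D))
        r += x_i                       # r_{i+1} = r_i + x_i
    return windows

w = phase_windows([4, 7, 3, 5], D=6)
# w == [(0, 10), (10, 23), (23, 32), (32, 43)]
```

Consecutive windows share only their endpoint (the intervals are half-open), so at any global time all agents currently executing a phase are executing the same one.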
Each agent a will start phase i at some global time no earlier than s + r_i + iD ≥ r_i + iD, and will end it before time ℓ + r_i + iD + x_i < r_{i+1} + (i+1)D. Hence, all agents execute phase i during the global time interval [r_i + iD, r_{i+1} + (i+1)D). Note that these intervals are disjoint for different values of i. Correctness.
To show that the modified algorithm is correct we compare an execution of this algorithm to an execution of the fully-synchronized algorithm operating under the global-clock assumption (as described and analyzed in Section 2). We assume that the same random choices are made by the message scheduler in both executions. That is, if under the fully-synchronized algorithm the k'th message that an agent a sent was to agent b, then also in the modified algorithm the k'th message sent by agent a was to agent b (note that the timing of this message delivery and its content may potentially differ between executions). Consider an agent a and a phase i of its algorithm. Recall that in both executions, all messages sent by agent a during phase i are the same, essentially containing its opinion at the beginning of the phase (if it had any; otherwise, it does not send any messages anyway). Therefore, if the opinions of all agents at the beginning of their phase i are the same, respectively, in both executions, then the contents of the messages sent by an agent in that phase are also the same, respectively, in both executions. Hence, the set of messages (and their contents) received by any agent a in phase i is the same in both executions. Note, however, that the order in which these messages are received by the agent may differ between the executions. These messages will be used by the agent to determine its opinion at the end of the phase. We next argue that the fact that these messages may arrive in a different order does not impact the decision made by the agent at the end of that phase. This will imply, by induction on the phase numbers, that the two executions are essentially the same. Observe that the decisions made by an agent at the end of a phase (for setting or modifying its opinion) are based on the messages it has received in that phase, but are invariant to the order in which they were received (see also Remarks 2.1 and 2.10).
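This order-invariance can be verified mechanically: the end-of-phase decision rule depends only on the multiset of received bits, so its distribution is the same under any delivery order. Below is a small sanity check of ours (the bit values and the subset size 3 are arbitrary), enumerating all subsets to compute the decision distribution exactly:

```python
from itertools import combinations
from collections import Counter

def decision_distribution(received_bits, subset_size):
    """Distribution of the end-of-phase decision: the agent picks a uniformly
    random subset of the given (odd) size among its received bits and adopts
    the majority opinion of that subset."""
    counts = Counter()
    for subset in combinations(received_bits, subset_size):
        counts[1 if 2 * sum(subset) > subset_size else 0] += 1
    total = sum(counts.values())
    return {opinion: c / total for opinion, c in counts.items()}

# The same multiset of bits, delivered in two different orders:
d1 = decision_distribution([1, 1, 0, 1, 0], 3)
d2 = decision_distribution([0, 1, 1, 0, 1], 3)
# d1 == d2 == {1: 0.7, 0: 0.3}
```

Permuting the received sequence permutes the subsets but leaves their multiset, and hence the decision distribution, unchanged; this is exactly what the bijections σ_i in the argument below exploit.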
Indeed, let S be the set of messages that agent a received during phase i. At the end of the phase, agent a first selects a subset of S of a certain size (this size could be 0, 1, or larger), chosen uniformly at random among the subsets of S of this given size, and then sets its opinion to be the majority opinion in that subset. This implies that there exists a bijective mapping σ_i between the sequences of random choices made by the agents in the modified algorithm in phase i and the sequences of random choices made by the agents in the fully-synchronized algorithm in phase i, such that the same subsets of messages are chosen by all agents at the end of phase i, respectively. (Thus σ_i takes into account the different orders in which messages arrive at an agent, for every agent, in the two executions.) This implies that if the opinions of all agents are the same in both executions at the beginning of phase i, then under σ_i, the opinions of all agents are the same at the end of the phase, in both executions. It follows by induction on the phase numbers that there exists a bijective mapping σ := σ_1 ◦ σ_2 ◦ ··· ◦ σ_i ◦ ··· between the sequences of random choices made by the agents in the modified algorithm throughout the execution and the sequences of random choices made by the agents in the fully-synchronous algorithm throughout the execution, such that the final opinion of each agent is the same in both executions. The correctness guarantee of the fully-synchronous algorithm therefore implies that for the modified algorithm, w.h.p., all agents output the correct opinion at the end of the execution. Complexities.
Since the number of phases is O(log n), we immediately have that the increase in the number of rounds is an additive term of O(D log n) rounds. On the other hand, the message complexity remains the same as in the fully-synchronous case, since we only add waiting rounds on top of the original fully-synchronous algorithm.

We now claim that if D is initially unbounded, we can easily (and quickly) reduce it to D = 2 log n, by first performing an activation phase, in which each informed agent broadcasts an arbitrary message for 2 log n rounds, and the clock of an agent is reset to 0 once 4 log n rounds have passed since it heard a message for the first time. W.h.p., after 2 log n rounds all agents have been activated, ensuring that when the clocks are initialized again, all clocks are at most 2 log n apart. Furthermore, note that the messages used in this activation phase all reach their destination within 4 log n rounds (at most 2 log n rounds until the agent sending the last activation message was itself activated, plus at most 2 log n rounds during which this agent sent its activation messages). Hence, by the time the earliest agent resets its clock to 0, all messages corresponding to the activation phase have reached their destination. This enables us to safely proceed with the simulation above, assuming D = 2 log n. Hence, we obtain the following. Theorem 3.1.
Consider the synchronous setting. There exist algorithms solving the noisy broadcast problem and the noisy majority-consensus problem (with an initial set of agents A of size |A| = Ω(log n/ε²) and majority-bias Ω(√(log n / |A|))). Both algorithms terminate in O(log n/ε² + log² n) rounds, and use O(n log n/ε²) messages. The term O(log² n) added to the running time of both algorithms in Theorem 3.1 could be reduced if agents could quickly synchronize their clocks to within a gap smaller than O(log n). Optimizing this clock-gap between agents remains an intriguing question of independent interest. Specifically, in Stage I, an agent activated in phase i chooses a single message uniformly at random among the messages it has received in phase i, and sets its initial opinion to the content of that message. In Stage II, at the end of each phase i, a successful agent selects a subset of samples of size m_i/2, uniformly at random among the set of samples it has received in that phase, and then updates its opinion to be the majority opinion in that subset. Discussion
This paper is a first attempt to study the impact of communication noise on information dissemination problems using a computational approach. We have presented the Flip model, a basic model of communication wherein interactions are conveyed across noisy channels of limited capacity. We have then presented robust and simple algorithms that efficiently solve two basic information dissemination problems within the model's constraints. Our algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective behavior in anonymous and noisy populations.

Our algorithms rely on synchronization. Although it is not realistic to assume that biological ensembles are highly synchronous, some degree of synchronicity may still exist [13, 40]. (For example, agents could potentially differentiate large enough windows of time, considering each such window as a round.) An intriguing question left for future work is to quantify the minimal degree of synchronization required for solving information dissemination problems efficiently.

As this is a first attempt at analyzing randomly distorted messages with distributed computing tools, we did not attempt to describe a specific biological system or identify naturally occurring algorithms. Rather, our results indicate that to understand natural systems one must simultaneously consider the communication noise, the limited messaging alphabet, and the algorithm employed. Typically, works in different fields take only a subset of these three components into account.
Acknowledgments:
The authors would like to thank Oded Goldreich, Kunal Talwar, James Aspnes, and George Giakkoupis for helpful discussions.
References [1] Yehuda Afek, Noga Alon, Omer Barad, Eran Hornstein, Naama Barkai, and Ziv Bar-Joseph.
A biological solution to a fundamental distributed computing problem.
Science 331(6014), 183–185, (2011).[2] Yehuda Afek, Noga Alon, Ziv Bar-Joseph, Alejandro Cornejo, Bernhard Haeupler, and Fabian Kuhn.
Beeping a maximal independent set.
Distributed Computing 26(4), 195–208, (2013).[3] Christina Allen and Charles Stevens.
An evaluation of causes for unreliability of synaptic transmission.
Proceedings of the National Academy of Sciences 91, 10380–10383, (1994).[4] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta.
Computation in networks of passively mobile finite-state sensors.
Distributed Computing 18(4), 235–253, (2006).[5] Dana Angluin, James Aspnes, Michael J. Fischer, and Hong Jiang.
Self-stabilizing population protocols.
ACM TAAS 3(4), (2008).[6] Dana Angluin, James Aspnes, and David Eisenstat.
A simple population protocol for fast robust approximatemajority.
Distributed Computing 21(2), 87–102, (2008).[7] James Aspnes and Eric Ruppert.
An Introduction to Population Protocols.
Bulletin of the EATCS 93, 98–117,(2007).[8] Bonnie L. Bassler and Christopher M. Waters.
Quorum Sensing: Cell-to-Cell Communication in Bacteria.
The Annual Review of Cell and Developmental Biology 21, 319–346, (2005).
[9] Reuven Bar-Yehuda and Shay Kutten.
Fault Tolerant Distributed Majority Commitment.
J. Algorithms 9(4), 568–582, (1988).[10] Joffroy Beauquier, Janna Burman, and Shay Kutten.
Making Population Protocols Self-stabilizing.
SSS 2009:90–104.[11] Luca Becchetti, Andrea E. F. Clementi, Emanuele Natale, Francesco Pasquale, Riccardo Silvestri, and LucaTrevisan.
Simple dynamics for plurality consensus.
SPAA, 247-256, 2014.[12] Ohad Ben-Shahar, Shlomi Dolev, Andrey Dolgin, and Michael Segal.
Direction election in flocking swarms.
Ad Hoc Networks 12, 250–258, (2014).[13] John Buck.
Synchronous Rhythmic Flashing of Fireflies. II.
The Quarterly Review of Biology 63(3), 265–289,(1988).[14] Henrik Brumm and Hans Slabbekoorn.
Acoustic Communication in Noise.
Advances in the Study of Behavior 35, 151–209, (2005).[15] Michael C. Carroll.
The complement system in regulation of adaptive immunity.
Nature Immunology 5, 981–986,(2004).[16] Keren Censor-Hillel, Bernhard Haeupler, Jonathan A. Kelner, and Petar Maymounkov.
Global computation in a poorly connected world: fast rumor-spreading with no dependence on conductance. Proc. of the ACM Symposium on Theory of Computing (STOC), 961–970, (2012).[17] Bernard Chazelle.
Natural algorithms.
Proc. of the ACM-SIAM Symposium on Discrete Algorithms (SODA),422–431, (2009).[18] Colin Cooper, Robert Els asser, and Tomasz Radzik.
The Power of Two Choices in Distributed Voting.
ICALP(2), 435-446, 2014.[19] Alan J. Demers, Daniel H. Greene, Carl Hauser, Wes Irish, John Larson, Scott Shenker, Howard E. Sturgis,Daniel C. Swinehart, and Douglas B. Terry.
Epidemic algorithms for replicated database maintenance. Operating Systems Review 22 no. 1, 8–32, (1988).[20] Devdatt Dubhashi and Desh Ranjan.
Balls and bins: A study in negative dependence.
Random Structures and Algorithms 13(2), 99–124, (1998).[21] Krzysztof Diks and Andrzej Pelc.
Optimal adaptive broadcasting with a bounded fraction of faulty nodes .Algorithmica 28 no. 1, 37–50, (2000).[22] Benjamin Doerr, Leslie Ann Goldberg, Lorenz Minder, Thomas Sauerwald, and Christian Scheideler.
Stabilizing consensus with the power of two choices.
SPAA, 149-158, 2011.[23] Benjamin Doerr and Mahmoud Fouz.
Asymptotically optimal rumor-spreading. Proc. of the International Colloquium on Automata, Languages, and Programming (ICALP), 502–513, (2011).[24] Robert Elsässer and Thomas Sauerwald.
On the runtime and robustness of randomized broadcasting. Theoretical Computer Science 410 no. 36, 3414–3427, (2009).[25] Yuval Emek and Roger Wattenhofer.
Stone age distributed computing.
Proc. of the ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC), 137–146, (2013).[26] Ofer Feinerman, Garrit Jentsch, Karen Tkach, Jesse Coward, Matthew Hathorn, Michael Sneddon, Thierry Emonet, Kendall Smith, and Gregoire Altan-Bonnet.
Single-cell quantification of IL-2 response by effector and regulatory T cells reveals critical plasticity in immune response.
Molecular systems biology 6, 437pp, (2010).[27] Ofer Feinerman and Amos Korman.
Memory Lower Bounds for Randomized Collaborative Search and Implications for Biology. In Proc. 26th International Symposium on Distributed Computing (DISC), 61–75, (2012).
[28] Ofer Feinerman, Amos Korman, Zvi Lotker, and Jean-Sébastien Sereni.
Collaborative search on the plane without communication. In Proc. of the 31st ACM Symp. on Principles of Distributed Computing (PODC), 77–86, (2012).[29] Ofer Feinerman, Joel Veiga, Jeffrey Dorfman, Ronald Germain, and Gregoire Altan-Bonnet.
Variability androbustness in T cell activation from regulated heterogeneity in protein levels.
Science 321(5892), 1081–1084, (2008).[30] Pierre Fraigniaud and George Giakkoupis.
On the bit communication complexity of randomized rumor-spreading. Proc. of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 134–143, (2010).[31] Nigel R. Franks, Stephen C. Pratt, Eamonn B. Mallon, Nicholas F. Britton, and David J. T. Sumpter.
Information flow, opinion polling and collective intelligence in house-hunting social insects.
Philosophical Transactions of the Royal Society B 357(1427), 1567–1583, (2002).[32] Abbas El Gamal and Young-Han Kim.
Network Information Theory.
Cambridge University Press, 709pp,(2012).[33] Chryssis Georgiou, Seth Gilbert, and Dariusz R. Kowalski.
Meeting the deadline: on the complexity of fault-tolerant continuous gossip.
Distributed Computing 24(5), 223–244 (2011).[34] Chryssis Georgiou, Seth Gilbert, Rachid Guerraoui, and Dariusz R. Kowalski.
Asynchronous gossip.
J. ACM 60(2), 11, (2013).[35] Leszek Gasieniec and Andrzej Pelc.
Adaptive broadcasting with faulty nodes. Parallel Computing 22 no. 6, 903–912, (1996).[36] George Giakkoupis and Thomas Sauerwald. Rumor spreading and vertex expansion.
Proc. of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 1623–1641, (2012).[37] Bernhard Haeupler.
Analyzing Network Coding Gossip Made Easy.
Proc. of the ACM Symposium on Theory of Computing (STOC), 293–302, (2011).[38] Bernhard Haeupler.
Simple, Fast, and Deterministic Gossip and rumor-spreading.
Proc. of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 705–716, (2013).[39] Bernhard Haeupler and Dahlia Malkhi.
Optimal Gossip with Direct Addressing.
Proc. of the ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC), 176–185 , (2014).[40] Yuji Ikegaya, Gloster Aaron, Rosa Cossart, Dmitriy Aronov, Ilan Lampl, David Ferster, and Rafael Yuste.
Synfire Chains and Cortical Songs: Temporal Modules of Cortical Activity.
Science 304(5670), 559–564, (2004).[41] Kumar Joag-Dev and Frank Proschan.
Negative association of random variables with applications.
The Annals of Statistics, 286–295, (1983).[42] Soummya Kar and Jose M. F. Moura.
Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise.
IEEE Transactions on Signal Processing 57(1), 355–369, (2009).[43] Richard M. Karp, Christian Schindelhauer, Scott Shenker, and Berthold Vöcking.
Randomized rumor-spreading. Proc. of the IEEE Symposium on Foundations of Computer Science (FOCS), 565–574, (2000).[44] Ralf Koetter and Frank R. Kschischang.
Coding for errors and erasures in random network coding. IEEE Transactions on Information Theory 54 no. 8, 3579–3591, (2008).[45] Thierry Lengagne, Thierry Aubin, Jacques Lauga, and Pierre Jouventin.
How do king penguins (Aptenodytes patagonicus) apply the mathematical theory of information to communicate in windy conditions?
Proc. of the Royal Society B: Biological Sciences 266(1429), 1623–1628, (1999).
[46] George B. Mertzios, Sotiris E. Nikoletseas, Christoforos Raptopoulos, and Paul G. Spirakis.
Natural models for evolution on networks.
Theor. Comput. Sci. 477, 76–95 (2013).[47] Othon Michail, Ioannis Chatzigiannakis, and Paul G. Spirakis.
Mediated population protocols. Theor. Comput. Sci.[48] Mark E. J. Newman.
Spread of epidemic disease on networks.
Physical Review E 66(016128), (2002).[49] Mauro Mobilia.
Does a Single Zealot Affect an Infinite Group of Voters?
Physical Review Letters 91(028701), (2003).[50] Mauro Mobilia, A. Petersen, and Sidney Redner.
On the role of zealotry in the voter model.
Journal of Statistical Mechanics, P08029, (2007).[51] Todd Moon.
Error Correction Coding: Mathematical Methods and Algorithms.
Wiley-Interscience, 800pp,(2005).[52] Alessandro Panconesi and Aravind Srinivasan.
Randomized distributed edge coloring via an extension of the Chernoff-Hoeffding bounds.
SIAM J. Comput. 26, pp. 350–368, 1997.[53] David Peleg.
Distributed Computing: A Locality-Sensitive Approach . SIAM, (2000).[54] Boris Pittel.
On spreading a rumor.
SIAM Journal on Applied Mathematics 47 no. 1, 213–223, (1987).[55] Nitzan Razin, Jean-pierre Eckmann, and Ofer Feinerman.
Desert ants achieve reliable recruitment across noisy interactions.
Journal of the Royal Society Interface 10(20130079), (2013).[56] Gilbert Roberts.
Why individual vigilance increases as group size increases . Animal Behaviour 51, 1077–1086,(1996).[57] Claude Shannon.
A Mathematical Theory of Communication . Bell System Technical Journal 27(3), 379–423,(1948).[58] David JT Sumpter, Jens Krause, Richard James, Iain D. Couzin, and Ashley JW Ward.
Consensus Decision Making by Fish.
Current Biology 22(25), 1773–1777, (2008).[59] Yuri Sykulev, Michael Joo, Irina Vturina, Theodore J. Tsomides, and Herman N Eisen.
Evidence that a Single Peptide-MHC Complex on a Target Cell Can Elicit a Cytolytic T Cell Response. Immunity 4(6), 565–571, (1996).[60] Benjamin Doerr, Anna Huber, and Ariel Levavi.
Strong robustness of randomized rumor-spreading protocols. Proc. of the International Symposium on Algorithms and Computation (ISAAC), Springer, 812–821, (2009).[61] Edward O. Wilson and Bert Hölldobler.
The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies.