A Classification of Weak Asynchronous Models of Distributed Computing
Javier Esparza
Technische Universität München, [email protected]
Fabian Reiter
LIGM, Université Gustave Eiffel, [email protected]
Abstract
We conduct a systematic study of asynchronous models of distributed computing consisting of identical finite-state devices that cooperate in a network to decide if the network satisfies a given graph-theoretical property. Models discussed in the literature differ in the detection capabilities of the agents residing at the nodes of the network (detecting the set of states of their neighbors, or counting the number of neighbors in each state), the notion of acceptance (acceptance by halting in a particular configuration, or by stable consensus), the notion of step (synchronous move, interleaving, or arbitrary timing), and the fairness assumptions (non-starving, or stochastic-like). We study the expressive power of the combinations of these features, and show that the initially twenty possible combinations fit into seven equivalence classes. The classification is the consequence of several equi-expressivity results with a clear interpretation. In particular, we show that acceptance by halting configuration only has non-trivial expressive power if it is combined with counting, and that synchronous and interleaving models have the same power as those in which an arbitrary set of nodes can move at the same time. We also identify simple graph properties that distinguish the expressive power of the seven classes.
2012 ACM Subject Classification
Theory of computation → Automata extensions; Theory of computation → Concurrency; Theory of computation → Distributed computing models
Keywords and phrases
Asynchrony, Concurrency theory, Weak models of distributed computing
Related Version
To appear in the proceedings of CONCUR 2020 (published by LIPIcs).
Funding
This work was supported by the ERC project PaVeS (Advanced Grant 787367).
Acknowledgements
The authors thank Ahmed Bouajjani for many interesting discussions, and several anonymous reviewers for their helpful feedback.
1 Introduction

Distributed computing is increasingly interested in the study of networks of natural or artificial devices, like molecules, cells, microorganisms, or nano-robots. These devices have very limited computational and communication capabilities, and are indistinguishable. In particular, a device cannot recognize whether its current communication partner is the same as a past one. This stands in stark contrast to the devices of standard computer networks, which has motivated researchers to question the suitability of traditional distributed computing models for the study of these networks, and to propose new ones. Examples include population protocols [3, 1], chemical reaction networks [14], networked finite state machines [7], the weak models of distributed computing of [9], and the beeping model [5]. A survey discussing many of them, and more, can be found in [12].

All these models share several common features, introduced to capture the limitations of the devices [7]: the network can have an arbitrary topology; all nodes of the network have a finite number of states, independent of the size of the network or its topology; all nodes run the same protocol; and state changes only depend on the states of a bounded number of neighbors, again independent of the size of the network.

Unfortunately, despite this very substantial common ground, the models still differ in many aspects, which makes it hard to compare results across them, or decide which features are essential for a particular result. A study of the models allows one to identify four specific junctions at which they choose different paths:
Detection. In some models, agents can only detect the existence of neighbors in a certain state [9]. In others, they can count their number, up to a fixed threshold [7, 9]. For example, in biological models, cells communicate by emitting special kinds of proteins, and detecting them; in some models the cells may detect the presence of the protein when its concentration exceeds a given threshold, while in others they are able to detect different concentration levels.
Acceptance. Some models compute by stable consensus, which requires all nodes to eventually agree on the outcome of the computation (but the nodes do not need to know that consensus has been reached) [3, 1, 14], while others require the nodes to reach a consensus in a halting configuration [9]. Acceptance by stable consensus is computationally powerful, since it permits the algorithm designer to concentrate on ensuring that every bad input is eventually rejected; declaring all non-rejecting states accepting ensures that every good input is eventually accepted.
Selection. In some models, at each step a scheduler chooses an arbitrary set of nodes to make a step [7, 13], while in others it is exactly one node, or exactly one pair of neighboring nodes [3, 1, 14]. We call the latter exclusive or interleaving models. Intuitively, interleaving models are useful when it can be assumed that process steps are much faster than the time interval between them, while the former policy does not need this assumption. In addition, they help the algorithm designer, who can assume that agents act in mutual exclusion. (Examples where this is useful can be found in the proofs of Propositions 16 and 20.) Another common option for selection is the synchronous execution model [9], where all nodes are selected in each step. Again this can be helpful for designing algorithms, but it is incompatible with exclusive selection.

Fairness. Some models use fairness assumptions designed to model or approximate stochastic behavior [3, 1, 14], while others choose minimal notions, like “all nodes make a step infinitely often”, which only assume the absence of crash faults (see, e.g., [8, 10]). Stochastic-like assumptions are reasonable for biological or chemical models, but can be too strong for networks of artificial nodes, which may follow non-random execution policies. Stochastic models may be able to solve problems that cannot be solved with weaker fairness assumptions.

The goal of this paper is to explore the space of models spanned by the above parameters, and compare their computational power within a specific framework. For this we use distributed automata, a generic formalism for the description of finite-state distributed algorithms. Such an automaton consists of a set of rules that tell the nodes of a graph how to change their state depending on the states of their neighbors. Intuitively, the automaton describes an algorithm that allows the nodes of an input graph to decide, in a distributed way, whether the graph satisfies a given property.
The computational power of a class of distributed automata is then given by the class of graph languages recognized by the automata in the class, or, in other words, by the graph properties that the class of automata can decide.

We start with twenty classes of distributed automata, and show that with respect to their computational power, they fall into seven different classes. This reduction is a consequence of two results presented in this paper: (1) acceptance by halting configuration only has non-trivial expressive power if it is combined with counting; (2) both interleaving and synchronous selection have the same power as liberal selection, where arbitrarily many nodes can move at the same time (and therefore, one can design an automaton in an interleaving or synchronous model, which is less error prone, and then translate it to a liberal model). Some of the simulations we design to prove the results are of independent interest. In particular, we give explicit constructions showing how to simulate interleaving models by non-interleaving ones.

The paper is organized as follows. Section 2 introduces distributed automata and their variants. Sections 3 to 5 show that the variants collapse to at most the seven equivalence classes mentioned above. Section 6 contains separation results showing that the seven classes are different. Finally, Section 7 presents further results on their expressive power. Proofs missing or only sketched in the main text can be found in the Appendix.
Given sets X, Y, we denote by 2^X the power set of X, and by X^Y the set of functions Y → X. We define [m:n] := {i ∈ Z | m ≤ i ≤ n} and [n] := [0:n], for any m, n ∈ Z such that m ≤ n. Angle brackets indicate excluded endpoints, e.g., ⟨m:n] := [m+1:n] and [n⟩ := [0:n−1].

Let Λ be a finite set. A (Λ-labeled, undirected) graph is a triple G = (V, E, λ), where V is a finite nonempty set of nodes, E is a set of undirected edges of the form e = {u, v} ⊆ V such that u ≠ v, and λ: V → Λ is a labeling. Isomorphic graphs are considered to be equal. Convention: Throughout the paper, all graphs have at least two nodes and are connected.
Distributed automata take a graph as input, and either accept or reject it. To define them we first introduce distributed machines.
Distributed machines.
Let Λ be a finite set of symbols and let β ∈ N₊. A (distributed) machine with input alphabet Λ and counting bound β is a tuple M = (Q, δ₀, δ, Y, N), where Q is a finite set of states, δ₀: Λ → Q is an initialization function, δ: Q × [β]^Q → Q is a transition function, and Y, N ⊆ Q are two sets of accepting and rejecting states, respectively. The function δ updates the state of a node v based on the number of neighbors v has in each state, but it can only detect if v has 0, 1, ..., (β − 1), or at least β neighbors in a given state.

Selections, schedules, configurations, runs, and acceptance. A selection of a Λ-labeled graph G = (V, E, λ) is a set S ⊆ V, and a schedule of G is an infinite sequence of selections σ = (S₀, S₁, S₂, ...) ∈ (2^V)^ω. Intuitively, the selection S_t is the set of nodes activated by the scheduler at time t.

Let M = (Q, δ₀, δ, Y, N) be a distributed machine with input alphabet Λ. A configuration of M on G is a mapping C: V → Q. Given a configuration C and a node v ∈ V, we let N^C_v: Q → [β] denote the function that assigns to each state q the number of neighbors of v that are in state q up to threshold β, i.e., min{β, card({u | {u, v} ∈ E ∧ C(u) = q})}. We call N^C_v the β-bounded multiset of states of v's neighbors.

For any selection S, we define the successor configuration of C via S to be the configuration succ_δ(C, S) that one obtains from C if all nodes in S evaluate the transition function δ simultaneously while the remaining nodes keep their current state. Formally, for all v ∈ V,

  succ_δ(C, S)(v) = C(v) if v ∉ S, and δ(C(v), N^C_v) if v ∈ S.
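To make these definitions concrete, the bounded multiset N^C_v and the successor operation succ_δ(C, S) can be transcribed almost literally into executable form. The sketch below is our own illustration (the names Machine, bounded_multiset, successor, and run_prefix are hypothetical, not notation from the paper); since schedules are infinite, it can only simulate a finite prefix of one.

```python
# A sketch of the definitions above in Python (all names are our own
# illustrative choices, not notation from the paper).

class Machine:
    """M = (Q, δ0, δ, Y, N) with counting bound β; Q is left implicit."""
    def __init__(self, beta, init, delta, accepting, rejecting):
        self.beta = beta            # counting bound β
        self.init = init            # δ0 : Λ → Q
        self.delta = delta          # δ : Q × [β]^Q → Q (dict-valued second argument)
        self.accepting = accepting  # Y ⊆ Q
        self.rejecting = rejecting  # N ⊆ Q

def bounded_multiset(edges, config, v, beta):
    """N^C_v: for each state q, the number of v's neighbors in q, capped at β.
    States with count 0 are simply absent from the returned dict."""
    counts = {}
    for (u, w) in edges:
        if v in (u, w):
            q = config[w if u == v else u]
            counts[q] = min(beta, counts.get(q, 0) + 1)
    return counts

def successor(m, edges, config, selection):
    """succ_δ(C, S): selected nodes apply δ; the others keep their state."""
    return {v: m.delta(q, bounded_multiset(edges, config, v, m.beta))
               if v in selection else q
            for v, q in config.items()}

def run_prefix(m, nodes, edges, labels, selections):
    """Configurations C0, C1, ... along a finite prefix of a schedule."""
    config = {v: m.init(labels[v]) for v in nodes}
    prefix = [config]
    for sel in selections:
        config = successor(m, edges, config, sel)
        prefix.append(config)
    return prefix
```

For instance, a machine in the spirit of the automaton for the language B of Section 6 (black spreads to adjacent white nodes) reaches an all-black configuration on a path with one black endpoint after two synchronous steps.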
This brings us directly to the notion of a run. Given a schedule σ = (S₀, S₁, S₂, ...), the run of M on G scheduled by σ is the infinite sequence ρ = (C₀, C₁, C₂, ...) of configurations defined inductively as follows, where ∘ denotes function composition and t ∈ N:

  C₀ = δ₀ ∘ λ  and  C_{t+1} = succ_δ(C_t, S_t).

A configuration C is accepting if C(v) ∈ Y for every v ∈ V, and rejecting if C(v) ∈ N for every v ∈ V. A run ρ = (C₀, C₁, C₂, ...) of M on G is accepting if there is a time t₀ ∈ N such that C_t is accepting for every t ≥ t₀. In other words, a run is accepting if from some time on it only visits accepting configurations. Similarly, ρ is rejecting if eventually all visited configurations are rejecting. Following [3], we call this acceptance by stable consensus.

Distributed automata.
Not every schedule of a distributed machine models an execution; for example, schedules in which a node is never activated are usually considered illegal. We assume that distributed machines are controlled by a scheduler that ensures that the machine executes a legal run. Formally, a scheduler is a pair Σ = (s, f), where s is a selection constraint that assigns to every graph G = (V, E, λ) a set s(G) ⊆ 2^V of permitted selections such that every node v ∈ V occurs in at least one selection S ∈ s(G), and f is a fairness constraint that assigns to every graph G a set f(G) ⊆ s(G)^ω of fair schedules of G. We call the runs with schedules in f(G) fair runs (with respect to Σ).

A distributed automaton is a pair A = (M, Σ), where M is a machine and Σ is a scheduler satisfying the consistency condition: for every graph G, either all fair runs of M on G are accepting, or all fair runs of M on G are rejecting. Intuitively, the machine is “immune” to the scheduler because its answer is independent of the scheduler's choices. This formalizes the standard notion of “asynchronous distributed algorithm”. Notice that the consistency condition is a very strong semantic requirement. Although we will not do so in this paper, one can prove that it is undecidable whether a given pair (M, Σ) satisfies it.

A accepts G if every fair run of A on G is accepting, and rejects G otherwise. The language L(A) recognized by A is the set of graphs it accepts. Two automata are equivalent if they recognize the same language.

We classify automata according to four criteria: detection capabilities, acceptance condition, selection constraint, and fairness constraint. The first two criteria concern the distributed machine, and the other two the scheduler. For each criterion, we investigate some of the major options that have been considered in the literature.
Detection.
In some models, agents can only detect the existence of neighbors in a certain state. This corresponds to non-counting machines, i.e., machines with counting bound β = 1. Other models can detect the number of neighbors up to a higher bound [9].

Acceptance.
As mentioned above, distributed machines accept by stable consensus. This is the acceptance condition of population protocols and chemical reaction networks [3, 1, 14]. Other models consider a notion of acceptance where each node explicitly decides to accept or reject [9]. This notion is captured by halting automata. A machine M is halting if its transition function does not allow the nodes to leave accepting or rejecting states, i.e., if δ(q, P) = q for every q ∈ Y ∪ N and every β-bounded multiset P ∈ [β]^Q. In halting machines, each node knows whether the input graph will be accepted the moment it enters an accepting or rejecting state. Indeed, by the consistency condition, in every fair run, eventually either all nodes occupy accepting states, or all nodes occupy rejecting states. Since nodes can never leave an accepting state once they enter it, each node that enters such a state knows that all other nodes will eventually do likewise. The same applies to rejecting states.

Selection.
A scheduler Σ = (s, f) is synchronous on G = (V, E, λ) if s(G) = {V}. Intuitively, at every step all nodes make a move. Σ is exclusive or interleaving-based on G if s(G) = {{v} | v ∈ V}. Intuitively, at every step exactly one node makes a move, i.e., nodes execute steps in mutual exclusion. Finally, Σ is liberal on G if s(G) = 2^V. Intuitively, at every step an arbitrary subset of nodes makes a move. A scheduler is called synchronous if it is synchronous on every graph. Exclusive and liberal schedulers are defined analogously.

Fairness.
A schedule σ = (S₀, S₁, ...) of a graph G is weakly fair if for every node v of G, there exist infinitely many indices t such that v ∈ S_t. In other words, a schedule is weakly fair if every node is active infinitely often. A scheduler Σ = (s, f) is weakly fair if f(G) contains precisely the weakly-fair schedules of s(G)^ω for every graph G. This is the weakest fairness constraint one can impose on distributed automata; it only excludes runs in which a node crashes and does not participate in the computation anymore.

With respect to a given selection constraint s, a schedule σ = (S₀, S₁, ...) ∈ s(G)^ω of a graph G is strongly fair if for every finite sequence (T₀, ..., T_n) ∈ s(G)^* there exist infinitely many indices t such that (S_t, S_{t+1}, ..., S_{t+n}) = (T₀, T₁, ..., T_n). Intuitively, strong fairness requires that every possible finite sequence of selections is scheduled infinitely often. If every node is selected independently with positive probability, stochastic schedules are almost surely strongly fair. A scheduler Σ = (s, f) is strongly fair if for every graph G, the set f(G) contains precisely the strongly-fair schedules of s(G)^ω.

▶ Remark 1.
Whether a schedule σ of a graph G = (V, E, λ) is strongly fair or not depends on s(G). For example, if s(G) = {V}, then the synchronous schedule V^ω is strongly fair, but if s(G) = 2^V, then it is not.

Our notion of strong fairness implies an apparently stronger one, used frequently in the literature, stating that in a strongly fair run, a sequence of configurations that is enabled infinitely often must occur infinitely often:

▶ Lemma 2.
Let A be a strongly fair automaton and (D₀, ..., D_n) be a sequence of configurations of A such that D_{i+1} is the successor configuration of D_i via some selection S_i permitted by A, for i ∈ [0:n⟩. For any fair run ρ = (C₀, C₁, ...) of A, if C_i = D₀ for infinitely many indices i ∈ N, then (C_j, ..., C_{j+n}) = (D₀, ..., D_n) for infinitely many indices j ∈ N.

The classification above yields 24 classes of automata (four classes of machines and six classes of schedulers). To assign mnemonics to them, we use lowercase letters for the most restrictive machine variants (i.e., non-counting and halting), and the same letters in uppercase for the other variants. With schedulers we proceed the other way round, assigning lowercase letters to the most liberal variants (i.e., liberal selection and weak fairness). Intuitively, due to the consistency condition, the more liberal a scheduler, the harder it is for an automaton to recognize a graph language, because more runs have to yield the same result. So, loosely speaking, we expect the expressive power to increase with the number of uppercase letters.
Detection          Acceptance             Selection          Fairness
d: non-counting    a: halting             s: liberal         f: weak
D: counting        A: stable consensus    S: exclusive       F: strong
                                          $: synchronous

We denote each class of automata by a string wxyz ∈ {d, D} × {a, A} × {s, S, $} × {f, F}. The class of languages recognized by wxyz-automata is denoted G(wxyz). The following lemma states all relations between language classes that follow directly from the definitions. Statement 1 abbreviates “G(dxyz) ⊆ G(Dxyz) for all x ∈ {a, A}, y ∈ {s, S, $}, z ∈ {f, F}”. We use the same convention in Statements 2 to 5, and throughout the paper. That is, any statement with four-letter strings containing the wildcard symbol * must be expanded into the list of all statements that can be obtained by replacing identically positioned occurrences of * with the same letter.

▶ Lemma 3. 1. G(d***) ⊆ G(D***), 2. G(*a**) ⊆ G(*A**), 3. G(***f) ⊆ G(***F), 4. G(**sf) ⊆ G(**Sf), 5. G(**sf) ⊆ G(**$f), 6. G(**$F) ⊆ G(**$f).

Lemma 3 leads to the diagram in Figure 1, showing 20 automata classes (we have G(**$f) = G(**$F) by Statements 3 and 6). An arrow between two classes means that every graph language recognized by the source class is also recognized by the target class.

The reader probably finds Figure 1 very complicated. We also do, and this was the motivation for the present paper. How many of these classes are really different? In the next sections we show that classes with the same color have the same expressivity, and thus that the diagram of Figure 1 collapses to the one of Figure 4, which contains only seven classes.

[Figure 1: diagram of the 20 classes and their inclusions; omitted here]

Figure 1
Initial classification of the models according to the class of graph languages they recognize. Arrows indicate inclusion between classes of languages. The diagram can be thought of as lying in four-dimensional space, where each dimension represents one of our four parameters. The vectors of the “coordinate system” are labeled with the statement number of Lemma 3 that proves the inclusions in the corresponding direction. In the coming sections, classes are shown to be equal if and only if they have the same color, reducing the 20 classes to 7, as shown in Figure 4. This means in particular that we completely eliminate the dimension of selection (shown in dotted lines), leaving us with only three dimensions.
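The count of twenty can also be checked mechanically: enumerating the 24 strings wxyz and merging the synchronous classes **$f and **$F (Statements 3 and 6 of Lemma 3) leaves exactly 20 names. A small sketch (classes is our own hypothetical helper):

```python
from itertools import product

def classes():
    """Enumerate the class names of Figure 1: all strings wxyz, with the
    synchronous classes **$f and **$F merged into the wildcard form **$*."""
    names = set()
    for w, x, y, z in product('dD', 'aA', 'sS$', 'fF'):
        names.add(w + x + '$*' if y == '$' else w + x + y + z)
    return sorted(names)
```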
We prove that das*-automata have no expressive power, and the results in Sections 4 and 5 will generalize this to da**-automata. Intuitively, if agents cannot count their neighbors, and must reach a halting configuration, then they cannot distinguish any two graphs. Formally, a graph property is trivial if either every graph satisfies it, or no graph satisfies it. We have:

▶ Theorem 4. Every das*-automaton recognizes a trivial graph property.
Proof sketch.
By Statement 3 of Lemma 3, it suffices to prove the claim for dasF-automata. So let A be a dasF-automaton, and let G and H be two graphs (connected and with at least two nodes by convention). Assume that A accepts G but rejects H. By the consistency condition, all fair runs of A on G accept, and all fair runs on H reject. Now let ρ_G and ρ_H be any such runs, and let t ∈ N be a time at which all nodes in ρ_G and ρ_H have halted. We define a new graph K that consists of t copies {G_i}_{i ∈ [1:t]} and {H_i}_{i ∈ [1:t]} of G and H, with additional edges defined as follows. For each node w^X of the original graph X ∈ {G, H}, we denote its copy in X_i by w^X_i, where i ∈ [1:t]. Let u^G and v^G be two adjacent nodes of G, and u^H and v^H be two adjacent nodes of H. We add the connecting edges {u^X_i, v^X_{i+1}} for all i ∈ [1:t⟩ and X ∈ {G, H}, as well as the edge {u^G_t, u^H_t}. This is illustrated in Figure 2.

Figure 2
Graph K used in the proof of Theorem 4.

We show that there is a fair run ρ of A on K that neither accepts nor rejects. It follows that A does not satisfy the consistency condition, contradicting the hypothesis. Since A is a non-counting automaton, initially every node w^X_i except for u^G_t and u^H_t “sees” the same neighborhood as the corresponding node w^X in the original graph X. Only the two nodes u^G_t and u^H_t may have different neighborhoods than u^G and u^H, and this might affect their behavior starting at time 1. Their different behavior can be propagated to other nodes in subsequent rounds, but it takes time before it reaches every node. We exploit this to construct ρ in such a way that some nodes of K (those of G) reach an accepting state, while others (those of H) reach a rejecting state. Since A is a halting automaton, these nodes will never change their state again, and so the run is neither accepting nor rejecting. ◀

We show that every class with synchronous selection is equivalent to the corresponding class with liberal selection. Albeit non-trivial, this is easy to prove by a standard technique of distributed computing known as an alpha synchronizer. (The term was introduced in [4], but a similar idea appeared earlier in cellular automata theory [11].) Given a machine M = (Q, δ₀, δ, Y, N), we define a machine M̃ = (Q̃, δ̃₀, δ̃, Ỹ, Ñ) such that for every graph G, the unique synchronous run of M on G accepts (rejects) iff every weakly fair run ρ̃ of M̃ on G accepts (rejects). The gadget achieving this is called a “synchronizer”, because it ensures that the nodes of G behave “as in the synchronous case”, even when selection is liberal.

The set of states of M̃ is Q̃ := Q × Q × {0, 1, 2}. Given (q, q′, i) ∈ Q̃, we call q the past M-state, q′ the current M-state, and i the phase. The initialization function is given by δ̃₀(a) := (δ₀(a), δ₀(a), 0). Now let v be a node in state (q, q′, i).
If v is selected by the scheduler, its next state is determined as follows:

If at least one neighbor of v is in phase (i − 1) mod 3, then v does not change state. Intuitively, if some neighbor is still one phase behind, then v waits for it to “catch up”.

If every neighbor of v is in phase i or (i + 1) mod 3, then v moves to (q′, q″, (i + 1) mod 3), where q″ is defined as follows. Let N_v be the set of neighbors of v, and for each u ∈ N_v, let (q_u, q′_u, i_u) be the state of u. Further, let q″_u := q′_u if i_u = i, and q″_u := q_u if i_u = (i + 1) mod 3, and let M be the multiset over Q containing for each u ∈ N_v a copy of the state q″_u. (Loosely speaking, M contains the current M-states of the neighbors of v that are in the same phase as v, and the past M-states of the neighbors that are one phase ahead, i.e., the states they had when they were in the same phase as v.) Let M_β be given by M_β(q) = min{β, M(q)}. We define q″ := δ(q′, M_β); loosely speaking, v moves to the state it would move to in M if all its neighbors were in the same phase.

Let ρ̃ be any weakly-fair run of M̃ on a graph G. Fix a node v of G, and extract from ρ̃ the sequence q^1_0 q^1_1 q^1_2 q^2_0 q^2_1 q^2_2 ... q^i_0 q^i_1 q^i_2 ..., where q^i_j denotes the current M-state of v immediately after entering phase j for the i-th time. Now, let ρ be the unique synchronous run of M on G, and let q₀ q₁ q₂ ... be the sequence obtained by projecting ρ onto the states of v. It is easy to see that these two sequences coincide. By the definition of stable acceptance, ρ̃ accepts iff ρ accepts, and rejects iff ρ rejects. Using this construction, we obtain:

▶ Theorem 5.
For every **$*-automaton there is an equivalent **s*-automaton.
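The synchronizer's local update rule can be sketched as a pure function on a node's state triple. The following Python fragment is our own transcription of the construction above (sync_step is a hypothetical name; delta stands for δ and takes a β-bounded multiset represented as a dict):

```python
# Our transcription of the synchronizer transition of the machine M~.
# A node's state is a triple (past, cur, phase) with phase in {0, 1, 2}.

def sync_step(delta, beta, state, neighbor_states):
    past, cur, phase = state
    # Rule 1: if some neighbor is still one phase behind, wait for it.
    if any(ph == (phase - 1) % 3 for (_, _, ph) in neighbor_states):
        return state
    # Rule 2: every neighbor is in phase i or (i+1) mod 3, so advance.
    multiset = {}
    for (p, c, ph) in neighbor_states:
        # Same phase: use the neighbor's current M-state; one phase ahead:
        # use its past M-state (its state when it was still in our phase).
        q = c if ph == phase else p
        multiset[q] = min(beta, multiset.get(q, 0) + 1)
    return (cur, delta(cur, multiset), (phase + 1) % 3)
```

A selected node thus either waits for a lagging neighbor or simulates one synchronous step of M; under any weakly fair schedule, every node traverses the same sequence of current M-states as in the unique synchronous run.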
In this section, we obtain the rather surprising result that the computational power of a class of automata does not increase if we restrict its schedulers to interleaving ones (which guarantees that agents act in mutual exclusion with all other agents).
We start by considering strongly fair models, i.e., we compare a class of the form **sF with the corresponding class **SF. On an intuitive level, their equivalence might be less surprising than the subsequent result presented in Section 5.2, because strong fairness provides a way to break symmetry, which can be exploited to simulate exclusivity. Nevertheless, neither class trivially subsumes the other, so we have to prove inclusions in both directions.

▶ Theorem 6. For every **sF-automaton there is an equivalent **SF-automaton.
Proof sketch.
Given a **sF-automaton A, we construct a **SF-automaton B such that for all input graphs G, every strongly fair run of B on G simulates a strongly fair run of A on G. The difficulty lies in the fact that A and B do not share the same notion of strong fairness, because they have different selection constraints. While A's liberal scheduler guarantees that arbitrary sequences of selections will occur infinitely often, B's exclusive scheduler can select only one node at a time. To simulate A's behavior with B, we adapt the synchronizer from Section 4. Just like there, nodes keep track of their previous and current state in A, as well as the current phase number modulo 3. However, instead of updating their state in every phase, they only do so if an additional activity flag is set. Thus, we can simulate an arbitrary selection S by raising the flags of exactly those nodes that lie in S. The outcome of a phase simulated in this way will be the same as if all the nodes in S made a transition simultaneously. The main issue is how to set the activity flags in each phase in such a way that every finite sequence (S₀, ..., S_n) of selections is guaranteed to occur infinitely often. We show that this is possible, exploiting the fact that B's scheduler is strongly fair. ◀

▶ Theorem 7. For every **SF-automaton there is an equivalent **sF-automaton.
Proof sketch.
First, we note that the only way exclusivity could possibly be useful is to break symmetry between adjacent nodes. This is because for an independent set (i.e., a set of pairwise non-adjacent nodes), the order of activation is irrelevant: whether the scheduler activates them all at once or one by one in some arbitrary order, the outcome will always be the same. Consequently, to simulate a run with exclusivity, it suffices to simulate a run where no two adjacent nodes are active at the same time. We provide a simple protocol that makes use of the strong fairness constraint (in an environment with liberal selection) to ensure that if a node wants to execute a transition, then it will eventually be able to do so while all its neighbors remain passive. ◀
We now show that even in the absence of strong fairness, the restriction to interleaving schedulers does not increase expressive power. At first sight, this may be quite surprising, because exclusivity inherently breaks symmetry, whereas an automaton with liberal selection and weak fairness can always be assumed to run synchronously and is thus incapable of breaking symmetry. In fact, it is easy to come up with examples of automata that exploit exclusivity to ensure termination.

▶ Proposition 8. For every **sf-automaton, there exists a **Sf-automaton that recognizes the same graph language but makes use of exclusive selection to ensure termination. If run synchronously, it never terminates (and hence it is not a valid **sf-automaton).
However, although the automata described in Proposition 8 make use of exclusivity, they do not really benefit from it; they only recognize languages that can also be recognized by liberal automata. As we will see in Theorem 11, this observation can be generalized to arbitrary **Sf-automata. Intuitively, since exclusivity does not add any expressive power, it can in a certain sense be simulated without needing to break symmetry.

The proof of Theorem 11 is based on the notion of Kronecker cover. The Kronecker cover (also known as bipartite double cover) of a graph G = (V, E, λ) is the bipartite graph G′ = (V′, E′, λ′) where V′ = V × {0, 1}, E′ = ⋃_{{u,v} ∈ E} {{(u,0), (v,1)}, {(u,1), (v,0)}}, and λ′((v, i)) = λ(v) for all (v, i) ∈ V′. An example is provided in Figure 3.

Figure 3
A graph (on the left) and its Kronecker cover (on the right).
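The cover construction is easy to experiment with. The following sketch (kronecker_cover and is_connected are our own helper names, with graphs given as node and edge lists) also illustrates the connectivity behavior visible in Figure 3:

```python
def kronecker_cover(nodes, edges, labels):
    """Bipartite double cover: two copies (v,0), (v,1) of each node, with
    each edge {u,v} lifted to {(u,0),(v,1)} and {(u,1),(v,0)}."""
    nodes2 = [(v, i) for v in nodes for i in (0, 1)]
    edges2 = [((u, a), (v, 1 - a)) for (u, v) in edges for a in (0, 1)]
    labels2 = {(v, i): labels[v] for (v, i) in nodes2}
    return nodes2, edges2, labels2

def is_connected(nodes, edges):
    """Plain BFS connectivity check (our own helper)."""
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for (u, v) in edges:
        adj[u].add(v); adj[v].add(u)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        frontier = [w for v in frontier for w in adj[v] - seen]
        seen.update(frontier)
    return len(seen) == len(nodes)
```

On a triangle (an odd cycle) the cover comes out connected, while on a single edge (a bipartite graph) it splits into two components, matching the criterion of the next lemma.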
The Kronecker cover in Figure 3 is connected because the nodes in {u, v, w} × {0, 1} form a cycle. The following lemma generalizes this observation.

▶ Lemma 9. The Kronecker cover of a connected graph G is connected if and only if G contains a cycle of odd length (i.e., if and only if G is non-bipartite).

If a Kronecker cover is connected, then it constitutes a legal input for a distributed automaton. The next key lemma shows that, in this case, a weakly fair automaton cannot even distinguish between a graph and its Kronecker cover.

▶ Lemma 10. For every ***f-automaton A with input alphabet Λ and every non-bipartite Λ-labeled graph G, A accepts G if and only if it accepts the Kronecker cover of G.

We can now prove the main technical result of this section:

▶ Theorem 11. For every **Sf-automaton there is an equivalent **sf-automaton.
Proof sketch.
Given a **Sf-automaton A, we construct an equivalent **$f-automaton B (i.e., a synchronous automaton). This is sufficient to prove the claim, because we know from Theorem 5 that B can always be simulated by a **sf-automaton using a synchronizer.

Let G be an input graph for A. If we were guaranteed that the labels of G define a proper vertex coloring (i.e., edges connect nodes of different colors), then the task would be straightforward. Indeed, since each color of a proper coloring represents an independent set, B could simply operate in cyclically repeating phases, each one activating precisely the nodes of one of the colors. As explained in the proof of Theorem 7, such a run is equivalent to a run of an exclusive scheduler that activates the nodes of each independent set one by one (in some arbitrary order).

This approach can be adapted to bipartite graphs because a bipartite graph has exactly two possible 2-colorings. However, computing one of the two 2-colorings would require breaking symmetry, which a **$f-automaton cannot do. So instead, the states of automaton B have two components, one corresponding to each coloring, and nodes update both components when they are activated.

Using these ideas, we construct B in such a way that it recognizes the same bipartite graphs as A. Then we use Lemmas 9 and 10 to prove that L(A) = L(B). Indeed, if G is not bipartite, then by Lemma 9, its Kronecker cover G′ is connected and therefore constitutes a legal input for a distributed automaton. By Lemma 10, B accepts G if and only if it accepts G′. Since Kronecker covers are bipartite by definition, we know from the above discussion that B accepts G′ if and only if A accepts G′. Finally, again by Lemma 10, A accepts G′ if and only if it accepts G. From this chain of equivalences, we can conclude that G is accepted by B if and only if it is accepted by A. ◀
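The bipartite case of this proof rests on the fact that a connected bipartite graph has exactly two proper 2-colorings, each the complement of the other. The following centralized sketch (two_colorings is our own name) computes both at once; the automaton B of the proof maintains the analogous two components locally, without ever choosing between them:

```python
def two_colorings(nodes, edges):
    """For a connected bipartite graph, return its two proper 2-colorings
    (each the complement of the other); return None if an odd cycle is
    found, i.e., if the graph is not bipartite."""
    adj = {v: set() for v in nodes}
    for (u, v) in edges:
        adj[u].add(v); adj[v].add(u)
    color = {nodes[0]: 0}
    frontier = [nodes[0]]
    while frontier:
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]   # alternate colors by BFS layer
                    nxt.append(w)
                elif color[w] == color[v]:
                    return None               # odd cycle: not bipartite
        frontier = nxt
    return color, {v: 1 - c for v, c in color.items()}
```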
In Sections 3, 4 and 5 we have shown that the classes of graph languages in Figure 1 collapse to at most the seven classes shown on the left of Figure 4. In this section we show that the seven classes are all different. For this we examine four graph languages, and determine which classes are expressive enough to recognize them:

B: The language of graphs with set of labels {black, white} having at least one black node.
S: The language of star graphs, i.e., the set of all connected, unlabeled graphs in which one node (the center) has degree at least 2, and all others (the leaves) have degree 1.
C: The language containing one single graph, namely the cycle C_3 with three nodes labeled by 0, 1, and 2, respectively.
S_even: The language of even stars, i.e., the graphs of S with an even number of leaves.

The results are summarized on the right of Figure 4.

Recognizing properties of labeled graphs: the language B

The main difference between the two types of acceptance is that halting automata cannot recognize properties that require nodes to wait an unlimited amount of time for some information that may never arrive, while even the simplest class of automata accepting by stable consensus can recognize some of those properties, such as B.

[Figure 4, left: diagram of the seven classes dasf, Dasf, dAsf, DasF, DAsf, dAsF, DAsF]
Class   B   S   C   S_even
DAsF    ✓   ✓   ✓   ✓
DasF    ✗   ✓   ✗   ✓
DAsf    ✓   ✓   ✗   ✗
dAsF    ✓   ✓   ✓   ✗
Dasf    ✗   ✓   ✗   ✗
dAsf    ✓   ✗   ✗   ✗
dasf    ✗   ✗   ✗   ✗
Figure 4 On the left, quotient of the classification of Figure 1. On the right, four graph languages, and the automata models capable of recognizing them.

▶ Proposition 12. B is recognizable by a dAsf-automaton, but not by any *a**-automaton.

Proof sketch.
The dAsf-automaton has two states, called black and white. The initial state of a node is given by its label. Black nodes remain always black, and white nodes with a black neighbor become black. Since graphs are connected by assumption, if a graph contains some black node then eventually all nodes are black; otherwise all nodes stay white.

For the second part, one can show that DasF-automata cannot distinguish between an entirely white cycle and a sufficiently long path graph whose nodes are all white except for two black nodes at the endpoints. (The argument is similar to the proof of Theorem 4.) ◀
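The first half of the argument can be replayed directly. Below is a minimal sketch (in Python; function and state names are ours) of the synchronous run of this two-state automaton; on a connected graph the run stabilizes after at most |V| rounds.

```python
def accepts_B(adj, labels):
    """Synchronous run of the two-state automaton: black nodes stay black,
    and a white node with a black neighbor turns black.  After |V| rounds
    the run has stabilized on any connected graph."""
    state = dict(labels)
    for _ in range(len(adj)):
        state = {v: 'black' if state[v] == 'black'
                 or any(state[u] == 'black' for u in adj[v]) else 'white'
                 for v in adj}
    return all(s == 'black' for s in state.values())  # stable consensus

path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}      # path a - b - c
print(accepts_B(path, {'a': 'black', 'b': 'white', 'c': 'white'}))  # True
print(accepts_B(path, {v: 'white' for v in path}))                  # False
```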
Recognizing properties of unlabeled graphs: the language S

We show in Proposition 13 that dAsf-automata cannot recognize any non-trivial property of unlabeled graphs (which we identify with the labeled graphs whose nodes all carry the same label). That is, while dAsf-automata can recognize properties of the labeling of a graph, they cannot recognize any non-trivial property of its structure. Then we show in Proposition 14 that the strong fairness of dAsF-automata allows them to recognize S.

▶ Proposition 13. dAsf-automata can only recognize trivial properties of unlabeled graphs. In particular, S is not recognizable by a dAsf-automaton.

Proof.
Let A be a dAsf-automaton, and let ρ = (C_0, C_1, ...) be the synchronous run of A on an unlabeled graph G = (V, E), i.e., the run scheduled by V^ω. We show that A either accepts all unlabeled graphs, or rejects all unlabeled graphs. Since V^ω is a weakly fair schedule, ρ is a fair run, and so by the consistency condition A accepts G iff ρ is accepting. Since G is unlabeled, in C_0 every node of G is in the same state q_0, which is independent of G. Moreover, since ρ is synchronous and A is non-counting, in each configuration C_i every node of G is in the same state q_i, which is also independent of G. So the states visited by ρ are independent of G, and so A either accepts all unlabeled graphs, or rejects all unlabeled graphs. ◀

▶ Proposition 14. S is recognizable by a dAsF-automaton and by a Dasf-automaton.
Proof sketch.
We give a dAsF-automaton that recognizes S. The states of the automaton are pairs (d, c), where d ∈ {leaf, center, unknown, neither} is node v's estimate of its role, and c ∈ {0, 1} is its color. Every time a node is selected it flips its color. When a node with estimate unknown sees two neighbors with different colors, it switches to center, and if from then on it sees a neighbor with estimate center, it moves to neither. Strong fairness is crucial for correctness: by Lemma 2, it ensures that a node that is not a leaf will eventually be selected in a configuration in which at least two of its neighbors have different colors.

Now we give a Dasf-automaton with β = 2 that recognizes S. Since β = 2, a node can determine for each state q if it has 0, 1, or at least 2 neighbors in q. The automaton's states are {init, leaf, non-leaf, accept, reject}. Initially all nodes are in state init. The nodes update their estimates depending on the number of neighbors (0, 1, or at least 2) in each state. ◀

Symmetry breaking: the language C

We show that the language C requires both acceptance by stable consensus and strong fairness to be recognizable. Intuitively, both of them are required to distinguish C_3 from arbitrarily long cycles that repeat the labeling of C_3 cyclically.

▶ Proposition 15. C is recognizable by a dAsF-automaton, but neither by DA*f-automata nor by Da*F-automata.

Proof sketch.
Our dAsF-automaton for C checks two conditions: first, that the input graph is a cycle with cyclic labeling 0−1−2, and second, that it contains exactly one node labeled by 2 (which implies that the cycle has length 3). For both conditions, we use a similar trick as in Proposition 14, relying on acceptance by stable consensus and strong fairness to eventually break symmetry between otherwise indistinguishable nodes. To verify the second condition, each node labeled by 2 successively sends signals in both directions through the cycle, and checks that those signals always come back from the expected direction.

For the second part of the claim, we show that DA*f- and Da*F-automata cannot distinguish C_3 from C_6, the hexagon whose nodes are cyclically labeled by 0−1−2−0−1−2. Given a fair run ρ of such an automaton on C_3, we construct a fair run ρ′ on C_6 that "duplicates" the behavior of ρ. In the case of Da*F-automata, this duplication is performed only until ρ has reached a halting configuration (because otherwise ρ′ would violate the strong fairness constraint). ◀

Counting neighbors modulo a number: the language S_even

Since counting automata can only count up to a threshold β, no node can directly observe that it has an even number of neighbors. This makes the language S_even rather difficult to recognize. We now show that the combination of counting and strong fairness can do the job. The proof also provides a good example where exclusivity helps to design an algorithm.

▶ Proposition 16. S_even is recognizable by a DasF-automaton.

Proof sketch.
In Proposition 14 we have exhibited a Dasf-automaton A recognizing S. We now give a DaSF-automaton B that uses counting, exclusivity, and strong fairness to further decide if the number of leaves is even. Loosely speaking, B first executes A; if A rejects, then B rejects, because the graph is not even a star. If A accepts, then B enters a new phase during which it counts the number of leaves modulo 2. By Theorem 7, B is equivalent to a DasF-automaton.

We can assume that when A accepts, all nodes are labeled with either leaf or center (the unique non-leaf). We give an informal description of B. Leaves can be in states visible, invisible, dead, even, or odd. While leaves have not been counted by the center, they alternate between the states visible and invisible. The center only increments its modulo-2 counter if exactly one leaf is visible. After a leaf is counted, it moves to dead. When all leaves become dead, i.e., when they have all been counted, the center decides whether to accept or reject; the leaves read the decision from the counter, and move to even or odd accordingly. ◀

The next two results show that recognizing S_even needs both counting and strong fairness.

▶ Proposition 17. S_even is not recognizable by DA*f-automata.
Proof.
We show that for every DA*f-automaton A there exist stars G_1 and G_2 such that exactly one of G_1 and G_2 belongs to S_even, but A either accepts both of them or rejects both of them. Let β ≥ 1 be A's counting bound, and let G_1 and G_2 be the stars with β + 1 and β + 2 leaves, respectively. Now consider the synchronous runs ρ_1 and ρ_2 of A on G_1 and G_2. By symmetry, and since the number of leaves exceeds β in both G_1 and G_2, at every time t ∈ N, the center is in the same state in ρ_1 and ρ_2, and likewise all leaves are in the same state. So the sequences of states visited by the center and the leaves are the same in both ρ_1 and ρ_2, and therefore ρ_1 is accepting iff ρ_2 is accepting. ◀

▶ Proposition 18. S_even is not recognizable by dA*F-automata.

Proof sketch.
Given a dA*F-automaton A, the proof identifies an even number n, depending on A, such that if A accepts the star with n leaves, then it cannot reject the star with n + 1 leaves. The proof is involved, and can be found in the Appendix. ◀

As a first application of our results, we investigate the expressivity of our models for graph languages that depend only on the labeling function of a graph, and not on its topology. Given a Λ-labeled graph G = (V, E, λ), where Λ = {ℓ_1, ..., ℓ_k}, let #G : Λ → N be the mapping that assigns to each label ℓ the number #G(ℓ) of nodes v ∈ V such that λ(v) = ℓ. A language is Presburger-definable if there is a formula φ(x_1, ..., x_k) of Presburger arithmetic such that a Λ-labeled graph G belongs to the language if and only if φ(#G(ℓ_1), ..., #G(ℓ_k)) holds. An example of such a language is B, the set of graphs that contain a black node.

We show that DAsF-automata recognize all Presburger languages, but none of the other six classes do. The negative part of the result follows easily from the table in Figure 4.

▶ Proposition 19. There exist Presburger-definable languages that are not recognizable by d***-, *a**-, or ***f-automata.

Proof.
By Proposition 12, *a**-automata cannot recognize the language B, which is Presburger-definable. Furthermore, by Propositions 14, 17 and 18, dA*F- and DA*f-automata can recognize the language S of star graphs but not the language S_even of stars with an even number of leaves. This implies that dA*F- and DA*f-automata cannot recognize the Presburger-definable language of graphs with an odd number of nodes, because the intersection of this language with S is equal to S_even, and languages recognizable by distributed automata are closed under intersection (by a standard product construction). ◀

For the positive part, we proceed in three steps: First, following [1] and Section 5 of [3], we introduce graph population protocols, a graph variant of the well-known population protocol model introduced in [2, 3]. Then we recall a result of [3] showing that graph population protocols recognize all Presburger-definable languages. Finally, we show that every graph population protocol can be simulated by a
DAsF-automaton.

Our definition of graph population protocols is equivalent to that of [1, 3], but reuses the notation of Section 2 as far as possible. A graph population protocol Π = (Q, δ_0, δ, Y, N) is defined like a DASF-automaton with machine M = Π, except for the following differences:
- The transition function is of the form δ : Q × Q → Q × Q.
- A selection of a graph G = (V, E, λ) is an ordered pair S = (u, v) ∈ V × V of adjacent nodes (instead of a singleton {u} ⊆ V), and the selection constraint on G is {(u, v) | {u, v} ∈ E}.
- C_t(v) is defined inductively as follows, for t ∈ N and v ∈ V: C_0(v) = δ_0(λ(v)) and

  C_{t+1}(v) = δ(C_t(v), C_t(u))_fst   if S_t = (v, u) for some u,
               δ(C_t(u), C_t(v))_snd   if S_t = (u, v) for some u,
               C_t(v)                  otherwise,

  where P_fst and P_snd denote the first and second component of a pair P.

So, intuitively, the scheduler selects two adjacent nodes, which update their states according to δ. The definitions of all other relevant notions remain the same. This holds in particular for acceptance by stable consensus and strong fairness (which are baked into the model), and the consistency condition. Standard population protocols correspond to graph population protocols on complete graphs, where every pair of distinct nodes is connected by an edge.

It is shown in [3] that standard population protocols recognize all Presburger-definable languages. Further, Theorem 7 of [3] shows that every language recognized by population protocols is also recognized by graph population protocols. Loosely speaking, given a population protocol, one constructs the protocol on graphs in which, when an edge of the graph is selected, either the two nodes connected by it interact as in the population protocol, or they swap their states. By strong fairness, the states of the nodes can "move around the graph", and any pair of states eventually interacts infinitely often. The choice between interacting or swapping is nondeterministic, but it can be simulated by deterministic transitions (see [3]). Therefore, in order to show that DA*F-automata recognize all Presburger-definable languages, it suffices to simulate graph population protocols with distributed automata. As in the proof of Proposition 16, we make use of exclusivity to simplify the construction.
▶ Proposition 20. For every graph population protocol there is an equivalent DA*F-automaton.

Proof sketch.
We present a simulation that runs a population protocol on a distributed automaton. To this end, the automaton has to simulate a scheduler that selects ordered pairs of adjacent nodes instead of arbitrary sets of nodes. For any pair (u, v) that is selected to perform a transition, let us call u the initiator and v the responder of the transition. By Theorem 7, we may assume that the automaton's scheduler selects a single node in each step.

The main idea is as follows: When a node u is selected and sees that it can become the initiator of a transition, it declares its intention to do so by raising the flag "?". Then u waits until some neighbor v is selected and raises the flag "!", which signals that v wants to become the responder of a transition. If this happens, the next time u is selected, it computes its new state according to the state of v and the transition function of the population protocol, but also keeps its old state in memory so that v can still see it. After that, v also updates its state, and finally u deletes its old state, which completes the transition. Throughout this protocol, the nodes verify that they have exactly one partner during each transition. If this condition is violated, they raise the error flag "⊥" and abort their current transition. ◀

▶ Corollary 21. DA*F-automata recognize all Presburger-definable languages.
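The interact-or-swap idea behind these results can be watched in action. The following small randomized simulation (our own illustrative sketch in Python, not the construction from [3]; all names are ours) runs a graph population protocol for the Presburger-definable predicate "the number of nodes labeled 1 is odd" on a path graph; a random edge scheduler produces strongly fair schedules with probability 1.

```python
import random

def parity_protocol(edges, bits, steps=50000, seed=0):
    """Illustrative graph population protocol for 'the number of 1-labeled
    nodes is odd'.  States are pairs (alive, value).  A selected edge either
    swaps the two states (letting states travel across the graph, as in the
    trick recalled from [3]) or interacts: two alive agents merge into one
    alive agent holding the XOR of their values, and a dead agent copies an
    alive agent's value as its output guess."""
    rng = random.Random(seed)
    state = {v: (True, b) for v, b in bits.items()}
    for _ in range(steps):
        u, v = rng.choice(edges)
        if rng.random() < 0.5:                             # swap move
            state[u], state[v] = state[v], state[u]
        else:                                              # interaction move
            (au, xu), (av, xv) = state[u], state[v]
            if au and av:
                state[u], state[v] = (True, xu ^ xv), (False, xu ^ xv)
            elif au:
                state[v] = (False, xu)   # dead node adopts alive's value
            elif av:
                state[u] = (False, xv)
    return {v: x for v, (a, x) in state.items()}

edges = [(0, 1), (1, 2), (2, 3)]             # path graph on four nodes
print(parity_protocol(edges, {0: 1, 1: 0, 2: 1, 3: 0}))  # two 1-labels: even
print(parity_protocol(edges, {0: 1, 1: 0, 2: 0, 3: 0}))  # one 1-label: odd
```

The key invariant is that the XOR of the values held by alive agents always equals the input parity, so with overwhelming probability all outputs stabilize to 0 in the first call and to 1 in the second.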
We have conducted an extensive comparative analysis of the expressive power of weak asynchronous models of distributed computing. Our analysis has reduced the initial "jungle" of twenty different models to only seven. This reduction in complexity is achieved by Theorems 4, 5, 6, 7, and 11, all of which have a clear and intuitive interpretation.

We have also shown that the seven classes are distinct, and have identified inclusions and non-inclusions between them. However, two inclusions remain open: Are Dasf or DAsf included in dAsF? Intuitively, this asks if strong fairness and acceptance by stable consensus can be used to simulate counting. We can provide a positive answer for graphs of bounded degree (a limitation common in practice), because in this case even dA*F and DAsF coincide.
▶ Proposition 22. For every DA*F-automaton A and every k ∈ N there is a dA*F-automaton B equivalent to A on graphs of maximum degree k.

However, for arbitrary graphs we conjecture that neither Dasf nor DAsf are included in dAsF. Finally, we have made a first step towards characterizing the graph languages recognizable by the different classes, by transferring a characterization for population protocols.

As a last note, observe that our results hold for decision problems on undirected graphs that can be solved by consensus in the framework of distributed automata. Several of our constructions (e.g., those in Theorems 5 and 7) rely on bidirectional communication, which is not guaranteed on directed graphs. Furthermore, exclusive selection leads to higher computational power for non-decision problems. For instance, it can be used to solve the vertex coloring problem on graphs of bounded degree (by a standard greedy algorithm), which, for symmetry reasons, is impossible in a model with synchronous selection.
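The greedy coloring just mentioned can be sketched in a few lines (a minimal illustration; the function name is ours, and the fixed activation order stands in for the exclusive scheduler's choices):

```python
def greedy_coloring(adj, order):
    """Greedy coloring under exclusive selection: the activated node takes the
    smallest color not used by its already-colored neighbors.  On graphs of
    maximum degree k, at most k + 1 colors are ever needed.  Synchronous
    selection could not do this: symmetric nodes activated together would
    always pick the same color."""
    color = {}
    for v in order:                        # the exclusive scheduler's choices
        taken = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj[v]) + 1) if c not in taken)
    return color

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # triangle + pendant
col = greedy_coloring(graph, [0, 1, 2, 3])
print(col)
print(all(col[u] != col[v] for u in graph for v in graph[u]))  # True: proper
```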
References

[1] Dana Angluin, James Aspnes, Melody Chan, Michael J. Fischer, Hong Jiang, and René Peralta. Stably computable properties of network graphs. In International Conference on Distributed Computing in Sensor Systems, pages 63–74. Springer, 2005.
[2] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. In PODC, pages 290–299. ACM, 2004.
[3] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006.
[4] Baruch Awerbuch. Complexity of network synchronization. J. ACM, 32(4):804–823, 1985. doi:10.1145/4221.4227.
[5] Alejandro Cornejo and Fabian Kuhn. Deploying wireless networks with beeps. In DISC, volume 6343 of Lecture Notes in Computer Science, pages 148–162. Springer, 2010.
[6] Reinhard Diestel. Graph Theory, 5th Edition, volume 173 of Graduate Texts in Mathematics. Springer, 2017.
[7] Yuval Emek and Roger Wattenhofer. Stone age distributed computing. In PODC, pages 137–146. ACM, 2013.
[8] Nissim Francez. Fairness. Texts and Monographs in Computer Science. Springer, 1986.
[9] Lauri Hella, Matti Järvisalo, Antti Kuusisto, Juhana Laurinharju, Tuomo Lempiäinen, Kerkko Luosto, Jukka Suomela, and Jonni Virtema. Weak models of distributed computing, with connections to modal logic. Distributed Computing, 28(1):31–53, 2015.
[10] Daniel Lehmann, Amir Pnueli, and Jonathan Stavi. Impartiality, justice and fairness: The ethics of concurrent termination. In ICALP, volume 115 of Lecture Notes in Computer Science, pages 264–277. Springer, 1981.
[11] Katsuhiko Nakamura. Synchronous to asynchronous transformation of polyautomata. J. Comput. Syst. Sci., 23(1):22–37, 1981. doi:10.1016/0022-0000(81)90003-9.
[12] Saket Navlakha and Ziv Bar-Joseph. Distributed information processing in biological and computational systems. Commun. ACM, 58(1):94–102, 2015. doi:10.1145/2678280.
[13] Fabian Reiter. Asynchronous distributed automata: A characterization of the modal mu-fragment. In ICALP, volume 80 of LIPIcs, pages 100:1–100:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
[14] David Soloveichik, Matthew Cook, Erik Winfree, and Jehoshua Bruck. Computation with finite stochastic chemical reaction networks. Natural Computing, 7(4):615–633, 2008.
A Appendix

A.1 Proofs of Section 2

▶ Lemma 2. Let A be a strongly fair automaton and (D_0, ..., D_n) be a sequence of configurations of A such that D_{i+1} is the successor configuration of D_i via some selection S_i permitted by A, for i ∈ [0 : n). For any fair run ρ = (C_0, C_1, ...) of A, if C_i = D_0 for infinitely many indices i ∈ N, then (C_j, ..., C_{j+n}) = (D_0, ..., D_n) for infinitely many indices j ∈ N.

Proof.
Let E = {E_1, ..., E_k} be the set of configurations that occur infinitely often in ρ. Notice that these configurations can all reach each other because otherwise they could not occur infinitely often. The assumption is that D_0 ∈ E. We construct a finite sequence σ of selections permitted by A such that for every i ∈ [1 : k], the sequence of configurations visited starting from E_i and applying σ contains either the subsequence (D_0, ..., D_n), or some configuration E′ ∉ E. This suffices to prove the claim because from a certain point on, ρ visits only configurations in E, and by strong fairness the schedule fragment σ is guaranteed to be chosen infinitely often by the scheduler. Since any configuration E′ ∉ E may only occur finitely often, the only possibility is that the subsequence (D_0, ..., D_n) occurs infinitely often.

It remains to construct a suitable sequence σ. We proceed by induction, constructing a series of sequences σ_0, σ_1, ..., σ_k such that for j ∈ [0 : k], the sequence σ_j satisfies the desired property for every i ∈ [1 : j]. It then suffices to choose σ = σ_k. As the base case, we set σ_0 = ε (the empty sequence). Now, given σ_j, we distinguish two cases in order to construct σ_{j+1}. If starting from E_{j+1} and applying σ_j the automaton visits some configuration E′ ∉ E, then we simply set σ_{j+1} = σ_j. Otherwise, let E′_{j+1} be the final configuration reached from E_{j+1} by applying σ_j. Since E′_{j+1} ∈ E and D_0 ∈ E, there exists a sequence of selections σ′ that leads the automaton from E′_{j+1} to D_0. Therefore, if starting from E_{j+1}, the automaton applies the schedule fragment σ_{j+1} = σ_j · σ′ · S_0 ⋯ S_{n−1}, then it traverses a sequence of configurations ending with (D_0, ..., D_n). Moreover, since σ_j is a prefix of σ_{j+1}, the property already established for σ_j with respect to E_1, ..., E_j also holds for σ_{j+1}. ◀

▶ Lemma 3.
G(d***) ⊆ G(D***), G(*a**) ⊆ G(*A**), G(***f) ⊆ G(***F), G(**sf) ⊆ G(**Sf), G(**sf) ⊆ G(**$f), and G(**$F) ⊆ G(**$f).

Proof.
1. Non-counting automata are a subclass of counting automata.
2. Halting automata are a subclass of automata accepting by stable consensus.
3. Let A = (M, s, f) be a ***f-automaton, and let G = (V, E, λ) be a graph. The set f(G) contains the weakly fair runs of s(G)^ω. Now consider A′ = (M, s, f′), where f′(G) contains the strongly fair runs of s(G)^ω. Since the set of permitted selections is the same for A and A′, we have f′(G) ⊆ f(G). Therefore, since A satisfies the consistency condition, so does A′, and thus A′ is a ***F-automaton with L(A′) = L(A).
4. Let A = (M, s, f) be a **sf-automaton, and let G = (V, E, λ) be a graph. We have s(G) = 2^V, and f(G) contains the weakly fair runs of s(G)^ω. Let s′(G) = {{v} | v ∈ V}, and let f′(G) be the weakly fair runs of s′(G)^ω. We have f′(G) ⊆ f(G). Proceed now as in Statement 3.
5. The argument is fully analogous to that of Statement 4, the only difference being that s′(G) = {V}.
6. Let A = (M, s, f) be a **$F-automaton. We have s(G) = {V}. Further, the run ρ scheduled by V^ω is strongly fair (because V is the only possible selection). So f(G) = {ρ}. Let A′ = (M, s, f′) be the unique **$f-automaton with machine M. Since the run ρ scheduled by V^ω is also weakly fair, we have f′(G) = {ρ} = f(G). It follows that L(A′) = L(A). ◀

A.2 Proofs of Section 3
▶ Theorem 4. Every das*-automaton recognizes a trivial graph property.
Proof.
By Statement 3 of Lemma 3, it suffices to prove the claim for dasF-automata. So let us consider a dasF-automaton A, and assume for the sake of contradiction that there exist two graphs G and H such that A accepts G and rejects H. Let ρ_G = (C_0^G, C_1^G, ...) and ρ_H = (C_0^H, C_1^H, ...) be strongly fair runs of A on G and H, respectively. By the consistency condition, ρ_G is accepting and ρ_H is rejecting. Based on that, we will construct a new graph K and a strongly fair run ρ of A on K that is neither accepting nor rejecting. This means that A does not satisfy the consistency condition, and therefore does not qualify as a distributed automaton, a contradiction.

We start by constructing K. Let t ∈ N be a time at which all nodes in ρ_G and ρ_H have halted (i.e., all nodes in C_t^G and C_t^H have reached an accepting or rejecting state). Our new graph K consists of t copies {G_i}_{i ∈ [1:t]} of G and t copies {H_i}_{i ∈ [1:t]} of H, which are connected as follows. For each node w^X of the original graph X ∈ {G, H}, we denote its copy in X_i by w_i^X, where i ∈ [1 : t]. Let u^G and v^G be two adjacent nodes of G, and u^H and v^H be two adjacent nodes of H. (Recall that all graphs are assumed to be connected and have at least two nodes.) In addition to the edges in each copy X_i, graph K also contains the connecting edges {u_i^X, v_{i+1}^X} for all i ∈ [1 : t) and X ∈ {G, H}, as well as the edge {u_t^G, u_t^H}. An illustration of this construction is provided in Figure 5.

Figure 5
Graph K used in the proof of Theorem 4. [Figure: the copies G_1, ..., G_t of G and H_t, ..., H_1 of H, joined along the nodes u and v.]

The important feature of K is that every node w_i^X except for u_t^G and u_t^H has a neighborhood equivalent to the neighborhood of the corresponding node w^X in the original graph X. This is because A is a non-counting automaton, where each node can only see the set of states of its neighbors, without being able to count them. So initially, the additional edges between different copies of the same graph X do not change the "perception" of the nodes they connect. However, the two nodes u_t^G and u_t^H may have different neighborhoods than u^G and u^H, and this might affect their behavior starting at time 1. Their different behavior can be propagated to other nodes in subsequent rounds, but this propagation takes time before it can reach nodes in the extreme parts of the graph.

We now construct a suitable run ρ = (C_0, C_1, ...) of A on K. During the first t steps, ρ tries to copy the behavior of ρ_G and ρ_H. More precisely, let σ_G and σ_H be schedules that schedule ρ_G and ρ_H, respectively. We use them to define a schedule σ of K that schedules ρ: at every time r ∈ [0 : t), each copied node w_i^X is selected by σ if and only if the original node w^X is selected by σ_X, where i ∈ [1 : t] and X ∈ {G, H}. Note that this does not violate the strong fairness constraint because we have only fixed a finite prefix of σ. We can therefore extend σ in such a way that it satisfies the strong fairness constraint.

It remains to show that ρ is neither accepting nor rejecting. For this, we prove by induction over r that for all r ∈ [0 : t], i ∈ [1 : t − r], and X ∈ {G, H}, every copied node w_i^X in ρ at time r is in the same state as the original node w^X in ρ_X at time r, i.e., C_r(w_i^X) = C_r^X(w^X). This obviously holds for r = 0, since every copy w_i^X has the same label as w^X. For r ∈ [1 : t], the induction hypothesis tells us that at time r − 1, each copy w_i^X with i ∈ [1 : t − r + 1] is in the same state as w^X, and if i ≤ t − r, then w_i^X also sees the same set of states as w^X in its neighborhood. Moreover, by the definition of σ, node w_i^X is selected if and only if w^X is selected. Hence, provided i ∈ [1 : t − r], the two nodes are also in the same state at time r.

Since at time t all nodes of G are in an accepting state in ρ_G, and all nodes of H are in a rejecting state in ρ_H, the same holds in ρ for the copies of those nodes in G_1 (the "left-most" copy of G) and H_1 (the "right-most" copy of H). And since A is a halting automaton, these nodes will never change their state again. But this means that ρ never reaches a stable consensus, and therefore that it is neither accepting nor rejecting. ◀

A.3 Proofs of Section 4
▶ Theorem 5. For every **$*-automaton there is an equivalent **s*-automaton.
Proof.
Let A = (M, s, f) be a **$*-automaton, and let G = (V, E, λ) be a graph. Let Ã = (M̃, s̃, f̃), where M̃ is as described in Section 4, s̃ is liberal, and f̃ is weakly (strongly) fair if f is so. By the consistency condition, the unique run ρ of A on G is either accepting or rejecting. By the definition of M̃, and since all runs of f̃(G) are at least weakly fair, if ρ is accepting then every fair run of Ã is accepting, and if ρ is rejecting then every fair run of Ã is rejecting. So Ã also satisfies the consistency condition, and L(A) = L(Ã). ◀

A.4 Proofs of Section 5.1
▶ Theorem 6. For every **sF-automaton there is an equivalent **SF-automaton.
Proof.
Given a **sF-automaton A, we construct a **SF-automaton B such that for all input graphs G, every strongly fair run of B on G simulates a strongly fair run of A on G. Since A satisfies the consistency condition by hypothesis, this property implies that B does too, and moreover that B accepts a graph if and only if A accepts it. The difficulty lies in the fact that A and B do not share the same notion of strong fairness because they have different selection constraints. While A's liberal scheduler guarantees that arbitrary sequences of selections will occur infinitely often, B's exclusive scheduler can select only one node at a time.

To simulate A's behavior with B, we slightly adapt the synchronizer construction from Section 4. Just like there, nodes keep track of their previous and current state in A, as well as the current round number modulo 3. However, instead of updating their state in every round, they only do so if an additional activity flag is set. Thus, we can simulate an arbitrary selection S by raising the flags of exactly those nodes that lie in S. The outcome of a round simulated in this way will be the same as if all the nodes in S made a transition simultaneously.

Now, the main issue is how to set the flags in each round in such a way that every finite sequence (S_1, ..., S_n) of selections is guaranteed to occur infinitely often. To achieve this, we take advantage of the fact that B's scheduler is strongly fair with respect to exclusive selection. We use the following (deterministic) rules: If node v is selected while it is in round i mod 3 and none of its neighbors are yet in round (i + 1) mod 3, then v raises its flag; the next time v is selected and allowed to move, it will simulate a transition of A, lower its flag, and move to round (i + 1) mod 3. Otherwise, if v is selected when its flag is down and some of its neighbors have already reached the next round, it simply moves to round (i + 1) mod 3 without simulating a transition.
Formally, if the machine of A is M = (Q, δ_0, δ, Y, N) with input alphabet Λ and counting bound β, we define the machine of B as M′ = (Q′, δ′_0, δ′, Y′, N′), where

  Q′ = Q × Q × {0, 1, 2} × {⊥, ⊤},

whose four components hold the previous state, the current state, the round number, and the flag; Y′ and N′ are defined analogously, and δ′_0(a) = (δ_0(a), δ_0(a), 0, ⊥) for all a ∈ Λ. The transition function δ′ is described as follows. Let v be a node, and assume it is selected by the scheduler.

In case v is in state (q, q′, i, ⊥):
- if none of v's neighbors are yet in round (i + 1) mod 3, then v moves to (q, q′, i, ⊤);
- else, if some neighbor of v is still in round (i − 1) mod 3, then v stays in (q, q′, i, ⊥);
- else, v moves to state (q′, q′, (i + 1) mod 3, ⊥).

In case v is in state (q, q′, i, ⊤):
- if some neighbor of v is still in round (i − 1) mod 3, then v stays in (q, q′, i, ⊤);
- else, v moves to (q′, q″, (i + 1) mod 3, ⊥), where q″ = δ(q′, P) and P is the β-bounded multiset consisting of the current states of the neighbors who are in round i, and the previous states of the neighbors who are in round (i + 1) mod 3.

Notice that the above construction allows the scheduler of B to choose an arbitrary selection S in each round. For instance, the scheduler can first bring all nodes to round i mod 3, next select all nodes in S (one by one) to raise their flags, then select those same nodes again so that they can perform their transitions and move to round (i + 1) mod 3, and finally select all the remaining nodes to bring them to the next round as well. To prevent nodes outside of S from being activated, the scheduler has to select them in some order that ensures that at least one of their neighbors is already in the next round (for example, a breadth-first or depth-first order starting from the nodes in S). Since the scheduler is strongly fair, by Lemma 2, every finite sequence of selections appears infinitely often. ◀

▶ Theorem 7.
For every **SF-automaton there is an equivalent **sF-automaton.
Proof.
First, we note that the only way exclusivity could possibly be useful is to break symmetry between adjacent nodes. This is because for an independent set (i.e., a set of pairwise non-adjacent nodes), the order of activation is irrelevant: whether the scheduler activates them all at once or one by one in some arbitrary order, the outcome will always be the same. More precisely, if we consider a graph G = (V, E, λ), a configuration C on G, and some independent set of nodes U ⊆ V, then the scheduler can choose any sequence of selections (S₁, …, Sₙ) such that ⋃_{i∈[1:n]} Sᵢ = U and card({i ∈ [1:n] | v ∈ Sᵢ}) = 1 for all v ∈ U. Regardless of the scheduler's choice, the configuration C′ reached from C via the schedule fragment (S₁, …, Sₙ) will always be the same. Consequently, to simulate a run with exclusivity, it suffices to simulate a run where no two adjacent nodes are active at the same time.

We now describe a simple protocol that makes use of the strong fairness constraint (in an environment with liberal selection) to ensure that if a node wants to execute a transition, then it will eventually be able to do so while all of its neighbors remain passive. Suppose that an active node v wants to transition from state q to state q′. To this end, it first goes into an intermediate state (q, q′) that declares this intention. Then, the next time v is activated by the scheduler, it checks that none of its neighbors are in an intermediate state of the form (p, p′). If the check passes, v switches to state q′. Otherwise, it goes back to state q and tries again the next time it is activated. By Lemma 2, the strong fairness constraint guarantees that v will infinitely often be able to execute a transition.

J. Esparza and F. Reiter 21

More formally, given a **SF-automaton with machine M = (Q, δ₀, δ, Y, N) and counting bound β, we can simulate it by a **sF-automaton with machine M′ = (Q′, δ′₀, δ′, Y′, N′), where

Q′ = Q ∪ (Q × Q), Y′ = Y ∪ (Y × Y), N′ = N ∪ (N × N),

δ′₀ is the extension of δ₀ to the codomain Q′, and δ′ is defined as follows: For q, q′ ∈ Q and P ∈ [β]^{Q′} such that P contains no state (p, p′) ∈ Q × Q, we have

δ′(q, P) = (q, δ(q, P)) and δ′((q, q′), P) = q′,

and for q, q′ ∈ Q and P ∈ [β]^{Q′} such that P contains at least one state (p, p′) ∈ Q × Q, we have

δ′(q, P) = q and δ′((q, q′), P) = q.

The first case corresponds to the situation where a node can make progress because none of its neighbors are in an intermediate state, whereas the second case corresponds to the situation where a node must wait for some neighbors to either complete or abort their current transition attempt. ◀
A.5 Proofs of Section 5.2
▶ Proposition 8.
For every **sf-automaton, there exists a **Sf-automaton that recognizes the same graph language but makes use of exclusive selection to ensure termination. If run synchronously, it never terminates (and hence it is not a valid **sf-automaton).
Proof.
We first describe the machine of a very simple daSf-automaton A that recognizes the trivial language of all unlabeled graphs but relies on exclusive selection to terminate. It has the state set Q = {p, q, h}, where p is initial, and h is halting and accepting. The transition function δ is defined as follows: if v and all its neighbors are in state p, then v moves to q; if v and all its neighbors are in state q, then v moves to p; otherwise, v moves to h.

For every unlabeled graph G, in the synchronous run of A on G all nodes keep alternating forever between states p and q (recall that graphs are connected and have at least two nodes), whereas in a run with exclusive selection, all nodes eventually end up in the accepting state h. Now, using a standard product construction, we can easily transform any **sf-automaton B into an equivalent **Sf-automaton C whose machine never halts under synchronous execution: C simply simulates A and B in parallel and accepts precisely when both accept. ◀

▶ Lemma 9.
The Kronecker cover of a connected graph G is connected if and only if G contains a cycle of odd length (i.e., if and only if G is non-bipartite).

Proof. If G = (V, E, λ) does not contain any cycle of odd length, it is easy to see that its Kronecker cover consists of two disjoint copies of G. Indeed, since containing no odd cycle is equivalent to being bipartite (see, e.g., [6, Prp. 1.6.1]), we know that V can be partitioned into two sets V₁ and V₂ such that every edge of G connects a node in V₁ to one in V₂. Hence, in the Kronecker cover G′, we obtain one copy of G over the set of nodes (V₁ × {0}) ∪ (V₂ × {1}) and another (disjoint) one over the set (V₁ × {1}) ∪ (V₂ × {0}).

It remains to show that if G contains an odd cycle, then G′ is connected. We proceed in two steps. First, consider some cycle v₁ v₂ … vₙ v₁ of odd length n in the original graph G. Since n is odd, this cycle is replicated in G′ by the cycle

(v₁, 0) (v₂, 1) … (vₙ, 0) (v₁, 1) (v₂, 0) … (vₙ, 1) (v₁, 0)

of length 2n. (If n were even, we would get two disjoint cycles of length n instead.) Second, since G is connected, for any node u ∈ V there exists a path w₁ w₂ … wₘ in G such that w₁ = u and wₘ = v₁. This path is replicated in G′ by the two paths

(w₁, 0) (w₂, 1) … (wₘ, i) and (w₁, 1) (w₂, 0) … (wₘ, j),

where (i, j) = (1, 0) if m is even, and (i, j) = (0, 1) if m is odd. This means that both (u, 0) and (u, 1) are connected to the aforementioned cycle of length 2n, and since u was chosen arbitrarily, it follows that G′ is connected. ◀

▶ Lemma 10.
For every ***f-automaton A with input alphabet Λ and every non-bipartite Λ-labeled graph G, A accepts G if and only if it accepts the Kronecker cover of G.

Proof. It suffices to prove the claim for **$f- and **Sf-automata, since **sf-automata can be regarded as a special case of both. In the following, let A = (M, Σ) and G = (V, E, λ). Since G is non-bipartite (i.e., it contains a cycle of odd length), we know by Lemma 9 that its Kronecker cover G′ = (V′, E′, λ′) is connected and therefore qualifies as valid input for A.

Let us begin with the case where A is synchronous, i.e., a **$f-automaton, and consider the (unique) runs ρ and ρ′ of A on G and G′, respectively. Recall that V′ = V × {0, 1}. For every node v of G, its copies (v, 0) and (v, 1) in G′ have the same label as v and an equivalent multiset of neighbors (i.e., all their neighbors are copies of v's neighbors). It is thus easy to see by induction that in every round i ∈ ℕ, (v, 0) and (v, 1) are in the same state in ρ′ as v is in ρ. Therefore, the i-th configuration of ρ′ is accepting if and only if the i-th configuration of ρ is accepting, and hence A accepts G′ precisely if it accepts G.

We now turn to the case where A is a **Sf-automaton. Consider any schedule σ = (S₁, S₂, …) ∈ (2^V)^ω that satisfies the constraints of the scheduler Σ. To prove the claim, it suffices to show that there exists a schedule σ′ of G′ that also satisfies the constraints of Σ such that the run ρ′ of A on G′ scheduled by σ′ is accepting if and only if the run ρ of A on G scheduled by σ is accepting. Indeed, by the consistency condition, this implies that A accepts G′ if and only if it accepts G.

We choose σ′ = (S′₁, S′₂, …) ∈ (2^{V′})^ω such that S′₂ₜ₋₁ = Sₜ × {0} and S′₂ₜ = Sₜ × {1} for all t ≥ 1. That is, for every node v of G, if v is active in step t, then its copy (v, 0) in G′ is active in step 2t − 1, and its copy (v, 1) in step 2t. Note that since σ is weakly fair, so is σ′. Furthermore, the exclusivity of σ also carries over to σ′ (this is why we do not schedule (v, 0) and (v, 1) simultaneously). However, σ′ is not strongly fair in general, and therefore the assumption that A is a ***f-automaton is essential.

Now, since (v, 0) and (v, 1) are not adjacent, and since both have the same label as v and an equivalent multiset of neighbors, it is again easy to see by induction that the following holds: at every even time 2i, both copies are in the same state as v is at time i, while at every odd time 2i + 1, copy (v, 0) is already in the same state as v at time i + 1, but copy (v, 1) is still in the state v had at time i. Here we rely on the fact that each selection Sᵢ is a singleton, which ensures that if v is active in G at time i, then no other node is active at the same time. This means that (v, 0) and (v, 1) receive the same multiset of states from their neighbors in G′ at times 2i and 2i + 1, respectively. Consequently, the (2i)-th configuration of ρ′ is accepting if and only if the i-th configuration of ρ is accepting, and the (2i + 1)-th configuration of ρ′ is accepting if and only if both the i-th and the (i + 1)-th configurations of ρ are accepting. Given that legal runs must eventually reach a stable consensus (i.e., only accepting or only rejecting configurations after a certain time), this means that ρ′ is accepting if and only if ρ is accepting. ◀

▶ Theorem 11.
For every **Sf-automaton there is an equivalent **sf-automaton.
Proof.
In the following, we show how, for a given **Sf-automaton A, we can construct an equivalent **$f-automaton B (i.e., a synchronous automaton). This is sufficient to prove the claim because we know from Theorem 5 that B can always be simulated by a **sf-automaton using a synchronizer.

First of all, let us observe that the task would be straightforward if we were guaranteed that the labels of the input graph define a proper vertex coloring. Indeed, since each color of a proper coloring represents an independent set, B could simply operate in cyclically repeating phases that correspond to the different colors. More precisely, if the given colors were 0, …, k − 1, then in the i-th round (i.e., the i-th time all nodes change state synchronously), only the (i mod k)-colored nodes would evaluate the transition function of the simulated automaton A. As explained in the first paragraph of the proof of Theorem 7, such a run is equivalent to a run of an exclusive scheduler that activates the nodes in each independent set one by one (in some arbitrary order).

Obviously the above approach only works if we are given a proper coloring. Nevertheless, it can be adapted to a special case of uncolored graphs: if the input graph happens to be bipartite, then there exist exactly two possible 2-colorings. This is because as soon as we fix the color of a single node, there is only one possible choice of color for all the remaining nodes. However, choosing one of the two 2-colorings would require breaking symmetry, which a **$f-automaton cannot do. So instead, we simply work with both colorings in parallel.

We now go into more detail on how to simulate a **Sf-automaton A by a **$f-automaton B on bipartite graphs. Let M = (Q, δ₀, δ, Y, N) be the machine of A with input alphabet Λ and counting bound β, and let {0, 1} be the set of colors that we will use to color the graph. At any point in time in an execution of B, each node v stores a pair of states (q₀, q₁) ∈ Q × Q, where q₀ represents v's current state in case its color is 0, and similarly q₁ represents v's current state in case its color is 1. This way, B can run the aforementioned round-based simulation of A for both possible 2-colorings in parallel. To simulate the case where v is 0-colored, v looks at the state in its own 0-component but at the states in its neighbors' 1-components (since the neighbors must be 1-colored if v is 0-colored).
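The role that bipartiteness plays in this proof can be checked concretely against the dichotomy of Lemma 9. The sketch below is our own illustration (graphs as adjacency dicts): it builds the Kronecker cover and confirms that the cover of a bipartite graph splits into two disjoint copies, while the cover of a non-bipartite graph stays connected.

```python
from itertools import product

def kronecker_cover(graph):
    # node (v, i) is adjacent to (u, 1 - i) for every edge {v, u} of the graph
    return {(v, i): [(u, 1 - i) for u in graph[v]]
            for v, i in product(graph, (0, 1))}

def is_connected(graph):
    start = next(iter(graph))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in graph[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(graph)

def is_bipartite(graph):
    color, stack = {}, []
    for s in graph:                      # handles disconnected graphs too
        if s in color:
            continue
        color[s] = 0
        stack.append(s)
        while stack:
            v = stack.pop()
            for u in graph[v]:
                if u not in color:
                    color[u] = 1 - color[v]
                    stack.append(u)
                elif color[u] == color[v]:
                    return False
    return True

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # odd cycle: non-bipartite
path = {0: [1], 1: [0, 2], 2: [1]}             # bipartite
for g in (triangle, path):
    # Lemma 9: the cover is connected iff the graph is non-bipartite
    assert is_connected(kronecker_cover(g)) == (not is_bipartite(g))
```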
To simulate the case where v is 1-colored, the procedure is the other way around.

More formally, the machine of B can be defined as M′ = (Q′, δ′₀, δ′, Y′, N′), where

Q′ = Q × Q × {0, 1}, Y′ = Y × Y × {0, 1}, N′ = N × N × {0, 1},

δ′₀(a) = (δ₀(a), δ₀(a), 0) for all a ∈ Λ, and the transition function δ′ is defined as follows, for q₀, q₁ ∈ Q and P ∈ [β]^{Q′}:

δ′((q₀, q₁, 0), P) = (δ(q₀, P₁), q₁, 1),
δ′((q₀, q₁, 1), P) = (q₀, δ(q₁, P₀), 0),

where P₀ and P₁ are the β-bounded projections of P to the first and second state components, i.e.,

P₀: p ↦ min{β, Σ_{p′∈Q, i∈{0,1}} P(p, p′, i)},
P₁: p ↦ min{β, Σ_{p′∈Q, i∈{0,1}} P(p′, p, i)},

for all p ∈ Q. The third state component counts the number of synchronous rounds modulo 2. If the round number is even, each node behaves as if it were 0-colored and its neighbors were 1-colored. Thus, each node updates its 0-component according to its neighbors' 1-components. Meanwhile, the 1-component remains unchanged because 1-colored nodes are supposed to remain passive in even rounds. If the round number is odd, everything is the other way around.

The above construction of B is based on the assumption that the input graph is bipartite. However, we now argue that in fact this assumption is not necessary. To do so, we have to distinguish two cases:

If the input graph G is bipartite, then by construction, the synchronous run of B on G simulates in parallel two runs of A on G with exclusive selection. By the consistency condition, this implies that G is accepted by B if and only if it is accepted by A.

If G is not bipartite, then by Lemma 9, its Kronecker cover G′ is connected and therefore constitutes a legal input for a distributed automaton. Now, by Lemma 10, B accepts G if and only if it accepts G′. Since G′ is bipartite (by the definition of a Kronecker cover), we know from the above discussion that B accepts G′ if and only if A accepts G′. Finally, again by Lemma 10, A accepts G′ if and only if it accepts G. From this chain of equivalences, we can conclude that G is accepted by B if and only if it is accepted by A.

Notice that in the case where the input graph is not bipartite, B simulates A on the Kronecker cover G′ instead of the actual graph G. So in some sense, our construction only performs a "pseudo simulation", where the simulated run may not correspond to any possible run on G. Nevertheless, this is sufficient because ***f-automata cannot distinguish between G and G′. ◀

A.6 Proofs of Section 6
▶ Proposition 14. S is recognizable by a dAsF-automaton and by a Dasf-automaton.
Proof.
We first present a dAsF-automaton that recognizes S. The states of the automaton are pairs (d, c), where d ∈ {leaf, center, unknown, neither} is the estimate of v, and c ∈ {0, 1} is its color. The accepting states are those with estimate leaf or center, and the rejecting states are those with estimate unknown or neither. Initially all nodes are in state (unknown, 0).

Let (d, c) be the current state of a node v, and let NE(v) denote the current set of estimates of the neighbors of v. If v is selected by the scheduler, then it moves to the state (d′, c′), where c′ = 1 − c, and d′ is given by:

(a) If neither ∈ NE(v), then d′ = neither.
(b) If neither ∉ NE(v), d = unknown, center ∉ NE(v), and at least two neighbors of v have different colors, then d′ = center.
(c) If neither ∉ NE(v), d = unknown, center ∈ NE(v), and at least two neighbors of v have different colors, then d′ = neither.
(d) If neither ∉ NE(v), d = unknown, NE(v) = {center}, and all neighbors of v have the same color, then d′ = leaf.
(e) If neither ∉ NE(v), d = center, and center ∈ NE(v), then d′ = neither.
(f) If neither ∉ NE(v), d = leaf, and at least two neighbors of v have different colors, then d′ = neither.
(g) Otherwise d′ = d.

Assume that G is not a star. If it consists of exactly two nodes connected by an edge, then it is easy to see that the estimate of both nodes remains forever unknown, so G is rejected. Otherwise, G contains at least one edge {u, v} such that both u and v have degree at least 2. We show that eventually at least one of u and v reaches estimate neither. By (a), every node then eventually reaches estimate neither, and so G is rejected.
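The rules (a)-(g) are straightforward to simulate. The sketch below is our own illustration; the two hand-picked activation orders stand in for a strongly fair scheduler (in particular, the first selection of node 1 desynchronizes the colors so that the center can fire rule (b)). A star stabilizes to accepting estimates, while a path of four nodes is driven to neither.

```python
# States are pairs (estimate, color); the color flips on every activation.
def step(graph, state, v):
    d, c = state[v]
    NE = {state[u][0] for u in graph[v]}          # estimates of the neighbors
    colors = {state[u][1] for u in graph[v]}
    two, same = len(colors) == 2, len(colors) == 1
    if 'neither' in NE:                                   d2 = 'neither'   # (a)
    elif d == 'unknown' and 'center' not in NE and two:   d2 = 'center'    # (b)
    elif d == 'unknown' and 'center' in NE and two:       d2 = 'neither'   # (c)
    elif d == 'unknown' and NE == {'center'} and same:    d2 = 'leaf'      # (d)
    elif d == 'center' and 'center' in NE:                d2 = 'neither'   # (e)
    elif d == 'leaf' and two:                             d2 = 'neither'   # (f)
    else:                                                 d2 = d           # (g)
    state[v] = (d2, 1 - c)

def run(graph, schedule):
    state = {v: ('unknown', 0) for v in graph}
    for v in schedule:                            # one node per step
        step(graph, state, v)
    return {v: s[0] for v, s in state.items()}

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert set(run(star, [1, 0, 1, 2, 3]).values()) <= {'leaf', 'center'}    # accepted

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert set(run(path, [0, 1, 3, 3, 2, 1, 0, 3]).values()) == {'neither'}  # rejected
```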
First we claim that both u and v eventually reach states with estimate center or neither. This is the point at which we make crucial use of strong fairness: by Lemma 2, it ensures that v is eventually selected in a configuration in which at least two neighbors of v have different colors. If in this configuration v has estimate unknown, then v moves either to neither (cases (a) and (c)) or center (case (b)), and if it has estimate leaf, then v moves to neither (cases (a) and (f)). The same holds for u, and so the claim is proved.

By the claim, at least one of u and v eventually reaches estimate neither, in which case we are done, or both eventually reach center; in this case, the next time one of the two is selected it moves to neither (case (e)), and we are also done.

Assume now that G is a star. We show that every node ends up with estimate leaf or center. Since leaves have only one neighbor, cases (b), (c), and (f) never apply to them, and so they can never reach estimate center. This implies that case (e) also never applies to leaves. Further, as long as the center has estimate unknown, all leaves remain in unknown, because (a) and (d) do not apply. It follows that the center also remains in unknown until it is selected in a configuration in which at least two neighbors have different colors, which eventually happens by strong fairness; at that moment it moves to center (case (b)). Since (e) never applies, the center maintains the estimate center forever. Once the center has reached estimate center, whenever a leaf is selected it changes its estimate to leaf (case (d)). After that, no rule other than (g) ever applies, and so the leaf maintains estimate leaf forever. This concludes the proof of the first part of the proposition.

For the second part we present a Dasf-automaton with counting bound β = 2 that recognizes S. We only sketch the automaton, since the ability to count makes the task of recognizing S easy.
Recall that β = 2 means that for each state q a node can detect whether it has zero, exactly one, or at least two neighbors in q.

The states of the automaton are {init, leaf, non-leaf, accept, reject}. The yes and no states are accept and reject, respectively. Initially all nodes are in state init. Let v be a node. Observe that, since the automaton can count, a selected node can directly observe whether it is a leaf or not. When v is selected:

(a) If v has only one neighbor, then
(a.1) if the neighbor is in state init or non-leaf, v moves to leaf;
(a.2) if the neighbor is in state leaf or reject, v moves to reject; and
(a.3) if the neighbor is in state accept, v moves to state accept.
(b) If v has more than one neighbor, then
(b.1) if at least one neighbor is in state reject or non-leaf, v moves to reject;
(b.2) else if at least one neighbor is in state init, v moves to non-leaf;
(b.3) else (all neighbors in states leaf or accept), v moves to accept.

By (a.1) and (b.2), a node can only reach state leaf (non-leaf) if it really is a leaf (non-leaf) of G. This fact, together with an inspection of (a.2) and (b.1), shows that a node can only reach reject if G is not a star. Further inspection of (a.3) and (b.3) shows that it can only reach state accept if G is a star. So it only remains to prove that every node eventually reaches accept or reject. By (a.2) and (a.3) it suffices to show that eventually some node reaches accept or reject. If all nodes are leaves, then there are at most two nodes, and by (a.2) they eventually move to reject. Assume now that there is at least one non-leaf. By (a), (b), and weak fairness, eventually all nodes leave state init, and so all non-leaves are in one of non-leaf, accept, or reject. If at least one non-leaf is in accept or reject, we are done.
Otherwise, if G is a star, then by (b.3) the (unique) non-leaf eventually moves to accept; if G is not a star, then two neighboring nodes are in state non-leaf, and by (b.1) the next time either of them is selected it moves to reject. ◀

▶ Proposition 15. C is recognizable by a dAsF-automaton, but neither by DA*f-automata nor by
Da*F-automata.
Proof. (a) C is recognizable by a dAsF-automaton.

We sketch the behavior of a dAsF-automaton for C. Recall that the nodes of the cycle C are labeled by 0, 1, and 2. First, if a node with label i detects that it has more than two neighbors, or that the set of labels of its neighbors is different from {(i − 1) mod 3, (i + 1) mod 3}, then the node moves to a rejecting state. Nodes with a neighbor in a rejecting state also move to a rejecting state. To detect that a node has more than two neighbors, the automaton uses the same trick as in Proposition 14: the state of each node has a color component with three possible values, which changes whenever the node is active. By strong fairness and Lemma 2, if the node has more than two neighbors, then it will eventually see that its neighbors have three different colors, and reject.

As we consider only connected graphs, the preceding tests ensure that graphs which are not cycles with the cyclic labeling 0-1-2 are rejected. It remains to check that the cycle contains exactly one node labeled 2. To this end, every node labeled 2 repeatedly proceeds as follows: in phase b ∈ {0, 1}, the node asks its neighbor labeled by b to propagate a signal through the cycle, and then waits until a signal arrives. (For this, the node moves to a state indicating that it wants the signal to be propagated, and waits for the neighbor to reach a state indicating it has received the message.) If the next signal arrives through the (1 − b)-neighbor, the node moves to phase (1 − b); if it arrives through the b-neighbor, the node moves to a rejecting state. If the cycle contains only one node labeled 2, then every signal sent through one neighbor arrives through the other. However, if the cycle contains at least two nodes labeled 2, then by strong fairness, eventually two consecutive 2-nodes send a clockwise and a counterclockwise signal, and so eventually a 2-node sends a signal through a neighbor, receives the next signal through the same neighbor, and moves to the rejecting state.

(b) C is not recognizable by DA*f-automata.

Let C′ be the hexagon whose nodes are labeled by 0-1-2-0-1-2. We show that every DA*f-automaton A that accepts C also accepts C′. For this, consider the synchronous schedules σ₁ and σ₂ of A on C and C′. Observe that σ₁ and σ₂ are weakly fair, and so the runs ρ₁ = (C₁,₀, C₁,₁, …) and ρ₂ = (C₂,₀, C₂,₁, …) scheduled by them are fair too. By the consistency condition, ρ₁ is accepting.
Let v, v′ be nodes of C and C′, respectively, carrying the same label. It is easy to see that C₁,ₜ(v) = C₂,ₜ(v′) for every time t ≥ 0. So ρ₂ is also accepting, and thus, by the consistency condition, A accepts C′.

(c) C is not recognizable by Da*F-automata.

We proceed as in part (b): we show that every Da*F-automaton A that accepts C also accepts C′. Let σ₁ = (S₁,₁, S₁,₂, …) be a strongly fair schedule of A on C, and let ρ₁ = (C₁,₀, C₁,₁, …) be the run scheduled by it. Since C is accepted, ρ₁ is accepting, and so there is a configuration C₁,ₜ₀ in which every agent is in an accepting state.

For every 1 ≤ t ≤ t₀, let S₂,ₜ be the selection that, for every label ℓ = 0, 1, 2, contains the two nodes of C′ labeled by ℓ if and only if S₁,ₜ contains the node of C labeled by ℓ (loosely speaking, S₂,ₜ "duplicates" S₁,ₜ). Let σ₂ be the result of choosing an arbitrary strongly fair schedule (S′₂,₁, S′₂,₂, …) of A on C′, and replacing S′₂,₁, …, S′₂,ₜ₀ by S₂,₁, …, S₂,ₜ₀. Since σ₂ satisfies the definition of strong fairness, the run ρ₂ = (C₂,₀, C₂,₁, …) scheduled by it is also strongly fair.

Let v, v′ be nodes of C and C′, respectively, carrying the same label. By the definition of the selections S₂,ₜ for 1 ≤ t ≤ t₀, we have C₂,ₜ(v′) = C₁,ₜ(v). So, in particular, every node of C₂,ₜ₀ is in an accepting state. Since A is a halting automaton, nodes that have accepted can no longer change their state, so ρ₂ is accepting, and therefore A accepts C′. ◀

▶ Proposition 16. S_even is recognizable by a DasF-automaton.
Proof.
In Proposition 14 we have exhibited a Dasf-automaton A recognizing S. We now give a DaSF-automaton B with β = 2 that uses counting, exclusivity, and strong fairness to further decide whether the number of leaves is even. Loosely speaking, B first executes A; if A rejects, then B rejects, because the graph is not even a star. If A accepts, then B enters a new phase during which it counts the number of leaves modulo 2. By Theorem 7, B is equivalent to a DasF-automaton.

We can assume that when A accepts, all nodes are labeled with either leaf or center (the unique non-leaf). We first give an informal description of B. Leaves can be in states visible, invisible, dead, even, or odd. Intuitively, while leaves have not been counted by the center, they alternate between the states visible and invisible. The center only increments its modulo-2 counter if exactly one leaf is visible. After a leaf is counted, it moves to dead. When all leaves become dead, i.e., when they have all been counted, the center decides whether to accept or reject; the leaves read the decision from the center, and move to even or odd accordingly.

Formally, the state of a leaf is one out of {visible, invisible, dead, even, odd}, where even is accepting, and odd is rejecting. Initially all leaves are invisible. The states of the center are of the form (ph, p, d) ∈ {0, 1, 2} × {0, 1} × {none, 0, 1}, where ph is the phase, p the parity, and d the decision, respectively. The initial state is (0, 0, none), and the accepting and rejecting states are those with decision 0 and 1, respectively. The transition function is as follows. Let v be a node selected by the scheduler.

If v is a leaf, and its current state is s, then:
If s = invisible (visible) and the center is in phase 0, then v moves to visible (invisible). Intuitively, while the center is in phase 0, v keeps making itself visible and invisible to the center. By Lemma 2, strong fairness guarantees that eventually exactly one leaf will be visible to the center.
If s = visible and the center is in phase 1, then v moves to dead. Intuitively, v knows that it has been counted by the center, and dies.
If s = dead and the center is in phase 2, then v moves to even or odd, depending on the decision made by the center.
Otherwise v remains in state s.

If v is the center, and its current state is α = (ph, p, d), then v changes its state as follows:
If exactly one leaf is visible and ph = 0, then the center moves to α[ph → 1, p → 1 − p]. (Where α[ph → 1, p → 1 − p] denotes the result of substituting 1 for ph and 1 − p for p in α.) Intuitively, the center counts the visible leaf. Since the scheduler is exclusive, no other leaf can change its visibility status at the same time as the center performs this operation. This guarantees that multiple leaves are not counted as one, and that the unique counted leaf remains visible.
If all leaves are invisible or dead, at least one leaf is invisible, and ph = 1, then the center moves to α[ph → 0].
If all leaves are dead and ph = 1, then the center moves to α[ph → 2, d → p]. Intuitively, the counting is done, and the center takes the current parity as the decision.
Otherwise the center remains in state α.

In every strongly fair run, eventually the center is selected in a configuration in which exactly one leaf, say v, is visible. This is detected by the center, which updates its counter and moves to phase 1. The center stays in phase 1 until it sees that all leaves are invisible or dead, which guarantees that v knows it has been counted and has died. The center then moves to phase 0 again, to count the next leaf. When all leaves have been counted (which the center can detect by observing that they are all dead), the center knows that its parity bit is the correct one, and moves to phase 2.
By fairness, all leaves eventually read the result from the center, and move to even or odd.

Notice how the use of an exclusive scheduler simplifies our design. Indeed, the distributed machine described above would not be correct under a liberal scheduler, because the center could be deceived as follows. Let u be the center, and let v₁ and v₂ be two leaves. Suppose that u is in phase 0 and v₁ is the only visible leaf. Next, u and v₂ are selected simultaneously, so u moves to phase 1 and increments its counter by 1 (as it sees exactly one visible leaf), while v₂ becomes visible (as it sees the center in phase 0). Now both v₁ and v₂ will die (as they are visible and u is in phase 1), but only v₁ has been counted. In order to avoid such problems, we could introduce an additional verification phase in which the center checks that it has counted exactly one leaf, but this would make the protocol more complicated. So instead, we first take exclusivity for granted, and then implement it using the construction of Theorem 7. ◀

▶ Proposition 18. S_even is not recognizable by dA*F-automata.

Proof.
For the sake of obtaining a contradiction, let us assume that there exists a dAsF-automaton A with machine M = (Q, δ₀, δ, Y, N) that recognizes S_even. We must first introduce several concepts related to M before we can get to the actual contradiction argument.

Without loss of generality, we assume that the language of star graphs is S = {ST_i | i ≥ 2}, where ST_i is the unlabeled graph with nodes {r, l₁, …, l_i} and edges {r, l₁}, …, {r, l_i}. We call r the root and l₁, …, l_i the leaves of the star. Throughout this proof, we consider only configurations of M whose underlying graph is ST_i for some i ≥ 2, and call them star configurations. For notational simplicity, we sometimes identify a star configuration with a tuple C = (q, f), where q ∈ Q is the state of r and f: Q → ℕ is a function that assigns to each state p the number of leaves of ST_i that are in state p. We denote the total number of nodes of C by card(C), i.e., card(C) = 1 + Σ_{p∈Q} f(p). Clearly, a configuration C of ST_i satisfies card(C) = i + 1.

A base configuration is a star configuration in which every state p ∈ Q occurs at most once on a leaf node. We write Base for the set of all base configurations, i.e., Base = Q × {0, 1}^Q. The base configuration associated with C = (q, f) is the configuration base(C) = (q, f′) such that f′(p) = min{f(p), 1} for all p ∈ Q. Intuitively, base(C) is the smallest star configuration in which the root sees the same set of states as in C.

Given two configurations C = (q, f) and C′ = (q′, f′), we let C ⪯ C′ denote that q = q′, f(p) ≤ f′(p) for all p ∈ Q, and f(p) = 0 if and only if f′(p) = 0. Observe that ⪯ is a partial order. The upward closure of C is the set ↑C := {C′ | C′ ⪰ C}. In other words, ↑C is the set of configurations that one can obtain by duplicating some leaves of C. Notice that the root of such a configuration also sees the same set of states as in C.

The successor relation on configurations of M will be denoted by →. That is, for two configurations C and D, we write C → D if and only if C can reach D in a single execution step of M. (This means that there exists a selection S of C's underlying graph such that one obtains D by evaluating M's transition function δ at the nodes of C selected by S.) We lift this relation to sets of configurations 𝒞 and 𝒟 in a rather natural way, writing 𝒞 → 𝒟 if and only if for every C ∈ 𝒞 there exists some D ∈ 𝒟 such that C → D. Furthermore, we use the standard notation →* for the reflexive-transitive closure of →, and →ⁱ for the i-fold composition of → with itself, where i ∈ ℕ.

Claim 1. If C →* D, then ↑C →* ↑D.

Proceeding by induction over i ∈ ℕ, we show that C →ⁱ D implies ↑C →ⁱ ↑D. The case i = 0 is trivial, since C →⁰ D means that C = D. For i = 1, we observe that for every configuration C′ ∈ ↑C, the roots of C and C′ can behave identically (as they see the same set of states), and if C′ has more leaves than C, then the additional leaves can copy the behavior of their indistinguishable siblings. So C → D implies that there is some D′ ∈ ↑D such that C′ → D′.
More precisely, let ST , ST ∈ S be the underlying graphs of C and C , respectively. Since C (cid:23) C , we know that the set ofleaves of ST is a superset of the set of leaves of ST . Let S be the selection of ST underlyingthe step C → D . We now define the selection S of ST as follows:The root r belongs to S if and only if it belongs to S .For every state q : if S does not select any leaves in state q , then neither does S ; otherwise, S selects all leaves in state q selected by S , plus all other leaves in state q that do notbelong to ST .It follows that S ⊇ S , and moreover a leaf of ST is selected in S only if some leaf of ST inthe same state is selected in S . So a node of S can only move to a state, say q , if some nodeof S also moves to q . Letting D be the configuration reached by selecting S , this implies D ∈ d D e , and thus d C e → d D e .For i ≥
2, the premise C →ⁱ D tells us that there exists a configuration E such that C → E →ⁱ⁻¹ D. By the induction hypothesis, this implies ⌈C⌉ → ⌈E⌉ →ⁱ⁻¹ ⌈D⌉, and therefore ⌈C⌉ →ⁱ ⌈D⌉. □

As a direct consequence of Claim 1 we obtain:
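To make the order ⪯, the base configuration, and upward closures concrete, here is a small Python sketch. It is not part of the proof: the encoding of a star configuration as a root state plus a leaf-state counter, and the state names, are our own illustration of the definitions above.

```python
from collections import Counter

# A star configuration is a pair (q, f): the root's state q and a map f
# counting, for each state p, the number of leaves currently in state p.
# The state names "r", "a", "b" below are hypothetical.

def base(config):
    """Smallest star configuration whose root sees the same SET of states."""
    q, f = config
    return (q, Counter({p: min(n, 1) for p, n in f.items() if n > 0}))

def preceq(c1, c2):
    """The partial order ⪯: same root state, componentwise ≤ leaf counts,
    and the same set of states occurring at the leaves."""
    (q1, f1), (q2, f2) = c1, c2
    states = set(f1) | set(f2)
    return (q1 == q2
            and all(f1[p] <= f2[p] for p in states)
            and all((f1[p] == 0) == (f2[p] == 0) for p in states))

def in_upward_closure(c, d):
    """c ∈ ⌈d⌉ iff d ⪯ c, i.e. c arises from d by duplicating leaves."""
    return preceq(d, c)

C = ("r", Counter({"a": 2, "b": 1}))
D = ("r", Counter({"a": 5, "b": 3}))   # D duplicates some leaves of C
assert preceq(C, D) and in_upward_closure(D, C)
assert base(C) == base(D) == ("r", Counter({"a": 1, "b": 1}))
```

Note that duplicating leaves changes neither the base configuration nor the set of states the root sees, which is exactly what Claim 1 exploits.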
Claim 2. If {C} →* ⌈D⌉, then ⌈C⌉ →* ⌈D⌉.

Indeed, {C} →* ⌈D⌉ means that there is some D' ⪰ D such that C →* D'. By Claim 1, it follows that ⌈C⌉ →* ⌈D'⌉. Moreover, D' ⪰ D implies ⌈D'⌉ ⊆ ⌈D⌉. Therefore we get ⌈C⌉ →* ⌈D⌉. □

Claim 2 provides the motivation for the last notion we need to introduce: if we want to represent the set
Pre*(⌈D⌉) of predecessors of ⌈D⌉ (i.e., the configurations from which one can reach a configuration of ⌈D⌉ in zero or more steps), and if C, C' ∈ Pre*(⌈D⌉) are such that C ≺ C', then the representation of Pre*(⌈D⌉) does not need to mention C' explicitly, since C ∈ Pre*(⌈D⌉) already implies C' ∈ Pre*(⌈D⌉). This leads us to represent Pre*(⌈D⌉) by its set of minimal elements with respect to ⪯. Formally, we define MinPre*(⌈D⌉) to be the set of all configurations C such that {C} →* ⌈D⌉ and there exists no configuration C' ≺ C such that {C'} →* ⌈D⌉.

Claim 3.
For every star configuration D, the set MinPre*(⌈D⌉) is finite.

Since there are only finitely many base configurations, and every star configuration lies in the upward closure of its base configuration, it suffices to show that MinPre*(⌈D⌉) ∩ ⌈C⌉ is finite for all C ∈ Base. This follows easily from Dickson's Lemma, which states that for every infinite sequence v⃗₁, v⃗₂, … of vectors of ℕᵏ, there exist two indices i < j such that v⃗ᵢ ≤ v⃗ⱼ with respect to the pointwise partial order on vectors. Indeed, assume MinPre*(⌈D⌉) ∩ ⌈C⌉ is infinite, and let C₁, C₂, … be an enumeration of its elements, where Cᵢ = (q, fᵢ). By Dickson's Lemma, there are i < j such that fᵢ(p) ≤ fⱼ(p) for all p ∈ Q. This implies Cᵢ ⪯ Cⱼ, and thus contradicts the minimality of Cⱼ. □

With all these notions in place, we can finally come back to the contradiction argument that proves Proposition 18. Let m be the maximum cardinality of any configuration that lies in the set MinPre*(⌈D⌉) of some base configuration D, i.e.,

m := max { card(C) | there exists D ∈ Base such that C ∈ MinPre*(⌈D⌉) }.

Observe that m is well-defined because Base is finite by definition, and
MinPre*(⌈D⌉) is finite by Claim 3.

Now consider a star STₙ whose number of leaves n is chosen such that n is even and n ≥ m · |Q|, where |Q| is the number of states of A. Let ρ = (C₀, C₁, …) be a fair run of A on STₙ. Since n is even, ρ is accepting, which means that there is a time r ∈ ℕ such that for every r' ≥ r, the configuration C_r' is accepting. Moreover, since the total number of configurations of A on STₙ is finite, there is s ≥ r such that the (accepting) configuration Cₛ is visited infinitely often in ρ. Since ρ is strongly fair, no rejecting configuration is reachable from Cₛ, because otherwise, by Lemma 2, ρ would have to visit that configuration. Let Cₛ = (q, f), and let p_max be a state that occurs maximally often at a leaf node of Cₛ, i.e., f(p_max) ≥ f(p) for all p ∈ Q.

Based on ρ, we construct a fair run ρ' = (C'₀, C'₁, …) of A on the star STₙ₊₁ such that the first s + 1 configurations (C'₀, …, C'ₛ) copy the behavior of ρ. More precisely, the leaves l₁, …, lₙ behave exactly as in ρ. For the leaf lₙ₊₁, let lᵢ be any of the leaves of STₙ such that Cₛ(lᵢ) = p_max. During the first s steps, the schedule of ρ' selects lₙ₊₁ if and only if the schedule of ρ selects lᵢ. It follows that lₙ₊₁ visits the same sequence of states as lᵢ, and so C'ₛ(lₙ₊₁) = p_max. Note that this construction does not contradict the strong fairness constraint because we only fix a finite prefix of ρ'. We now extend ρ' in such a way that it satisfies the strong fairness constraint.

Since n + 1 is odd, the run ρ' must eventually visit only rejecting configurations. In particular, some rejecting configuration C'ₜ is reachable from C'ₛ, and so C'ₛ ⪰ D for some D ∈ MinPre*(⌈base(C'ₜ)⌉).

Claim 4. Cₛ ⪰ D.

Recall that Cₛ = (q, f), and let C'ₛ = (q, f') and D = (q, g). We have to show that f(p) ≥ g(p) for every state p ∈ Q.
To do so, we distinguish two cases:

If p ≠ p_max, then by the definition of C'ₛ, we have f(p) = f'(p), and since C'ₛ ⪰ D, it follows immediately that f(p) ≥ g(p).

If p = p_max, then by the pigeonhole principle and the definitions of n and p_max, we have f(p) ≥ n/|Q| ≥ m. Moreover, we have g(p) ≤ m because the definition of m ensures that card(D) ≤ m. Hence, f(p) ≥ g(p). □

Since D ∈ MinPre*(⌈base(C'ₜ)⌉), Claim 4 tells us that Cₛ can also reach some rejecting configuration in ⌈base(C'ₜ)⌉. This contradicts what we have established above. We therefore conclude that dAsF-automata cannot recognize S_even, and by Theorem 7, the same holds for dASF-automata. ◀

A.7 Proofs of Section 7
▶ Proposition 20. For every graph population protocol there is an equivalent DA*F-automaton.
Proof.
We present a simulation that runs a graph population protocol on a distributed automaton. To this end, the automaton has to simulate a scheduler that selects ordered pairs of adjacent nodes instead of arbitrary sets of nodes. For any pair (u, v) that is selected to perform a transition, let us call u the initiator and v the responder of the transition. By Theorem 7, we may assume that the automaton's scheduler selects a single node in each step.

The main idea of the construction is as follows: When a node u is selected and sees that it can become the initiator of a transition, it declares its intention to do so by raising the flag "?". Then u waits until some neighbor v is selected and raises the flag "!", which signals that v wants to become the responder of a transition. If this happens, the next time u is selected, it computes its new state according to the state of v and the transition function of the population protocol, but also keeps its old state in memory so that v can still see it. After that, v also updates its state, and finally u deletes its old state, which completes the transition. Throughout this protocol, the nodes verify that they have exactly one partner during each transition. If this condition is violated, they raise the error flag "⊥" and abort their current transition.

Formally, let Π = (Q, δ₀, δ, Y, N) be a population protocol on Λ-labeled graphs, and write δ(q, p) = (δ₁(q, p), δ₂(q, p)) for the pair of states produced by a transition of Π. We construct the DASF-automaton A with machine M = (Q', δ'₀, δ', Y', N'), where Q' = Q ∪ (Q × {?, !, ⊥}) ∪ Q², the sets Y' and N' are defined analogously, δ'₀(a) = δ₀(a) for all a ∈ Λ, and δ' is defined as follows. Let v be the node currently selected by the scheduler.

1. In case v is in state q ∈ Q:
   a. if all of v's neighbors are in states of Q, then v moves to (q, ?);
   b. if exactly one of v's neighbors is in some state of Q × {?} and all others are in states of Q, then v moves to (q, !);
   c. if several of v's neighbors are in states of Q × {?}, then v moves to (q, ⊥);
   d. otherwise, v remains in state q.
   Intuitively, in rule 1a, v makes a request for a transition partner, in rule 1b, v accepts the request of some other node, and in rule 1c, v signals an error because it has received multiple requests. Signaling the error is necessary to guarantee that two requesting nodes with a common neighbor do not end up in a deadlock. In rule 1d, v simply waits for ongoing transitions in its neighborhood to be completed.

2. In case v is in state (q, ?):
   a. if all of v's neighbors are in states of Q, then v remains in (q, ?);
   b. if exactly one of v's neighbors is in a state of the form (p, !) and all others are in states of Q, then v moves to (q, δ₁(q, p));
   c. otherwise, v moves to (q, ⊥).
   Intuitively, in rule 2a, v waits for some node to accept its request, in rule 2b, v initiates a transition of Π with the unique responder that has accepted its request, and in rule 2c, v aborts its attempt to make a transition. The latter happens either if some neighbor of v has received multiple requests, or if several nodes have accepted v's request (in which case v's new state informs those nodes of the error).

3. In case v is in state (q, !):
   a. if exactly one of v's neighbors is in some state of Q × {?} and all others are in states of Q, then v remains in (q, !);
   b. if exactly one of v's neighbors is in a state of the form (p, p') ∈ Q² and all others are in states of Q, then v moves to δ₂(p, q);
   c. otherwise, v moves to state q.
   Intuitively, in rule 3a, v waits for its potential transition partner to initiate the transition, in rule 3b, v performs its own part of the transition, and in rule 3c, v aborts the transition attempt. The latter happens if the initiator of the transition signals an error.

4. In case v is in state (q, ⊥):
   a. if some neighbor of v is in a state of Q × {?, !}, then v remains in (q, ⊥);
   b. otherwise, v moves to state q.
   Intuitively, in rule 4a, v waits for its affected neighbors to see that an error has occurred, and in rule 4b, v returns to the state it had before the last failed transition attempt.

5. In case v is in state (q, q') ∈ Q²:
   a. if some neighbor of v is in a state of Q × {!}, then v remains in (q, q');
   b. otherwise, v moves to state q'.
   Intuitively, in rule 5a, v waits for its transition responder to perform its part of the transition; to make this possible, v must still keep its old state q in memory. In rule 5b, the transition has been completed, so v can remove its old state.

By Lemma 2, strong fairness guarantees that every ordered pair of nodes will be able to perform a transition infinitely often, and more generally, every finite sequence of pairs will be selected infinitely often by the simulated scheduler. Moreover, if several pairs make transitions simultaneously, the construction ensures that none of these pairs have a node in common. This means that the outcome of the transitions would not change if they were rescheduled sequentially. Hence, every fair run of automaton A simulates a fair run of population protocol Π, and since Π satisfies the consistency condition, so does A. Therefore the two devices are equivalent.

Notice that the above construction relies on the fact that A is a DASF-automaton: nodes must be able to count to verify that they have exactly one partner during each transition; acceptance by stable consensus and strong fairness are required to match the way population protocols are executed; and just as in the proof of Proposition 16, exclusive selection is used to simplify the design of the automaton. In particular, when a responder accepts the request of an initiator (rule 1b), it is guaranteed that none of its other neighbors make a new request at the same time. Similarly, when a node initiates a transition with a responder (rule 2b), it can be sure that its request is not simultaneously accepted by another node. ◀
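The initiator–responder handshake of rules 1 to 5 can be sketched in Python. The encoding below is a toy illustration, not the paper's formal construction: protocol states are integers, the flags ?, !, ⊥ appear as the tags "?", "!", "E", an in-progress initiator keeps the pair (old, new), and the hypothetical transition delta(q, p) = (q + p, q + p) stands in for an arbitrary population-protocol transition. We run it on a single edge with a scheduler that selects one node per step.

```python
def delta(q, p):
    """Hypothetical population-protocol transition (both partners sum up)."""
    return (q + p, q + p)

def step(me, nbrs):
    """Next state of the selected node, given the list of neighbor states."""
    def tag(s):
        return s[1] if isinstance(s, tuple) and isinstance(s[1], str) else None
    plain  = [s for s in nbrs if not isinstance(s, tuple)]       # states in Q
    asking = [s for s in nbrs if tag(s) == "?"]                  # requests
    answer = [s for s in nbrs if tag(s) == "!"]                  # acceptances
    pairs  = [s for s in nbrs if isinstance(s, tuple) and tag(s) is None]
    n = len(nbrs)
    if not isinstance(me, tuple):                 # case 1: idle state q
        if len(plain) == n:                return (me, "?")      # 1a: request
        if len(asking) == 1 and len(plain) == n - 1:
                                           return (me, "!")      # 1b: accept
        if len(asking) > 1:                return (me, "E")      # 1c: error
        return me                                                # 1d: wait
    q, t = me
    if t == "?":                                  # case 2: request pending
        if len(plain) == n:                return me             # 2a
        if len(answer) == 1 and len(plain) == n - 1:
            p = answer[0][0]
            return (q, delta(q, p)[0])                           # 2b: initiate
        return (q, "E")                                          # 2c: abort
    if t == "!":                                  # case 3: acceptance pending
        if len(asking) == 1 and len(plain) == n - 1:
                                           return me             # 3a
        if len(pairs) == 1 and len(plain) == n - 1:
            p = pairs[0][0]
            return delta(p, q)[1]                                # 3b: respond
        return q                                                 # 3c: abort
    if t == "E":                                  # case 4: error state
        if asking or answer:               return me             # 4a
        return q                                                 # 4b: roll back
    if answer:                             return me             # 5a: wait
    return t                                      # 5b: drop old state, keep new

# Two adjacent nodes u, v; the scheduler selects them alternately.
states = {"u": 1, "v": 2}
adj = {"u": ["v"], "v": ["u"]}
for who in ["u", "v", "u", "v", "u"]:
    states[who] = step(states[who], [states[x] for x in adj[who]])
```

On this two-node graph the five selections walk through rules 1a, 1b, 2b, 3b, and 5b, after which both nodes carry the states prescribed by delta(1, 2).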
A.8 Proofs of Section 8

▶ Proposition 22. For every DA*F-automaton A and every k ∈ ℕ there is a dA*F-automaton B equivalent to A on graphs of maximum degree k.

Proof.
Given a DA*F-automaton A, we have to describe a dA*F-automaton B such that for every graph G of maximum degree k, every fair run of B on G simulates some fair run of A on G. Observe that this is enough to prove that A and B are equivalent on graphs of maximum degree k. Indeed, since by assumption A satisfies the consistency condition, either all fair runs of A on G are accepting, or all are rejecting. If every fair run of B on G simulates some fair run of A on G, then B also satisfies the consistency condition and accepts G iff A accepts G.

In the following, we construct a dAsF-automaton B that simulates a DAsF-automaton A on any graph of maximum degree k. (The same construction can also be used to go from DASF-automata to dASF-automata.)
Let Q be the set of states of A. A state of B is a five-tuple α = (q₀, q, p, fc, sc), where q₀, q ∈ Q are the initial and the current state, respectively, p ∈ {0, 1, 2} is the phase, fc ∈ {0, …, k²} is the first color, and sc ∈ {0, 1} is the second color.

Let G = (V, E, λ) be a graph of maximum degree k. The initial state of a node v of G in B is (q₀, q₀, 0, 0, 0), where q₀ = δ₀(λ(v)) and δ₀ is the initialization function of A. Let us now give a more precise but still intuitive description of the intended meaning of "a node v of a graph G is currently in state α = (q₀, q, p, fc, sc)". The first two components are straightforward:

q₀ is always δ₀(λ(v)). (That is, the transition function of B, introduced below, never changes the first component of a state.) Sometimes the node needs to go back to its initial state, and this component just tells the node where to go.

q is the current state of v in the run of A being simulated.

The other three components require some further explanation. Given a node v, let NE(v) be the set containing v and its neighbors. We say that a configuration C is well colored if for every node v the first colors of v and all its neighbors are pairwise distinct in C (i.e., each first color occurs at most once in v's neighborhood). A goal of the protocol is to eventually reach a well-colored configuration C_wc such that from then on no node ever changes its first color. Intuitively, the first color of a node at C_wc becomes its locally unique identity: an identifier that never changes, different from the identities of all its neighbors and neighbors' neighbors. With locally unique identities the nodes can then easily simulate the moves of A: Indeed, in order to know how many neighbors they have in a state of A, say q, they just count the number of different states they see whose second component is q (since the first colors in their neighborhood are pairwise distinct, distinct neighbors are in distinct states of B).

To achieve this goal, the protocol uses the second colors.
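The counting trick just described can be made concrete: if the first colors in a neighborhood are pairwise distinct, then the number of neighbors simulating a given A-state q equals the number of distinct full B-states with current-state component q. A small Python sketch, with toy tuples and state names of our own:

```python
# Each neighbor is visible only as its full B-state (q0, q, p, fc, sc).
# A dA*F-node detects just the SET of these tuples, yet if the first colors
# fc are pairwise distinct in the neighborhood, distinct neighbors contribute
# distinct tuples, so set cardinalities recover exact counts.

def count_in_state(visible_states, q):
    """Number of neighbors whose simulated A-state is q, computed from the
    set of visible B-states alone."""
    return len({s for s in visible_states if s[1] == q})

# Three neighbors, all simulating A-state "a", with distinct first colors:
neighbors = [("i", "a", 2, 0, 0), ("i", "a", 2, 1, 0), ("i", "a", 2, 2, 1)]
seen = set(neighbors)            # what the node actually detects
assert count_in_state(seen, "a") == 3   # exact count recovered from a set
assert count_in_state(seen, "b") == 0
```

Without distinct first colors the three tuples could collapse into one element of the set, which is precisely why a detection-only (lowercase d) automaton needs the coloring to simulate counting.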
In phase 0 the nodes restart their states (initially this is superfluous because they are already there), and move to phase 1. In phase 1, the nodes select an arbitrary distribution of first colors. Since the nodes are deterministic, they rely on strong fairness to ensure that eventually a well-colored distribution is chosen. The nodes then move to phase 2, where they start simulating A under the assumption that the current configuration is well colored. However, at the same time they keep changing their second colors, and start to watch out for neighbors with the same first color as themselves, and for pairs of neighbors with the same first color but distinct second colors. Whenever they detect one of these two situations, they know that their assumption was incorrect, which implies that the simulation they have carried out so far is useless. So they move back to phase 0. We recall that, as in some other proofs, the nodes do not move synchronously from phase to phase; instead, a node moves to a new phase, and waits for its neighbors to follow.

Let us now describe the transition function of B. Let C denote the current configuration of B. Fix a node v of G, and let α = (q₀, q, p, fc, sc) be the current state of v in C. Further, let q' be the state v would move to in machine A from the configuration of A corresponding to C. Finally, let (fc + 1) denote (fc + 1) mod (k² + 1). If v is selected by the scheduler at C, then its next state is determined as follows:

(0) If v is in phase 0 then:
(0.a) If some neighbor of v is in phase 2, then v stays in α.
(0.b) If all neighbors of v are in phase 0 or 1, then v moves to α[q → q₀, p → 1].

(1) If v is in phase 1 then:
(1.a) If at least one neighbor of v is in phase 0, then v moves to α[fc → fc + 1]. (Intuitively, v waits for its neighbors in phase 0 to catch up, while still being able to change its first color.)
(1.b) If all neighbors of v are in phase 1, then v moves to α[p → 2, fc → fc + 1]. (The node initiates a new phase.)
(1.c) If at least one neighbor of v is in phase 2, then v moves to α[p → 2]. (The node follows a neighbor into phase 2, keeping its current first color.)

(2) If v is in phase 2 then:
(2.a) If some neighbor of v is in phase 1, then v moves to α[fc → fc + 1].
(2.b) If all neighbors of v are in phase 2, and any two nodes of NE(v) with the same first color also have the same second color, then v moves to α[q → q', sc → 1 − sc]. (In this case v sees no local violation of the well-coloring condition, and so it simulates a move of A, and changes its second color.)
(2.c) If all neighbors of v are in phase 2, and NE(v) contains two nodes with the same first color but distinct second colors, then v moves to α[p → 0]. (The node has detected a violation of the well-coloring condition and restarts.)
(2.d) If some neighbor of v is in phase 0, then v moves to α[p → 0].

This completes the definition of B. In the rest of the proof we show that B is a distributed automaton, i.e., that it satisfies the consistency condition, and that every fair run of B on G simulates some fair run of A on G. The proof is in four steps.

Claim 1.
Every run of B eventually reaches a well-colored configuration with all nodes in phase 2.

By strong fairness and Lemma 2, it suffices to show that for every configuration there exists a finite sequence of selections such that the configuration reached after executing them is well colored with all nodes in phase 2. First we show that it is possible to color the nodes of G with at most k² + 1 different colors so that the colors of every set of nodes NE(v) are pairwise distinct. Let G' be the result of triangulating G, i.e., adding an edge {v₁, v₃} for every pair of edges {v₁, v₂}, {v₂, v₃} ∈ E such that v₁ ≠ v₃. Since G has maximum degree k, the graph G' has maximum degree at most k². Clearly, a coloring of G' in the usual graph-theoretical sense (i.e., for every edge {v₁, v₂} of G' the nodes v₁ and v₂ have different colors) satisfies that the colors of every set NE(v) in G are pairwise distinct. So it suffices to exhibit a coloring of G' with k² + 1 colors. Such a coloring can be obtained by applying the standard greedy algorithm that produces a coloring of a graph with maximum degree m using m + 1 colors (in our case m = k²).

We prove the existence of a reachable well-colored configuration with all nodes in phase 2 in two steps:

(1) Every reachable configuration can reach either a well-colored configuration with all nodes in phase 2, or a configuration with all nodes in phase 0.

Let C be a reachable configuration. Inspection of rules (0)–(2) shows that from C we can reach a configuration C' with all nodes in phase 2. If C' is well colored we are done. Otherwise, there is a node v such that two nodes of NE(v) have the same first color in C'. If these nodes have distinct second colors, we can select v and bring it to phase 0 with (2.c), and then (2.d) yields the result. If the nodes have the same second colors, we select one of them. If (2.b) applies, then its second color changes, and we can select v as before. If (2.c) applies, then this node moves to phase 0, and then (2.d) yields the result.
(2) Every configuration with all nodes in phase 0 can reach a well-colored configuration with all nodes in phase 2.

Take a spanning tree T of G. Starting with T' := T, repeatedly select a leaf v of T' as many times as necessary to give it any first color we wish (this is possible by (0.b) and (1.a)); we then remove v from T' and iterate. When T' consists of just one node, we proceed similarly, but using (1.b) and (2.a). This yields a well-colored configuration with one node in phase 2 and all others in phase 1. We repeatedly select nodes in phase 1 with a neighbor in phase 2 and apply (1.c). □

Claim 2.
The set of well-colored configurations with all nodes in phase 2 is closed under the transition relation.

In such configurations only (2.b) is enabled, which changes neither the phase nor the first color of a node. So after any transition the new configuration is also well colored, and all nodes stay in phase 2. □
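The coloring argument in Claim 1 can be checked on small graphs: triangulate (square) the graph, run the standard greedy coloring, and verify that every neighborhood NE(v) receives pairwise distinct colors using at most k² + 1 colors, where k is the maximum degree. A minimal Python sketch; the 6-cycle example is our own:

```python
def square(adj):
    """Triangulate: add an edge between any two distinct nodes that share a
    common neighbor (so the result has maximum degree at most k^2)."""
    adj2 = {v: set(ns) for v, ns in adj.items()}
    for v, ns in adj.items():
        for a in ns:
            for b in ns:
                if a != b:
                    adj2[a].add(b)
    return adj2

def greedy_coloring(adj):
    """Standard greedy algorithm: uses at most (max degree + 1) colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj) + 1) if c not in used)
    return color

# A 6-cycle: maximum degree k = 2, so k^2 + 1 = 5 colors always suffice.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
color = greedy_coloring(square(cycle))
k = 2
assert max(color.values()) <= k * k      # colors drawn from {0, ..., k^2}
for v, ns in cycle.items():
    ne = ns | {v}                        # the neighborhood NE(v)
    assert len({color[u] for u in ne}) == len(ne)   # pairwise distinct
```

A proper coloring of the squared graph is exactly a distance-2 coloring of the original one, which is what turns the first colors into locally unique identities.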
Let us now prove that B satisfies the consistency condition, and that it is equivalent to A on graphs of maximum degree k. Let ρ_B = (Cᴮ₀, Cᴮ₁, Cᴮ₂, …) be an arbitrary strongly fair run of B on G. It suffices to show that there exists a strongly fair run ρ_A of A on G such that ρ_B is accepting iff ρ_A is accepting. Indeed, since A satisfies the consistency condition by hypothesis, it follows that B is also consistent, and that B accepts G iff A does, which implies the equivalence of A and B on k-bounded graphs.

Let σ_B = (Sᴮ₀, Sᴮ₁, Sᴮ₂, …) ∈ (2^V)^ω be a schedule that schedules ρ_B. We now define a schedule σ_A = (Sᴬ₀, Sᴬ₁, Sᴬ₂, …), and then choose ρ_A as the run scheduled by σ_A. For every node v, let tᵥ be the smallest time after which v and its neighbors reach phase 2 and stay in it forever (in run ρ_B), which exists by Claims 1 and 2. For every t ∈ ℕ, we decide whether v ∈ Sᴬₜ or not as follows: if t ≤ tᵥ, then v ∉ Sᴬₜ; if t > tᵥ, then v ∈ Sᴬₜ iff v ∈ Sᴮₜ.

So, intuitively, in σ_A a node v is never selected before NE(v) has "stabilized", and after that it is selected whenever σ_B selects it. It remains to show that ρ_A is strongly fair, and that ρ_A is accepting iff ρ_B is accepting.

Claim 3. ρ_A is strongly fair.

By Claims 1 and 2 and the definition of σ_A, there is a time t₀ such that Sᴬₜ = Sᴮₜ for every t ≥ t₀ (intuitively, t₀ is the time at which all nodes have stabilized in phase 2). Since σ_B is strongly fair by hypothesis, and strong fairness is independent of the properties of any finite prefix, σ_A is also strongly fair. So ρ_A is strongly fair. □

Claim 4. ρ_A is accepting iff ρ_B is accepting.

Let ρ_A = (Cᴬ₀, Cᴬ₁, Cᴬ₂, …), and let v be an arbitrary node of G. It suffices to prove that Cᴬₜ(v) = Cᴮₜ(v) holds for every t ≥ tᵥ.
(Indeed, by definition a run is accepting iff every node eventually visits accepting states only, and so, since Cᴬₜ(v) = Cᴮₜ(v) for every t ≥ tᵥ, this holds for ρ_A iff it holds for ρ_B.) We proceed by induction on t.

Base: t = tᵥ. Let qᵥ be the initial state of v. We prove Cᴬ_tᵥ(v) = qᵥ = Cᴮ_tᵥ(v). We have Cᴬₜ(v) = qᵥ for every t ≤ tᵥ because v ∉ Sᴬₜ for any t ≤ tᵥ. Moreover, we have Cᴮ_tᵥ(v) = qᵥ because v moves to qᵥ the last time it moves to phase 1 (case (0.b)), and stays in qᵥ until it and all its neighbors reach phase 2 (case (2.b)). But this is precisely the time tᵥ: since v never leaves phase 2 again, neither do its neighbors (otherwise they would "drag" v to phase 0 with them).

Step: t > tᵥ. By the induction hypothesis we have Cᴬₜ₋₁(v) = Cᴮₜ₋₁(v), and by the definition of σ_A we have v ∈ Sᴬₜ iff v ∈ Sᴮₜ. So it suffices to show Cᴬₜ₋₁(u) = Cᴮₜ₋₁(u) for every neighbor u of v. Fix a neighbor u, and consider two cases:

t − 1 ≥ tᵤ. Then Cᴬₜ₋₁(u) = Cᴮₜ₋₁(u) follows from the induction hypothesis applied to the node u.

t − 1 < tᵤ. Let qᵤ be the initial state of u. Since, by definition, σ_A never selects u before time tᵤ, we have Cᴬₜ₋₁(u) = qᵤ. We show Cᴮₜ₋₁(u) = qᵤ. Since t − 1 < tᵤ holds, but u never leaves phase 2 after time tᵥ ≤ t − 1 (being a neighbor of v), some neighbor of u will still change its phase after time t − 1. As it cannot "drag" u to phase 0, that neighbor is still in phase 1. But all nodes in phase 1 are in their initial state.