Decision Power of Weak Asynchronous Models of Distributed Computing
Philipp Czerner, Roland Guttenberg, Martin Helfrich, Javier Esparza
{czerner, guttenbe, helfrich, esparza}@in.tum.de
Department of Informatics, TU München, Germany
February 24, 2021
Abstract

Esparza and Reiter have recently conducted a systematic comparative study of models of distributed computing consisting of a network of identical finite-state automata that cooperate to decide if the underlying graph of the network satisfies a given property. The study classifies models according to four criteria, and shows that twenty-four initially possible combinations collapse into seven equivalence classes with respect to their decision power, i.e. the properties that the automata of each class can decide. However, Esparza and Reiter only show (proper) inclusions between the classes, and so do not characterise their decision power. In this paper we do so for labelling properties, i.e. properties that depend only on the labels of the nodes, but not on the structure of the graph. In particular, majority (whether more nodes carry label a than label b) is a labelling property. Our results show that only one of the seven equivalence classes identified by Esparza and Reiter can decide majority for arbitrary networks. We then study the decision power of the classes on bounded-degree networks, and show that three of them can decide majority. In particular, we present an algorithm for majority that works for all bounded-degree networks under adversarial schedulers, i.e. even if the scheduler must only satisfy that every node makes a move infinitely often, and prove that no such algorithm can work for arbitrary networks.

This work was supported by an ERC Advanced Grant (787367: PaVeS) and by the Research Training Network of the Deutsche Forschungsgemeinschaft (DFG) (378803395: ConVeY).

1. Introduction

A common feature of networks of natural or artificial devices, like molecules, cells, microorganisms, or nano-robots, is that agents have very limited computational power and no identities. Traditional distributed computing models are often inadequate to study the power and efficiency of these networks, which has led to a large variety of new models, including population protocols [4, 3], chemical reaction networks [28], networked finite state machines [15], the weak models of distributed computing of [20], and the beeping model [14, 1] (see e.g. [17, 26] for surveys and other models).

These new models share several characteristics [15]: the network can have an arbitrary topology; all nodes run the same protocol; each node has a finite number of states, independent of the size of the network or its topology; state changes only depend on the states of a bounded number of neighbours; and nodes do not know their neighbours, in the sense of [2]. Unfortunately, despite such substantial common ground, the models still exhibit much variability. In [16] Esparza and Reiter have recently identified four fundamental criteria according to which they diverge:
• Detection. In some models, nodes can only detect the existence of neighbours in a certain state, e.g. [1, 20], while in others they can count their number up to a fixed threshold, e.g. [15, 20].
• Acceptance. Some models compute by stable consensus, requiring all nodes to eventually agree on the outcome of the computation, e.g. [4, 3, 28]; others require the nodes to produce an output and halt, e.g. [20, 22].
• Selection. Some models allow for liberal selection: at each moment, an arbitrary subset of nodes is selected to take a step [15, 27]. Exclusive models (also called interleaving models) select exactly one node (or one pair of neighbouring nodes) [4, 3, 28]. Synchronous models select all nodes at each step, e.g. [20] or classical synchronous networks [24].
• Fairness. Some models assume that selections are adversarial, only satisfying the minimal requirement that each node is selected infinitely often [18, 23]. Others assume stochastic or pseudo-stochastic selection (meaning that selections satisfy a fairness assumption capturing the main features of a stochastic selection) [4, 3, 28]. In this case, the selection scheduler is a source of randomness that can be tapped by the nodes to ensure e.g. that eventually all neighbours of a node will be in different states.

In [16], Esparza and Reiter initiated a comparative study of the computational power of these models. They introduced distributed automata, a generic formalism able to capture all combinations of the features above. A distributed automaton consists of a set of rules that tell the nodes of a labelled graph how to change their state depending on the states of their neighbours. Intuitively, the automaton describes an algorithm that allows the nodes to decide whether the graph satisfies a given property. The decision power of a class of automata is the set of graph properties they can decide, for example whether the graph contains more red nodes than blue nodes (the majority property), or whether the graph is a cycle.

Figure 1: The seven distributed automata models of [16]; their decision power w.r.t. labelling predicates for arbitrary networks, and for bounded-degree networks. ISM stands for invariant under scalar multiplication. The other complexity classes are defined in Section 5.
The main result of [16] was that the twenty-four classes obtained by combining the features above collapse into only seven equivalence classes w.r.t. their decision power. The collapse is a consequence of a fundamental result: the selection criterion does not affect the decision power. That is, the liberal, exclusive, or synchronous versions of a class with the same choices in the detection, acceptance, and fairness categories have the same decision power. The seven equivalence classes are shown on the left of Figure 1, where D and d denote detection with and without the ability to count; A and a denote acceptance by stable consensus and by halting; and F and f denote pseudo-stochastic and adversarial fairness constraints. So, for example, DAf corresponds to the class of distributed automata in which agents can count, acceptance is by stable consensus, and selections are adversarial. (As mentioned above, the selection component is irrelevant, and one can assume for example that all classes have exclusive selection.) Intuitively, the capital letter corresponds to the option leading to higher decision power.

The results of [16] only prove inclusions between classes and separations, but give no information on which properties can be decided by each class, information available e.g. for multiple variants of population protocols [6, 3, 7, 11, 19, 25]. In this paper, we characterise the decision power of all classes of [16] w.r.t. labelling properties, i.e. properties that depend only on the labels of the nodes. Formally, given a labelled graph G over a finite set Λ of labels, let L_G : Λ → ℕ be the label count of G that assigns to each label the number of nodes carrying it. A labelling property is a set L of label counts. A graph G satisfies L if L_G ∈ L, and a distributed automaton decides L if it recognises exactly the graphs that satisfy L. For example, the majority property is a labelling property, while the property of being a cycle is not.

Our first collection of results is shown in the middle of Figure 1. We prove that all classes with halting acceptance can only decide the trivial labelling properties ∅ and ℕ^Λ. More surprisingly, we further prove that the computational power of DAf, dAf, and dAF is very limited. Given a labelled graph G and a number K, let ⌈L_G⌉_K be the result of substituting K for every component of L_G larger than K. The classes DAf, dAf can decide a property L iff membership of L_G in L depends only on ⌈L_G⌉_1, and dAF iff membership depends only on ⌈L_G⌉_K for some K ≥ 1. In particular, none of these classes can decide majority. Finally, moving to the top class DAF causes a large increase in expressive power:
DAF can decide exactly the labelling properties in the complexity class NL , i.e. theproperties L such that a nondeterministic Turing machine can decide membership of L G in L using logarithmic space in the number of nodes of G . In particular, DAF -automatacan decide majority, or whether the graph has a prime number of nodes.In the last part of the paper, we obtain our second and most interesting collection ofresults. Molecules, cells, or microorganisms typically have short-range communicationmechanisms, which puts an upper bound on their number of communication partners.So we re-evaluate the decision power of the classes for bounded-degree networks, as alsodone in [3] for population protocols on graphs. Intuitively, nodes know that they haveat most k neighbours for some fixed number k , and can exploit this fact to decide moreproperties. Our results are shown on the right of Figure 1. Both DAF and dAF boosttheir expressive power to
NSPACE ( n ), where n is the number of nodes of the graph.This is the theoretical upper limit since each node has a constant number of bits ofmemory. Further, the class DAf becomes very interesting. While we are not yet able tocompletely characterise its expressive power, we show that it can only decide properties invariant under scalar multiplication ( ISM ), i.e. labelling properties L such that L G ∈ L iff λ · L G ∈ L for every λ ∈ N , and that it can decide all properties satisfied by a graph G iff L G is a solution to a system of homogeneous linear inequalities. In particular, DAf candecide majority, and we have the following surprising fact. If nodes have no informationabout the network, then they require stochastic-like selection to decide majority; however,if they know an upper bound on the number of their neighbours, they can decide majorityeven with adversarial selection. In particular, there is a synchronous majority algorithmfor bounded-degree networks.The paper is structured as follows. Section 2 recalls the automata models and theresults of [16]. Section 3 presents fundamental limitations of their decision power. Section4 introduces a notion of simulating an automaton by another, and uses it to show thatdistributed automata with more powerful communication mechanisms can be simulatedby standard automata. Section 5 combines the results of Sections 3 and 4 to characterisethe decision power of the models of [16] on labelling properties (middle of Figure 1).Section 6 does the same for bounded-degree networks.Due to the nature of this research, we need to state and prove many results. To staywithin the page limit, each section concentrates on the most relevant result; all othersare only stated, and their proofs are given in an appendix.
2. Preliminaries
Given sets
X, Y, we denote by 2^X the power set of X, and by X^Y the set of functions Y → X. We define the closed interval [m : n] := {i ∈ ℤ : m ≤ i ≤ n} and [n] := [0 : n], for any m, n ∈ ℤ such that m ≤ n.

A multiset over a set X is an element of ℕ^X. Given a multiset M ∈ ℕ^X and β ∈ ℕ, we let ⌈M⌉_β denote the multiset given by ⌈M⌉_β(x) := M(x) if M(x) < β and ⌈M⌉_β(x) := β otherwise. We say that ⌈M⌉_β is the result of cutting off M at β, and call the function that assigns ⌈M⌉_β to M the cutoff function for β.

Let Λ be a finite set. A (Λ-labelled, undirected) graph is a triple G = (V, E, λ), where V is a finite nonempty set of nodes, E is a set of undirected edges of the form e = {u, v} ⊆ V such that u ≠ v, and λ : V → Λ is a labelling. Convention: Throughout the paper, all graphs are labelled, have at least three nodes, and are connected.
Distributed automata [16] take a graph as input, and either accept or reject it. We firstdefine distributed machines.
Distributed machines
Let Λ be a finite set of labels and let β ∈ ℕ₊. A (distributed) machine with input alphabet Λ and counting bound β is a tuple M = (Q, δ_0, δ, Y, N), where Q is a finite set of states, δ_0 : Λ → Q is an initialisation function, δ : Q × [β]^Q → Q is a transition function, and Y, N ⊆ Q are two disjoint sets of accepting and rejecting states, respectively. Intuitively, when M runs on a graph, each node v (or agent) with label γ is initially in state δ_0(γ) and uses δ to update its state, depending on the number of neighbours it has in each state; however, v can only detect whether it has 0, 1, ..., (β − 1), or at least β neighbours in a given state. We call β the counting bound of M.

Transitions given by δ are called neighbourhood transitions. We write q, N ↦ q' for δ(q, N) = q'. If q' = q the transition is silent and may not be explicitly specified in our constructions. Sometimes δ_0, Y, N are also irrelevant and not specified, and we just write M = (Q, δ). Given M, we write M × Q_2 to denote the machine (Q × Q_2, δ_2), where δ_2((q, r), N) := (δ(q, N_Q), r) and N_Q(s) := Σ_{t ∈ Q_2} N((s, t)) for all q, s ∈ Q, r ∈ Q_2, and neighbourhoods N. Intuitively, M × Q_2 simply extends M by adding an unused second component to each state.

Selections, schedules, configurations, runs, and acceptance. A selection of a graph G = (V, E, λ) is a set S ⊆ V. A schedule is an infinite sequence of selections σ = (S_0, S_1, S_2, ...) ∈ (2^V)^ω such that for every v ∈ V, there exist infinitely many t ≥ 0 with v ∈ S_t. Intuitively, S_t is the set of nodes activated by the scheduler at time t, and schedules must activate every node infinitely often.

A configuration of M = (Q, δ_0, δ, Y, N) on G is a mapping C : V → Q. We let N_v^C : Q → [β] denote the neighbourhood function that assigns to each q ∈ Q the number of neighbours of v in state q at configuration C, up to threshold β; in terms of the cutoff function, N_v^C = ⌈M_v^C⌉_β, where M_v^C(q) = |{u : {u, v} ∈ E ∧ C(u) = q}|. The successor configuration of C via a selection S is the configuration succ_δ(C, S) obtained from C by letting all nodes in S evaluate δ simultaneously, and keeping the remaining nodes idle. Formally, succ_δ(C, S)(v) = δ(C(v), N_v^C) if v ∈ S and succ_δ(C, S)(v) = C(v) if v ∈ V \ S. We write C → C' if C' = succ_δ(C, S) for some selection S, and →* for the reflexive and transitive closure of →. Given a schedule σ = (S_0, S_1, S_2, ...), the run of M on G scheduled by σ is the infinite sequence (C_0, C_1, C_2, ...) of configurations defined inductively as follows: C_0(v) = δ_0(λ(v)) for every node v, and C_{t+1} = succ_δ(C_t, S_t). We call C_0 the initial configuration. A configuration C is accepting if C(v) ∈ Y for every v ∈ V, and rejecting if C(v) ∈ N for every v ∈ V. A run ρ = (C_0, C_1, C_2, ...) of M on G is accepting resp. rejecting if there is t_0 ∈ ℕ such that C_t is accepting resp. rejecting for every t ≥ t_0. This is called acceptance by stable consensus in [4].

Distributed automata. A scheduler is a pair Σ = (s, f), where s is a selection constraint that assigns to every graph G = (V, E, λ) a set s(G) ⊆ 2^V of permitted selections such that every node v ∈ V occurs in at least one selection S ∈ s(G), and f is a fairness constraint that assigns to every graph G a set f(G) ⊆ s(G)^ω of fair schedules of G. We call the runs of a machine with schedules in f(G) fair runs (with respect to Σ).

A distributed automaton is a pair A = (M, Σ), where M is a machine and Σ is a scheduler satisfying the consistency condition: for every graph G, either all fair runs of M on G are accepting, or all fair runs of M on G are rejecting. Intuitively, whether M accepts or rejects G is independent of the scheduler's choices. A accepts G if some fair run of A on G is accepting, and rejects G otherwise. The language L(A) of A is the set of graphs it recognises. The property decided by A is the predicate φ_A on graphs such that φ_A(G) holds iff G ∈ L(A). Two automata are equivalent if they decide the same property.

Esparza and Reiter classify automata according to four criteria: detection capabilities, acceptance condition, selection, and fairness. The first two concern the distributed machine, and the last two the scheduler.
Detection
Machines with counting bound β = 1 or β ≥ 2 are called non-counting or counting, respectively (abusing language, non-counting is considered a special case of counting).

Acceptance
A machine is halting if its transition function does not allow nodes to leave accepting or rejecting states, i.e. δ(q, P) = q for every q ∈ Y ∪ N and every P ∈ [β]^Q. Intuitively, a node that enters an accepting/rejecting state cannot change its mind later. Halting acceptance is a special case of acceptance by stable consensus.

Selection
A scheduler Σ = (s, f) is synchronous if s(G) = {V} for every G = (V, E, λ) (at each step all nodes make a move); exclusive if s(G) = {{v} | v ∈ V} (at each step exactly one node makes a move); and liberal if s(G) = 2^V (at every step some set of nodes makes a move).

Fairness

A schedule σ = (S_0, S_1, ...) ∈ s(G)^ω of a graph G is pseudo-stochastic if for every finite sequence (T_0, ..., T_n) ∈ s(G)^* there exist infinitely many t ≥ 0 with (S_t, ..., S_{t+n}) = (T_0, ..., T_n). Loosely speaking, every possible finite sequence of selections is scheduled infinitely often. A scheduler Σ = (s, f) is adversarial if for every graph G, the set f(G) contains all schedules of s(G)^ω (i.e. the only unfair runs under adversarial scheduling are those in which a node is only selected finitely many times), and pseudo-stochastic if it contains precisely the pseudo-stochastic schedules.

Whether a schedule σ of a graph G = (V, E, λ) is pseudo-stochastic or not depends on s(G). For example, if s(G) = {V}, i.e. if the only permitted selection is to select all nodes, then the synchronous schedule V^ω is pseudo-stochastic, but if s(G) = 2^V, i.e. if all selections are permitted, then it is not.

This classification yields 24 classes of automata (four classes of machines and six classes of schedulers). It was shown in [16] that the decision power of a class is independent of the selection type of the scheduler (liberal, exclusive, or synchronous). This leaves 8 classes, which we denote using the following scheme:

  Detection          Acceptance             Fairness
  d: non-counting    a: halting             f: adversarial scheduling
  D: counting        A: stable consensus    F: pseudo-stochastic scheduling

Intuitively, the uppercase letter corresponds to the more powerful variant. Each class of automata is denoted by a string xyz ∈ {d, D} × {a, A} × {f, F}. Finally, it was shown in [16] that daf and daF have the same decision power, yielding the seven classes on the left of Figure 1.

In the rest of the paper, we generally assume that selection is exclusive (exactly one node is selected at each step). Since for synchronous automata there is only one permitted selection, adversarial and pseudo-stochastic scheduling coincide, and we therefore denote synchronous classes by strings xy$; for example, we write DA$.
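To make these definitions concrete, the following is a minimal executable sketch (our own illustration, not part of [16]) of the neighbourhood function N_v^C with counting bound β and the successor configuration succ_δ(C, S). The toy machine at the end is ours and only serves as a usage example.

```python
from typing import Callable, Dict, FrozenSet, Set

def neighbourhood(v, C: Dict, E: Set[FrozenSet], states, beta: int) -> Dict:
    """N_v^C: counts the neighbours of v in each state, cut off at beta."""
    N = {q: 0 for q in states}
    for u in C:
        if u != v and frozenset({u, v}) in E:
            N[C[u]] = min(N[C[u]] + 1, beta)
    return N

def successor(C: Dict, S: Set, delta: Callable, E: Set[FrozenSet], states, beta: int) -> Dict:
    """succ_delta(C, S): selected nodes update simultaneously, all others stay idle."""
    return {v: (delta(C[v], neighbourhood(v, C, E, states, beta)) if v in S else C[v])
            for v in C}

# Toy machine: a node in state 0 moves to state 1 as soon as it sees a neighbour in state 1.
states, beta = {0, 1}, 1
delta = lambda q, N: 1 if N[1] > 0 else q
E = {frozenset({1, 2}), frozenset({2, 3})}            # the path 1 - 2 - 3
C = {1: 1, 2: 0, 3: 0}
C = successor(C, {1, 2, 3}, delta, E, states, beta)   # one synchronous step
print(C)                                              # {1: 1, 2: 1, 3: 0}
```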
3. Limitations
Our lower bounds on the decision power of the seven classes follow from several lemmata proving limitations of their discriminating power, i.e. of their ability to distinguish two graphs by accepting the one and rejecting the other. We present four limitations. We state the first three, and prove the last one, a non-trivial limitation of dAF-automata. All proofs can be found in Appendix A. Recall that φ_A denotes the property decided by the automaton A.

Automata with halting acceptance cannot discriminate cyclic graphs. Automata with halting acceptance necessarily accept all graphs containing a cycle, or reject all graphs containing a cycle. Intuitively, given two graphs G and H with cycles, if one is accepted and the other rejected, one can construct a larger graph in which some nodes behave as if they were in G, and others as if they were in H. This makes some nodes accept and others reject, contradicting that the automaton accepts or rejects every graph.

Lemma 1.
Let A be a DaF-automaton. For all graphs G and H containing a cycle, φ_A(G) = φ_A(H).

Automata with adversarial selection cannot discriminate a graph and its covering.
Given two graphs G = ( V G , E G , λ G ) and H = ( V H , E H , λ H ), we say that H covers G if there is a covering map f : V H → V G , i.e. a surjection that preserves labels andneighbourhoods by mapping the neighbourhood of each v in H bijectively onto theneighbourhood of f ( v ) in G . Automata with adversarial selection cannot discriminate agraph from another one covering it. Intuitively, if H covers G then a node u of H andthe node f ( u ) of G visit the same sequence of states in the synchronous runs of A on G and H . Since these runs are fair for adversarial selection, both nodes accept, or bothreject. Lemma 2.
Let A be a DAf -automaton. For all graphs G and H , if H is a covering of G , then ϕ A ( G ) = ϕ A ( H ) . Let L G : Λ → N assign to each label ‘ ∈ Λ the number of nodes v ∈ V such that λ ( v ) = ‘ . We call L G the label count of G . Recall that a labelling property depends onlyon the label count of a graph, not on its structure. Corollary 3.
Let A be a DAf-automaton deciding a labelling property. For all graphs G and H, if L_H = λ·L_G for some λ ∈ ℕ_{>0}, then φ_A(G) = φ_A(H). This also holds when restricting to k-degree-bounded graphs.

Automata with adversarial selection and non-counting automata cannot discriminate beyond a cutoff.
Our final results show that for every DAf- or dAF-automaton deciding a labelling property there is a number K such that whether the automaton accepts a graph G or not depends only on ⌈L_G⌉_K, and not on the "complete" label count L_G. For DAf-automata, the cutoff K is simply β + 1, where β is the counting bound.

Lemma 4. Let A be a DAf-automaton with counting bound β that decides a labelling property. For all graphs G and H, if ⌈L_G⌉_{β+1} = ⌈L_H⌉_{β+1} then φ_A(G) = φ_A(H).

The proof that dAF-automata also cannot discriminate beyond a cutoff is more involved, and the cutoff value K is a complex function of the automaton. We give a comprehensive proof sketch; for the remaining details see Appendix A.

Lemma 5.
Let A be a dAF-automaton that decides a labelling property. There exists K ≥ 1 such that for every graph G and H, if ⌈L_G⌉_K = ⌈L_H⌉_K then φ_A(G) = φ_A(H).

Proof (sketch). Let A be a dAF-automaton, and let Q be its set of states. In this proof we consider the class of star graphs. A star is a graph in which a node called the centre is connected to an arbitrary number of nodes called the leaves, and no other edges exist. Importantly, for every graph G, there is a star with the same label count. We consider labelling properties (which do not depend on the graph), so if the property has a cutoff for star graphs, then the property has a cutoff in general. A configuration of a star graph G is completely determined by the state of the centre and the number of leaves in each state. So in the rest of the proof we assume that such a configuration is a pair C = (C_ctr, C_sc), where C_ctr denotes the state of the centre of G, and C_sc is the state count of C, i.e. the mapping that assigns to each q ∈ Q the number C_sc(q) of leaves of G that are in state q at C. We denote the cutoff of C as ⌈C⌉_m := (C_ctr, ⌈C_sc⌉_m).

Given a configuration C of A, recall that C is rejecting if all nodes are in rejecting states. We say that C is stably rejecting if C can only reach configurations which are rejecting. Given an initial configuration C_0, it is clear that A must reject if it can reach a stably rejecting configuration C from C_0. Conversely, if it cannot reach such a C, then A will not reject C_0, as there is a fair run starting at C_0 which contains infinitely many configurations that are not rejecting.

In the appendix we will use Dickson's Lemma to show that there is a constant m s.t. a configuration C of A on a star is stably rejecting iff ⌈C⌉_m is. For this it is crucial that for stars stable rejection is downwards closed in the following sense: if such a C is stably rejecting and has at least two leaves in a state q, then the configuration C' that results from removing one of these leaves is still stably rejecting.

Now, let C = (C_ctr, C_sc) denote a configuration of A on a star G = (V, E), and let q denote a state with C_sc(q) ≥ m(|Q| − 1) + 1. We will show: if A rejects C then it must also reject the configuration C' = (C'_ctr, C'_sc) which results from adding a leaf v_new in state q to G, i.e. C'_ctr := C_ctr, C'_sc(q) := C_sc(q) + 1, and C'_sc(r) := C_sc(r) for states r ≠ q.

We know that A rejects C, so there is some stably rejecting configuration D reachable from C. Our goal is to construct a configuration D' reachable from C' which fulfils ⌈D'⌉_m = ⌈D⌉_m, implying that D' is also stably rejecting. For this, let S ⊆ V denote the leaves of G which are in state q in C. There are |Q| states and m(|Q| − 1) + 1 nodes in S, so by the pigeonhole principle there is a state r ∈ Q s.t. in configuration D at least m nodes in S are in state r. Let v_old denote one of these nodes.

To get D', we construct a run starting from C', where v_new behaves exactly as v_old, until D' is reached. Afterwards, the nodes may diverge because of the pseudo-stochastic scheduler. However, this does not matter, as D' is stably rejecting.

Let ρ = (v_1, ..., v_ℓ) ∈ V^* denote a sequence of selections for A to go from C to D. We construct the sequence σ by inserting a selection of v_new after every selection of v_old, and define D' as the configuration which A reaches after executing σ from C'. We claim that D' is the same as D, apart from having an additional leaf in the same state as v_old.

This follows from a simple induction: v_old and v_new start in the same state and see only the centre node. As they are always selected one directly after the other, they remain in the same state as each other. For the centre we use the property that A cannot count: it cannot differentiate between seeing just v_old, or seeing an additional node in the same state. We remark that G being a star is crucial for this argument, which does not extend to e.g. cliques.

To summarise, we have shown that for every rejected star G and state q with L_G(q) ≥ m(|Q| − 1) + 2 (note the centre), the input H obtained by adding a node with label q to G is still rejected. An analogous argument shows that the same holds for acceptance, and by induction we find that K := m(|Q| − 1) + 2 is a valid cutoff.

Since the majority property does not have a cutoff, in particular we obtain:
Corollary 6. No DAf - or dAF -automaton can decide majority.
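As a quick illustration of why Corollary 6 follows (a sanity check of ours, not a replacement for the proof): for any K, the two label counts below have the same cutoff ⌈·⌉_K but give different answers to "more a's than b's", so majority cannot be decided from any cutoff of the label count.

```python
# Majority is not determined by any cutoff of the label count.
def cutoff(label_count, K):
    return {x: min(n, K) for x, n in label_count.items()}

K = 5                           # any K >= 1 works
L1 = {"a": K + 1, "b": K}       # majority holds
L2 = {"a": K, "b": K + 1}       # majority fails
assert cutoff(L1, K) == cutoff(L2, K)
```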
4. Extensions
We introduce automata with more powerful communication mechanisms, and show thatthey can be simulated by standard automata with only neighbourhood transitions. Wefirst present our notion of simulation (Definitions 1-3), and then in Sections 4.1-4.3 extendautomata with weak versions of broadcast (a node sends a message to all other nodes)and absence detection (a node checks globally if there exists a node occupying a givenstate), and with communication by rendezvous transitions (two neighbours change statesimultaneously).
Definition 1.
Let G = ( V, E, λ ) be a labelled graph and let
Q, Q' denote sets of states, with Q ⊆ Q'. For configurations C, C' : V → Q' we define the relation ∼_Q as C ∼_Q C' iff C(v) = C'(v) for all v with C(v) ∈ Q and C'(v) ∈ Q. Let π, π' denote runs over states Q and Q', respectively. We say that π' is an extension of π if there exists a monotonically increasing g : ℕ → ℕ with π(i) = π'(g(i)) for all i ∈ ℕ, and π'(j) ∼_Q π'(g(i)) or π'(j) ∼_Q π'(g(i+1)) for all g(i) ≤ j ≤ g(i+1).

To implement complicated transitions in an automaton without extensions, we have to decompose these transitions into multiple neighbourhood transitions. This maps to the notion of extension: instead of performing, say, a broadcast atomically in one step, agents will use many neighbourhood transitions, moving into intermediate states in the process. We mostly take a "black-box" approach to these intermediate states and do not assume that they have any additional properties based on the specific construction used. As mentioned in Section 2, by [16] we are free to use liberal or exclusive selection without changing the decision power; we assume that selection is exclusive, unless stated otherwise.

Definition 2.
Let G = (V, E, λ) be a labelled graph. Let π, π' denote runs of an automaton induced by the schedules v, v' ∈ V^ω, respectively. Let I, I' denote the sets of indices where π or π', respectively, execute non-silent transitions, i.e. I := {i : π_i ≠ π_{i+1}}. We say that π' is a reordering of π if there exists a bijection f : I → I' s.t. v(i) = v'(f(i)) for all i ∈ I, and f(i) < f(j) for all i < j where the nodes v(i) and v(j) are adjacent or identical. If that is the case, we also write π_f := π' for the reordering induced by f.

While an extension of a run can execute a single complicated transition in many steps instead of atomically, it cannot "interleave" different transitions, or different phases of a single transition. However, it is not possible to guarantee that property for our implementations, as e.g. the information that a broadcast has been initiated takes time to propagate. In the meantime, other agents could perform neighbourhood transitions. This is where reorderings are used: we will guarantee that every run can be reordered so that transitions do not "interleave". We will only allow reordering of nodes that are not adjacent, thus ensuring that a reordering performs the same transitions and answers the same. Lastly, we now define a concept encompassing all our more powerful mechanisms, therefore allowing us to only state the definition of simulation once.

Definition 3.
We say that P = ( Q, Run , δ , Y, N ) is a generalised graph protocol , where Q are states, δ , Y, N are initialisation function, accepting states and rejecting states,respectively, and Run is a function mapping every labelled graph G = ( V, E, λ ) over agiven alphabet Λ to a subset Run( G ) ⊆ ( Q V ) ω of fair runs. We define accepting/rejectingruns and the statement “ P decides a predicate ϕ ” analogously to distributed automata.Further, let P be an automaton with states Q ⊇ Q . We say that P simulates P , if forevery fair run π of P there is a reordering π f of π and a fair run π ∈ Run of P , s.t. π f is an extension of π . If P simulates P , we refer to the states in Q \ Q as intermediate states.We will apply this general definition to simulate broadcast, absence-detection, andrendezvous transitions by automata with only neighbourhood transitions. First, weremark that in a reordering of a run, whenever a node is selected, it sees the sameneighbourhood as in the original run. We show this in the Appendix as Lemma 16.Furthermore, we show that simulation is, in some sense, stronger than equivalence, i.e.deciding the same predicate. Lemma 7.
Let P = ( Q, Run , δ , Y, N ) denote a generalised graph protocol decidinga predicate ϕ , and P an automaton simulating P . Then there is an automaton P simulating P which also decides ϕ . The automaton P constructed in the proof of this lemma is basically P , exceptthat nodes remember their last state q ∈ Q in addition to their state in Q . Thisallows us to define accepting/rejecting states as states q ∈ Q for which last ( q ) ∈ Q isaccepting/rejecting in P . With this remark, we again proceed to leave out accepting andrejecting states in constructions of automata. Intuitively, a broadcast B ( q ) = ( r, f ) models a signal sent by some agent, which we willrefer to as initiating agent. After sending the signal, there is a local update, moving theinitiator from q to r , and a global one, where each other agent moves according to f .In our model the broadcasts are weak, which means that multiple broadcasts can occurat the same time and interfere with each other. In this case, all initiating agents stillperform their local updates, but the scheduler can decide which broadcast each other11gent receives. It is only guaranteed that every (non-initiating) agent receives exactlyone broadcast, and that this broadcast has actually been sent. Definition 4. A distributed machine with (weak) broadcasts is defined as a five-tuple M = ( Q, δ , δ, Q B , B, Y, N ), where ( Q, δ , δ, Y, N ) is a distributed machine, Q B ⊆ Q isa set of broadcast-initiating states, and B : Q B → Q × Q Q a set of (weak) broadcasttransitions . In particular, B maps a state q to a pair ( q , f ), with q a state and f : Q → Q denoting a response function. We will write broadcast transitions as q r, f , where f is usually given as a set { r f ( r ) : r ∈ Q } . (Mappings r r , and silent transitions q q, id may be omitted, id being the identity function.) Given a configuration C , abroadcast transition is executed on a selection S ⊆ V with C ( S ) ⊆ Q B by moving to anyconfiguration C with C ( v ) = q for v ∈ S with B ( C ( v )) = ( q , f ) C ( v ) = f ( C ( v )) for v / ∈ S s.t. B ( C ( u )) = ( q , f ) for some u ∈ S The set of valid selections is I := { S ⊆ V : S = ∅ is an independent set } . A schedule of M is a sequence σ ∈ ( { n, b } × I ) ω . Given a schedule σ , we generate a run π = ( C , C , ... )as follows. For each step i ≥ σ ( i ) = ( n, S ) for S ⊆ V and we execute aneighbourhood transition for S := S \ C − i ( Q B ), or σ ( i ) = ( b, S ) for S ⊆ V and weexecute a weak broadcast transition on S := S ∩ C − i ( Q B ). (If S is empty in either case,we set C i +1 := C i instead.)A schedule σ is adversarial if there are infinitely many i with σ ( i ) = ( b, S ) for some S , or for all v ∈ V there are infinitely many i with σ ( i ) = ( n, S ) and v ∈ S . It is pseudo-stochastic , if every finite sequence of selections w ∈ ( { n, b } × I ) ∗ appears infinitelyoften in σ . An xyz -automaton with weak broadcasts is a tuple ( M, Σ) defined analogouslyto an xyz -automaton, where M is a distributed machine with weak broadcasts and xyz ∈ { d , D } × { a , A } × { f , F } . In particular, we extend the definitions of fair runs,consensuses, and acceptance.A strong broadcast protocol is a tuple P = ( Q, δ , B, Y, N ) defined analogously to a dAF -automaton with weak broadcasts ( Q, δ , ∅ , Q, B, Y, N ), except that the set of validselections is I := {{ v } : v ∈ V } , meaning that broadcasts are executed exclusively. Thiscorresponds to broadcast consensus protocols introduced in [11]. We sometimes use the notation (
Q, δ )+ B to denote ( Q, δ, Q B , B ), where Q B is implicitlygiven as the states initiating non-silent broadcasts in B , i.e. Q B := { q : B ( q ) = ( q, id) } .As for graph-automata, we usually specify only the machine M ; the scheduler is givenimplicitly by the fairness condition and selection criteria. We also leave off initialisationfunction and accepting/rejecting states as appropriate. Additionally, to simplify ourproofs we will assume that ( n, S ) selections have | S | = 1, i.e. the scheduler selects agentsfor neighbourhood transitions exclusively. As S can only contain non-adjacent nodes,this does not change the model. The model in [11] also contains rendez-vous transitions, but they can be removed without affectingexpressive power. axxab axaab aaaxb aaaab bxxxx axab axaab xaab xaa aa a a aa aaa aaax aaaxbx axab xa xa xaax x xa
Figure 2: Sample runs of a broadcast protocol. (a) A prefix of a run of P on a line withfive nodes (shown on the left). (b) An extension of the same run. Only thefirst 12 steps are shown. (cid:4) denotes intermediate states. (c) A reordering of therun shown in (b), showing only the first 4 steps. Example 8.
Consider a dAF-automaton with weak broadcasts ({a, b, x}, δ, {a, b}, B), where for δ we have only the transition x, N ↦ a for all neighbourhoods N : Q → [1] with N(a) > 0 (i.e. an agent moves from x to a if it has a neighbour in a). We define B as

a ↦ a, {x ↦ a}        b ↦ b, {b ↦ a, a ↦ x}

Figure 2 shows sample runs of this protocol on the line with five nodes. Note that the simultaneous broadcasts of the two ends of the line interfere with each other. However, the next broadcast is initiated by a single node and reaches all nodes. The reordering depicted in (c) shows the interleaving of two different transitions: while the two ends have already initiated broadcasts, the information has not reached the middle node, and it can execute a neighbourhood transition.
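The following sketch (our own code, not from [16]) makes the weak-broadcast semantics of Definition 4 concrete for the protocol of Example 8. The map `assign` models the scheduler's freedom to decide which of the simultaneously sent broadcasts each non-initiating node receives; the configuration used is our own toy example and does not claim to reproduce the run of Figure 2.

```python
# Weak-broadcast step for the protocol of Example 8.
B = {  # initiator state -> (new initiator state, response function)
    "a": ("a", {"x": "a", "a": "a", "b": "b"}),
    "b": ("b", {"b": "a", "a": "x", "x": "x"}),
}

def weak_broadcast_step(config, initiators, assign):
    """One weak-broadcast step: initiators (an independent set in Q_B) do their
    local update; every other node applies the response function of the
    (actually sent) broadcast assigned to it by the scheduler."""
    assert all(config[v] in B for v in initiators)
    new = {}
    for v, q in config.items():
        if v in initiators:
            new[v] = B[q][0]                 # local update of the initiating agent
        else:
            _, f = B[config[assign[v]]]      # broadcast this node happens to receive
            new[v] = f[q]
    return new

# Line 1-2-3-4-5; both endpoints broadcast simultaneously and interfere.
config = {1: "a", 2: "x", 3: "x", 4: "x", 5: "b"}
print(weak_broadcast_step(config, {1, 5}, {2: 1, 3: 1, 4: 5}))
# nodes 2 and 3 hear the a-broadcast and become a; node 4 hears the b-broadcast and stays x
```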
Of course, our model of weak broadcasts would be of limited use if we were not able to simulate it. To do this, we use a construction similar to the three-phase protocol of the alpha-synchroniser [8], which Esparza and Reiter used to implement the synchronous scheduler [16]. Instead of simply using it to synchronise, we will propagate additional information, allowing the agents to perform the local update necessary to execute the broadcast.
Lemma 9.
Every automaton with weak broadcasts is simulated by some automaton without weak broadcasts of the same class.

Proof (sketch). Let P = (Q, δ, Q_B, B) denote an automaton with weak broadcasts. We will define an automaton P' = (Q', δ') simulating P. The protocol P' will have three phases, and a node will move forward a phase only if every neighbour is in the current or the next phase. Our states are Q' := Q ∪ (Q × {1, 2} × Q^Q). A state (q, i, f) means that the agent is in state q, phase i, and currently executing a broadcast with response function f. In phase 0 no broadcasts are executed; this corresponds to states Q.

Let β denote the counting bound of P. To specify the transitions, for a neighbourhood N : Q' → [β] we write N[i] := Σ_{q,f} N((q, i, f)) for i ∈ {1, 2} and N[0] := Σ_{q ∈ Q} N(q) to denote the number of adjacent agents in a particular phase, and choose a function g(N) ∈ Q^Q ∪ {□} s.t. g(N) = f ≠ □ implies N((q, 1, f)) > 0 for some q, and g(N) = □ implies N[1] = 0. The function g is used to select which broadcast to execute, if there are multiple possibilities. We define the following transitions for δ', for all states q ∈ Q and neighbourhoods N : Q' → [β].

q, N ↦ δ(q, N)            if q ∉ Q_B and N[0] = |N|                        (1)
q, N ↦ (q', 1, f)         if q ∈ Q_B and N[0] = |N|, with (q', f) := B(q)  (2)
q, N ↦ (f(q), 1, f)       if g(N) = f ≠ □                                  (3)
(q, 1, f), N ↦ (q, 2, f)  if N[0] = 0                                      (4)
(q, 2, f), N ↦ q          if N[1] = 0                                      (5)

If all neighbours are in phase 0, the agent either executes a neighbourhood transition via (1) or it initiates the broadcast in (2), depending on the state of the agent. For the latter, the agent immediately performs the local update. Once there is a phase 1 neighbour, the agent instead executes the broadcast of one of its neighbours via (3) (if there are multiple, g is used to select one). Note that (2) and (3) are indeed well-defined, as N[0] = |N| holds iff g(N) = □. Finally, transitions (4) and (5) move agents to the next phase, once all of their neighbours are in the same or the next phase.

Absence-detection enables agents to determine the support of the population. The agent initiates an absence-detection transition, and would then move to a state depending on the precise subset of states that are present in the graph. We consider a weaker version of this idea where, similar to our definition of weak broadcasts, multiple absence-detection transitions may interfere with each other. So instead of determining the support of the whole graph, an agent would only receive information from a subset. However, it is ensured that every agent is detected at least once.

While it is possible to define and implement a more general model involving absence-detection, we limit ourselves to a special case to simplify our proofs. In particular, we define the model using the synchronous scheduler and implement a simulation only for graphs of bounded degree.
Definition 5. A distributed machine with weak absence-detection is defined as a tuple( Q, δ , δ, Q A , A, Y, N ), where ( Q, δ , δ, Y, N ) is a distributed machine, Q A is a set of absence-detection initiating states, and A : Q A × Q → Q a set of (weak) absence-detection transitions . We execute an absence-detection on a configuration C by firstselecting a subset S ⊆ V with C ( S ) ⊆ Q A . We then assign each v ∈ S a S v ⊆ V s.t. v ∈ S v and S S v = V , and move to any configuration C with C ( v ) := A ( v, C ( S v )) for v ∈ S and C ( v ) := C ( v ) for v / ∈ S . (The S v need not be pairwise disjoint.) To definean absence-detection transition A ( q, S ) = q we write q, S q for q ∈ Q A , q ∈ Q and S ⊆ Q .We use the synchronous scheduler, so the only valid selection is V . A step at aconfiguration C is performed by having each agent execute a neighbourhood transitionsimultaneously, moving to C , followed by an absence-detection for agents in S := C − ( S A ), to go from C to C . If S is empty, the computation hangs, and we instead set14 := C . A DA$ -automaton with (weak) absence-detection is defined analogously to a
DA$ -automaton.As for broadcasts, absence detection is implemented using a three phase protocol. Toallow the information to propagate back, we use a distance labelling that effectivelyembeds a rooted tree for each initiating agent.
Lemma 10.
Every
DA$ -automaton with weak absence detection is simulated by some
DAf -automaton, when restricted to bounded-degree graphs.
Our next kind of transition is the rendez-vous transition, used in the well-studied model of population protocols [4]. In fact, population protocols on graphs have also been studied previously [3], and we use exactly the same model. For completeness, we have reproduced a formal definition in Appendix B.4.

A rendez-vous transition p, q ↦ p', q' allows two neighbouring nodes u and v in states p and q to interact and change their states to p' and q', respectively. This is similar to neighbourhood transitions, in that it is a local operation involving only adjacent nodes. However, it requires two agents to synchronise and can be used for pairwise transactions such as transferring a token from one node to another.

Lemma 11.
Every graph population protocol is simulated by some
DAF -automaton.
5. Unrestricted Communication Graphs
In this section we prove the characterisation of the decision power of the different classes as presented in the introduction. The classes are defined as follows. For a labelling property φ : ℕ^Λ → {0, 1} we have

• φ ∈ Trivial iff φ is either always true or always false,
• φ ∈ Cutoff(1) iff φ(L) = φ(⌈L⌉_1) for all multisets L ∈ ℕ^Λ,
• φ ∈ Cutoff iff there exists a K ∈ ℕ s.t. φ(L) = φ(⌈L⌉_K) for all multisets L ∈ ℕ^Λ, and
• φ ∈ NL iff φ is decidable by a non-deterministic Turing machine using logarithmic space.

The proof proceeds in the following steps:

1. DaF and therefore all automata classes with weak (i.e. halting) acceptance have an upper bound of Trivial and thus decide exactly Trivial (see Appendix C.1). This proof also works when restricted to degree-bounded graphs.
2. DAf and therefore also dAf can decide at most Cutoff(1) (see Appendix C.2).
3. dAf and therefore also DAf can decide at least Cutoff(1) (see Appendix C.3).
4. dAF can decide exactly Cutoff (see Appendix C.4).
5. DAF can decide a labelling property φ if and only if φ ∈ NL.

The statements for the simpler models follow rather directly from the statements in the limitations section; we moved these proofs to the appendix. In this section we sketch the hardest proof, the characterisation for DAF.

Lemma 12.
DAF-automata decide exactly the labelling properties in NL.

Proof (sketch). First, we briefly sketch that
DAF -automata can decide only labelling propertiesin NL . As the property does not depend on the graph, it suffices to consider runson the clique. Thus, we only need to store the number of agents in each state (andnot their location in the graph), which takes logarithmic space. In [12, Proposition 4],Blondin, Esparza and Jaax define a generic consensus protocol suitable to model runs ofa DAF -automaton on a clique, and they prove that it decides only labelling properties in NL as long as the step relation is in NL , which trivially holds here.Now we show the other direction. Let P = ( Q, δ, I, O ) denote an arbitrary strongbroadcast protocol deciding a predicate ϕ . It is known that strong broadcast protocolsdecide exactly the predicates in NL [12, Theorem 15].We start with the graph population protocol P token := ( Q token , δ token ), with states Q token := { , L, L , ⊥} and rendez-vous transitions δ token given by( L, L ) (0 , ⊥ ) , (0 , L ) ( L, , ( L, ( L ,
0) 〈 token 〉Using Lemma 11 we construct a
DAF -automaton P token = ( Q token , δ token ) simulating P token .Agents in states L, L have a token , while ⊥ is an error state. We want to combine theabove with P , so we set P step := P token × Q + 〈 step 〉, where 〈 step 〉 is a weak broadcastdefined as( L , q ) ( L, q ) , { ( t, r ) ( t, f ( r )) : ( t, r ) ∈ Q token × Q } 〈 step 〉for each broadcast q q , f in δ . This yields a DAF -computation P step = ( Q step , δ step )simulating P step via Lemma 9. Crucially, a broadcast is only initiated by an agent in astate ( L , · ). We use two states ( L and L ) to ensure that an agent can initiate either abroadcast or a (simulated) rendez-vous transition, but not both.If we start the computation with more than one token, they will eventually meet usingtransition 〈 token 〉 and an agent will move into the error state ⊥ . Afterwards, we wantto restart the computation, now with fewer agents in state ( L, · ). So we again add anadditional component to each state and consider the protocol P reset := P step × Q + 〈 reset 〉,where 〈 reset 〉 are the following broadcast transitions, for each q, q ∈ Q .(( ⊥ , q ) , q ) (( L, q ) , q ) , { ( r, r ) ((0 , r ) , r ) : r ∈ Q step , r ∈ Q } 〈 reset 〉For P reset we also define the input mapping as I reset ( x ) := (( L, I ( x )) , I ( x )) and the setof accepting states as O reset := { (( r, q ) , q ) : q ∈ O, q ∈ Q, r ∈ { , L }} . Using Lemma 916and Lemma 7) we get a DAF -protocol equivalent to P reset , so it suffices to show that P reset is equivalent to P .To prove correctness, we first consider P step in isolation and show that any initialconfiguration with k > k − token 〉 and our definition of simulation. Additionally, weshow that an initial configuration with exactly one token will reach a correct consensusand never have an agent in an error state. Here we use that 〈 token 〉 allows the token tomove around, so it is possible for any agent to execute a broadcast, and that there isonly one token, so it is impossible for two broadcasts to interfere.From these properties it follows that a run of P reset starting with more than one tokenwill eventually reset and restart the computation with strictly fewer tokens, until onlyone token is left. At that point, 〈 reset 〉 will never be executed, so we are left with a runof P step , which stabilises to a correct consensus.
6. Bounded-degree Communication Graphs
In this section we characterise the decision power of the models in the case where we restrict the degree of the input graphs to at most k for some constant k ∈ ℕ. As in the previous sections, we move the simpler proofs to the appendix in favour of describing the DAf-automaton for majority in more detail. Many results for the unrestricted set of graphs continue to work, in particular Corollary 3, showing that DAf-automata can only compute properties invariant under scalar multiplication (called ISM in Figure 1), as well as the result that automata with halting acceptance can only decide trivial properties. The new results in this section are as follows:

1. The expressive power of dAf is precisely Cutoff(1) for k ≥ 2.
2. DAF- and dAF-automata can decide a labelling property φ if and only if φ ∈ NSPACE(n) (see Appendix D.2).
3. DAf can decide all homogeneous threshold predicates, in particular majority.
DAf Can Compute Majority

Let φ : ℕ^l → {0, 1}, φ(x_1, ..., x_l) = 1 ⇔ a_1 x_1 + ... + a_l x_l ≥ 0 with a_1, ..., a_l ∈ ℤ, and let k denote the maximum degree of the communication graph.

Local Cancellation

We start by defining a protocol that performs local updates. Each agent stores a (possibly negative) integer contribution. If the absolute value of the contribution is large, then the agent will try to distribute the value among its neighbours. In particular, if a node v has contribution x with x > k, then it will "send" one unit to each of its neighbours with contribution y ≤ k. Those neighbours increment their contribution by 1, while v decrements its contribution accordingly. (This happens analogously for x < −k, with −1 in place of +1.) We define a DA$-automaton with weak absence detection P_cancel := (Q_cancel, δ_cancel, ∅, ∅), but use only neighbourhood transitions for the moment. We use states Q_cancel := {−A, ..., A}. Here A := max({|a_1|, ..., |a_l|} ∪ {2k}) is the maximum contribution an agent must be able to store: any agent with contribution x s.t. |x| ≤ k may receive an increment or decrement from up to k neighbours, so A ≥ k + k. The transitions δ_cancel are

x, N ↦ x − N[−A, −k−1] + N[k+1, A]   for x = −k, ..., k
x, N ↦ x − N[−A, k]                  for x = k+1, ..., A
x, N ↦ x + N[−k, A]                  for x = −A, ..., −k−1        〈cancel〉

Here we write N[a, b] := Σ_{i=a}^{b} N(i) for the total number of adjacent agents with contribution in the interval [a : b]. As we use the synchronous scheduler, at each step all agents are executed. It is thus easy to check that 〈cancel〉 preserves the sum of all contributions Σ_v C(v) for a configuration C, and that it does not increase Σ_v |C(v)|.
While it is not entirely obvious, we can show that the above protocol converges, in a specific sense.

Lemma 13. Let π = (C_0, C_1, ...) denote a run of P_cancel with Σ_v C_0(v) < 0. Then there exists an i s.t. either all configurations C_i, C_{i+1}, ... only have states in {−A, ..., −1}, or they only have states in {−k, ..., k}.

Convergence and Failure Detection
The key idea for the overall protocol is thatwe wait until P cancel converges, i.e. either all agents have “small” contributions, or allcontributions are negative. In the latter case, we can safely reject the input, as thetotal sum of contributions is negative, while in the former case we perform a broadcast,doubling all contributions. As we only double once all contributions are small, each agentcan always store the new value. This idea of alternating cancelling and doubling phaseshas been used extensively in the population protocol literature [5, 9, 10, 21].To both detect whether P cancel has already converged and perform the doubling, weelect a subset of agents as leaders. It is impossible to do a “true” leader election whereonly a single agent remains, as weak fairness prevents us from breaking certain symmetries.Instead, we will be able to determine a set of leaders which is “good enough”: whenevera failure condition occurs due to a disagreement of multiple leaders, we can eliminateone of those leaders and reset the computation, starting with a non-empty, proper subsetof the original set of leaders.We use weak absence-detection transitions to determine whether P cancel has converged.More specifically, set Q L := { , L, L double , L (cid:3) } and let ( Q, δ ) := P cancel × Q L . Thenwe define P detect := ( Q ∪ {⊥ , (cid:3) } , δ, Q cancel × { L } , A ), where A are the following fourabsence-detection transitions, for x ∈ Q cancel , s ⊆ Q ∪ {⊥ , (cid:3) } .( x, L ) , s
7→ ⊥ if (cid:3) ∈ s ( x, L ) , s ( x, L double ) if s ⊆ {− k, ..., k } × { } ( x, L ) , s ( x,
0) if ⊥ ∈ s ( x, L ) , s ( x, L (cid:3) ) if s ⊆ {− A, ..., − } × { } detect 〉Intuitively, ⊥ and Q cancel × { L, L double , L (cid:3) } are leader states, and (cid:3) is the (only) rejectingstate. State ⊥ is an error state: an agent in that state will eventually restart thecomputation. Via Lemma 10 we get a DAf -automaton P detect = ( Q detect , δ detect ) simulating P detect .We want our broadcasts to interrupt any (simulated) absence-detection transitions of P detect , by moving agents in intermediate states Q detect \ Q detect to the last “good” statein Q detect of that agent. To this end, we introduce the mapping last : Q detect → Q detect ,which fulfils last ( C i ( v )) ∈ { last ( C i − ( v )) , C i ( v ) } for all runs π = C C ... of P detect and i >
0, where C has only states of Q detect . It is, of course, not true that last exists for any simulation P detect of P detect . However, one can extend any simulation which doesnot, by having each agent “remember” its last state in Q detect .We construct a DAf -computation with weak broadcasts P bc by adding the followingtransitions to P detect .( x, L double ) (2 x, L ) , (cid:0) { ( y, (2 y,
0) : y ∈ {− k + 1 , ..., k − }}∪ { q
7→ ⊥ : q ∈ Q cancel × { L, L double , L (cid:3) }} (cid:1) ◦ last 〈 double 〉( x, L (cid:3) ) (cid:3) , (cid:0) { ( y, (cid:3) : y ∈ {− A, ..., − }}∪ { q
7→ ⊥ : q ∈ Q cancel × { L, L double , L (cid:3) }} (cid:1) ◦ last 〈 reject 〉These transitions are written somewhat unintuitively. Recall that we write a weakbroadcast transition as q q , f , where q, q ∈ Q detect are states and f : Q detect → Q detect is the transfer function. Usually, we specify f as simply a set of mappings { r f ( r ) : r ∈ Q detect } . Here, our transition essentially is q q , ( f ◦ last ) , where weuse ◦ to denote function composition, and f is given as a set of mappings. This meansthat each broadcast first moves all agents to their last state in Q detect , and then appliesthe other mappings as specified.Later, we will extend P bc with resets, which can restart the computation from anerror state. First, we will analyse the behaviour of P bc in more detail. To talk aboutaccepting/rejecting runs, we define the set of rejecting states as { (cid:3) } . (All other statesare accepting.) Let π := ( C , C , ... ) denote a fair run of P bc starting in a configuration C where all agents are in states {− A, ..., A } × { , L } , and at least one agent is in a state( · , L ). We refer to the agents starting in ( · , L ) as leaders . Note that it is not possible toenter a state in Q × { L, L double , L (cid:3) } ∪ {⊥} without being a leader. We usually disregardthe first component (if any) while referring to states of leaders.To argue correctness, we state two properties of P bc . First, it is not possible for all leaders to enter ⊥ , which ensures that a reset restarts the computation with a propersubset of the leaders. Second, P bc works correctly if no agent enters an error state. Here, L G : X → N denotes the label count of the input graph, i.e. L G ( x i ) = | C − ( a i ) | for i = 1 , ..., l . Lemma 14.
Assuming that no agents enters state ⊥ , π is accepting iff ϕ ( L G ) = 1 .Additionally, π cannot reach a configuration with all leaders in state ⊥ . esets Finally, we can add resets to the protocol, to restart the computation in caseof errors. We use Lemma 9 to construct a
DAf -computation P bc = ( Q bc , δ bc ) simulating P bc , and then set P reset := P bc × Q cancel + 〈 reset 〉, where the broadcasts are defined asfollows, for q ∈ Q cancel .( ⊥ , q ) (( q , L ) , q ) , { ( r, r ) (( r , , r ) : ( r, r ) ∈ Q bc × Q cancel } 〈 reset 〉To actually compute ϕ , we add the initialisation function I ( x i ) := (( a i , L ) , a i ) and theset of rejecting states N := { (cid:3) } to P reset (all other states are accepting). Proposition 15.
For every predicate φ : ℕ^l → {0, 1}, φ(x_1, ..., x_l) = 1 ⇔ a_1 x_1 + ... + a_l x_l ≥ 0, with a_1, ..., a_l ∈ ℤ, there is a bounded-degree DAf-automaton computing φ.
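As a final sanity check (ours, not from the paper): the predicates covered by Proposition 15 are homogeneous thresholds, and hence invariant under scalar multiplication, as Corollary 3 requires of every DAf-decidable labelling property. The snippet below verifies this on a few label counts.

```python
# Homogeneous thresholds are ISM: phi(lam * L) = phi(L) for every lam >= 1.
def threshold(a, L):
    """phi(L) = 1 iff sum_i a_i * L_i >= 0."""
    return int(sum(ai * xi for ai, xi in zip(a, L)) >= 0)

a = (1, -2)                                  # phi(x1, x2) = 1 iff x1 >= 2 * x2
for L in [(5, 2), (3, 2), (0, 0)]:
    assert all(threshold(a, tuple(lam * x for x in L)) == threshold(a, L)
               for lam in range(1, 6))
```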
7. Conclusion
We have characterised the decision power of the weak models of computation studied in [16] for properties depending only on the labelling of the graph, not on its structure. Our results for arbitrary networks show that the initially twenty classes of automata collapse into only four; further, only DAF can decide majority. For bounded-degree networks (a well-motivated restriction in a biological setting, also used in previous work, e.g. in [3, 13]), the picture becomes more complex. Counting and non-counting automata become equally powerful, an interesting fact for non-counting biological models where events are triggered by the concentration of a substance exceeding a threshold. Further, the class DAf, which uses adversarial scheduling, substantially increases its power, and becomes able to decide majority. As a consequence, we obtain that majority algorithms require (pseudo-)random scheduling to work correctly for arbitrary networks, but can work correctly under adversarial scheduling for bounded-degree networks. In particular, we have given a synchronous deterministic algorithm for majority and other properties in bounded-degree networks.

Decision power questions have also been studied in [20] for a model similar to Daf. The distinguishing feature of our work is the systematic study of the influence of a number of features on the decision power. There exist numerous results about the decision power of different classes of population protocols. Recall that agents of population protocols are indistinguishable and communicate by rendez-vous; this is equivalent to placing the agents in a clique and selecting an edge at every step. Angluin et al. showed that standard population protocols compute exactly the semilinear predicates [6]. Extensions with absence detectors or cover-time services [25], consensus-detectors [7], or broadcasts [11] increase the power to NL (more precisely, in the case of [25] the power lies between L and NL). Our result DAF = NL shows that these features can be replaced by a counting capability. Further, giving an upper bound on the number of neighbours increases the decision power to NSPACE(n), a class only reachable by standard population protocols (not on graphs) if agents have identities, or channels have memory [25, 19].

References

[1] Yehuda Afek, Noga Alon, Ziv Bar-Joseph, Alejandro Cornejo, Bernhard Haeupler, and Fabian Kuhn. Beeping a maximal independent set. Distributed Comput., 26(4):195–208, 2013.

[2] Dana Angluin. Local and global properties in networks of processors (extended abstract). In
STOC , pages 82–93. ACM, 1980.[3] Dana Angluin, James Aspnes, Melody Chan, Michael J Fischer, Hong Jiang, andRené Peralta. Stably computable properties of network graphs. In
InternationalConference on Distributed Computing in Sensor Systems , pages 63–74. Springer,2005.[4] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Per-alta. Computation in networks of passively mobile finite-state sensors.
DistributedComputing , 18(4):235–253, 2006.[5] Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by populationprotocols with a leader.
Distributed Comput. , 21(3):183–199, 2008.[6] Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computationalpower of population protocols.
Distributed Comput. , 20(4):279–304, 2007.[7] James Aspnes. Clocked population protocols. In
Proc. ACM Symposium on Principlesof Distributed Computing (PODC) , pages 431–440, 2017.[8] Baruch Awerbuch. Complexity of network synchronization.
J. ACM , 32(4):804–823,1985. doi:10.1145/4221.4227 .[9] Petra Berenbrink, Robert Elsässer, Tom Friedetzky, Dominik Kaaser, Peter Kling,and Tomasz Radzik. A population protocol for exact majority with o(log5/3 n)stabilization time and theta(log n) states. In
DISC , volume 121 of
LIPIcs , pages10:1–10:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.[10] Andreas Bilke, Colin Cooper, Robert Elsässer, and Tomasz Radzik. Brief announce-ment: Population protocols for leader election and exact majority with O (log2 n )states and O (log2 n ) convergence time. In PODC , pages 451–453. ACM, 2017.[11] Michael Blondin, Javier Esparza, and Stefan Jaax. Expressive power of broadcastconsensus protocols. In
CONCUR , volume 140 of
LIPIcs , pages 31:1–31:16. SchlossDagstuhl - Leibniz-Zentrum für Informatik, 2019.[12] Michael Blondin, Javier Esparza, and Stefan Jaax. Expressive Power of Broad-cast Consensus Protocols. In
Proceedings of the 30th International Conference on Concurrency Theory (CONCUR), pages 31:1–31:16, 2019.

[13] Olivier Bournez and Jonas Lefèvre. Population protocols on graphs: A hierarchy. In
UCNC , volume 7956 of
Lecture Notes in Computer Science , pages 31–42. Springer,2013.[14] Alejandro Cornejo and Fabian Kuhn. Deploying wireless networks with beeps. In
DISC , volume 6343 of
Lecture Notes in Computer Science , pages 148–162. Springer,2010.[15] Yuval Emek and Roger Wattenhofer. Stone age distributed computing. In
PODC ,pages 137–146. ACM, 2013.[16] Javier Esparza and Fabian Reiter. A classification of weak asynchronous modelsof distributed computing. In
CONCUR , volume 171 of
LIPIcs , pages 10:1–10:16.Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.[17] Ofer Feinerman and Amos Korman. Theoretical distributed computing meets biology:A review. In
ICDCIT , volume 7753 of
Lecture Notes in Computer Science , pages1–18. Springer, 2013.[18] Nissim Francez.
Fairness . Texts and Monographs in Computer Science. Springer,1986.[19] Rachid Guerraoui and Eric Ruppert. Names trump malice: Tiny mobile agentscan tolerate byzantine failures. In
ICALP (2) , volume 5556 of
Lecture Notes inComputer Science , pages 484–495. Springer, 2009.[20] Lauri Hella, Matti Järvisalo, Antti Kuusisto, Juhana Laurinharju, Tuomo Lempiäi-nen, Kerkko Luosto, Jukka Suomela, and Jonni Virtema. Weak models of distributedcomputing, with connections to modal logic.
Distributed Computing , 28(1):31–53,2015.[21] Adrian Kosowski and Przemyslaw Uznanski. Brief announcement: Populationprotocols are fast. In
PODC , pages 475–477. ACM, 2018.[22] Fabian Kuhn, Nancy A. Lynch, and Rotem Oshman. Distributed computation indynamic networks. In
STOC , pages 513–522. ACM, 2010.[23] Daniel Lehmann, Amir Pnueli, and Jonathan Stavi. Impartiality, justice and fairness:The ethics of concurrent termination. In
ICALP , volume 115 of
Lecture Notes inComputer Science , pages 264–277. Springer, 1981.[24] Nancy A. Lynch.
Distributed Algorithms . Morgan Kaufmann, 1996.[25] Othon Michail, Ioannis Chatzigiannakis, and Paul G. Spirakis. Mediated populationprotocols.
Theor. Comput. Sci. , 412(22):2434–2450, 2011.[26] Saket Navlakha and Ziv Bar-Joseph. Distributed information processing in biologicaland computational systems.
Commun. ACM, 58(1):94–102, 2015.

[27] Fabian Reiter. Asynchronous distributed automata: A characterization of the modal mu-fragment. In
ICALP , volume 80 of
LIPIcs , pages 100:1–100:14. Schloss Dagstuhl- Leibniz-Zentrum für Informatik, 2017.[28] David Soloveichik, Matthew Cook, Erik Winfree, and Jehoshua Bruck. Computationwith finite stochastic chemical reaction networks.
Natural Computing , 7(4):615–633,2008.
A. Proofs of Section 3
Definition 6.
For every labelled graph G = (V, E, λ) over the finite set of labels L we write L_G for the multiset of labels occurring in G, i.e. L_G : L → N, L_G(x) = |{v ∈ V | λ(v) = x}| for all labels x. We call L_G the label count of G.

A graph property ϕ is called a labelling property if for all labelled graphs G_1, G_2 with L_{G_1} = L_{G_2} we have ϕ(G_1) = ϕ(G_2). In such a case we also write ϕ(L_G) instead of ϕ(G).

Lemma 1.
Let A be a DaF -automaton. For all graphs G and H containing a cycle, ϕ A ( G ) = ϕ A ( H ) .Proof. Assume there exist cyclic graphs G and H such that A accepts G and rejects H .We construct a graph GH and a run of A on GH such that at least one node of GH halts in an accepting state, and at least one node of GH halts in a rejecting state. Thiscontradicts the assumption that A satisfies the consistency condition.Let ρ G and ρ H be fair runs of A on G and H , and let g and h be the earliest times atwhich all nodes of G and H have already halted.Fix edges e G = { u G , v G } and e H = { u H , v H } belonging to cycles of G and H . Weconstruct the graph GH in three steps. First, we put 2 g + 1 copies of G and 2 h + 1copies of H side by side. Let G i , H i denote the i -th copy of G and H , and let w iG and w iH denote the copy of a node w G in G i or w H in H i . Second, we remove the edges { u G , v G } , ..., { u gG , v gG } , { u H , v H } , ..., { u hH , v hH } . Third, we add the edges { v G , u G } , ..., { v g − G , u gG } , { v gG , u H } , { v H , u H } , ..., { v h − H , u hH } The construction is depicted in Figure 3. Observe that, since e G and e H belong to cyclesof G and H , the graph GH is connected.Figure 3: Construction in proof of Lemma 1. The dashed edges are removed and replacedby the blue edges. Only the four red states can initially detect the change.Let ρ GH be any fair run of A on GH that during the first max { g, h } steps selectsexactly the copies of the nodes selected at the corresponding steps of ρ G and ρ H . (Noticethat ρ GH exists, because whether a run is fair or not does not depend on any finite prefixof the run.) Initially, every node of GH except u G , v gG , u H , and v hH “sees” the sameneighbourhood as its corresponding node in G or H (i.e. the same number of neighboursin the same states). Therefore, after the first step of ρ GH all nodes of GH , except possiblythese four, are in the same state as their corresponding nodes in G or H after one step23f ρ G or ρ h . Since the nodes of G g are at distance at least g from u G , v gG , u H , and v hH ,during the first g steps of ρ GH any node w gG of G g visits the same sequence of states asthe node w G of G during the first g steps of ρ G . Since all nodes of G halt after at most g steps by definition, all nodes of G g halt in accepting states. Similarly, after h steps allnodes of H h halt in rejecting states. Lemma 2.
Let A be a DAf -automaton. For all graphs G and H , if H is a covering of G , then ϕ A ( G ) = ϕ A ( H ) .Proof. Let A be a DA$ -automaton accepting ϕ . Let f : V H → V G be a covering maprespecting the labelling, i.e. fulfilling λ H = λ G ◦ f . We prove that A accepts G iff itaccepts H .Let ρ G = ( C , C , ... ) be the synchronous run of A on G , and let ρ H = ( C , C , ... )be the synchronous run of A on H . Observe that, since selection is adversarial, thesynchronous runs are fair runs. Since A satisfies the consistency condition, it sufficesto show that ρ G accepts G iff it accepts H . For this we prove by induction on t that C t ( v ) = C t ( f ( v )) holds for every node v of H and t ≥
0. For t = 0 this follows fromthe fact that f respects the labelling. For t >
0, assume C t ( v ) = C t ( f ( v )) we prove C t +1 ( v ) = C t +1 ( f ( v )). Pick an arbitrary node u . Since C t ( u ) = C t ( f ( u )), both u and f ( u ) occupy the same state in C t . Since the run is synchronous, both are selected. Sincethe restriction of the covering f to the neighbourhoods of u and f ( u ) is a bijection, and C t ( v ) = C t ( f ( v )) holds for all v , in particular for all neighbours of u , both u and f ( u )move to the same states. So C t +1 ( u ) = C t +1 ( f ( u )) Corollary 3.
Let A be a DAf -automaton deciding a labelling property. For all graphs G and H , if L H = λL G for some λ ∈ N > , then ϕ A ( G ) = ϕ A ( H ) . This also holds whenrestricting to k -degree-bounded graphs.Proof. Let L be a multiset of labels and enumerate it as L = ( λ , λ , ..., λ | L | ). Enumerate λ · L by repeating this sequence λ times. Since ϕ is a labelling property, the underlyinggraph does not influence whether the property holds. We consider the following graphs:The cycle G labelled with L in the order we established, and the cycle G labelled with λ · L in the order above. We have that G covers G , and therefore using Lemma 2 obtain ϕ ( L ) = ϕ ( λ · L ). Since the graphs G and G are 2-degree-bounded, this statement holdsalso when restricting to k-bounded-degree. Lemma 4.
Let A be a DAf -automaton with counting bound β that decides a labellingproperty. For all graphs G and H , if d L G e β +1 = d L H e β +1 then ϕ A ( G ) = ϕ A ( H ) .Proof. Let A be a DA$ -automaton with counting bound β that decides ϕ .Since ϕ is a labelling property, A accepts a graph G iff it accepts the unique clique G (up to isomorphism) such that L G = L G . Therefore, it suffices to prove ϕ ( G ) = ϕ ( H )for the case in which G and H are cliques satisfying d L G e β +1 = d L H e β +1 .Since A is an automaton with adversarial selection, the synchronous runs ρ G =( C G , C G , ... ) of A on G , and ρ H = ( C H , C H , ... ) of A on H are fair runs of A . Since A A accepts G iff ρ G is an accepting run, and similarlyfor H . So it suffices to show that ρ G is an accepting run iff ρ H is.Let Q Gt : Q → N be the mapping that assigns to each state q of A the number of nodes of G that are in state q at time t . Define Q Ht analogously. We claim: d Q Gt e β +1 = d Q Ht e β +1 for every t ≥
0. The proof is by induction on t . For the base case t = 0, let q be a state.By definition, Q G ( q ) is the number of nodes of G that are initially at state q . Let Λ q bethe set of labels that are mapped to q by the initialisation function of A . Then we have Q G ( q ) = P ‘ ∈ Λ q L G ( ‘ ). Since d L G e β +1 = d L H e β +1 , we get d Q G e β +1 = d Q H e β +1 .For the induction step, assume d Q Gt e β +1 = d Q Ht e β +1 . We prove d Q G ( t +1) e β = d Q H ( t +1) e β . It suffices to show that if two nodes u and v of G ∪ H are in the samestate at time t , then they are also in the same state (possibly a different one) at time t + 1. For this, observe first that, since G and H are cliques, all nodes of G respectively H are neighbours. So, since d Q Gt e β +1 = d Q Ht e β +1 , the nodes u and v see the sameneighbourhood up to β at time t (i.e. they see the same number of nodes in each stateup to bound β ; we go from β + 1 to β because the neighbourhood of the node doesnot contain the node itself). In other words, N C Gt u = N C Gt v holds. Since the runs ρ G and ρ H are synchronous, both u and v are selected at time t to make a move. Since N C Gt u = N C Gt v , they move to the same state, and the claim is proved.Assume that ρ G is an accepting run of A . Then there is a time t such that for every j ≥ G are at accepting states in C G ( t + j ) . By the claim, the same holds for C H ( t + j ) . So ρ H is an accepting run of A . The other direction is analogous. Lemma 5.
Let A be a dAF -automaton that decides a labelling property. There exists K ≥ such that for every graph G and H , if d L G e K = d L H e K then ϕ A ( G ) = ϕ A ( H ) .Proof. We start by repeating some notation of the proof sketch. Let A be the dAF -automaton, and let Q be its set of states. We first gather some properties of A on stargraphs . A star is a graph in which a node called the center is connected to an arbitrarynumber of nodes called the leaves , and no other edges exist. Since we consider graphs upto isomorphism, a configuration of a star graph is completely determined by the state ofthe center and the number of nodes in each state. So in the rest of the proof we assumethat a configuration of a star graph G is a pair C = ( C ctr , C sc ), where C ctr denotes thestate of the center of G , and C sc is the state count of C , i.e. the mapping that assigns toeach q ∈ Q the number C sc ( q ) of nodes of G that are in state q at C . We denote the cutoff of C as d C e m := ( C ctr , d C sc e m ).Given a configuration C of A , recall that C is rejecting if all states are rejecting. Wesay that C is stably rejecting if C can only reach configurations which are rejecting. Givenan initial configuration C , it is clear that A must reject if it can reach a stably rejectingconfiguration C from A . Conversely, if it cannot reach such a C , then A will not reject C , as there is a fair run starting at C which contains infinitely many configurationswhich are not rejecting.The statement missing in the proof sketch is the following: There exists a number m ∈ N such that a star configuration C is stably rejecting if and only if d C e m is stablyrejecting. 25o define m , we have to first define some additional concepts. Given two configurations C, D of A on stars G and H , we say that C (cid:22) D holds if (a) C ctr = D ctr , (b) C sc ≥ D sc ,and (c) D sc ( q ) = 0 implies C sc ( q ) = 0 for every q ∈ Q . It is easy to see that (cid:22) is a partialorder. Further, if C (cid:22) D then C is accepting (rejecting) iff D is accepting (rejecting). Aset C of configurations is upward closed if C ∈ C and D (cid:23) C implies D ∈ C .Let C → ∗ D denote that A can reach the configuration D from C in zero or moresteps. Given a set of configurations C , let P re ∗ ( C ) be the set of configurations C suchthat C → ∗ D for some D ∈ C . The following two claims will finally allow us to define m :(1) If C → ∗ D and C (cid:23) C , there exists D (cid:23) D such that C → ∗ D .Since C (cid:23) C , we can obtain C from C by adding leaves in states which alreadyoccur. Similar to the last argument given in the proof sketch, we let every one ofthese extra leaves copy one of the leaves from C which starts in the same state.Formally, let v new , , ..., v new ,n be the extra leaves. For every extra leaf v new ,i let v old ,i be some leaf starting in the same state, not necessarily distinct for i = i .Now let ρ = ( v , ..., v ‘ ) ∈ V ∗ denote a sequence of selections for A to go from C to D . We construct a sequence σ ∈ V ∗ by inserting a selection of v new ,i after everyselection of v old ,i , if multiple i have the same v old ,i , insert all of them after v old ,i insome order. Define D as the configuration which A reaches after executing σ from C . We claim that D is the same as D , apart from having additional leaves in thesame states as v old . This follows from a simple induction: v old ,i and v new ,i start inthe same state and see only the root node. 
As they are always selected without theroot being selected in between, they will remain in the same state as each other.For the centre we use the property that A cannot count: it cannot differentiatebetween seeing just v old ,i , or seeing one (or maybe more) additional nodes in thesame state.(2) For every upward-closed set of configurations C , the set P re ∗ ( C ) has finitely manyminimal configurations w.r.t. (cid:22) . We denote this finite set by M inP re ∗ ( C ).Assume for contradiction that P re ∗ ( C ) has infinitely many minimal configurations.We can enumerate its elements to obtain an infinite sequence C , C , ... of con-figurations of star graphs. By the pigeonhole principle, there exists an infinitesubsequence C i , C i , ... such that C ctr i j = q for some state q and every j ≥
0, and C sc i j ( q ) = 0 iff C sc i k ( q ) = 0 for every j, k ≥ q . By Dickson’s Lemma(for every infinite sequence of vectors v , v , ... ∈ N k , there exist two indices i < j such that v i ≤ v j with respect to the pointwise partial order), there exist j, k suchthat C i j ≤ C i k . But then C i j and C i k satisfy conditions (a)-(c) of the definition of (cid:22) , and so C i j (cid:22) C i k , which is a contradiction.Now we can define m . Consider the set C of non-rejecting configurations of A on allstar graphs, with any number of leaves. It is easy to see that C is upward closed. Bythe claim the set M inP re ∗ ( C ), i.e. the set of smallest configurations from which it ispossible to reach a non-rejecting configuration, is finite. Let m be the number of nodesof the largest star such that some configuration of it belongs to M inP re ∗ ( C ). In other26ords: for every star with more than m nodes, and for every configuration C of thisstar that can reach a non-rejecting configuration, i.e. is not stably rejecting, there is aconfiguration C ≺ C of another star that can also reach a non-rejecting configuration, i.e.is not stably rejecting. Combining this with the fact that M inP re ∗ ( C ) is upward-closed,we obtain that for every configuration C , C is not stably rejecting if and only if d C e m isnot stably rejecting. By contraposition, this implies the statement we wanted to prove. B. Proofs of Section 4
The main goal of this section as a whole is to prove that the different models with weakbroadcasts, weak absence detection as well as rendezvous transitions can be simulated.We will proceed as follows.1. We start by proving lemmata 16 and 7, which are general properties of reorderings.2. In subsection B.1 we show a general lemma concerning reorderings of three-phaseprotocols, which are used for both weak broadcasts and weak absence detection. Inparticular, we prove that there is a reordering where all nodes move in lock step.3. This will dramatically shorten the proofs that weak broadcasts and weak absencedetection can be simulated, which make up the next two subsections.4. At last, we prove that rendezvous transitions can be simulated by
DAF -automata.
Lemma 16.
Let G = ( V, E, λ ) be a labelled graph. Let π = ( C , C , ... ) denote a runon G , π f = ( C , C , ... ) a reordering of π , and v a node. For all t ∈ N where v isselected for a non-silent transition and all nodes u adjacent or identical to v we have C t ( u ) = C f ( t ) ( u ) .Proof. Write the node sequence π as ( v , v , v , ... ). Write the neighbourhood of a node v as N ( v ). Write the reordered run as π = ( C = C , C , C , ... ). We assume wlog thatall silent transitions were removed, unless no non-silent transition is enabled. The proofwill proceed by induction on t .For the induction basis t = 0 we have to prove that before time f (0), no neighbourof v has changed state yet in the reordered run. Assume for contradiction that someneighbour has changed state already, i.e. we have f ( i ) < f (0) for some i with v i ∈ N ( v ).Since i > { v i , v } ∈ E , we would have f ( i ) > f (0), contradicting the definition ofa reordering.For the induction step, let v ∈ N ( v t ). We have to prove C t ( v ) = C f ( t ) ( v ). Consider thelatest time s < t where v has been selected. We obtain C t ( v ) = C s +1 ( v ). By inductionhypothesis, we have C s ( N ( v )) = C f ( s ) ( N ( v )). This implies C s +1 ( v ) = C f ( s )+1 ( v ). Since v is a neighbour of v t , we have f ( s ) < f ( t ). We claim that v has not been selected betweentime f ( s ) and f ( t ) in the reordered run. Assume for contradiction that v has beenselected at time f ( s ) < t < f ( t ). Let m be such that f ( m ) = t , which exists because we27emoved silent transitions. Since f ( s ) < f ( m ) < f ( t ) and { v m , v t } ∈ E , we have m < t .We similarly obtain s < m . Therefore s would not have been the latest time before t where v moved, yielding a contradiction. Therefore we have C f ( s )+1 ( v ) = C f ( t ) ( v ). Lemma 7.
Let P = ( Q, Run , δ , Y, N ) denote a generalised graph protocol decidinga predicate ϕ , and P an automaton simulating P . Then there is an automaton P simulating P which also decides ϕ .Proof. For P we reuse δ as initialisation function. We change P so that each agentremembers its last non-intermediate state. We define Y as the set of states where thelast non-intermediate state is in Y , and define N analogously. Then we use Y , N asaccepting/rejecting states for P . Any run π of P starting in an initial configuration hasa reordering π f which is an extension of a run τ of P . If P accepts, then every node v isonly finitely often in a state Q \ Y in τ , which then also holds for π f and π . Thus, v willeventually remain in Y , and π accepts as well. Similarly, P will reject if P does.This lemma also explains why we treat silent transitions separately: To simulate weakbroadcasts, we use a three-phase protocol and want to prove that we can reorder the runsuch that all nodes move to phase 1, then all to phase 2 and so on. However, Lemma 16shows that if some node v observes a neighbour who is behind a phase and a neighbourwho is ahead, then this would have to be reflected at some point in time in the reorderedrun, making it impossible for all nodes to be at most one phase apart. However, in thiscase our protocols have v do nothing, so removing silent transitions resolves the issue. B.1. Reorderings in three-phase automata
As we want to make a general statement about three-phase protocols, we start by formally introducing the notion. The idea is that each state belongs to one of three phases, that agents may not move directly to the previous phase, and that an agent does nothing unless all its neighbours are in the same or the next phase. Further, we require that an agent either moves to the next phase or does nothing if it has a neighbour in the next phase. The last condition is rather technical; it will later allow us to construct a reordering which executes the transitions that do not move agents into the next phase first, before executing the other transitions.
Definition 7.
Let P = (Q, δ_0, δ, Y, N) denote an automaton with counting bound β. We say that P is a three-phase automaton if Q = Q_0 ∪ Q_1 ∪ Q_2, for some pairwise disjoint Q_0, Q_1, Q_2, and for all states q ∈ Q_i and neighbourhoods N : Q → [β] we have

1. δ(q, N) = q if N(r) > 0 for some r ∈ Q_{i−1},
2. δ(q, N) ∈ Q_i ∪ Q_{i+1}, and
3. δ(q, N) ∈ {q} ∪ Q_{i+1} if N(r) > 0 for some r ∈ Q_{i+1}.

Here, we set Q_3 := Q_0 and Q_{−1} := Q_2 for convenience. We refer to states in Q_i as phase-i states.

As defined above, it is possible for a three-phase automaton to get "stuck" when some agents move to the next phase, but others cannot make progress. Our protocols will not have this problem, so we define the following semantic constraint.

Definition 8.
A three-phase automaton P = (Q, δ_0, δ, Y, N) is nonblocking if from every reachable configuration C : V → Q which has agents from at least two phases, a transition in which an agent changes phase will eventually be executed.

With these definitions we can now state the main proposition of this section.

Proposition 17.
Let P = (Q, δ) denote a nonblocking three-phase automaton and C_0 : V → Q an initial configuration. Then every fair run π = (C_0, C_1, ...) of P has a reordering π_f = (C'_0, C'_1, ...) which fulfils, for each step i,

1. C'_i(V) ⊆ Q_j ∪ Q_{j+1} for some j, and at step i an agent moves to its next phase, or
2. C'_i(V) ⊆ Q_j for some j.

The proof will take up the remainder of this section. We now fix such a P = (Q, δ) and π = (C_0, C_1, ...).

It will be convenient to count the total number of phase changes of a node, so we define the phase count pc(v, i) ∈ N as the smallest function which is non-decreasing w.r.t. i and has C_i(v) ∈ Q_j for all i, v and j := (pc(v, i) mod 3). Intuitively, this means that we increment pc whenever a node moves to the next phase. Observe that pc(v, i + 1) − pc(v, i) ≤
1, as a node can move at most one phase per transition.
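As an illustration, the phase count just defined can be computed directly from a run. The following is a small sketch under the assumption that configurations are given as dictionaries from nodes to states and that phase_of returns the phase (0, 1 or 2) of a state; the function name is ours, not the paper's.

def phase_counts(run, phase_of, node):
    """Phase count pc(node, i) for i = 0, ..., len(run) - 1 (illustrative sketch)."""
    counts = []
    pc = 0
    for config in run:
        # smallest non-decreasing value whose phase (mod 3) matches the node's current phase
        while phase_of(config[node]) != pc % 3:
            pc += 1
        counts.append(pc)
    return counts

Since a node changes by at most one phase per step, the inner loop advances pc at most once per configuration, matching the observation above.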
Lemma 18.
For all adjacent nodes u, v ∈ V we have |pc(u, i) − pc(v, i)| ≤ 1 for all i.

Proof. Assume for contradiction that the statement does not hold and pick appropriate u, v, i where i is minimal and pc(u, i) = pc(v, i) −
1. Then at step i − 1, node v must move to the next phase, but this is prohibited by condition (2) of Definition 7.

Lemma 18 implies that if one node has infinitely many phase changes, then all nodes do. We will now show the stronger statement that if a node has m phase changes, for m ∈ N ∪ {∞}, then all other nodes have as well. Here we use that P is nonblocking, so it is not possible for nodes to become stuck in prior phases.

Lemma 19. If pc is bounded, i.e. pc(v, i) ≤ M for some M ∈ N and all v, i, then there are m, i ∈ N with pc(v, j) = m for all nodes v and j ≥ i.

Proof. Assume that pc is bounded. As pc is non-decreasing we can thus find an i s.t. no node moves to another phase after step i. If C_i has two nodes u, v with different phase counts, i.e. pc(u, i) < pc(v, i), then there must also be such u, v which are adjacent. Due to Lemma 18, the phase counts of u and v differ only by 1, therefore we know that C_i(u) ∈ Q_j and C_i(v) ∈ Q_{j+1} for some j. As P is nonblocking, eventually an agent will move to its next phase, contradicting our choice of i. So at step i all phase counts must be pairwise equal.

Lemma 19 is crucial for the reordering, since in the new run of A all nodes are supposed to perform the same number of phase changes. Now we can define the reordering f. Let (v_0, v_1, v_2, ...) ∈ V^ω be the sequence of selections inducing run π. We define a new ordering on natural numbers by

i ≤_f j ⇔ (pc(v_i, i), pc(v_i, i + 1), i) ≤_lex (pc(v_j, j), pc(v_j, j + 1), j)

where ≤_lex denotes the lexicographical ordering. The intuition is that we always execute a transition from a node with the lower phase count. Amongst those, we pick one that will not move the node to its next phase, if possible. Finally, from the remaining choices we pick the one that occurred first in the original run π. Let I := {i ∈ N : C_i ≠ C_{i+1}} denote the indices of non-silent steps of π. The function f : I → N can now be defined as f(i) := |{j ∈ I : j ≤_f i}| −
1. (We will see shortly that f is indeed well-defined.) Lemma 20. π f is a reordering.Proof. We first check that f is well-defined. For this not to be the case, we would haveto have an i with j ≤ f i for infinitely many j . In particular, there must be a node u s.t.there are infinitely many j ≤ f i with v j = u , which implies that pc ( u, j ) < pc ( v i , i + 1)for infinitely many j . Therefore pc must be bounded by Lemma 18, but then Lemma 19implies that pc ( v, j ) < pc ( v i , i + 1) can only occur for finitely many j , a contradiction.The function f is clearly a bijection. In order to show that it induces a reordering, let i, j ∈ I with i < j and v i , v j being adjacent. (Note that the transitions at steps i and j are not silent, by definition of I .) We need to show that f ( i ) < f ( j ). Assuming that f ( i ) ≥ f ( j ) holds, i.e. i ≥ f j , there are two possible cases.Case 1: pc ( v i , i ) > pc ( v j , j ). As pc ( · , t ) is non-decreasing in t and i < j , we get pc ( v i , i ) > pc ( v j , i ). Further, v i and v j are adjacent, so Lemma 18 implies that pc ( v i , i ) = pc ( v j , i ) + 1. But then, by condition (1) of Definition 7, the transition at step i must besilent, contradicting our assumption.Case 2: pc ( v i , i ) = pc ( v j , j ) and pc ( v i , i + 1) > pc ( v j , j + 1). Again, pc ( · , t ) is non-decreasing in t , so pc ( v i , j ) ≥ pc ( v i , i + 1) > pc ( v j , j + 1) ≥ pc ( v j , j ). Using Lemma 18we now get pc ( v i , j ) = pc ( v j , j ) + 1 and thus pc ( v j , j + 1) = pc ( v j , j ). So at step j node v i is one phase ahead of its neighbour v j and v j moves neither to the next phase, nordoes it perform a silent transition. This contradicts condition (3) of Definition 7.Let π := π f denote the reordered execution and define pc analogously to pc . Tocomplete the proof of Proposition 17, it suffices to show that at each step of π an agentwith the smallest phase count will be selected, and amongst those the agents that donot move to the next phase are preferred. Intuitively, this sounds reasonable, as we havedefined the reordering f in precisely this manner. We do, however, need to argue brieflythat the phase counts pc of the reordered execution correspond directly to the originalphase counts pc . Lemma 21.
Let v denote a node and let i ∈ I with v i = v . Then pc ( v, f ( i )) = pc ( v, i ) .Proof. This follows from a simple induction on i combined with Lemma 16.This concludes the proof of Proposition 17.30 .2. Simulating Weak Broadcasts Lemma 9.
Every automaton with weak broadcasts is simulated by some automaton without weak broadcasts of the same class.

(We repeat the construction from the proof sketch for clarity.) Let P = (Q, δ, Q_B, B) denote an automaton with weak broadcasts. We will define an automaton P' = (Q', δ') simulating P. The protocol P' will have three phases, and a node will move forward a phase only if every neighbour is in the current or the next phase. Our states are Q' := Q ∪ Q × {1, 2} × Q^Q. A state (q, i, f) means that the agent is in state q, phase i, and currently executing a broadcast with response function f. In phase 0 no broadcasts are executed; this corresponds to states Q.

Let β denote the counting bound of P. To specify the transitions, for a neighbourhood N : Q' → [β] we write N[i] := Σ_{q,f} N((q, i, f)) for i ∈ {1, 2} and N[0] := Σ_{q∈Q} N(q) to denote the number of adjacent agents in a particular phase, and choose a function g(N) ∈ Q^Q ∪ {□} s.t. g(N) = f ≠ □ implies N((q, 1, f)) > 0 for some q, and g(N) = □ implies N[1] = 0. The function g is used to select which broadcast to execute, if there are multiple possibilities. We define the following transitions for δ', for all states q ∈ Q and neighbourhoods N : Q' → [β].

q, N ↦ δ(q, N)            if q ∉ Q_B and N[0] = |N|                        (1)
q, N ↦ (q', 1, f)         if q ∈ Q_B and N[0] = |N|, with (q', f) := B(q)  (2)
q, N ↦ (f(q), 1, f)       if g(N) = f ≠ □                                  (3)
(q, 1, f), N ↦ (q, 2, f)  if N[0] = 0                                      (4)
(q, 2, f), N ↦ q          if N[1] = 0                                      (5)

We will use the definitions and results from the previous section, Appendix B.1, where we have constructed a general reordering for three-phase protocols.

Lemma 22.
The automaton P is a nonblocking three-phase automaton.Proof. Using Q i to denote the phase i states as defined above, it is easy to check that P isa three-phase automaton. It remains to show that P is nonblocking, so let π = ( C , C , ... )denote a fair run of P and C i : V → Q a configuration where two agents are in differentphases. We define the phase count pc as for the proof of Proposition 17, meaning that pc ( v, j ) is the number of phase changes of node v until step j .Let U := { u ∈ V : pc ( u, i ) = min v pc ( v, i ) } denote the set of nodes which have aminimal number of phase changes at step i . We know that C i has nodes of two differentphases, so U is a proper subset of V and we can pick adjacent nodes u, v with u ∈ U and v / ∈ U . We claim that the next step selecting u will move it to its next phase. This canbe seen by a simple case distinction: if u is in phase 0, 1, or 2, then transition (3), (4),or (5) will move it to the next phase, respectively. Selecting u ∈ U ensures that u hasno neighbours in the previous phase, which already suffices to enable (4) and (5), whilehaving u adjacent to v ensures that (3) can be executed.31gain, let π denote a fair run of P . Using Proposition 17 we find a specific reordering π f = ( C , C , ... ) of π . In particular, every configuration C i either has all agents in thesame phase, or it has agents in at most two phases and at step i one agent moves to theits next phase. We are now going to show that π f is an extension of a fair run τ of P ,which will complete the proof of Lemma 9.First, note that in π f there are infinitely many configurations C i where all agents arein the same phase. Due to transitions (4) and (5) it is not possible to perform a silenttransition if all agents are in phase 1 or 2, respectively. So the set I := { i ∈ N : C i ( V ) ⊆ Q } of indices i where C i has only phase 0 agents has infinitely many elements. We define themapping g : N → N as the unique bijection with g ( N ) = I which is strictly increasing,and set τ := ( K , K , ... ) where K i := C g ( i ) for all i . Lemma 23. π f is an extension of τ .Proof. Fix any i, j ∈ N with g ( i ) < j < g ( i + 1). Due to the definition of g we know that C g ( i )+1 does not contain only phase 0 agents, while C g ( i ) does. So an agent has moved tothe next phase at step g ( i ) in π f and the properties of π f guarantee us that there are t , t with g ( i ) < t , t < g ( i + 1) s.t. C t ( C t ) has only agents in phase 1 (phase 2). Moreover,we know that in π f every step t with g ( i ) ≤ t < t or t ≤ t < g ( i + 1) moves an agent toits next phase or is silent. As phases 1 and 2 consist of only intermediate states, thisimplies that C g ( i ) ∼ Q C j if j < t , C g ( i +1) ∼ Q C j if j > t , and C g ( i ) ∼ Q C j ∼ Q C g ( i +1) if t ≤ j ≤ t .The next lemma is mostly a matter of looking carefully at the definition of ourtransitions (1)-(5). Lemma 24. τ is a run of P .Proof. Fix any i ∈ N . If g ( i + 1) = g ( i ) + 1, then step g ( i ) of π f has simply executedtransition (1), which correctly performs a neighbourhood transitions for an agent notin a broadcast-initiating state. Otherwise, as we argued for Lemma 23, there is a g ( i ) < t < g ( i + 1) s.t. C t has only agents in phase 1. Let S denote the set of agentsexecuting transition (2) between steps g ( i ) and t in π f . As agent can only move fromphase 0 to phase 1 via transitions (2) and (3), and transition (3) is enabled iff a neighbour isalready in phase 1, we find that S is both nonempty and an independent set. 
Additionally,the definition of (2) ensures that S contains only agents in broadcast-initiating states(i.e. K i ( v ) ∈ Q B for v ∈ S ).Now we simply note that K i +1 is the result of executing a weak broadcast transitionon K i on the selection S . Transition (2) correctly perform the local update, while (3)moves the node according to some response function.Finally, we have to show that τ is fair. Lemma 25. τ is fair.Proof. There are two cases, depending on whether P uses adversarial or pseudo-stochasticscheduling. 32e start with the former. Here, π either contains infinitely many transitions wherean agent moves to the next phase, in which case τ executes infinitely many broadcastsand is fair by definition, or there is some i with K i + j = C g ( i )+ j for all j ∈ N . Thetransitions in π f after step i do not move to new phases, so they are not affected by thereordering f . In particular, as π is fair, so is τ .Now we consider the case of pseudo-stochastic scheduling. It is well-known that apseudo-stochastic schedule (i.e. every finite sequence of selections appears infinitely often)implies that every configuration C which can be reach infinitely often will be reachedinfinitely often. (This follows from there being only finitely many distinct configurationsin a run.) That argument can be strengthened to show that, for any finite sequence σ ofselections, σ will be executed starting from C infinitely often.Further note that any configuration C which appears infinitely often in τ and thus in π f can be reached infinitely often in π . This can be seen by choosing a specific reorderingwhich executes the transitions of π faithfully up to a point, and then only executes onlythe transitions necessary to reach C in π f . It is easy to see that this a valid reordering(using that π f is reordering).Now let σ denote any finite sequence of selections of P , the automaton with weakbroadcasts. We want to show that σ is infinitely often in τ . So we pick any configuration K appearing infinitely often in τ , and therefore can be reached infinitely often π . Itwould now suffice to show that, in τ , K is followed infinitely often by the selections of σ .As we know that for any sequence σ of selections of P we have that K appears infinitelyoften in π followed by the selections of σ , it now suffices to show that we can pick anappropriate σ that would lead to σ being executed in τ .To execute a selection ( n, v ) at configuration K : V → Q , i.e. a neighbourhoodtransition of node v ∈ V , we either have K ( v ) / ∈ Q B and select v , thus executingtransition (1), or we do nothing. (To be precise, in the latter case we would have toappend a copy of K to τ , and modify g s.t. τ is still an extension of π f .) For a selection( b, S ) with S ⊆ V an independent set of broadcast initiating nodes, we either have S = ∅ with S := S ∩ K − ( Q B ) and again do nothing, or we select all agents in S to movethem to phase 1 via transition (2), then move all other nodes to phase 1 via transition(3), and then use transitions (4) and (5) to move all nodes to phase 2 and then back tophase 0. B.3. Simulating Weak Absence Detection
Lemma 10.
Every
DA$ -automaton with weak absence detection is simulated by some
DAf -automaton, when restricted to bounded-degree graphs.
The proof will take up the remainder of this section.As mentioned in the main paper, we combine a three-phase protocol with a distance-labelling, the latter allowing us to propagate information about the states that have beenseen back to the agents initiating the absence detection. Before we define the necessarytransitions, we briefly characterise the distance labelling we are going to use.33 efinition 9.
Let k ∈ N. We use D to denote a set of (distance) labels, where D := Z_{2k+1} ∪ {root}. We define increment on D by using the usual arithmetic (modulo 2k + 1) for elements in Z_{2k+1}, and setting root + 1 := 1 ∈ Z_{2k+1}. We refer to root as the root label. For all d ∈ D we say that d + 1 is the child label of d.

Nodes initiating the absence detection use the root label. Each other node will pick a child label d of one of its neighbours, taking care that no neighbour holds a child label of d. At this point we will use the bound on the maximum degree of the graph, which makes it easy to see that this is always possible.

Lemma 26.
Let S ⊂ D with 0 < |S| ≤ 2k. Then there is a label d ∈ D s.t. d + 1 ∉ S and there is some label d' ∈ S with d' + 1 = d.

Proof. Note that the statement can be simplified to there being a d ∈ S with d + 2 ∉ S. Pick any d ∈ S. As 2k + 1 is odd, the sequence d, d + 2, d + 4, ..., d + 2·2k is pairwise distinct. Additionally, it contains 2k + 1 > |S| elements, and thus at least one element not in S. Moreover, we can thus find two subsequent elements d', d' + 2 with d' ∈ S, d' + 2 ∉ S in that sequence.

We will now formally define our construction. Let P = (Q, δ, Q_A, A) denote the automaton we want to simulate, k the maximum degree of our graph. We will construct a DAf-automaton P' = (Q', δ') simulating P. As states we use Q' := Q_0 ∪ Q_1 ∪ Q_2, where Q_i contains the phase i states. In particular, we set Q_0 := Q, Q_1 := Q² × D and Q_2 := Q × 2^Q. For (q, r, i) ∈ Q_1, an agent v carries its phase 0 state r and a distance label i ∈ D. In phase 2 an agent stores the set of states that it has seen so far, which will be propagated to its parents.

To define the transitions δ', we introduce some notation. For any neighbourhood N : Q' → N we write N(S) := Σ_{q∈S} N(q) for S ⊆ Q'. We set old(N) := N', where N'(q) := N(q) + N(Q × {q} × D) is the number of agents that were in q ∈ Q in phase 0. We will use old(N) to determine which neighbourhood transition of P to execute. We also define a unique child label child(N) for each N with N(Q_1) >
0. For this, let S := {d ∈ D : N(Q² × {d}) > 0} denote the set of distance labels appearing in N. As we have at most k neighbours, Lemma 26 yields a suitable choice d =: child(N). Intuitively, this means that d is the child label of a neighbour, but no neighbour is a child of d, which ensures that we never create cycles.

Finally, we write union(N) := ⋃{S : (q', S) ∈ Q_2, N((q', S)) > 0} for the union of all states indicated by phase 2 neighbours. Our transitions δ' are now defined as follows, for all q, N.

q, N ↦ (q', q, root)                if N(Q_2) = 0 and q' := δ(q, old(N)) ∈ Q_A                   (1)
q, N ↦ (q', q, child(N))            if N(Q_2) = 0 and q' := δ(q, old(N)) ∉ Q_A and N(Q_1) > 0    (2)
(q, r, i), N ↦ (q, union(N) ∪ {q})  if N(Q_0) = 0 and N(Q² × {i + 1}) = 0                        (3)
(q, S), N ↦ A(q, S)                 if N(Q_1) = 0 and q ∈ Q_A                                    (4)
(q, S), N ↦ q                       if N(Q_1) = 0 and q ∉ Q_A                                    (5)

Transitions (1) and (2) move the agents from phase 0 to phase 1, executing a neighbourhood transition of δ in the process (synchronously). The move is initiated by agents in Q_A, which pick root as distance label, while the others wait for a neighbour to enter phase 1, at which point they become a child of that neighbour. In phase 1, each node waits until all children have entered phase 2 (and thus indicate the set of states they have observed), and then executes (3) to move to phase 2, indicating the union of all sets of its children. Finally, the absence-detection initiating nodes move to phase 0 by executing the absence detection via (4), moving into the appropriate state, while (5) simply moves the other agents to phase 0 without changing their states.

It is crucial that the distance labels assigned by transition (2) never form a cycle; else we would get a deadlock. Our choice of child ensures that this is the case.

Lemma 27.
The automaton P cannot reach a configuration C with a cycle of nodes ( v , ..., v l ) , i.e. v = v l and v i is adjacent to v i +1 for all i , where each v i has label ( i mod 2 k + 1) ∈ Z k +1 .Proof. It is only possible for a node to receive a label d ∈ Z k +1 via transition (2). (Notethat (1) only assigns label root, which cannot be part of a cycle.) However, transition (2)will never close such a cycle, due to the definition of child.We will proceed in a similar manner as in the previous section. We want to useProposition 17 to construct our reordering, will show that P is a nonblocking three-phaseautomaton. Afterwards, we argue that the given reordering is an extension of a run of P . Lemma 28.
The automaton P is a nonblocking three-phase automaton.Proof. Again, it is easy to check that P is a three-phase automaton by inspectingtransitions (2)-(5). To show that P is nonblocking the proof is similar to the proof ofLemma 22.Let π = ( C , C , ... ) denote a fair run of P and C i : V → Q a configuration wheretwo agents are in different phases. We define the phase count pc as for the proof ofProposition 17, meaning that pc ( v, j ) is the number of phase changes of node v untilstep j .Let U := { u ∈ V : pc ( u, i ) = min v pc ( v, i ) } denote the set of nodes which have aminimal number of phase changes at step i . If all nodes in U are in phase 0 or phase 2,then selecting any node in U will move it to the next phase via transitions (1), (2) or (4),355), respectively. Otherwise, we pick any node u ∈ U and write d for the distance labelof u . As u is in phase 1, the only non-silent transition it could perform is (3). If (3) isenabled, then executing it moves u to the next phase, and we are done. If that is not thecase, there must be a node u adjacent to v , s.t. v is also in phase 1 and has label d + 1.We now set u := v and repeat this process. There are only finitely many nodes and thedistance labels form no cycles (due to Lemma 27), so this must terminate.As for weak broadcasts, let π denote a fair run of P . Using Proposition 17 we find aspecific reordering π f = ( C , C , ... ) of π . In particular, every configuration C i either hasall agents in the same phase, or it has agents in at most two phases and at step i oneagent moves to the its next phase. We are now going to show that π f is an extension ofa run τ of P .Again, note that in π f there are infinitely many configurations C i where all agentsare in the same phase. However, to construct g and the extension τ we will now haveto argue slightly differently. If all agents are in phase 1 then there must be at least oneagent where transition (3) is enabled. This follows from the same argument as used inthe proof of Lemma 28. If all agents are in phase 2, then either transition (4) or (5) willbe enabled for each agent.The set I := { i ∈ N : C i ( V ) ⊆ Q } of indices i where C i has only phase 0 agents thushas infinitely many elements, as before. We then define I := I \ { i : C i = C i − , ∃ j > i : C j = C i } by removing all steps which perform a silent transition and are followed by anon-silent transition from I .We define the mapping g : N → N as the unique bijection with g ( N ) = I which isstrictly increasing, and set τ := ( K , K , ... ) where K i := C g ( i ) for all i . Lemma 29. π f is an extension of τ .Proof. The proof is analogous to Lemma 23, as the removal of silent transitions does notaffect the notion of extension.Finally, we argue that τ is a run of P . There is no need to argue that τ is fair, as allruns of P are fair. Lemma 30. τ is a run of P .Proof. Fix any i ∈ N . If g ( i + 1) = g ( i ) + 1 then must have executed a silent transition atstep g ( i ) of π f , as else it is impossible to remain in phase 0. However, we have removedsilent transitions followed by non-silent transition from I and therefore g , so step i isfollowed by an infinite sequence of silent transitions C g ( i ) = C g ( i )+1 = ... . We know that π and thus π f are fair (w.r.t. an adversarial scheduler), so this means that transition (1)is not enabled for any node. 
Therefore K i ( V ) ∩ Q A = ∅ and Definition 5 states that P hangs in this case, so K i +1 = K i should hold, which is what we have.Otherwise, there is a g ( i ) < t , t < g ( i + 1) s.t. C t ( C t ) has only agents in phase 1(phase 2). First, every node executes either (1) or (2), effectively moving to configuration C with ( C ( v ) , · , · ) = C t ( v ) for v ∈ V . In particular, transitions (1) and (2) use the36hase 0 state of each agent, so this executes a synchronous neighbourhood transition (i.e.one with selection V ).Let S denote the set of agents executing transition (1) between steps g ( i ) and t in π f . A brief look at transition (1) reveals that S = ( C ) − ( Q A ) contains precisely theagents which are in absence-detection initiating states after executing the synchronousneighbourhood transition.Now we simply note that K i +1 is the result of executing a weak absence-detectiontransition on C using selection S . Here, we observe that every node v / ∈ S must pick achild label of a neighbour u in transition (2). Agent u will only execute transition (3)once v is in phase 2, so the information of v will be propagated to u , then to a parent of u ,and so on, until it reaches an agent in S . The agents in S then perform transition (4) andmove according to the weak absence-detection transition, while all other nodes execute (5)and remain in their original state. B.4. Simulating Rendez-vous Transitions
Definition 10. A graph population protocol is a tuple (Q, δ), where Q is a finite set of states, and δ : Q² → Q² is a set of transitions that describes the rendez-vous interactions between two adjacent nodes. In particular, if δ(p, q) = (p', q'), then we write p, q ↦ p', q'. Further, let δ_1(p, q) = p' and δ_2(p, q) = q' be the functions for the first and second component of δ. The definitions of configurations and runs are equivalent to the ones of distributed machines. Selections are ordered pairs of adjacent nodes, i.e. the set of possible selections is {(u, v) : {u, v} ∈ E}. If (u, v) ∈ V² is the selection in some configuration C, then the successor configuration is C' with C'(u) := δ_1(C(u), C(v)), C'(v) := δ_2(C(u), C(v)) and C'(x) := C(x) for all x ∈ V \ {u, v}. We require the schedules to be pseudo-stochastic, so every finite sequence of selections has to appear infinitely often in a schedule.

Lemma 11.
Every graph population protocol is simulated by some
DAF -automaton.Proof.
Let P = ( Q, δ ) be a population protocol on graphs. We define a
DAF-automaton M = (Q', δ') that simulates P. We set the counting bound β := 2. Let Q_w = Q, Q_s = Q × {s}, Q_a = Q × {a} and Q_c = Q × {c} × Q. We define Q' := Q_w ∪ Q_s ∪ Q_a ∪ Q_c.

Intuitively, each node stores its state in the original protocol in the first component and has a status that helps to simulate rendez-vous transitions. The status can be "waiting" (w), "searching" (s), "answering" (a) or "confirming" (c), and initially every node is waiting. Additionally, a confirming node stores the state it would have after the rendez-vous transition is completed. This is necessary because the other node will perform its part of the rendez-vous interaction first.

Now we will define the transition function δ' of M, for states q, q_1, q_2 ∈ Q and neighbourhoods N ∈ [β]^{Q'}. Let N(w) := Σ_{q∈Q} N(q) be the number of detectable waiting neighbours. Further, we use the auxiliary function f(N) to denote the unique non-waiting neighbour, if any, i.e. we set f(N) := x if N(w) = |N| − 1 and N(x) = 1. If no such neighbour exists, we set f(N) := w if N(w) = |N| (all neighbours are waiting), and f(N) := ⊥ otherwise. Figure 4 contains the formal definition of δ' and a diagram that visualises how nodes change their status.

q, N ↦ (q, s)                    for f(N) = w
q, N ↦ (q, a)                    for f(N) = (q_1, s)
(q, s), N ↦ (q, c, δ_1(q, q_1))  for f(N) = (q_1, a)
(q, a), N ↦ δ_2(q_1, q)          for f(N) = (q_1, c, q_2)
(q, c, q_1), N ↦ q_1             for f(N) = w

Figure 4: Neighbourhood transitions that simulate rendez-vous interactions. The left side shows the formal definition of the transition function δ'. For all inputs x, N where δ'(x, N) is undefined, the status is set to waiting (w) by changing to the original state saved in the first component of x. The right side visualises the neighbourhood transitions as a graph. States in the diagram only show the status of a node. A node only changes its status by following an edge in the diagram, if its neighbourhood satisfies the condition on the edge. If a node is selected and no edge can be followed, it instead changes its status to waiting (w). The edges that apply the rendez-vous transition δ are drawn dashed.

Intuitively, the simulation of a rendez-vous transition p, q ↦ p', q' starts with a waiting (w) agent with original state p that only sees waiting nodes. This agent searches for a partner by changing its status to s. Then, its waiting neighbours can answer by changing to a if they detect exactly one search. If the searching agent detects exactly one answer, it confirms by changing to c while remembering the state it would have after the rendez-vous with the answering node. If the answering node with original state q sees exactly one confirmation, it applies the state change (q to q') and waits. Then, the confirming node detects that the answering node is now waiting and applies the state change it remembered (p to p'). However, once a node detects an irregularity in the simulation (e.g. more than one non-waiting neighbour) it cancels the interaction by changing its state to w.

We still have to show that M simulates P. We call a change to the original state of a node a state change. In other words, neighbourhood transitions that change the first component of a node's state perform a state change. We will now argue that state changes only occur in pairs and that they simulate the rendez-vous transitions in δ.
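Before the argument, the status discipline described above can be summarised in a few lines of code. The following is an illustrative sketch, not the paper's definition: waiting nodes are encoded as a bare original state, searching/answering nodes as pairs, and confirming nodes as triples; step is our name for one neighbourhood transition, while delta1 and delta2 mirror δ_1 and δ_2 from Definition 10. The counting bound is not modelled explicitly.

WAITING, SEARCHING, ANSWERING, CONFIRMING = "w", "s", "a", "c"

def status(state):
    # waiting nodes are encoded as a bare original state, all others as tuples
    return state[1] if isinstance(state, tuple) else WAITING

def unique_non_waiting_neighbour(neighbour_states):
    busy = [s for s in neighbour_states if status(s) != WAITING]
    if not busy:
        return WAITING   # f(N) = w: all neighbours are waiting
    if len(busy) == 1:
        return busy[0]   # f(N) is the unique non-waiting neighbour
    return None          # f(N) undefined: more than one non-waiting neighbour

def step(state, neighbour_states, delta1, delta2):
    """One neighbourhood transition of the simulating automaton M (sketch)."""
    f = unique_non_waiting_neighbour(neighbour_states)
    st = status(state)
    if st == WAITING and f == WAITING:
        return (state, SEARCHING)                               # start searching for a partner
    if st == WAITING and isinstance(f, tuple) and f[1] == SEARCHING:
        return (state, ANSWERING)                               # answer the unique searcher
    if st == SEARCHING and isinstance(f, tuple) and f[1] == ANSWERING:
        return (state[0], CONFIRMING, delta1(state[0], f[0]))   # remember own post-rendez-vous state
    if st == ANSWERING and isinstance(f, tuple) and f[1] == CONFIRMING:
        return delta2(f[0], state[0])                           # answering node applies its state change
    if st == CONFIRMING and f == WAITING:
        return state[2]                                         # apply the remembered state change
    # otherwise: cancel the interaction and fall back to waiting
    return state if st == WAITING else state[0]

A searching or answering node falls back to waiting (its original state) whenever it sees more than one non-waiting neighbour, which is exactly the cancellation behaviour used in the correctness argument below.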
Firstnote, that from a configuration C where two nodes u, v and their neighbours are waiting,scheduling the sequence u, v, u, v, u correctly applies the state changes δ ( C ( u ) , C ( v )) for u and δ ( C ( u ) , C ( v )) for v . For a node u to enter the confirming state, it must haveexactly one answering neighbour v and all other neighbours must be waiting. u cannotperform its state change before v because it needs to wait until all nodes are waiting.Once the answering agent v performs its state change, all of u ’s neighbours are waitingand cannot change their status because they see that u is confirming. Thus, the next time u is scheduled, it must perform its state change. Further, v can only perform the statechange if it sees exactly one confirming state and all other of v ’s neighbours are waiting.Thus, once one of nodes in the rendez-vous interaction performs the state change, the38ull rendez-vous interaction will be performed. Further, it is impossible for more thantwo nodes to interact simultaneously, because selection is exclusive and whenever a nodedetects more than one non-waiting neighbour, it cancels the interaction and waits.Next, we need to reorder a given run π of M such that it is an extension of some run π in P . For this, we make sure that after an answering node performs the state change,the corresponding confirming node is scheduled immediately so that it can performits state change. Intuitively, the reordering makes sure that the state changes in thesimulation of two different rendez-vous transitions are not executed in an interleavingmanner. Thus, the reordered run is indeed an extension of a run in P where the statechanges of rendez-vous interactions happen atomically. The reordering is valid, becauseafter the answering node performs the state change, the nodes in the neighbourhood ofthe confirming node are all waiting and they cannot change their status because they seea confirming node. Thus, scheduling the confirming agent earlier in the reordered rundoes not interfere with the neighbourhood transitions that were executed between thetwo state changes in π .Lastly, we need to argue about fairness. Let π be the simulated run of P for some fairrun π of M . Let C be some configuration that is visited infinitely often in π . Further,let S = ( u , v ) , · · · , ( u k , v k ) ∈ ( V × V ) ∗ be a finite sequence of selections such thatscheduling S in C leads to come configuration C f . As C is visited infinitely often, thereare infinitely many configurations C , C , · · · ∈ π with C ∼ Q C i for all i >
0. Becausethere are only finitely many different configurations for a given graph, there is at leastone configuration C that is visited infinitely often in π such that C ∼ Q C . C can reacha configuration C ∼ Q C where all nodes are waiting by scheduling all confirming nodes,then all answering nodes and lastly all searching nodes. C can reach a configuration C f ∼ q C f by simulating all selections ( u i , v i ) of S one after the other by scheduling u i , v i , u i , v i , u i . Because π is fair, C f is visited infinitely often in π . Therefore, C f isvisited infinitely often in π and π is fair. C. Proofs of Section 5
As mentioned in the introduction, we are interested in labelling properties.
Definition 11.
For every labelled graph G = (V, E, λ) over the finite set of labels L we write L_G for the multiset of labels occurring in G, i.e. L_G : L → N, L_G(x) = |{v ∈ V : λ(v) = x}| for all labels x. We call L_G the label count of G.

A graph property ϕ is called a labelling property if for all labelled graphs G_1, G_2 with L_{G_1} = L_{G_2} we have ϕ(G_1) = ϕ(G_2). In such a case we also write ϕ(L_G) instead of ϕ(G).

In this section, we will use L for multisets of labels.

C.1.
C.1. DaF only decides trivial properties
Proposition 31.
Let ϕ be a labelling property decided by a DaF-automaton in the unrestricted set of graphs or in the set of k-degree-bounded graphs. Then ϕ is trivial, i.e. either always false or always true.

Proof. Assume that ϕ is not always false, i.e. ϕ(L_0) = 1 for some L_0. We have to prove that ϕ is always true, i.e. ϕ(L) = 1 for all labelling multisets L. Let L be any labelling multiset. By our general assumption, network graphs have at least 3 nodes, i.e. |L_0|, |L| ≥ 3. Since ϕ is a labelling property, we can choose the underlying graph. Let G_0 be the cycle with |L_0| nodes labelled with L_0, and let G be the cycle with |L| nodes labelled with L. By Lemma 1, we cannot distinguish G_0 and G and therefore have ϕ(L) = ϕ(L_0) = 1 as claimed. This also holds in the k-degree-bounded case since the graph constructed in the proof of Lemma 1 is k-degree-bounded, if both G_0 and G were.
C.2. DAf decides at most Cutoff(1)
Proposition 32.
Let ϕ be a labelling property decided by a DAf-automaton. Then ϕ ∈ Cutoff(1).

Proof. Let A be a DAf-automaton with ϕ_A = ϕ. Let K = β + 1 be as in Lemma 4, i.e. the natural number such that ϕ(L) = ϕ(⌈L⌉_K) for all labelling multisets L. In addition, we know that ϕ is closed under scalar multiplication by Corollary 3. We use this corollary with λ = K to scale up L with the factor K, then cut it off at K and scale down again. Formally, we start by proving ⌈λ · L⌉_λ = λ · ⌈L⌉_1 = ⌈λ · ⌈L⌉_1⌉_λ for all λ ∈ ℕ by case distinction: if some label x occurs, then all three functions equal λ at x, and otherwise they all equal 0.

We use this to obtain the following chain of equalities:

ϕ(L) =^{C3} ϕ(K · L) =^{L4} ϕ(⌈K · L⌉_K) = ϕ(K · ⌈L⌉_1) = ϕ(⌈K · ⌈L⌉_1⌉_K) =^{L4} ϕ(K · ⌈L⌉_1) =^{C3} ϕ(⌈L⌉_1).
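The identity ⌈λ·L⌉_λ = λ·⌈L⌉_1 = ⌈λ·⌈L⌉_1⌉_λ used above is easy to check mechanically; the following sketch verifies it on random multisets. Representing multisets as Python Counters is a choice made only for this illustration.

from collections import Counter
from random import randint

def cutoff(L, K):
    """⌈L⌉_K : cap every multiplicity at K."""
    return Counter({x: min(n, K) for x, n in L.items()})

def scale(L, lam):
    """λ·L : multiply every multiplicity by λ."""
    return Counter({x: lam * n for x, n in L.items()})

for _ in range(1000):
    L = Counter({x: randint(0, 5) for x in "abc"})
    L = Counter({x: n for x, n in L.items() if n > 0})   # drop zero entries
    lam = randint(1, 6)
    lhs = cutoff(scale(L, lam), lam)
    mid = scale(cutoff(L, 1), lam)
    rhs = cutoff(scale(cutoff(L, 1), lam), lam)
    assert lhs == mid == rhs
print("identity holds on all sampled multisets")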
C.3. dAf can decide Cutoff(1)
Proposition 33.
dAf-automata can decide all labelling properties ϕ ∈ Cutoff(1).

Proof. Let ϕ ∈ Cutoff(1). Let x_1, ..., x_n be the variables occurring in ϕ. Then ϕ corresponds to a subset M ⊆ {0, 1}^{{1,...,n}}, where f ∈ M describes that we accept if exactly the variables with indices i with f(i) = 1 occur in the input. dAf-automata can decide the language B of graphs with a black node, i.e. the labelling predicate ϕ(x, y) ⇔ x ≥ 1. On the level of subsets M, this means that the set M_i := {f : {1, ..., n} → {0, 1} : f(i) = 1} of all functions with f(i) = 1 is decidable. Since the set of decidable properties is closed under boolean combinations, we can obtain M via unions, complements and intersections of the sets M_i. This corresponds to writing ϕ as a boolean combination of the predicates x_i ≥ 1, which can be decided.
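A quick sanity check of this set-level reformulation, in Python: we build M for an example property from the basic sets M_i by boolean set operations, and compare it against direct evaluation of ϕ as a boolean combination of the predicates x_i ≥ 1. The encoding of functions f as tuples is an arbitrary choice made for this sketch.

from itertools import product

n = 3
# All functions f : {1,...,n} -> {0,1}, encoded as tuples (f(1),...,f(n)).
ALL = set(product((0, 1), repeat=n))
def M(i):                      # M_i = { f : f(i) = 1 }
    return {f for f in ALL if f[i - 1] == 1}

# Example labelling property in Cutoff(1):  ϕ ⇔ (x_1 ≥ 1 ∧ ¬(x_2 ≥ 1)) ∨ (x_3 ≥ 1)
def phi(counts):               # counts = (x_1, x_2, x_3)
    return (counts[0] >= 1 and not counts[1] >= 1) or counts[2] >= 1

# The corresponding subset of {0,1}^{1..n}, built from the sets M_i.
M_phi = (M(1) - M(2)) | M(3)

# ϕ only depends on which variables occur at least once, so it must agree
# with membership of the occurrence pattern in M_phi.
for counts in product(range(4), repeat=n):
    pattern = tuple(1 if c >= 1 else 0 for c in counts)
    assert phi(counts) == (pattern in M_phi)
print("ϕ agrees with its set representation on all tested inputs")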
C.4. dAF can decide exactly Cutoff
Lemma 34.
For every property ϕ : ℕ^l → {0, 1} with ϕ(x, y_1, ..., y_{l−1}) ⇔ x ≥ k for some k ∈ ℕ there is a dAF-automaton deciding ϕ.

Proof. We construct a dAF-automaton P = (Q, ∅, I, O), which we will augment with weak broadcast transitions. As states we use Q := {0, 1, ..., k}, the input mapping is given by I(x) := 1 and I(y_1) = ... = I(y_{l−1}) := 0, and the set of accepting states is O := {k}. We add the following broadcasts, with i = 1, ..., k − 1:

    i ↦ i, {i ↦ i + 1}    ⟨level⟩
    k ↦ k, {q ↦ k : q ∈ Q}    ⟨accept⟩

Using Lemma 9 we get an equivalent dAF-automaton.

Let C_0 denote an initial configuration with c := |C_0^{-1}(1)| set to the number of agents starting in state 1. It is easy to see that C_0 is accepting iff it can reach a configuration C with k ∈ C(V), i.e. at least one agent in state k. Of course, ⟨accept⟩ cannot be used to reach k as its initiator is already in state k, so we now consider only configurations C reachable by ⟨level⟩.

It is only possible for an agent to go from state i to i + 1 by receiving broadcast ⟨level⟩ initiated by an agent in state i, for i = 1, ..., k − 1. The initiator remains in state i, so we have that i + 1 ∈ C(V) implies i ∈ C(V). Therefore k ∈ C(V) implies {1, 2, ..., k} ⊆ C(V), and thus at least k agents have started in state 1, i.e. c ≥ k, as it is not possible to leave state 0 via ⟨level⟩.

To summarise, the protocol accepts only initial configurations which should be accepted. It remains to show that the converse holds as well, so we require c ≥ k and set C to an arbitrary configuration reachable from C_0 with only ⟨level⟩. We have pseudo-stochastic fairness, so it is enough to show that C can reach a configuration C' with k ∈ C'(V).

Let m_i := |C^{-1}(i)| denote the number of agents in state i, for i = 1, ..., k. We define an ordering on the set of configurations by ordering the tuples (m_k, m_{k−1}, ..., m_1) lexicographically. If C does not have a node in state k, then it has c ≥ k agents occupying states 1 to k − 1, i.e. m_1 + ... + m_{k−1} = c ≥ k and, by the pigeonhole principle, there is some 1 ≤ j < k with m_j ≥ 2. By executing transition ⟨level⟩ on one of those agents exclusively, at least one agent moves to state j + 1. Hence the resulting configuration is strictly larger w.r.t. our ordering. There are only finitely many configurations with n agents, so we can repeat this procedure until at least one agent has state k, thereby proving that C reaches some accepting configuration.
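The following Python sketch simulates the ⟨level⟩ broadcasts on a multiset of agent states, following the greedy schedule from the proof (the global application of each broadcast and the data structures are simplifications made for this illustration). It shows that state k becomes occupied exactly when at least k agents start in state 1.

from collections import Counter

def max_level_reachable(c, k):
    """Greedy schedule of ⟨level⟩ broadcasts for c agents starting in state 1.

    States are 0..k; a ⟨level⟩ broadcast by an initiator in state i moves every
    *other* agent in state i to i+1 (the initiator stays).  Returns the highest
    state occupied under this schedule."""
    m = Counter({1: c})                     # m[i] = number of agents in state i
    best = 1 if c > 0 else 0
    while True:
        # proof strategy: pick some level j < k occupied by at least two agents
        j = next((j for j in range(1, k) if m[j] >= 2), None)
        if j is None:
            return best
        m[j] -= 1                           # everyone in state j except the initiator...
        m[j + 1] += m[j]                    # ...moves to state j + 1
        m[j] = 1                            # only the initiator remains in state j
        best = max(best, j + 1)

k = 4
for c in range(8):
    print(c, max_level_reachable(c, k) == k)   # True exactly when c >= k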
Proposition 35.
The set of labelling properties decided by dAF-automata is precisely Cutoff.

Proof. By Lemma 5, the expressive power is contained in Cutoff.

Now let ϕ ∈ Cutoff. Let K ∈ ℕ be as in the definition of Cutoff. Let x_1, ..., x_n be the variables occurring in ϕ. Then ϕ corresponds to a set M ⊆ {0, 1, ..., K}^{{1,...,n}} of accepted cutoffs. If we can decide all formulas corresponding to 1-element subsets of {0, 1, ..., K}^{{1,...,n}}, then we can decide ϕ, since ϕ can be written as a disjunction of such formulas. Let M be such a 1-element subset and write its element as f : {1, ..., n} → {0, 1, ..., K}. Let S ⊆ {1, ..., n} be the set of indices i with f(i) = K. The formula corresponding to M is

    ⋀_{i ∉ S} (x_i ≥ f(i) ∧ ¬(x_i ≥ f(i) + 1))  ∧  ⋀_{i ∈ S} (x_i ≥ f(i)),

which can be decided since, by Lemma 34, we can compute x_i ≥ f(i), and the set of decidable properties is closed under boolean combinations.
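To illustrate the decomposition into 1-element subsets, here is a short Python sketch that evaluates the displayed formula for a given cutoff vector f and checks that it accepts exactly the inputs whose K-cutoff equals f. The representation of f as a tuple is an assumption of this sketch.

from itertools import product

K = 3

def cutoff_vector(counts, K):
    """Componentwise cutoff ⌈·⌉_K of a tuple of counts."""
    return tuple(min(c, K) for c in counts)

def formula_for(f, counts):
    """Evaluate ⋀_{i∉S}(x_i ≥ f(i) ∧ ¬(x_i ≥ f(i)+1)) ∧ ⋀_{i∈S}(x_i ≥ f(i)),
    where S = { i : f(i) = K }."""
    return all(
        (c >= fi) if fi == K else (c >= fi and not c >= fi + 1)
        for c, fi in zip(counts, f)
    )

f = (2, 0, 3)                      # one accepted cutoff vector (f(3) = K, so 3 ∈ S)
for counts in product(range(6), repeat=3):
    assert formula_for(f, counts) == (cutoff_vector(counts, K) == f)
print("formula accepts exactly the inputs with K-cutoff f")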
C.5. DAF can decide exactly the labelling properties in NL

Lemma 12.
DAF-automata decide exactly the labelling properties in NL.

Proof. As argued in the main paper, DAF-automata can decide at most the predicates in NL. We now restate the construction from the proof sketch for clarity.

Let P = (Q, δ, I, O) denote an arbitrary strong broadcast protocol deciding a predicate ϕ. It is known that strong broadcast protocols decide exactly the predicates in NL [12, Theorem 15].

We start with the graph population protocol P_token := (Q_token, δ_token), with states Q_token := {0, L, L', ⊥} and rendez-vous transitions δ_token given by

    (L, L) ↦ (0, ⊥), (0, L) ↦ (L, 0), (L, 0) ↦ (L', 0)    ⟨token⟩

Using Lemma 11 we construct a DAF-automaton P'_token = (Q'_token, δ'_token) simulating P_token. Agents in states L, L' have a token, while ⊥ is an error state. We want to combine the above with P, so we set P_step := P'_token × Q + ⟨step⟩, where ⟨step⟩ is a weak broadcast defined as

    (L', q) ↦ (L, q'), {(t, r) ↦ (t, f(r)) : (t, r) ∈ Q'_token × Q}    ⟨step⟩

for each broadcast q ↦ q', f in δ. This yields a DAF-automaton P'_step = (Q'_step, δ'_step) simulating P_step via Lemma 9. Crucially, a broadcast is only initiated by an agent in a state (L', ·). We use two states (L and L') to ensure that an agent can initiate either a broadcast or a (simulated) rendez-vous transition, but not both.

If we start the computation with more than one token, they will eventually meet using transition ⟨token⟩ and an agent will move into the error state ⊥. Afterwards, we want to restart the computation, now with fewer agents in state (L, ·). So we again add an additional component to each state and consider the protocol P_reset := P'_step × Q + ⟨reset⟩, where ⟨reset⟩ are the following broadcast transitions, for each q, q' ∈ Q:

    ((⊥, q), q') ↦ ((L, q'), q'), {(r, r') ↦ ((0, r'), r') : r ∈ Q'_step, r' ∈ Q}    ⟨reset⟩

For P_reset we also define the input mapping as I_reset(x) := ((L, I(x)), I(x)) and the set of accepting states as O_reset := {((r, q), q') : q ∈ O, q' ∈ Q, r ∈ {0, L}}. Using Lemma 9 (and Lemma 7) we get a DAF-protocol equivalent to P_reset, so it suffices to show that P_reset is equivalent to P.

As a technical aid to state the proof, we introduce another graph population protocol P*_token := (Q_token, δ*_token), where δ*_token := {(L, L) ↦ (0, ⊥), (0, L) ↦ (L, 0)}. Essentially, we want to ignore the difference between L and L'. For this, we consider the mapping g : Q_token → Q_token, which maps g(L') := L and all other states to themselves. We extend g to Q'_step by applying it only to the first component, i.e. g((q, r)) := (g(q), r) for (q, r) ∈ Q'_step, and then to configurations C : V → Q'_step and runs π of P'_step in the obvious manner.

Let π denote a fair run of P'_step. If we only consider the first component of the states in π, this is essentially a run of P'_token, except that at some steps an agent transitions from L' to L using ⟨step⟩. It is not guaranteed that a simulation continues to work under these conditions, but the construction of Lemma 11 does not rely on the non-intermediate states remaining unchanged between transitions. This means that g(π) has a reordering which is an extension of a fair run of P*_token.

Let C* : V → Q denote an initial configuration of P, and let π := (C_0, C_1, ...) denote a run of P'_step, starting in a configuration C_0 with C_0(v) = (0, C*(v)) or C_0(v) = (L, C*(v)) for all nodes v. If C_0 has k > 1 agents with a token, then π will reach a configuration with an agent in an error state, and the set S := ⋃_i C_i^{-1}({⊥} × Q) of agents to ever reach an error state has size at most k − 1. We will refer to this property as A(π). Crucially, if A(π) holds for any run π, then it also holds for any reordering of π, and it also holds for any extension of π. As we argued before, g(π) is a reordering of an extension of a fair run of P*_token. Moreover, this projection does not affect A. So it is sufficient to show A(τ) for all fair runs τ of P*_token, which follows immediately from its definition.

Let π := (C_0, C_1, ...) denote a fair run of P_reset with initial configuration C_0 defined similar to above, so C_0(v) = ((0, C*(v)), C*(v)) or C_0(v) = ((L, C*(v)), C*(v)) for all nodes v, and the latter holds for exactly k > 0 nodes. (Valid initial configurations of P_reset always have k = n.) As A(π') holds for all fair runs π' of P'_step as defined above, we find that a fair run π where ⟨reset⟩ is never executed will necessarily enable it at some point. But, as per Definition 4, all states ((⊥, ·), ·) are broadcast-initiating, so they cannot execute a neighbourhood transition and can only change their state via ⟨reset⟩. This has to occur eventually, so let step i denote the first execution of ⟨reset⟩.

Again, due to A, before step i the number of agents in a state in ({⊥} × Q) × Q is at most k − 1. So we get C_{i+1}(v) = ((0, C*(v)), C*(v)) or C_{i+1}(v) = ((L, C*(v)), C*(v)) for all nodes v, and the latter holds for at most k − 1 nodes (the initiators of ⟨reset⟩). By induction, we eventually find a suffix of the run starting in a configuration C_0 as defined above with k = 1, i.e. exactly one agent is holding a token.

As we argued for A, no agent can ever reach an error state from such an initial configuration, so transition ⟨reset⟩ will never be executed and it suffices to show that any fair run π = (C_0, C_1, ...) of P'_step stabilises to the correct consensus, where C_0(v) = (L, C*(v)) for some node v and C_0(u) = (0, C*(u)) for all other nodes u ≠ v.

First, we argue that there is always at most one agent holding a token. Again, applying g yields a reordering of an extension of a run of P*_token, and it is clear that the property holds for any run of P*_token and any extension τ = (K_0, K_1, ...) of such a run (with initial configuration K_0 defined s.t. C_0(v) ∈ {K_0(v)} × Q for all v). However, we still need to show that any reordering τ_f of τ also fulfils the property. It is only possible for a node v to receive a token by moving from state 0 or an intermediate state to L. If this happens, say, at step i in τ, then v or an adjacent node u must have left {L, L'} at a step j directly before i, i.e. a step j < i s.t. in configurations K_{j+1}, K_{j+2}, ..., K_{i−1} there are no agents with a token. For the reordering we then have f(j) < f(i), so the token leaves u before entering v in τ_f as well.

This means that transition ⟨step⟩ cannot be executed by multiple agents simultaneously and it thus updates the states in the same manner as in P. Finally, it remains to show that π does so in a pseudo-stochastic manner, for which it is sufficient to prove that any configuration C_i after executing ⟨step⟩ can, for any node v, reach a configuration C where v has the token without executing ⟨step⟩.

Intuitively, this clearly holds, based on transitions ⟨token⟩. To make this formally precise, we reference the specific construction of Lemma 11. Starting with C_i we repeatedly select agents in intermediate states (and execute the corresponding transition) until none are left. This will never select the (unique) node v with C_i(v) = (L, ·), and it will terminate, due to the transitions of Lemma 11. Thus we reach a configuration C with C(v) = (L, ·) and all other agents u ≠ v in C(u) = (0, ·). (We have already argued that it is not possible to reach a state with more than one token.) As C contains no intermediate states, it is easy to see that there is a sequence of neighbourhood transitions to move the token to any node.
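The key invariant behind the reset mechanism is that tokens only disappear by colliding, with the error agents restarting as the new leaders. The following Python sketch simulates this behaviour with random pairwise meetings on a ring; the concrete data structures, the random scheduler and the delayed firing of ⟨reset⟩ are assumptions of this sketch, and the token transitions are the ones as reconstructed above, not a verbatim copy of the construction.

import random
random.seed(1)

n, k0 = 10, 7
edges = [(i, (i + 1) % n) for i in range(n)]       # a ring of n nodes

tokens = set(random.sample(range(n), k0))          # agents holding a token (state L)
errors = set()                                     # agents in the error state ⊥
resets, k = 0, k0

while len(tokens) > 1 or errors:
    u, v = random.choice(edges)
    if u in tokens and v in tokens:                # (L, L) -> (0, ⊥): two tokens collide
        tokens -= {u, v}; errors.add(v)
    elif v in tokens and u not in errors:          # (0, L) -> (L, 0): the token moves
        tokens.remove(v); tokens.add(u)
    elif u in tokens and v not in errors:
        tokens.remove(u); tokens.add(v)
    if errors and random.random() < 0.05:          # eventually, ⟨reset⟩ fires
        assert 1 <= len(errors) <= k - 1           # strictly fewer tokens than before
        tokens, errors = set(errors), set()        # error agents restart as the leaders
        resets, k = resets + 1, len(tokens)

print(f"{len(tokens)} token left after {resets} resets")   # ends with exactly one token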
D. Proofs of Section 6

D.1. dAf can only decide Cutoff(1)
Proposition 36.
The set of labelling properties decided by dAf-automata in the k-degree-bounded case for k ≥ 2 is precisely Cutoff(1).

Proof. We know that Cutoff(1) is contained in the expressive power of dAf for k-degree-bounded graphs, since we can compute predicates in Cutoff(1) even in the unrestricted set of graphs.

Now let ϕ be a property decided by some dAf-automaton M. We claim that for every multiset L and every label x with L(x) ≥ 1 we have ϕ(L) = ϕ(L + x).

Proof of claim: since ϕ is a labelling property, we can choose the underlying graph. Let G = (V, E, λ) be a line labelled with the multiset L and with the label x on the first end. Define the graph G' = (V', E', λ') by copying G and adding an extra node, which is labelled with x and connected to the second node only. Since M is consistent, M accepts G if and only if the synchronous run ρ on G is accepting, and it accepts G' if and only if the synchronous run ρ' on G' is accepting. It follows by induction that every node of the graph G is always in the same state in both runs, and that the extra node is always in the same state as the first end. This shows that ρ is accepting if and only if ρ' is accepting. Therefore G is accepted if and only if G' is accepted, proving the claim.

Now we use the claim to prove the proposition. Let L be some multiset. We have to prove that ϕ(L) = ϕ(⌈L⌉_1). For this, we write L = ⌈L⌉_1 + x_1 + ... + x_n with ⌈L⌉_1(x_i) ≥ 1 for all i, and apply the claim n times.
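The claim can be checked experimentally: the following sketch runs an arbitrary (here: randomly generated) set-detection transition function synchronously on a line and on the line with the extra node attached to the second node, and confirms that the two runs agree on all original nodes while the extra node mirrors the first end. The encoding and the toy transition function are assumptions of this sketch.

import random
random.seed(0)

states = range(4)
# Random set-detection ("d") transition: the next state depends only on the
# current state and the *set* of states seen in the neighbourhood.
table = {}
def delta(q, neighbour_states):
    key = (q, frozenset(neighbour_states))
    if key not in table:
        table[key] = random.choice(states)
    return table[key]

def sync_run(neigh, init, rounds=20):
    conf, trace = dict(init), [dict(init)]
    for _ in range(rounds):
        conf = {v: delta(conf[v], {conf[u] for u in neigh[v]}) for v in conf}
        trace.append(dict(conf))
    return trace

m = 6                                              # line v1 - v2 - ... - vm
neigh = {i: {j for j in (i - 1, i + 1) if 1 <= j <= m} for i in range(1, m + 1)}
init = {i: random.choice([0, 2]) for i in range(2, m + 1)}
init[1] = 1                                        # v1 carries the label x, encoded as state 1
# G': same line plus an extra node 0, labelled like v1 and attached to v2 only.
neigh2 = {**{i: set(s) for i, s in neigh.items()}, 0: {2}}
neigh2[2] = neigh2[2] | {0}
init2 = {**init, 0: init[1]}

t1, t2 = sync_run(neigh, init), sync_run(neigh2, init2)
for c1, c2 in zip(t1, t2):
    assert all(c1[v] == c2[v] for v in c1)         # original nodes agree in both runs
    assert c2[0] == c2[1]                          # the extra node mirrors the first end
print("synchronous runs coincide on G; the extra node mirrors v1")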
D.2. dAF and DAF decide exactly the labelling properties in NSPACE(n)
Proposition 37.
A labelling property ϕ can be decided by a dAF-automaton in the k-degree-bounded case if and only if ϕ ∈ NSPACE(n). A labelling property ϕ can be decided by a DAF-automaton in the k-degree-bounded case if and only if ϕ ∈ NSPACE(n).

Proof. By [16, Proposition 22], the expressive power of dAF-automata is equal to the expressive power of DAF-automata in the k-degree-bounded case. It is therefore enough to consider DAF.

We start by proving that labelling properties ϕ ∈ NSPACE(n) can be decided. By [13], when restricting to k-degree-bounded graphs, graph population protocols can decide all symmetric properties ϕ ∈ NSPACE(n), in particular all labelling properties ϕ ∈ NSPACE(n), since they are by definition invariant under rearranging the labels. By Lemma 11, all properties decidable by graph population protocols can also be decided by DAF-automata.

Now let ϕ be a labelling property decided by a DAF-automaton M with counting bound β. We have to prove that ϕ can be decided by a non-deterministic Turing machine with linear space. Since every node uses constant space and we have a linear number of nodes, a Turing machine with linear space can store configurations of our automaton M. We claim that whether two configurations C, C' fulfil C → C' can be checked in NSPACE(n). For this, the Turing machine guesses for each node whether it has to be selected or not, and then checks for every node v whether

    C'(v) = C(v) if v ∉ S,  and  δ(C(v), ⌈C(N(v))⌉_β) = C'(v) if v ∈ S,

i.e. the definition of the semantics. Since C → C' can be checked in NSPACE(n), C →* C' can as well. For this, the Turing machine does the following |Q|^{|V|} times (an upper bound on the number of configurations): guess a configuration C'' and check C → C''; overwrite C with C''; if C = C', accept. If we finish the loop without this occurring, reject.

Now we use the Immerman–Szelepcsényi theorem in its general version to obtain that C ↛* C' can also be checked in NSPACE(n). Due to the automaton M using pseudo-stochastic fairness, we accept from some initial configuration C_0 if and only if there exists a configuration C fulfilling the following three conditions:

1. C_0 →* C.
2. C is accepting.
3. For all non-accepting configurations C', we have C ↛* C'.

We can check this in NSPACE(n) by guessing the configuration C and checking the reachability conditions as described above.
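For intuition, the following Python sketch makes the acceptance criterion explicit for small instances: it enumerates the configuration graph of a toy automaton by brute force and checks the three conditions above, replacing the nondeterministic guessing and the Immerman–Szelepcsényi argument by explicit search. The toy transition rule, the graph and the simplified selection semantics are assumptions of this sketch.

from itertools import product

V = [0, 1, 2]
neigh = {0: [1], 1: [0, 2], 2: [1]}
accepting_state = 1

def delta(q, neighbour_states):
    # toy rule: switch to 1 if any neighbour is 1, otherwise keep the state
    return 1 if 1 in neighbour_states else q

def successors(C):
    out = set()
    for sel in product([False, True], repeat=len(V)):     # guessed selection S
        out.add(tuple(delta(C[v], {C[u] for u in neigh[v]}) if sel[v] else C[v]
                      for v in V))
    return out

def reachable(C):
    seen, todo = {C}, [C]
    while todo:
        for D in successors(todo.pop()):
            if D not in seen:
                seen.add(D); todo.append(D)
    return seen

def is_accepting(C):
    return all(q == accepting_state for q in C)

def accepts(C0):
    # conditions 1-3: some reachable C is accepting and can only reach
    # accepting configurations
    return any(is_accepting(C) and all(is_accepting(D) for D in reachable(C))
               for C in reachable(C0))

print(accepts((0, 0, 1)))   # True: the 1 spreads and can never disappear
print(accepts((0, 0, 0)))   # False: no accepting configuration is reachable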
D.3. DAf can decide majority
The proof of Lemma 13 will use the following lemma, which encapsulates the mainargument.
Lemma 38.
Let π = (C_0, C_1, ...) denote a fair run of P_cancel. There are only finitely many C_i with C_i(V) ∩ {k+1, ..., A} ≠ ∅ and C_i(V) ∩ {−A, ..., 0} ≠ ∅.

Proof. First, we note C_i(V) ⊆ S ⇒ C_{i+1}(V) ⊆ S for S = {0, ..., A}, S = {1, ..., A}, and S = {−A, ..., k}. In particular, the latter two imply that it suffices to show that there exists an i with C_i(V) ∩ {k+1, ..., A} = ∅ or C_i(V) ∩ {−A, ..., 0} = ∅.

Our proof will proceed by first showing that C_i(V) ∩ {k+1, ..., A} ≠ ∅ and C_i(V) ∩ {−A, ..., −1} ≠ ∅ cannot both hold for all i. Afterwards, we will argue that C_i(V) ∩ {k+1, ..., A} ≠ ∅ and 0 ∈ C_i(V) also cannot always hold, thus completing the proof.

Assume C_i(V) ∩ {k+1, ..., A} ≠ ∅ and C_i(V) ∩ {−A, ..., −1} ≠ ∅ for all i. We fix an i, and let S_0(C_i) := C_i^{-1}({−A, ..., 0}) denote the set of agents with nonpositive contribution in C_i. Due to our assumption, S_0(C_i) is nonempty. We write S_d(C_i) for the set of nodes with distance d to S_0(C_i), for d = 1, ..., n, and define λ_d(C_i) := Σ_{v ∈ S_d(C_i)} C_i(v) as the sum of contributions of S_d. Finally, we set λ(C_i) := (λ_1(C_i), ..., λ_n(C_i)).

We now claim that λ(C_i) < λ(C_{i+1}) for each i, using the lexicographical ordering, which is a contradiction, as there are only finitely many different configurations. To show the claim, we split the transition from C_i to C_{i+1} into a set of pairwise transactions U ⊆ V × V, s.t. C_{i+1}(v) = C_i(v) − |U ∩ {v} × V| + |U ∩ V × {v}| for each node v, and C_i(u) > k ∧ C_i(v) ≤ k or C_i(u) ≥ −k ∧ C_i(v) < −k for all (u, v) ∈ U. Intuitively, (u, v) ∈ U means that u sends one unit to v.

We always have C_i(u) > C_i(v) for (u, v) ∈ U, so λ(C_i) ≤ λ(C_{i+1}). If there exist adjacent nodes u, v ∈ V with C_i(u) < 0 < C_i(v) and (v, u) ∈ U, then we have λ(C_i) < λ(C_{i+1}) and our claim follows. Hence we will now exclude this case. In particular, we thereby exclude the possibility of a node v leaving S_0, i.e. v ∈ S_0(C_i) \ S_0(C_{i+1}).

Let U_+ := {(u, v) ∈ U : u, v ∉ S_0(C_i)} denote the set of transactions where neither node has nonpositive contribution. We pick d, u where u ∈ S_d and C_i(u) > k s.t. d is minimal. It is clear that d > 0, as C_i(u) > k, and following a shortest path from u to a node in S_0, there is some node v adjacent to u in S_{d−1} which, by choice of u, fulfils C_i(v) ≤ k. Therefore (u, v) ∈ U_+. In particular, u sends one unit to v, thereby increasing λ_{d−1}. Moreover, all transactions (u', v') ∈ U_+ have C_i(u') > k, so it is not possible for any such transaction to decrease any λ_{d'} with d' < d. Neither can such a transaction change S_0. Therefore we find that the transactions in U_+ strictly increase λ, without affecting S_0.

Let C' denote the configuration where the transactions in U_+ have been executed, i.e. C'(v) := C_i(v) − |U_+ ∩ {v} × V| + |U_+ ∩ V × {v}|. From the above considerations we get λ(C') > λ(C_i) and S_0(C') = S_0(C_i). The transactions in U ∩ S_0(C_i) × S_0(C_i) do not change λ and do not affect S_0 (a node could go from −k to −k − 1, but not further), so we now set C' to the configuration after executing those.

Finally, consider a transaction (u, v) ∈ U with v ∈ S_0(C') ⊆ S_0(C_i) and u ∉ S_0(C'). If C'(u) > 0, then the transaction would strictly increase λ, without changing S_0. If C'(u) < 0, then neither λ nor S_0 would change. Otherwise, the contribution of u becomes −1, in which case u would enter S_0, and λ would remain unchanged. As a consequence of u entering S_0, the distance between some other nodes and S_0 might decrease, but that can only increase λ, as all nodes outside of S_0 have nonnegative contribution. We can now proceed inductively, by updating C' corresponding to (u, v).

This concludes the first part of the proof. It remains to argue that C_i(V) ∩ {k+1, ..., A} ≠ ∅ and 0 ∈ C_i(V) cannot hold for all i. We argue analogously to before and assume the contrary. Then we set S_0(C_i) := C_i^{-1}(0) and define S_d, λ_d and λ as before. It is not possible for a node to enter S_0, so a node can leave S_0 only finitely often. Choosing an i large enough, the set S_0 thus does not change. Finally, we again pick d, u where u ∈ S_d and C_i(u) > k s.t. d is minimal, and see that λ_{d−1} and thus λ must increase at each step, which is a contradiction.
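The lexicographic potential used in this proof is easy to compute explicitly. The following sketch computes S_0, the distance classes S_d and the vector λ(C) for a configuration of contributions, and checks on one example transaction that sending a unit from a node outside S_0 towards S_0 increases λ lexicographically. The graph, the contributions and the helper names are choices made only for this illustration.

from collections import deque

def distance_classes(neigh, C):
    """S_d: nodes at distance d from S_0 = { v : C[v] <= 0 } (BFS layers)."""
    S0 = {v for v, c in C.items() if c <= 0}
    dist = {v: 0 for v in S0}
    queue = deque(S0)
    while queue:
        v = queue.popleft()
        for u in neigh[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def potential(neigh, C):
    """λ(C) = (λ_1, ..., λ_n): sum of contributions in each distance class."""
    dist = distance_classes(neigh, C)
    n = len(C)
    lam = [0] * (n + 1)
    for v, d in dist.items():
        lam[d] += C[v]
    return tuple(lam[1:])            # λ_0 is not part of the potential

# Example: a line 1-2-3-4-5 with contributions; node 1 is in S_0.
neigh = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
C = {1: -2, 2: 1, 3: 1, 4: 3, 5: 1}

# Transaction: node 4 (contribution > k, say k = 2) sends one unit to node 3.
C2 = dict(C); C2[4] -= 1; C2[3] += 1
print(potential(neigh, C))    # (1, 1, 3, 1, 0)
print(potential(neigh, C2))   # (1, 2, 2, 1, 0), lexicographically larger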
Lemma 13.
Let π = (C_0, C_1, ...) denote a run of P_cancel with Σ_v C_0(v) < 0. Then there exists an i s.t. either all configurations C_i, C_{i+1}, ... only have states in {−A, ..., −1}, or they only have states in {−k, ..., k}.

Proof. First, note that it suffices to show the claim for a single i, as C_i(V) ⊆ {−A, ..., −1} implies C_{i+1}(V) ⊆ {−A, ..., −1}, and C_i(V) ⊆ {−k, ..., k} even implies C_{i+1} = C_i.

This then follows from Lemma 38 together with the following observation: ⟨cancel⟩ is symmetric w.r.t. negation of all contributions, hence we could flip all signs, apply Lemma 38, and derive the statement that there are only finitely many C_i with C_i(V) ∩ {−A, ..., −k−1} ≠ ∅ and C_i(V) ∩ {0, ..., A} ≠ ∅.

As 0 > Σ_v C_0(v) = Σ_v C_1(v) = ..., it is impossible that C_i(V) ∩ {−A, ..., −1} is empty, for any i. Hence Lemma 38 yields that C_i(V) ∩ {k+1, ..., A} = ∅ for all sufficiently large i. Combining this with the above observation we get the desired statement.
Lemma 14.
Assuming that no agent enters state ⊥, π is accepting iff ϕ(L_G) = 1. Additionally, π cannot reach a configuration with all leaders in state ⊥.

We split the proof into two parts, Lemmata 39 and 40.
Lemma 39.
Assuming that no agent enters state ⊥, π is accepting iff ϕ(L_G) = 1.

Proof. If no weak broadcast is executed in π, then the computation is necessarily accepting (□ is only reachable via ⟨reject⟩), so we can assume that ϕ(C_0) = 0. Additionally, we know that π is a run of P'_detect as well, which simulates P_detect, so there is a run τ of P_detect s.t. π is a reordering of an extension of τ. As ⟨detect⟩ does not affect the first component, we get a run σ = (K_0, K_1, ...) of P_cancel by projecting τ onto the first component. Due to ϕ(C_0) = 0, Lemma 13 implies that any run of P_cancel starting at K_0 would eventually have only states in {−k, ..., k}, or only states in {−A, ..., −1}. In both cases, executing ⟨detect⟩ would move a leader from L to L_double or L_□.

In run π, it is not possible for a leader to leave state L_double or L_□, as these states are broadcast-initiating. This contradicts the weak fairness condition, as then either ⟨double⟩ or ⟨reject⟩ must be executed eventually.

Therefore, let i denote the first step at which a weak broadcast is executed in π (i.e. ⟨double⟩ and/or ⟨reject⟩), and M ⊆ V the set of its initiators. If there is a leader v ∉ M, then it cannot be in state ⊥, due to our assumption, nor can it be in □, as ⟨reject⟩ has not been executed before step i. But then v would move to state ⊥ in step i, which cannot happen by assumption. Hence M is precisely the set of leaders.

If both ⟨double⟩ and ⟨reject⟩ are executed at step i, i.e. (·, L_double), (·, L_□) ∈ C_i(M), then C_{i+1} has all leaders in state L or □, with at least one in each. Additionally, C_{i+1} is a valid input configuration of P'_detect (it does not contain any intermediate states added in P'_detect). Any fair run τ of P'_detect starting in C_{i+1} has one leader v which starts in state L and moves to ⊥ upon the first execution of ⟨detect⟩, as there is an agent in □. So v enters neither L_double nor L_□ in τ. Therefore, until the second broadcast is executed at step j > i in π, we have C(v) ∉ Q × {L_double, L_□} for any configuration C ∈ {C_{i+1}, ..., C_j}. If j = ∞, then v eventually moves to ⊥ in π, as it does in τ, otherwise either ⟨double⟩ or ⟨reject⟩ moves v immediately to ⊥. In both cases, our assumption is violated, so at step i we cannot execute both ⟨double⟩ and ⟨reject⟩.

Now there are two cases. If we execute only ⟨reject⟩ at step i of π, we know that C_i(v) = (·, L_□) for any leader v. This is only possible if ⟨detect⟩ moves all leaders to L_□ at once, so at some point a configuration in π had only states in last^{-1}({−A, ..., −1} × Q_L), which neither ⟨detect⟩ nor a transition of P_cancel can change. (It is clear that this holds for some reordering of π; to be entirely precise we would have to argue that it is impossible to reorder the steps at which the leaders enter L_□ to before the steps where the other agents enter {−A, ..., −1} × Q_L.) In particular, this means C_i(V) ⊆ last^{-1}({−A, ..., −1} × Q_L), so ⟨reject⟩ would move all agents (including the leaders) to □. At that point, no further transitions can be performed and the protocol moves into a stable 0-consensus. This is correct, as it is only possible for P_cancel to move all agents to states {−A, ..., −1} if the sum of all contributions in C_0 is negative.

The second case is executing only ⟨double⟩ at step i of π. Similarly, this is only possible if all leaders move to L_double at once using ⟨detect⟩. For that to happen, all agents must be in states {−k, ..., k} × {0, L} before executing ⟨detect⟩, moving the leaders to L_double. It is not possible to execute ⟨detect⟩ or any transition of P_detect with only these states, so we get C_i(V) ⊆ {−k, ..., k} × {0, L_double} as well, and ⟨double⟩ moves the agents back to states Q × {0, L} by doubling their contributions. Doubling every contribution does not change whether the sum is negative, so our claim follows inductively in this case, by considering the suffix C_{i+1}, C_{i+2}, .... (Note that C_0, ..., C_i do not contain state □. So if this case happens infinitely often, which occurs only if the sum of contributions is zero, π is accepting.)
Lemma 40.
The run π cannot reach a configuration with all leaders in state ⊥.

Proof. If ⟨reject⟩ is ever executed, then a leader (its initiator) enters state □, from which it cannot enter ⊥. Otherwise, it is not possible for any agent to enter □, thus ⟨detect⟩ cannot move an agent to ⊥. Only ⟨double⟩ remains, but it also leaves the leader initiating the broadcast in state L.
Proposition 15.
For every predicate ϕ : ℕ^l → {0, 1}, ϕ(x_1, ..., x_l) ⇔ a_1 x_1 + ... + a_l x_l ≥ 0, with a_1, ..., a_l ∈ ℤ, there is a bounded-degree DAf-automaton computing ϕ.

Proof. Let π = C_0 C_1 ... denote a fair run of P_reset starting in a configuration C_0 with C_0(V) ⊆ Q_cancel × {0, L}. Note that all valid initial configurations have this form. As before, we refer to agents starting in ((·, L), ·) as leaders. If no agent ever enters a state ((·, ⊥), ·), then ⟨reset⟩ is never executed and Lemma 39 implies that we reach a correct consensus. If ⟨reset⟩ is executed at some step i, we move to a configuration C_{i+1} with only states C_{i+1}(V) ⊆ Q_cancel × {0, L}, i.e. a valid choice for C_0. Let π' := C_{i+1} C_{i+2} ... denote the suffix of π starting at i + 1. Due to Lemma 40 we know that at least one leader is not in state ⊥ when executing ⟨reset⟩, so π' has strictly fewer leaders than π, but at least one (the latter follows directly from the definition of ⟨reset⟩). Hence, we conclude that ⟨reset⟩ is executed only finitely often.

It still remains to show that an agent entering a state ((·, ⊥), ·) at some point implies that ⟨reset⟩ will be executed. This follows immediately, as all such states are broadcast-initiating and thus can only execute ⟨reset⟩.