A General Stabilization Bound for Influence Propagation in Graphs
aa r X i v : . [ c s . D M ] A p r A General Stabilization Bound for InfluencePropagation in Graphs
Pál András Papp
ETH Zürich, [email protected]
Roger Wattenhofer
ETH Zürich, [email protected]
Abstract
We study the stabilization time of a wide class of processes on graphs, in which each node canonly switch its state if it is motivated to do so by at least a λ fraction of its neighbors, forsome 0 < λ <
1. Two examples of such processes are well-studied dynamically changing coloringsin graphs: in majority processes, nodes switch to the most frequent color in their neighborhood,while in minority processes, nodes switch to the least frequent color in their neighborhood. Wedescribe a non-elementary function f ( λ ), and we show that in the sequential model, the worst-case stabilization time of these processes can completely be characterized by f ( λ ). More precisely,we prove that for any ǫ > O ( n f ( λ )+ ǫ ) is an upper bound on the stabilization time of anyproportional majority/minority process, and we also show that there are graph constructions wherestabilization indeed takes Ω( n f ( λ ) − ǫ ) steps. Mathematics of computing → Graph coloring; Theory of compu-tation → Self-organization; Theory of computation → Distributed computing models
Keywords and phrases
Minority process, Majority process .A. Papp and R. Wattenhofer 1
Many natural phenomena can be modeled by graph processes, where each node of the graphis in a state (represented by a color), and each node can change its state based on the statesof its neighbors. Such processes have been studied since the dawn of computer science, by,e.g., von Neumann, Ulam, and Conway. Among the numerous applications of these graphprocesses, the most eminent ones today are possibly neural networks, both biological andartificial.Two fundamental graph processes are majority and minority processes. In a majorityprocess , each node wants to switch to the most frequent color in its neighborhood. Such aprocess is a straightforward model of influence spreading in networks, and as such, it hasvarious applications in social science, political science, economics, and many more [29, 9, 12,18, 23].In contrast, in a minority process , each node wants to switch to the least frequent color inits neighborhood. Minority processes are used to model scenarios where the nodes are motiv-ated to anti-coordinate with each other, like frequency selection in wireless communication,or differentiating from rival companies in economics [24, 6, 7, 11, 8].Majority and minority processes have been studied in several different models, the mostpopular being the synchronous model (where in each step, all nodes can switch simultan-eously) and the sequential model (where in each step, exactly one node switches). Sincein many application areas, it is unrealistic to assume that nodes switch at the exact sametime, we focus on the sequential model in this paper. We are interested in the worst-casestabilization time of such processes, i.e. the maximal number of steps until no node wantsto change its color anymore.Our main parameter describes how easily nodes will switch their color. Previously, theprocesses have mostly been studied under the basic switching rule, when nodes are willingswitch their color for any small improvement. However, it is often more reasonable to assumea proportional switching rule , i.e. that nodes only switch their color if they are motivatedby at least, say, 70% of their neighbors to do so. In general, we describe such proportionalprocesses by a parameter λ ∈ (0 , λ portion of its neighborhood. The stabilization time in such proportional processes(possibly as a function of λ ) has so far remained unresolved.The reason we can analyze proportional majority and minority processes together is thatboth can be viewed as a special case of a more general process of propagating conflictsthrough a network, where the cost of relaying conflicts through a node is proportional to thedegree of the node. This more general process could also be used to model the propagationof information, energy, or some other entity through a network. This suggests that ourresults might also be useful for gaining insights into different processes in a wide range ofother application areas, e.g. the behavior of neural networks.In the paper, we provide a tight characterization of the maximal possible stabilizationtime of proportional majority and minority processes. We show that for maximal stabiliza-tion, a critical parameter is the portion ϕ of the neighborhood that nodes use as ‘outputs’,i.e. neighbors they propagate conflicts to. Based on this, we prove that the stabilization timeof proportional processes follows a transition between quadratic and linear time, describedby the non-elementary function f ( λ ) := max ϕ ∈ (0 , − λ ] log (cid:16) − ϕλ + ϕ (cid:17) log (cid:16) − ϕϕ (cid:17) . (1) A General Stabilization Bound for Influence Propagation in Graphs
More specifically, for any ǫ >
0, we show that on the one hand, O ( n f ( λ )+ ǫ ) is an upperbound on the number of steps of any majority/minority process, and on the other hand,there indeed exists a graph construction where the processes last for Ω( n f ( λ ) − ǫ ) steps. Various aspects of both majority and minority processes on two colors have been studiedextensively. This includes basic properties of the processes [17, 36], sets of critical nodesthat dominate the process [12, 15, 20], complexity and approximability results [21, 3, 10],threshold behavior in random graphs [14, 26], and the analysis of stable states in the process[16, 33, 4, 5, 34, 24]. Modified process variants have also been studied [35, 25], with numerousgeneralizations aiming to provide a more realistic model for social networks [2, 1].However, the question of stabilization time in the processes has almost exclusively beenstudied for the basic switching rule (defined in Section 3.2). Even for the basic rule, apartfrom a straightforward O ( n ) upper bound, the question has remained open for a long timein case of both processes. It has recently been shown in [13] and [27] that both processes canexhibit almost-quadratic stabilization time in case of basic switching, both in the sequentialadversarial and in the synchronous model. On the other hand, the maximal stabilizationtime under proportional switching has remained open so far.It has also been shown that if the order of nodes is chosen by a benevolent player, thenthe behavior of the two processes differs significantly, with the worst-case stabilization timebeing O ( n ) for majority processes [13] and almost-quadratic for minority processes [27]. Inweighted graphs, where the only available upper bound on stabilization time is exponential,it has been shown that both majority and minority can indeed last for an exponentialnumber of steps in various models [22, 28]. The result of [28] is the only one to also studythe proportional switching rule, showing that the exponential lower bound also holds in thiscase; however, since the paper studies weighted graphs with arbitrarily high weights, thismodel differs significantly from our unweighted setting.Stabilization time has also been examined in several special cases, mostly assuming thesynchronous model. The stabilization of a slightly different minority process variant (basedon closed neighborhoods) has been studied in special classes of graphs including grids, treesand cycles [30, 31, 32]. The work of [19] describes slightly modified versions of minorityprocesses which may take O ( n ) or O ( n ) steps to stabilize, but provide better local minima(stable states) upon termination. For majority processes, stabilization has mostly beenstudied from a random initial coloring, on special classes of graphs such as grids, tori andexpanders [14, 26].Various aspects of majority processes have also been studied under the proportionalswitching rule, including sets of critical nodes that dominate the process, and sets of nodesthat always preserve a specific color [38, 37]. However, to our knowledge, the stabilizationtime of the processes with proportional switching has not been studied before. We define our processes on simple, unweighted, undirected graphs G ( V, E ), with V denotingthe set of nodes and E the set of edges. We denote the number of nodes by n = | V | . Theneighborhood of v is denoted by N ( v ), the degree of v by deg( v ) = | N ( v ) | . .A. Papp and R. Wattenhofer 3 We also use simple directed graphs in our proofs. A directed graph is called a DAG if itcontains no directed cycles. A dipartitioning of a DAG is a disjoint partitioning ( V , V ) of V such that each source node is in V , and all edges between V and V all go from V to V .We refer to the set of edges from V to V as a dicut .Given an undirected graph G with edge set E , we also define the directed edge set of G as b E = { ( u, v ) , ( v, u ) | ( u, v ) ∈ E } , i.e. the set of directed edges obtained by taking each edgewith both possible orientations.A coloring is a function γ : V → { black, white } . A state is a current coloring of G . Undera given coloring, we define N s ( v ) = { u ∈ N ( v ) | γ ( v ) = γ ( u ) } and N o ( v ) = { u ∈ N ( v ) | γ ( v ) = γ ( u ) } as the same-color and opposite-color neighborhood of v , respectively.We say that there is a conflict on edge ( u, v ), or that ( u, v ) is a conflicting edge , if u ∈ N o ( v ) in case of a majority process, and if u ∈ N s ( v ) in case of a minority process.In general, we denote the conflict neighborhood by N c ( v ), meaning N c ( v ) = N o ( v ) and N c ( v ) = N s ( v ) in case of majority and minority processes, respectively. We occasionallyalso use N ¬ c ( v ) = N ( v ) \ N c ( v ).If a node v has more conflicts than a predefined threshold (depending on the so-called switching rule in the model, discussed later) in the current state, then v is switchable . Switch-ing v changes its color to the opposite color. If edge ( u, v ) becomes (ceases to be) a conflictingedge when node v switches, then we say that v has created this conflict ( removed this conflict,respectively).A majority/minority process is a sequence of steps (states), where each state is obtainedfrom the previous state by a set of switchable nodes switching. In this paper, we examinesequential processes, when in each step, exactly one node switches. Such a process is stable when there are no more switchable nodes in the graph. By stabilization time , we mean thenumber of steps until a stable state is reached. We study the worst-case stabilization time of majority/minority processes, that is, the max-imal number of steps achievable on any graph, from any initial coloring. In other words, weassume the sequential adversarial model , when the order of nodes (i.e., the next switchablenode to switch in each time step) is chosen by an adversary who maximizes stabilizationtime.It only remains to specify the condition that allows a node to switch its color. The moststraightforward switching rule is the following: ⊲ Rule I (
Basic Switching ). Node v is switchable if | N c ( v ) | − | N ¬ c ( v ) | > | N c ( v ) | > · deg( v ). This rule is shown to allow upto e Θ( n ) stabilization time for both majority [13] and minority [27] processes. However, itis often more realistic to assume a proportional switching rule, based on a real parameter λ ∈ (0 , ⊲ Rule II (
Proportional Switching ). Node v is switchable if | N c ( v ) | − | N ¬ c ( v ) | ≥ λ · deg( v ).Since we have | N c ( v ) | + | N ¬ c ( v ) | = deg( v ), this is equivalent to saying that v is switchableexactly if | N c ( v ) | ≥ λ · deg( v ). In the limit when λ is infinitely small (or, equivalently, as λ approaches from above), we obtain Rule I as a special case of Rule II.In case of Rule I, whenever a node v switches, it is possible that the total number ofconflicts in the graph decreases by 1 only. On the other hand, Rule II implies that theswitching of v decreases the total number of conflicts at least by λ · deg( v ) (we say that v A General Stabilization Bound for Influence Propagation in Graphs
Figure 1
Plot of f ( λ ) and ϕ ∗ ( λ ) for λ ∈ (0 , wastes these conflicts), so in case of Rule II, the total number of conflicts can decrease morerapidly, allowing only a smaller stabilization time. Our findings show that the maximalnumber of steps is different for every distinct λ . f ( λ ) function While the processes have a symmetric definition on each edge by default, it turns out that inorder to maximize stabilization time, each edge has to be used in an asymmetric way. Themost important parameter at each node v is the ratio of neighbors v uses as ‘inputs’ and as‘outputs’. That is, the optimal behavior for each node v is to select ϕ · deg( v ) of its neighborsas outputs (for some ϕ ∈ (0 , − ϕ ) · deg( v ) neighbors as inputs, andonly remove conflicts from the edges coming from these input nodes. Note that with RuleII, whenever a node switches, it can create at most (cid:0) − λ (cid:1) · deg( v ) = − λ · deg( v ) newconflicts, so it is reasonable to assume ϕ ∈ (cid:0) , − λ (cid:3) .Our results show that if all nodes select ϕ as their output rate, then the maximal achiev-able stabilization time is a function of log (cid:16) − ϕλ + ϕ (cid:17) log (cid:16) − ϕϕ (cid:17) . (2)As such, the largest stabilization time can be achieved by maximizing this expression byselecting the optimal ϕ value, as shown in the definition of f in Equation 1. We denotethe optimal value of ϕ (i.e., the argmax of Equation 2) by ϕ ∗ . The function f has nostraightforward closed form, as such a form would require solving( λ + 1) · ϕ · log (cid:18) − ϕϕ (cid:19) = ( λ + ϕ ) log (cid:18) − ϕλ + ϕ (cid:19) , for ϕ , with λ as a parameter. A more detailed discussion of f is available in Appendix C.Figure 1 shows the values of f and ϕ ∗ as a function of λ . The figure shows that both f ( λ ) and ϕ ∗ ( λ ) are continuous, monotonically decreasing and convex. .A. Papp and R. Wattenhofer 5 It is visible that lim λ → f ( λ ) = 1 and lim λ → f ( λ ) = 0. This is in line with what wewould expect: the simple switching rule allows a stabilization time up to e Θ( n ) [13, 27],while even for any large λ <
1, it is still straightforward to present a graph with Ω( n )stabilization time. Our main result is showing that f ( λ ) describes the continuous transitionbetween these two extremes. Note that initially, each node v can have at most deg( v ) conflicts on its incident edges, andeach time when v switches, it wastes λ · deg( v ) conflicts. Therefore, if each node were to ‘use’its own initial conflicts only, then each node could switch at most λ times, and stabilizationtime could never go above O ( n ).Instead, the idea is to take the high number of conflicts initially available at high-degreenodes, and use these conflicts to switch the less wasteful low-degree nodes many times.Specifically, we could have a set of Θ( n )-degree nodes that initially have Ω( n ) conflictsaltogether on their incident edges, and somehow relay these conflicts to another set of O (1)-degree nodes, which only waste O (1) conflicts at each switching. However, due to the largedifference both in degree and in the number of switches, it is not possible to connect thesetwo sets directly; instead, we need to do this through a range of intermediate levels, whichexhibit decreasing degree and increasingly more switches. In order to maximize stabilizationtime, our main task is to move conflicts through these levels as efficiently (i.e., wasting asfew conflicts in the process) as possible.The formula of f ( λ ) describes the efficiency of this process. The rate of inputs to outputs − ϕϕ determines the factor by which the degree decreases at every new level. If ϕ is chosensmall, then − ϕϕ is high, so we only have a few levels until we reach constant degree, andhence the number of switches is increased only a few times. On the other hand, the increasein the number of switches per level is expressed by − ϕλ + ϕ , which is a decreasing function of ϕ .If ϕ is too large, then although we execute this increase more times, each of these increasesis significantly smaller.With a degree decrease rate of − ϕϕ , we can altogether have about log − ϕϕ ( n ) levels untilthe degree decreases from Θ( n ) to Θ(1). If we increase the number of switches by a factorof − ϕλ + ϕ each time, then the O (1)-degree nodes will exhibit (cid:18) − ϕλ + ϕ (cid:19) log − ϕϕ ( n ) = n log ( − ϕλ + ϕ ) log ( − ϕϕ ) ≤ n f ( λ ) (3)switches, with an equation only if ϕ = ϕ ∗ ( λ ). Having e Θ( n ) nodes in the last level, this sumsup to about n f ( λ ) switches altogether. The upper bound on stabilization time is easiest to present in a general form that onlyfocuses on this flow of conflicts in the graph. We define a simpler representation of theprocesses which only keeps a few necessary concepts to describe the flow of conflicts, andignores e.g. the color of nodes or the timing of the switches at each node. In fact, we onlyrequire the number of times s ( v ) each v ∈ V switches, and the number c ( u, v ) of conflictsthat were created by node u and then removed by node v , for each ( u, v ) ∈ b E .For simplicity, given a function c : b E → N , let us introduce the notation c in ( v ) := P u ∈ N ( v ) c ( u, v ) and c out ( v ) := P u ∈ N ( v ) c ( v, u ). A General Stabilization Bound for Influence Propagation in Graphs ◮ Definition 1 ( Conflict Propagation System, CPS ) . Given an undirected graph G , aconflict propagation system is an assignment s : V → N and c : b E → N such that for each v ∈ V , we have c in ( v ) + deg ( v ) ≥ λ · deg ( v ) · s ( v ) + c out ( v ) , for each v ∈ V , we have c out ( v ) ≤ − λ · deg ( v ) · s ( v ) , and for each ( u, v ) ∈ b E , we have c ( u, v ) ≤ s ( u ) . With the choice of s ( v ) and c ( u, v ) described above, any proportional majority or minor-ity process indeed satisfies these properties, and thus provides a CPS. Hence if we upperbound the stabilization time (i.e. the total number of switches P v ∈ V s ( v )) of any CPS, thisestablishes the same bound on the stabilization time of any majority/minority process.Condition 1 is the most complex of the three; it expresses the amount of ‘input conflicts’ c in ( v ) required to switch v an s ( v ) times altogether. Every time after v switches, it hasat most − λ · deg( v ) conflicts on the incident edges, so it needs to acquire λ · deg( v ) newconflicts to reach the threshold of λ · deg( v ) and be switchable again; this results in the λ · deg( v ) · s ( v ) term. Moreover, if in the meantime, the neighboring nodes remove someof the conflicts from the incident edges (expressed by c out ( v )), then this also has to becompensated for by extra input conflicts. Finally, the extra deg( v ) term comes from the (atmost) deg( v ) conflicts that are already on the incident edges in the initial coloring. For adetailed discussion of this condition, see Appendix A.Condition 2 also holds, since each time when v switches, it creates at most − λ · deg( v )conflicts on the incident edges. Each time u switches, it can only create one conflict ona specific edge, so condition 3 also follows. Hence any majority/minority process indeedprovides a CPS.Finally, we need a technical step to get rid of the extra deg( v ) term in condition 1. Notethat this term becomes asymptotically irrelevant as s ( v ) grows; hence, our approach is tohandle fewer-switching nodes separately, and require condition 1 only for nodes with large s ( v ). More formally, we select a constant s , and we refer to nodes v with s ( v ) < s as basenodes . We then consider Relaxed CPSs , where, given this extra parameter s , condition 1 isreplaced by: for each v ∈ V with s ( v ) ≥ s , we have c in ( v ) ≥ λ · deg ( v ) · s ( v ) + c out ( v ) , This relaxation comes at the cost of an extra ǫ additive term in the exponent of our upperbound. We now outline the proof of the upper bound on the number of switches. A more detaileddiscussion of this proof is available in Appendix A.
We start by noting that since moving a conflict through a node is wasteful, it is suboptimalto have two neighboring nodes that both transfer a conflict to each other, or more generally,to move a conflict along any directed cycle. Therefore, in a CPS with maximal stabilizationtime, the conflicts are essentially moved along the edges of a DAG. To formalize this, givena CPS, let us say that a directed edge ( u, v ) ∈ b E is a real edge if c ( u, v ) > ◮ Lemma 2.
There exists a CPS with maximal stabilization time where the real edges forma DAG. .A. Papp and R. Wattenhofer 7
Proof.
Among the CPSs on n nodes with maximal stabilization time, let us take the CPS P where the sum P e ∈ b E c ( e ) is minimal. Assume that there is a directed cycle along the realedges of this CPS, and let c ( e ) denote the minimal value of function c along this cycle.Now consider the CPS P ′ where the value of c on each edge of this directed cycle isdecreased by c ( e ). Since in each affected node, the inputs and outputs have been decreasedby the same value, P ′ still satisfies all three conditions, and thus it is also a valid CPS.Moreover, P ′ has the same amount of total switches as P . However, since c ( e ) >
0, thesum of c ( e ) values in P ′ is less than in P , which contradicts the minimality of P . ◭ Hence for the upper bound proof, we can assume that the real edges of the CPS forma DAG. In the rest of the section, we focus on this DAG composed of the real edges of theCPS. We first show that for convenience, we can also assume that each base node is a sourcein this DAG. ◮ Lemma 3.
There exists a CPS with maximal stabilization time where each base node is asource node of the DAG.
Proof.
Note that by removing an input edge ( u, v ) of a base node v (that is, setting c ( u, v )to 0), the remaining CPS is still valid, since node v does not have to satisfy condition 1R,and in node u , only the sum of outputs was decreased. Therefore, we can remove all theinput edges of each base node, and hence base nodes will all become source nodes of theDAG. ◭◮ Lemma 4.
For each directed edge ( u, v ) in the DAG where u is a source node, c ( u, v ) = O (1) . More specifically, c ( u, v ) ≤ s . Proof. If u is a base node, then s ( u ) ≤ s , so c ( u, v ) ≤ s due to condition 3. Otherwise,condition 1R must hold, and since u has no input nodes, we get 0 ≥ c out ( u ) + λ · deg( u ) · s ( u ),hence c out ( u ) = 0, so c ( u, v ) = 0 for every v . Thus c ( u, v ) ≤ s . ◭ As a main ingredient of the proof, we define a way to measure how close we are to propagatingconflicts optimally. ◮ Definition 5 ( Potential ) . Given a real edge e ∈ b E , the potential of e is defined as P ( e ) = c ( e ) /f ( λ ) . For simplicity of notation, we also use P to denote the function x → x /f ( λ ) on real numbersinstead of edges.Intuitively speaking, the potential function describes the cost of sending a specific numberof conflicts through a single edge, in terms of the number of initial conflicts used up for this.Note that since f ( λ ) <
1, the function P is always convex. This shows that sending a highnumber of conflicts through a single edge is more costly than sending the same amount ofconflicts through multiple edges.As the following lemma shows, the potential is defined in such a way that the totalpotential can never increase when passing through a node in the DAG; the best that a nodecan do is to preserve the input potential if it relays conflicts optimally. ◮ Lemma 6.
For any non-source node v of the DAG, with input edges from N in ( v ) andoutput edges to N out ( v ) , we have X u ∈ N in ( v ) P ( u, v ) ≥ X u ∈ N out ( v ) P ( v, u ) . A General Stabilization Bound for Influence Propagation in Graphs
Proof. If v is not a source, then by Lemma 3 it is not a base node, and thus has to satisfy con-dition 1R. In our DAG, c in and c out correspond to P u ∈ N in ( v ) c ( u, v ) and P u ∈ N out ( v ) c ( v, u ),respectively. Assume that we fix the value of c in and c out . Since the potential function P is convex, the incoming potential (left side) is minimized if c in is split as equally amongthe input neighbors as possible. On the other hand, the outgoing potential (right side) ismaximized if c out is split as unequally among outputs as possible, so all output edges presentin the DAG have the maximal possible number of switches, meaning c ( v, u ) = s ( v ) for every u ∈ N out ( v ).Assume that a fraction ϕ of v ’s incident edges are outgoing, i.e. | N out ( v ) | = ϕ · deg( v )and | N in ( v ) | = (1 − ϕ ) · deg( v ). By condition 1R, we have c in ≥ λ · deg( v ) · s ( v ) + c out ; with c out = ϕ · deg( v ) · s ( v ), this gives c in ≥ ( λ + ϕ ) · deg( v ) · s ( v ). If split evenly among the(1 − ϕ ) · deg( v ) inputs, this means c in | N in ( v ) | ≥ ( λ + ϕ ) · deg( v ) · s ( v )(1 − ϕ ) · deg( v ) = (cid:18) λ + ϕ − ϕ (cid:19) · s ( v )switches for each input node. The inequality on the potential then comes down to X u ∈ N in ( v ) P ( u, v ) ≥ (1 − ϕ ) · deg( v ) · (cid:18) λ + ϕ − ϕ · s ( v ) (cid:19) /f ( λ ) ≥≥ ϕ · deg( v ) · s ( v ) /f ( λ ) ≥ X u ∈ N out ( v ) P ( v, u ) . To show that the inequality in the middle holds, we only require (cid:18) λ + ϕ − ϕ (cid:19) /f ( λ ) ≥ ϕ − ϕ , or, put otherwise, 1 f ( λ ) log (cid:18) λ + ϕ − ϕ (cid:19) ≥ log (cid:18) ϕ − ϕ (cid:19) . Since ϕ − ϕ < (cid:16) λ + ϕ − ϕ (cid:17) log (cid:16) ϕ − ϕ (cid:17) = log (cid:16) − ϕλ + ϕ (cid:17) log (cid:16) − ϕϕ (cid:17) ≤ f ( λ ) . This holds by the definition of f ( λ ). Note that this also shows that equality can only beachieved if the output rate ϕ is indeed chosen as the argmax value ϕ ∗ ( λ ). ◭ Lemma 6 provides the key insight to the main idea of our proof: if we process the nodesof a DAG according to a topological ordering, always maintaining a dicut of outgoing edgesfrom the already processed part of the DAG, then this potential cannot ever increase whenadding a new node. ◮ Lemma 7.
Given a dicut S of a dipartitioning in the DAG, we have X e ∈ S P ( e ) = O ( n ) . Proof (Sketch).
Each dipartitioning can be obtained by starting from the trivial diparti-tioning where V only contains the source nodes of the DAG, and then iteratively adding .A. Papp and R. Wattenhofer 9 nodes one by one to this initial V . The number of outgoing edges from this initial V (theset of source nodes) is upper bounded by | E | = O ( n ). According to Lemma 4, the numberof switches (and hence the potential) on each edge of the dicut is at most constant, so thesum of potential in this initial dicut is also O ( n ).Now consider the process of iteratively adding nodes to this initial V to obtain a specificdipartitioning. Whenever we add a new node v to V , the incoming edges of v are removedfrom the dicut, and the outgoing edges of v are added to the dicut. According to Lemma 6,the potential on the outgoing edges of v is at most as much as the potential on the incomingedges, so the sum of potential can not increase in any of these steps. Therefore, whenarriving at the final V , the sum of potential on the cut edges is still at most O ( n ). ◭ Finally, we present our main lemma that uses the previous upper bound on potential inorder to upper bound the number of switches in the CPS. ◮ Lemma 8.
Given a CPS and an integer a ∈ { , ..., n } , let A = { v ∈ V | a ≤ deg(v) < 2a } .For the total number of switches s ( A ) = P v ∈ A s ( v ) , we have s ( A ) = O (cid:16) n f ( λ ) · a − f ( λ ) (cid:17) . Proof (Sketch).
If the input edges of the nodes in A would form the dicut of a dipartitioning,then we could directly use Lemma 7 to upper bound the number of switches in A throughthe potential of the input edges. However, the nodes of A might be scattered arbitrarily inthe DAG, and if there is a directed path from one node in A to another, then the ‘same’potential might be used to switch more than one node in A . Thus we cannot apply Lemma7 directly. Instead, our proof consists of two parts.1. First, we define so-called responsibilities for the nodes in A . Given a node v ∈ A ,the idea is to devise two different functions: (i) a function ∆ c ( e ), defined on each edge e which is contained in any directed path starting from v , and (ii) a function ∆ s ( v ), whichis defined on any node v that is reachable from v on a directed path. Intuitively, we willconsider the conflicts ∆ c ( e ) and the switches ∆ s ( v ) to be those that are indirectly ‘theeffects of the switches of v ’. More specifically, ∆ c and ∆ s are chosen such that if they areremoved (subtracted from the CPS), then v has no output edges in the DAG anymore, andthe resulting assignment s ′ ( v ) = s ( v ) − ∆ s ( v ) and c ′ ( e ) = c ( e ) − ∆ c ( e ) still remains a validCPS. Hence the subtraction results in a CPS where v has no directed path to other nodesin A anymore. This shows that we can keep on executing this step for each v ∈ A until notwo nodes in A are connected by a directed path, at which point we can apply Lemma 7 tothe resulting graph.Whenever we process such a node v ∈ A , we define the responsibility of v as R ( v ) := s ( v ) + P ∆ s ( v ), where the sum is understood over all the nodes v ∈ A that are reachablefrom v . The main idea is that we ‘reassign’ these switches to v from other nodes in A .This method is essentially a redistribution of switches in the CPS, so we have P v ∈ A s ( v ) = P v ∈ A R ( v ) altogether.Furthermore, our definition of ∆ s ( v ) will ensure that R ( v ) = O (1) · s ( v ). Intuitively, thiscan be explained as follows. Recall that with Rule II, the ratio of output to input conflictsis always upper bounded by a constant factor (below 1) at every node, since switchingalways wastes a specific proportion of conflicts. Hence, over any path starting from v , thenumber of outputs that can be attributed to v forms a geometric series. As the ratio ofthe geometric series is below 1, the total amount of conflicts caused by v this way is still within the magnitude of the input conflicts of v . Since each node in A has similar degree(and thus requires similar number of input conflicts for one switching), these conflicts canonly switch nodes in A approximately the same number of times as v can be switched byits own inputs. A more detailed discussion of this responsibility technique is available inAppendix A.2. For the second part of the proof, we show the claim in this modified CPS with nodirected path between nodes in A . This implies that there exists a dipartitioning where thenodes of A are in V , but all their input nodes are in V . This means that all the input edgesof each node in A are included in the dicut S of the partitioning.Consider a node v ∈ A . Due to condition 1R, v has at least λ · deg( v ) · s ( v ) input conflicts.Even if these are distributed equally on all incident edges of v (this is the case that amountsto the lowest total potential, since P is convex), this requires a total input potential ofdeg( v ) · P ( λ · s ( v )) = deg( v ) · s ( v ) /f ( λ ) · λ /f ( λ ) at least. Recall that Lemma 7 shows that the total potential on all edges in S is O ( n ). Ourtask is hence to find an upper bound on P v ∈ A s ( v ), subject to X v ∈ A deg( v ) · s ( v ) /f ( λ ) · λ /f ( λ ) = O ( n ) . Since the last factor on the left side is a constant, we can simply remove it and include itin the O ( n ) term. Furthermore, the degree of each node in A is at least a , so by lowerbounding each degree by a , we get X v ∈ A s ( v ) /f ( λ ) = O ( n ) · a . Given this upper bound on P v ∈ A P ( s ( v )), since the function P is convex, the sum of switches P v ∈ A s ( v ) is maximal when each node in A switches the same amount of times (i.e. thereis an s such that s ( v ) = s for every v ∈ A ), giving | A | · s /f ( λ ) = O ( n ) · a . With this upper bound, | A | · s is maximal if | A | is as large as possible and s as small aspossible (again because P grows faster than linearly). Clearly | A | ≤ n , so assuming | A | = n ,we get s /f ( λ ) = O ( n ) · a , which means that s = O ( n f ( λ ) ) · a − f ( λ ) , and thus for the total number of switches in A , we get | A | · s = O ( n f ( λ ) ) · a − f ( λ ) . ◭ It only remains to sum up this bound for the appropriate intervals to obtain our finalbound. Let us consider the intervals [1 , , , a = 2 k for each factor of 2up to n , which is a disjoint partitioning of the possible degrees. Note that for these specificvalues of a , the sum P ∞ k =0 (2 k ) − f ( λ ) converges to a constant according to the ratio test.In other words, the sum is dominated by the number of switches of the lowest (constant)degree nodes, and hence, the total number of switches in the graph can be upper boundedby O (1) · n f ( λ ) .Recall that since we work with Relaxed CPSs, we lose an ǫ in the exponent of this upperbound when we carry the result over to an original CPS. .A. Papp and R. Wattenhofer 11 log − ϕϕ ( n )levels Θ (cid:0) n log n (cid:1) nodes s switches − ϕλ + ϕ · s switches d -regular bipartite ϕ − ϕ · d -regular bipartite Figure 2
Consecutive levels of the lower bound construction ◮ Theorem 9.
In any CPS with parameter λ , we have P v ∈ V s ( v ) = O ( n f ( λ )+ ǫ ) for any ǫ > . Since we have established that every majority/minority process provides a CPS, the upperbound on their stabilization time also follows. ◮ Corollary 10.
Under Rule II with any λ ∈ (0 , , every majority/minority process stabilizesin time O ( n f ( λ )+ ǫ ) for any ǫ > . Having established the most efficient way to relay conflicts, the high-level design of thematching lower bound construction is rather straightforward, following the level-based ideadescribed in Section 4.Given λ , we first determine the optimal output rate ϕ = ϕ ∗ ( λ ). We then create aconstruction consisting of distinct levels, where each level has the same size, and each consistsof a set of nodes that have the same degree. Since the degree should decrease by a factorof ϕ − ϕ in each new level from top to bottom, we can add L = log − ϕϕ ( n ) such levels tothe graph. If each of these level has Θ( n log n ) nodes, then with the appropriate choice ofconstants, the total number of nodes is below n .Each node in the construction is only connected to other nodes on the levels immediatelyabove or below its own. All conflicts are propagated down in the graph, from upper to lowerlevels, so the upper neighbors of a node are always used as inputs, while the lower neighborsare always used as outputs. For the optimal propagation of conflicts, each node v must havethe optimal input-output rate, i.e. an up-degree of (1 − ϕ ) · deg( v ) and a down-degree of ϕ · deg( v ). Thus each consecutive level pair forms a regular bipartite graph, with ϕ − ϕ of thedegree of the level pair above. The construction is illustrated in Figure 2.Our parameters λ and ϕ also determine that the number of switches should increase bya factor − ϕλ + ϕ on each new level. If we can always increase the switches at this rate, theneach node on the lowermost level will switch (cid:18) − ϕλ + ϕ (cid:19) log − ϕϕ ( n ) = n log ( − ϕλ + ϕ ) log ( − ϕϕ ) = n f ( λ ) , times, where the last equation holds because we are using ϕ = ϕ ∗ ( λ ). Since there are e Θ( n ) nodes on the lowermost level, the switches in this level already amount to a total of e Θ( n f ( λ ) ), matching the upper bound. However, note that when ϕ ∗ ( λ ) or − ϕλ + ϕ is irrational, we can only use close enough rationalapproximations of these values. This comes at the cost of losing a small ǫ in the exponent. ◮ Theorem 11.
Under Rule II with a wide range of λ values, there is a graph constructionand initial coloring where majority/minority processes stabilize in time Ω( n f ( λ ) − ǫ ) for any ǫ > . This level-based structure describes the general idea behind our lower bound construction.However, the main challenge of the construction is in fact designing the connection betweensubsequent levels. In particular, this connection has to make sure that conflicts are indeedalways relayed optimally, i.e. no potential is wasted between any two levels.Recall from the proof of Lemma 6 that this is only possible if between any two consecutiveswitches of a node v , it is exactly a λ + ϕ − ϕ fraction of v ’s upper neighbors that switch. Moreover,these switching λ + ϕ − ϕ · deg( v ) upper neighbors always have to be of the right color, i.e. theyneed to switch to the opposite of v ’s current color in case of majority processes, and to thesame color in case of minority processes. Since the upper neighbors of v are in the same level,we also have to ensure that throughout the entire process, each upper neighbor switches thesame number of times altogether.These conditions impose heavy restrictions on the possible ways to connect two sub-sequent levels. If the conditions hold for a node v (i.e. the sequence of switches of v ’supper neighbors can be split into λ + ϕ − ϕ · deg( v )-size consecutive appropriate-colored subsets,in an altogether balanced way), then we say that v ’s upper neighbors follow a valid controlsequence .On the other hand, in order to argue about levels in general, we want each level to behavein a similar way. The easiest way to achieve this is to have a one-to-one correspondencebetween the nodes of different levels, and ensure that each level repeats the same sequenceof steps periodically, but in a different pace. That is, we want to connect the levels in sucha way that when a level exhibits a specific pattern of switches, then this allows the nodes ofthe next level to replicate the exact same pattern of switches, but more times.Thus the key task in our lower bound constructions is to develop a so-called controlgadget , which is essentially a bipartite graph that fulfills these two requirements: it admitsa scheduling of switches such that (i) the upper neighborhood of each lower node follows avalid control sequence, and (ii) while the upper level executes a sequence s times, the lowerlevel executes the same sequence − ϕλ + ϕ · s times. Given such a control gadget, we can connectthe subsequent level pairs of our construction using this gadgets. This allows us to indeedincrease the number of switches by a − ϕλ + ϕ factor in each new level, resulting in a total of e Θ( n f ( λ ) ) switches as described above.However, developing a control gadget is a difficult combinatorial task in general: itdepends on many factors including divisibility questions, and whether our parameters canbe expressed as a fraction of small integers. A detailed discussion of control gadget design andthe λ values covered by Theorem 11 is available in Appendix B. In particular, we present amethod which allows us to develop a control gadget for every small λ value below a thresholdof approximately 0 .
476 (more specifically, as long as λ + ϕ − ϕ ≤ ). The same technique alsoprovides a control gadget for some larger λ values above the threshold, but only when thecorresponding switch increase ratio − ϕλ + ϕ can be expressed as a fraction of relatively smallintegers. Furthermore, Appendix B also describes a simpler solution technique to the controlgadget problem; this leaves a slightly larger gap to the upper bound, but it works for any λ without much difficulty. .A. Papp and R. Wattenhofer 13 References Victor Amelkin, Francesco Bullo, and Ambuj K Singh. Polar opinion dynamics in socialnetworks.
IEEE Transactions on Automatic Control , 62(11):5650–5665, 2017. Vincenzo Auletta, Ioannis Caragiannis, Diodato Ferraioli, Clemente Galdi, and Giuseppe Per-siano. Generalized discrete preference games. In
Proceedings of the Twenty-Fifth InternationalJoint Conference on Artificial Intelligence , IJCAI’16, page 53–59. AAAI Press, 2016. Cristina Bazgan, Zsolt Tuza, and Daniel Vanderpooten. Complexity and approximation ofsatisfactory partition problems. In
International Computing and Combinatorics Conference ,pages 829–838. Springer, 2005. Cristina Bazgan, Zsolt Tuza, and Daniel Vanderpooten. The satisfactory partition problem.
Discrete applied mathematics , 154(8):1236–1245, 2006. Cristina Bazgan, Zsolt Tuza, and Daniel Vanderpooten. Satisfactory graph partition, variants,and generalizations.
European Journal of Operational Research , 206(2):271–280, 2010. Olivier Bodini, Thomas Fernique, and Damien Regnault. Crystallization by stochastic flips.In
Journal of Physics: Conference Series , volume 226, page 012022. IOP Publishing, 2010. Olivier Bodini, Thomas Fernique, and Damien Regnault. Stochastic flips on two-letter words.In , pages 48–55. SIAM, 2010. Zhigang Cao and Xiaoguang Yang. The fashion game: Network extension of matching pennies.
Theoretical Computer Science , 540:169–181, 2014. Luca Cardelli and Attila Csikász-Nagy. The cell cycle switch computes approximate majority.
Scientific reports , 2:656, 2012. Ning Chen. On the approximability of influence in social networks.
SIAM Journal on DiscreteMathematics , 23(3):1400–1415, 2009. Jacques Demongeot, Julio Aracena, Florence Thuderoz, Thierry-Pascal Baum, and OlivierCohen. Genetic regulation networks: circuits, regulons and attractors.
Comptes RendusBiologies , 326(2):171–188, 2003. MohammadAmin Fazli, Mohammad Ghodsi, Jafar Habibi, Pooya Jalaly, Vahab Mirrokni, andSina Sadeghian. On non-progressive spread of influence through social networks.
TheoreticalComputer Science , 550:36–50, 2014. Silvio Frischknecht, Barbara Keller, and Roger Wattenhofer. Convergence in (social) influencenetworks. In
International Symposium on Distributed Computing , pages 433–446. Springer,2013. Bernd Gärtner and Ahad N Zehmakan. Color war: Cellular automata with majority-rule. In
International Conference on Language and Automata Theory and Applications , pages 393–404.Springer, 2017. Bernd Gärtner and Ahad N Zehmakan. Majority model on random regular graphs. In
LatinAmerican Symposium on Theoretical Informatics , pages 572–583. Springer, 2018. Michael U Gerber and Daniel Kobler. Algorithmic approach to the satisfactory graph parti-tioning problem.
European Journal of Operational Research , 125(2):283–291, 2000. Eric Goles and Jorge Olivos. Periodic behaviour of generalized threshold functions.
DiscreteMathematics , 30(2):187–189, 1980. Mark Granovetter. Threshold models of collective behavior.
American Journal of Sociology ,83(6):1420–1443, 1978. Sandra M Hedetniemi, Stephen T Hedetniemi, KE Kennedy, and Alice A Mcrae. Self-stabilizing algorithms for unfriendly partitions into two disjoint dominating sets.
ParallelProcessing Letters , 23(01):1350001, 2013. Clemens Jeger and Ahad N Zehmakan. Dynamic monopolies in reversible bootstrap percola-tion. arXiv preprint arXiv:1805.07392 , 2018. Dominik Kaaser, Frederik Mallmann-Trenn, and Emanuele Natale. On the voting time of thedeterministic majority process. In , 2016. Barbara Keller, David Peleg, and Roger Wattenhofer. How even tiny influence can have a bigimpact! In
International Conference on Fun with Algorithms , pages 252–263. Springer, 2014. David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence througha social network. In
Proceedings of the ninth ACM SIGKDD international conference onKnowledge discovery and data mining , pages 137–146. ACM, 2003. Jeremy Kun, Brian Powers, and Lev Reyzin. Anti-coordination games and stable graphcolorings. In
International Symposium on Algorithmic Game Theory , pages 122–133. Springer,2013. Yuezhou Lv and Thomas Moscibroda. Local information in influence networks. In
Interna-tional Symposium on Distributed Computing , pages 292–308. Springer, 2015. Ahad N Zehmakan. Opinion forming in erdös-rényi random graph and expanders. In . Schloss Dagstuhl-Leibniz-Zentrumfuer Informatik GmbH, Wadern/Saarbruecken . . . , 2018. Pál András Papp and Roger Wattenhofer. Stabilization Time in Minority Processes. In , volume 149 of
Leib-niz International Proceedings in Informatics (LIPIcs) , pages 43:1–43:19, Dagstuhl, Germany,2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. Pál András Papp and Roger Wattenhofer. Stabilization Time in Weighted Minority Processes.In ,volume 126 of
Leibniz International Proceedings in Informatics (LIPIcs) , pages 54:1–54:15,Dagstuhl, Germany, 2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. David Peleg. Local majorities, coalitions and monopolies in graphs: a review.
TheoreticalComputer Science , 282(2):231–257, 2002. Damien Regnault, Nicolas Schabanel, and Éric Thierry. Progresses in the analysis of stochastic2d cellular automata: A study of asynchronous 2d minority. In Luděk Kučera and Ant-onín Kučera, editors,
Mathematical Foundations of Computer Science 2007 , pages 320–332.Springer Berlin Heidelberg, 2007. Damien Regnault, Nicolas Schabanel, and Éric Thierry. On the analysis of “simple” 2dstochastic cellular automata. In
International Conference on Language and Automata Theoryand Applications , pages 452–463. Springer, 2008. Jean-Baptiste Rouquier, Damien Regnault, and Éric Thierry. Stochastic minority on graphs.
Theoretical Computer Science , 412(30):3947–3963, 2011. Khurram H Shafique and Ronald D Dutton. On satisfactory partitioning of graphs.
CongressusNumerantium , pages 183–194, 2002. Saharon Shelah and Eric C Milner. Graphs with no unfriendly partitions.
A tribute to PaulErdös , pages 373–384, 1990. Ariel Webster, Bruce Kapron, and Valerie King. Stability of certainty and opinion in in-fluence networks. In
Advances in Social Networks Analysis and Mining (ASONAM), 2016IEEE/ACM International Conference on , pages 1309–1320. IEEE, 2016. Peter Winkler. Puzzled: Delightful graph theory.
Commun. ACM , 51(8):104–104, August2008. Ahad N Zehmakan. Target set in threshold models.
Acta Mathematica Universitatis Comeni-anae , 88(3), 2019. Ahad N Zehmakan. Tight bounds on the minimum size of a dynamic monopoly. In
In-ternational Conference on Language and Automata Theory and Applications , pages 381–393.Springer, 2019. Appendices
A Discussion of upper bound proof
In this section, we discuss some parts of the upper bound proof in more detail.
A.1 Majority and minority processes as CPSs
When introducing the concept of CPS as the common abstraction of majority and minorityprocesses, it is rather straightforward that conditions 2 and 3 are fulfilled, since each timewhen a node v switches, it can only create 1 conflict on at most − λ · deg( v ) incident edges.Condition 1, however, requires some more discussion.Between each two consecutive switches of v , we know that at least λ · deg( v ) − − λ · deg( v ) = λ · deg( v ) new conflicts must be wasted (i.e. removed) to raise the number of con-flicts on incident edges above the switchability threshold of λ · deg( v ) again. Furthermore,if between the two switches there are also conflicts that are removed from the incident edgesby neighboring nodes (i.e., outputs), then each of these conflicts have to be replaced by anew one (an extra input) to have the required number of conflicts for switchability again.More formally, let in i be the number of conflicts created on, and out i the number ofconflicts removed from the edges of v between the ( i − th and i th switching of v , for i ∈{ , ..., s ( v ) } . If out i further conflicts are removed from v ’s edges before the ( i +1) th switchingof v , then v needs to obtain out i further conflicts to reach the threshold of λ · deg( v ) andbe switchable for the ( i + 1) th time. This implies in i ≥ λ · deg( v ) + out i ; adding this up forall i provides condition 1.This explains why the relaxed version of condition 1 holds asymptotically. However,there are some edge cases that make the process slightly differ from this asymptotic behavior.Besides input conflicts (created by a neighbor of v ), there may also be original conflicts onthe edges incident to v , which were not created by a neighbor but were present from thebeginning due to the initial coloring of the graph. These conflicts can be used by v justlike an input conflict when switching, and hence it is in fact the sum of original and inputconflicts that has to be larger than the required number of conflicts for switching (i.e., thesum of outputs plus λ · deg( v ) · s ( v )). However, the number of original conflicts on incidentedges is at most deg( v ), so adding an extra term of deg( v ) on the left side of condition 1(i.e., requiring only that c in ( v ) ≥ λ · deg( v ) · s ( v ) + c out ( v ) − deg( v )) gives an inequalitythat holds for any node in a majority/minority process, even if a node v uses up to deg( v )original conflicts while switching.Also, the behavior of the process is slightly different before the first and after the lastswitch. On the one hand, in the first round, v needs to use λ · deg( v ) conflicts that are allinputs or original conflicts (whereas in later rounds, up to − λ · deg( v ) of the used conflictsmight be ones that were created by v in the previous round). Therefore, because of this firstround, the total number of used conflicts is actually λ · deg( v ) − λ · deg( v ) = − λ · deg( v )higher than in the asymptotic case. On the other hand, there is no need to compensatefor output conflicts that are removed after the very last switching of v , since the number ofconflicts in the final state of the graph is irrelevant; therefore, there may be up to − λ · deg( v )output conflicts that do not have to be compensated. Note, however, that these two edgecases do not require us to further modify condition 1, since the two new terms cancel eachother on the right side. A.2 Relaxing the CPS definition
While the extra deg( v ) term in condition 1 becomes asymptotically irrelevant if a nodeswitches many times (i.e. s ( v ) is large), the precise analysis still requires us to introducethe relaxed version of the CPS concept where condition 1 does not contain this extra term.Consider a slightly smaller switching rule parameter λ − ǫ , for any small ǫ >
0. Notethat c in ( v ) ≥ ( λ − ǫ ) · deg( v ) · s ( v ) + c out ( v ) automatically implies c in ( v ) + deg( v ) ≥ λ · deg( v ) · s ( v ) + c out ( v ) for s ( v ) large enough; that is, ǫ · deg( v ) · s ( v ) ≥ deg( v ) holds whenever s ( v ) ≥ ǫ , so the additive term is not required. However, having λ − ǫ instead of λ in thecondition also results in the slightly less tight upper bound of O ( n f ( λ − ǫ ) ).Therefore, we take the following approach. Assume we have a λ for which we want toshow the upper bound. We select a small ǫ >
0, and define λ := λ − ǫ . We define a constantswitching threshold s := ǫ ; nodes v with s ( v ) < s will be the base nodes. The base nodesin our graph then do not satisfy condition 1; however, since they only switch a few times,they have a limited influence on the process. By the choice of s , the remaining nodes satisfycondition 1 with λ , even without the extra term, so the relaxed version of condition 1 indeedholds with s and λ .We then follow the proof outlined before with Relaxed CPSs. This allows us to upperbound stabilization time by O ( n f ( λ ) ) = O ( n f ( λ − ǫ ) ). Since f is continuous and thetechnique works for any ǫ >
0, this establishes an upper bound of O ( n f ( λ )+ ǫ ) for any ǫ > λ of Rule II, our upper bound amounts to O ( n f ( λ )+ ǫ )steps. A.3 Potential of dicuts
Recall that Lemma 6 shows that the output potential of any node can be at most as muchas its input potential. This allows us to upper bound the total potential in any dicut of thegraph.We use trivial dipartitioning to refer to the dipartitioning ( V , V ) where V only containsthe source nodes of the DAG, and V contains all other nodes. ◮ Lemma 12.
Every dipartitioning can be obtained from the trivial partitioning through asequence of steps such that each intermediate step is also a dipartitioning.
Proof.
The statement clearly holds for the trivial dipartitioning. For any other dipartition-ing, we can prove the statement by induction on the number of nodes in V . Given anyother dipartitioning ( V , V ), let us take a topological ordering of the DAG which beginswith all the source nodes. Let us restrict this ordering to V , and let v be the last nodeof the ordering which is in V . Since the ordering is topological, there are no edges from v to V \ { v } . Therefore, ( V \ { v } , V ∪ { v } ) is also a dipartitioning, so there exists a validsequence to obtain it due to the induction hypothesis. Appending the dipartitioning ( V , V ) to the end of this sequence provides a sequence for ( V , V ). ◭ From this, the proof of Lemma 7 already follows. The dicut of the trivial dipartitioninghas potential at most O ( n ). Due to Lemma 6, the potential of the dicut can only decreasethroughout the sequence. This shows that the potential of dicut ( V , V ) is still at most asmuch potential of the trivial dipartitioning. A.4 Responsibility technique for the upper bound
We now discuss the proof of Lemma 8 in detail. Note that in the definition of a (relaxed) CPS,we defined the functions s and c as integer-valued, since this definition is intuitively closer to our original majority/minority processes. However, one can observe that all our statementsin Section 5 still hold if s and c are allowed to take any value among the nonnegative realnumbers. Since allowing non-integer values allows for a simpler proof of Lemma 8, in thefollowing, we consider this not-necessarily-integer version of CPSs in order to avoid somediscretization challenges.As an edge case, note that source nodes switch at most O (1) time according to Lemma 4,so altogether, they contribute at most O ( n ) to the total number of switches. Therefore, wecan ignore them in the analysis, and consider only the remaining nodes of the graph whichsatisfy the relaxed version of condition 1.The main structure of the proof has already been outlined in Section 5.3; it only remainsto describe the responsibility technique devised for the first part of the proof.Let us take a topological ordering of the nodes in A , and let us iterate through the nodesof A in this order. For each next node v in this ordering, we define the responsibility of v ,denoted R ( v ). As outlined, we introduce a function ∆ c ( e ) on the edges and ∆ s ( v ) on thevertices for each such v , and after having processed v , we subtract these functions from c ( e ) and s ( v ), respectively.That is, let c ′ : b E → R and s ′ : b V → R , initially set to c ′ ( e ) := c ( e ) and s ′ ( v ) := s ( v ) forevery vertex v ∈ V and every directed edge e of the DAG. Every time when we process thenext node v , we define a new ∆ c ( e ) and ∆ s ( v ) based on the effects of v , and reduce c ′ ( e )by ∆ c ( e ) on every e ∈ b E , and reduce s ′ ( v ) by ∆ s ( v ) on every v ∈ V . Due to the definitionof ∆ c ( e ) and ∆ s ( v ), the resulting c ′ ( e ) and s ′ ( v ) will still be a valid CPS after each step ofthe process. After processing all v ∈ A , we obtain a final c ′ ( e ) and s ′ ( v ) for the second partof the proof outlined in Lemma 8. A.4.1 Definition of ∆ c and ∆ s Let us now define the functions ∆ c and ∆ s . Let v be the next node of the topologicalordering. In order to process the switches ‘caused by’ v , we take a topological ordering ofthe nodes reachable from v on the current edges of the DAG (that is, the real edges withregard to the current c ′ ( e )). The first node of the ordering is clearly v itself; for each outputedge ( v , u ) ∈ b E of v , let ∆ c ( v , u ) = c ′ ( v , u ). That is, after the current ∆ c ( e ) will besubtracted from c ′ ( e ), all output edges ( v , u ) will have c ( v , u ) = 0, and thus cease to bereal edges, turning v into a new sink node of the DAG.In general, let v be the next node in the topological ordering of the nodes reachable from v (i.e., the inner loop of the algorithm). Since the ordering is topological, all input edges( u, v ) of v already have a value ∆ c ( u, v ) assigned to them (if an input node u is not reachablefrom v , we consider ∆ c ( u, v ) to have the default value of 0). Let ∆ in := P ( u,v ) ∈ b E ∆ c ( u, v ).First of all, we generally define∆ s ( v ) := ∆ in λ · deg( v ) . (4)Furthermore, we define ∆ c ( v, w ) on the output edges ( v, w ) of v as follows. Similarly to thedefinition of ∆ in , let ∆ out := P ( v,w ) ∈ b E ∆ c ( v, w ). Our assignment will ensure two things.On the one hand, we assign ∆ c ( v, w ) values such that ∆ out = ∆ s ( v ) · − λ · deg( v ); or, putotherwise through the definition of ∆ s ( v ), ∆ out = − λ λ · ∆ in . On the other hand, we alwaysreduce the value c ′ ( v, w ) on the output edge with the largest c ′ ( v, w ) value, until a totalreduction of − λ λ · ∆ in is obtained.Moreover, we have to apply a slightly different method when c ′ out ( v ) < − λ λ · ∆ in , i.e. itis not large enough to be decreased by the required amount. In this case, we choose ∆ out as large as possible (that is, equal to c ′ out ( v )), and define e ∆ in = ∆ in − λ +1 λ − · c ′ out ( v ), i.e. theportion of the input which we cannot compensate from the remaining outputs. Since thispart of the input conflicts is not used to create output conflicts, this can result in a highernumber of switches at v . Hence, we reduce s ′ ( v ) by a larger amount altogether. Specifically,we define ∆ s ( v ) := (cid:16) ∆ in − e ∆ in (cid:17) λ · deg( v ) + e ∆ in λ · deg( v ) . (5)Intuitively, the idea behind this technique is that even if inputs are used in the mostoptimal format, then 1 unit of input can correspond to at most − λ λ units of output at v .This is because condition 2 ensures c out ( v ) ≤ − λ · deg( v ) · s ( v ), and in case of the maximumpossible output, condition 1 gives c in ( v ) ≥ λ · deg( v ) · s ( v ) + 1 − λ · deg( v ) · s ( v ) = 1 + λ · deg( v ) · s ( v ) , providing a natural upper bound of λ − λ = λ − λ on the rate of inputs to outputs. Furthermore,in case of this input to output ratio, the total input of (at least) λ · deg( v ) · s ( v ) correspondsto s ( v ) switches, and thus each unit of input induces at most λ · deg( v ) switches in v . Onthe other hand, when there are no more outputs anymore, the number of inputs c in ( v ) canbe as low as λ · deg( v ) · s ( v ), and hence each unit of input induces at most λ · deg( v ) switchesin v .To sum it up formally, when processing the next node v , we do the following. If c ′ out ( v ) ≥ − λ λ · ∆ in , then we define ∆ s ( v ) according to Equation 4. We select a threshold value c thres ,and define ∆ c ( v, w ) on the output edges such that ∆ c ( v, w ) = 0 for output edges where c ′ ( v, w ) ≤ c thres , and ∆ c ( v, w ) = c ′ ( v, w ) − c thres for output edges where c ′ ( v, w ) > c thres .Since we can decrease c thres continuously, there exists exactly one threshold value whichensures that ∆ out = − λ λ · ∆ in . Hence, each output c ′ ( v, w ) is truncated to this thresholdvalue.Otherwise, if c ′ out ( v ) < − λ λ · ∆ in , then we assign ∆ c ( v, w ) := c ′ ( v, w ) to each outputedge ( v, w ) of v , calculate e ∆ in as discussed above, and define ∆ s ( v ) according to Equation5. A.4.2 CPS conditions after subtracting ∆ c and ∆ s ◮ Lemma 13.
The definitions of these modifications ensure that after reducing the numberof switches and conflicts, the resulting process still remains a CPS in each step.
Proof.
Recall that the conditions of a relaxed CPS require c ′ in ( v ) ≥ λ · deg( v ) · s ′ ( v ) + c ′ out ( v ), c ′ out ( v ) ≤ − λ · deg( v ) · s ′ ( v ), and c ′ ( v, w ) ≤ s ′ ( v ) for each output edge ( v, w )for node v . We show that these conditions still hold for the new functions c ′ and s ′ , obtainedafter subtracting ∆ c and ∆ s .First consider the case when there are still output c ′ ( v, w ) values to decrease. In condition1, the number of inputs decreases by ∆ in on the left side when executing the step. Thenumber of outputs decreases by − λ λ · ∆ in on the right side, and the first term on the rightis reduced by λ · deg( v ) · ∆ s ( v ) = λ · deg( v ) · ∆ in λ · deg( v ) = 2 λ λ · ∆ in . This adds up to a decrease of (cid:16) − λ λ + λ λ (cid:17) · ∆ in = ∆ in on the right side, thus condition 1remains true in this case.In condition 2, the left side is decreased by ∆ out = − λ λ · ∆ in , while the right side is alsodecreased by1 − λ · deg( v ) · ∆ s ( v ) = 1 − λ · deg( v ) · ∆ in λ · deg( v ) = 1 − λ λ · ∆ in in each step.To show that condition 3 remains true, we use the fact that c ′ ( v, w ) is always decreasedon the output edges with the highest c ′ ( v, w ) values. Assume that c ′ ( v, w ) > s ′ ( v ) onsome output edge ( v, w ), for the new functions c ′ and s ′ obtained after subtracting ∆ c and∆ s . Recall that with our truncation technique, if we have c ′ ( v, w ) on any edge after thereduction, then c thres ≥ c ′ ( v, w ). Together, this implies c thres > s ′ ( v ).Let s ′ prev ( v ) := s ′ ( v ) + ∆ s ( v ), the value of s ′ ( v ) before the decrease. Recall that by thedefinition of ∆ s ( v ), we have s ′ prev ( v ) − s ′ ( v ) = ∆ out · − λ · v ) , so for the difference between s ′ prev ( v ) and c thres , we have s ′ prev ( v ) − c thres < ∆ out · − λ · v ) . Note that this differenceis the maximum value of ∆ c ( v, w ) on any output edge, since before the decrease, all c ′ ( v, w )values were at most s ′ prev ( v ), and none of them were reduced below c thres . However, sincewe decrease the outputs by ∆ out in total, this means that we have to reduce (i.e., have anonzero ∆ c ( v, w )) on strictly more than∆ out ∆ out · − λ · v ) = 1 − λ · deg( v )distinct output edges. Each of these output edges is reduced to c thres , so the total sum ofoutputs after the decrease is at least c ′ out ( v ) ≥ − λ · deg( v ) · c thres > − λ · deg( v ) · s ′ ( v ) , which contradicts the already established condition 2. Thus condition 3 must also hold.Finally, consider the other case, when there are no more output values c ′ ( v, w ) to decrease.The left side of condition 1 is still reduced by ∆ in , and the right side consists of the firstterm only, which is reduced by λ · deg( v ) · ∆ s ( v ) = λ · deg( v ) · ∆ in λ · deg( v ) = ∆ in , so condition 1 remains true. In this case, conditions 2 and 3 hold trivially, since all outputedges ( v, w ) already have c ′ ( v, w ) = 0. ◭ A.4.3 Responsibilities of nodes
Consider any v a ∈ A throughout the process. The value s ′ ( v a ) is initially equal to s ( v a ),and then keeps being reduced until v a is the next node in the topological ordering (i.e.,when v = v a ). From this point, s ′ ( v a ) is not changed anymore; on the other hand, whenanalyzing the effects of v a , s ′ ( v ) values of other nodes are reduced, and we reassign theseswitches to be the responsibility of v a . That is, whenever having processed a node v , wedefine R ( v ) = s ′ ( v ) + P v ∈ A ∆ s ( v ) for the ∆ s function obtained in case of this specific v .Clearly, throughout the process, every decrease ∆ s happens with regard to a specific v , sothis is indeed a redistribution of the original s ( v ) values, and hence P v ∈ A s ( v ) = P v ∈ A R ( v )holds. ◮ Lemma 14.
For any v ∈ A and for the final s ′ ( v ) value, we have R ( v ) = O ( s ′ ( v )) . Proof.
Consider the round when v is the chosen node in the outer loop. As said above, s ′ ( v ) is not modified anymore after this round, so it already has its final value; also thevalue of R ( v ) is decided solely in this round.Since v ∈ A , we have deg( v ) < a . Hence, according to condition 2, c ′ out ( v ) =∆ out ( v ) < − λ · a · s ′ ( v ) at the beginning of this round. Note that at each node v reachable from v , we have ∆ out ( v ) ≤ − λ λ · ∆ in ( v ), and hence the total of amount ofchanges ∆ c decreases by a constant factor at each node v . Hence after processing all nodesup to a distance of at most d , the total amount of changes ∆ c on the edges is at most∆ out ( v ) · − λ λ + (cid:18) − λ λ (cid:19) + ... + (cid:18) − λ λ (cid:19) d ! . Since this is a geometric series with − λ λ <
1, the total amount of changes is at most∆ out ( v ) · ∞ X i =0 (cid:18) − λ λ (cid:19) i ≤ ∆ out ( v ) · − − λ λ = ∆ out ( v ) · λ · λ regardless of d , thus even when all the nodes reachable from v have been processed. Notethat at each node v , each unit of decrease in ∆ in ( v ) corresponds to either λ · v ) or λ · v ) decrease in ∆ s ( v ) (depending on whether v still has real output edges to decrease).Even if we take the larger decrease rate of λ · v ) , this means that the total amount ofchanges ∆ c can only produce a limited amount of total decrease ∆ s ; more specifically X v ∈ A ∆ s ( v ) ≤ ∆ out ( v ) · λ · λ · λ · v ) ≤ O (1) · ∆ out ( v ) a , using the fact that each v ∈ A has degree at least a . Thus using the upper bound ∆ out ( v ) ≤ − λ · a · s ′ ( v ), we get R ( v ) = s ′ ( v )+ X v ∈ A ∆ s ( v ) ≤ s ′ ( v )+ O (1) · − λ · a · s ′ ( v ) a = s ′ ( v ) · (1 + O (1)) = O ( s ′ ( v )) . ◭ Hence P v ∈ A s ( v ) = P v ∈ A R ( v ) = O ( P v ∈ A s ′ ( v )), so it suffices to upper bound the sumof the final s ′ ( v ) values in order to prove Lemma 8, as done in the second part of the proofin Section 5. B Discussion of lower bound proof
We now discuss the main challenges of designing a control gadget, and present some tech-niques that allow a control gadget design for a wide range of λ ∈ (0 , µ := λ + ϕ − ϕ for the input switching rate. B.1 Lower bound construction for λ = We first demonstrate the construction showing the tight lower bound for a specific λ value of . This choice of λ has a range of advantages: both f ( ) = and the optimal output ratio ϕ ∗ ( λ ) = are rational, the ratio of inputs to outputs − ϕϕ = 8 is an integer, and the numberof switches also increases by an integer factor µ = − ϕλ + ϕ = 2. Thanks to these properties, λ = allows a fairly simple control gadget design. A B C D A B C D A B C D A B C D A B C D A B C D A B C D A B C D Figure 3
Illustration of the connections within the control gadget of 16+16 nodes for λ = ,with simplified notation for complete bipartite subgraphs on 4+4 nodes. ◮ Lemma 15.
Consider majority/minority processes under Rule II with λ = . There existsa graph construction and initial coloring that has stabilization time e Ω( n / ) . As outlined in Section 6, our construction consists of L = log ( n ) levels, each of whichcontains Θ( n log n ) nodes. Each consecutive pair of levels forms a regular bipartite graph, with of the degree of the previous consecutive pair. Each node v has updegree deg( v ) anddowndegree deg( v ).E.g. in a majority process, in the initial state, of inputs will have the opposite coloras v , and all other neighbors will have the same color. Whenever µ = of the inputs (i.e., of the degree) switch to the opposite color, then of inputs will have the opposite color;as this is = λ of all neighbors, v can now switch. As a result, the lower neighbors of v will have a different color than v (i.e., a conflict is pushed down), and eventually thesenodes will follow v to the same new color. This results in a state again where of inputshave the opposite color as v , and the rest have the same.Note that between every two switches of v , exactly half of its upper neighbors switch, sothe number of switches for each node will always increase by a factor of 2 if we move a leveldown. This shows that each node in the bottom level switches 2 L = n times. Since thereare e Θ( n ) nodes on the bottom level, the already sum up to e Ω( n / ) switches, establishingthe lower bound.Two consecutive levels of the construction are connected through control gadgets. Acontrol gadget is a regular, bipartite gadget on k + k nodes for some constant k , i.e. a wayto connect two k -tuples of nodes on a consecutive pair of levels. The upper and lower k nodes of the gadget are in a 1-to-1 correspondence with each other. The goal of the gadget isto ensure that given some sequence of switches in the k -tuple, if we execute the the switches s times on the upper level, then this allows us to execute the same sequence of switches onlower k -tuple 2 s times. This allows for a recursive repetition of the same process, executedtwice as many times on each next level.We present such a control gadget on k = 16 nodes. For this, we take 4 groups A, B, C, D ,each containing 4 nodes; thus, our nodes will be elements of { A, B, C, D } × { , , , } . Eachlower level node labeled by number x will be connected to the group corresponding to the x th letter of the alphabet. E.g. nodes A B C D B B B B A1-2 B1-2 A1-2 B1-2 C1-2 D1-2 C1-2 D1-2 B2-3 B2-3 C2-3 D2-3 A2-3 C2-3 A3-4 B3-4 C3-4 D3-4 D2-3 A2-3 B4-1 C4-1 D4-1 A4-1A3-4 B3-4 A1-2 B1-2 C1-2 D1-2 C3-4 D3-4 B4-1 B2-3 C2-3 D2-3 A2-3 C4-1 A3-4 B3-4 C3-4 D3-4 D4-1 A4-1 B4-1 C4-1 D4-1 A4-1UpperlevelLowerlevelUpperlevelLowerlevel
Figure 4
Self-replicating sequence of switches on 16 nodes: while the upper level executes thesequence once, the lower level executes the same sequence twice. Arrows show that the lowernodes become switchable due to the switching of the specific upper nodes. occurs the same number of times in the sequence, and that between any two switches of alower node, exactly 2 of its 4 upper neighbors are switched, so no inputs are wasted indeed.(Note that the simpler sequence (12)(34) would also satisfy these properties, but it wouldnot allow us to assign colors to the nodes in a proper way.)Having designed this control gadget of constant size, each level will consist of Θ( n log n )distinct copies of this 16-node group { A, B, C, D } × { , , , } . We then start with constant-degree nodes on the lowermost level, and increase this degree by a factor of − ϕϕ = 8 on everynew level from bottom to top. To achieve this degree, we connect the lower level of a controlgadget to the upper level of not only one, but multiple control gadgets; e.g. the nodes A B C D B -labeled nodes of not only one, but multiple 16-nodegroups on the level above. This allows us to indeed increase the degree by a factor of 8 ateach new level. For example, if the node A A B C D x distinct 16-node groups on the level below (thus having a downdegree of4 x ), it will be connected to the nodes B B B B x distinct 16-node groups onthe level above (resulting in an updegree of 32 x ).Since all 16-node groups on the same level can execute the same steps in a parallelmanner, this allows us to produce the very same behavior as in the control gadget, but forhigh-degree nodes. With this technique, each consecutive pair of levels will form a regular(i.e., same-degree) bipartite graph, comprised of numerous copies of the control gadget as asubgraph.Given the construction for propagating conflicts appropriately, we can easily assign colorsto the nodes to obtain a majority or minority process. Observe that a constructions formajority and minority processes follow straightforwardly from each other: since our graphis bipartite, we can simply reverse the color of every node on every second level, directlyobtaining a minority example from a majority example, or vice versa. B.2 Generalization for other λ values The main idea for generalizing the construction, as already outlined in Section 6, is thefollowing. Given a control gadget of constant size, we can place Θ( n log n ) such gadgets oneach level, having L = − ϕϕ ) log( n ) levels altogether. We then begin with a constantdegree for each node on the lowermost level, and increase the degree by a factor of − ϕϕ oneach new level. In order to do this, we again connect the lower level of control gadgets tothe upper level of not only one, but multiple distinct control gadgets, as in the case of the λ = example. Thus consecutive pairs of levels form a regular bipartite graph, with thedegree rising exponentially as we move upward in the construction.The main challenge in the general construction is to design a control gadget of constantsize, i.e. to devise a way where the next level of nodes follows the exact some switching order,but with a schedule where the nodes switch an µ factor more frequently. However, whenthe input switching rate µ is not a rational number, then switching a µ portion of the upperneighborhood is of course not possible. Hence in this case, we can only approximate therate by a rational number pq ≈ µ , with p, q, ∈ Z . With the appropriate choice of p and q , wecan get arbitrarily close to the desired rate µ . We then develop the same construction andcontrol gadget for the input switching rate pq , which will yield almost the same amount oftotal switches: since f ( λ ) is continuous, a close enough pq approximation gives a constructionwith Θ( n f ( λ ) − ǫ ) switches for any ǫ > p + q is an even value; in case it is not, we caneasily achieve this by doubling the value of both p and q , using the approximation p q ≈ µ instead of pq . Note that in the the previous subsection where λ = implied µ = , we havealready done this essentially: while we could have switched 1 out of 2 upper neighbors in eachstep, we have in fact switched 2 out of 4 every time. This assumption is required because wewant nodes to be in conflict with p + q out of their q upper neighbors when switching, sincethis is the amount of upper neighbors that correspond to the switching threshold, namely p + q q · deg upper ( v ) = 12 · p + qq · (1 − ϕ ) · deg( v ) ≈ (cid:18) λ + ϕ − ϕ + 1 (cid:19) ·
12 (1 − ϕ ) · deg( v ) = λ + 12 · deg( v ) . Hence, p + q has to be an integer.In the following, in order to develop the required control gadget, we first generalizethe notion of control sequence for any ( p, q ) pair; this is essentially a balanced schedule ofswitching in the upper neighborhood which ensures wasteless conflict propagation, i.e. thatthe lower neighbor always switches when it is exactly on the threshold of switchability. Wethen discuss the main challenge in generalizing the control gadget used for λ = to other λ values.Furthermore, the construction also raises some minor technical questions relating todivisibility; we discuss these at the end of the section. B.3 Control sequences for general p and q Similarly to the µ = case, given p and q , we can develop a control sequence of numbers(1 , ..., q ), and switch the upper neighborhood of any node in our construction following thissequence. Let b = p − q . The first bracket of the control sequence contains numbers (1 , ..., p ),and for every next bracket, we shift the both the beginning and the end of the interval by b ; in general, the i th bracket consist of the numbers (( i − · b + 1), ..., (( i − · b + p ), alltaken modulo q to fall into the interval [1 , ..., q ].Initially, all nodes labeled 1 , ..., p and p + b + 1 , ..., q are black, and all nodes labeled p + 1 , ..., p + b are white. Then this sequence of steps ensures that in every odd step, allthe nodes in the next bracket of the control sequence are currently black, and in every evenstep, all the nodes in the next bracket are currently white. This means that after every odd(or even) step, p + bq of the upper neighborhood is white (or black, respectively). As p + bq = µ + 1 − µ µ λ − ϕ ) , and all output connections have a non-conflicting color before switching, this means that(1 − ϕ ) · λ − ϕ ) = λ of the entire neighborhood is in conflict with the node, so it is indeedprecisely on the threshold for switchability.For example, the control sequence for ( p , q )=(5,9) is(12345)(34567)(56789)(78912)(91234)(23456)(45678)(67891)(89123) , with nodes labeled 1-5 and 8-9 initially black and nodes labeled 6-7 initially white. Then inevery odd (even) bracket, the nodes that switch are always colored black (white) currently.To some extent, the same control sequence idea has already been applied in [28].Since b and q are relatively prime (as the greatest common divisor of p and q is either 1 or2), the sequence consists of q distinct brackets before periodically repeating itself. Note thatamong the nodes of a specific color, the next bracket always includes those that have occurredthe least amount of times so far (have the smallest occurrence number ). This ensures thatat any point in the sequence, the difference in the number of occurrences between any twonodes is at most 2. Whenever a specific node is absent from the sequence, it is always absentfor exactly 2 consecutive brackets. Each node 1 , ..., q appears the same number of times ( p times) before the sequence start repeating itself; hence, if the upper neighborhood of a node v follows this sequence, then v indeed switches qp = µ times more than its upper neighbors,and does not waste any input conflicts.Observe, however, that any node v connected to such an upper neighborhood has tobe of the same color to be switchable in all steps. I.e. in case of a majority process, v becomes white (black) after every odd step (even step, respectively), while in a minorityprocess, v becomes black (white) after every odd step (even step, respectively). Since wealso need nodes of both color on the next level, in practice, we have to take two copies ofour control gadgets; this produces twice as many nodes on each level, distributed equallyamong the two colors, which all switch at the same time if we proceed through the steps ofthe two control gadgets in a parallel manner. This technique of duplicating the controllinggadget has already been used and discussed in [28]. The duplication is a technical stepthat increases the size of each level by a factor of 2 only; hence in the following, we do notconsider the color of nodes, and instead focus on the main challenge, which is the design ofthe control gadget that is to be duplicated. B.4 From control sequence to control gadget
In our example for ( p , q )=(2,4), we created 4 groups ( A − D ) of 4 nodes each (1 − switching the group ; note that this does not mean switching all nodes in the group,but executing a step of the control sequence, i.e., switching µ portion of the group so that alllower neighbors of the group become switchable. Once all four groups have been switched,all 16 nodes on the lower level become switchable, so we can start (or continue) executingthe same process on the level-pair below.Note that on the upper level, each next step in a specific group always picks a prede-termined pair of nodes in the group (based on the control sequence), so in the upper level,it is enough to consider the order in which we select the groups: regardless of the actualnodes switched, the step always has the same effect, namely, it makes all nodes connectedto this group switchable. In contrast to this, on the lower level, all nodes labeled with the same number become switchable at the same time, as they have the same upper neighbors(a specific group); thus when discussing the switchability of lower-level nodes, we can simplyhandle the nodes labeled with the same number together. Thus we can illustrate the processin a simplified way in the following diagram (note that numbers within the brackets of thecontrol sequence are only reordered for better visibility). A B C D B C A D . ( 1 2 ) ( 3 2 ) ( 4 3 ) ( 1 4 )Note that when processing the second bracket, we need to switch group B for the secondtime. Before that, we first execute the first switching of group D , too, and then by reachingup to the level above the upper level, we make all four groups switchable for the second time(denoted by a dot in the figure), and then switch B for the second time. Note that this firstswitch of group D already makes the nodes labeled 4 switchable when processing the secondbracket. This is not a problem; since number 4 is not in the second bracket, we simply waitwith the switching of these nodes until we start processing the third bracket.Also note that we always ensure that the nodes of a specific bracket (e.g., nodes labeled3 and 4 in the previous example) are all switched at the same time. This is needed to carryour initial the assumption over to the level-pair below, namely that the upper groups allbecome switchable together at specific points, and we can switch them in any order of ourchoice.It is a natural idea to generalize this method for any ( p , q ) pair, by creating q differentgroups of q nodes each, and cross-connecting these q nodes in a similar fashion. However,it is not straightforward to apply the technique for any ( p , q ). Consider the control sequencefor ( p , q )=(3,5), and a similar construction of groups: A B C D E B C D A E C D B A E . . ( 1 2 3 ) ( 4 2 3 ) ( 5 4 3 ) ( 1 5 4 ) ( 2 1 5 )The problem in the above sequence is that by the third bracket, the number 3 has alreadyoccurred 3 times, so by the time we process this bracket, group C on the upper level hasto switch for the third time. Since each upper-level group becomes switchable at the sametime, this means that by this point, all groups A , ..., E now must be switchable for thethird time; in particular, group E too. That must mean that group E has already switchedat least twice previously; however, the third bracket contains the very first occurrence ofnumber 5, so at least for one of the two switches of group E , the nodes labeled ‘5’ on thelower level have wasted an opportunity to switch, so they could not switch a µ = factormore than their upper neighbors.Essentially, the problem with the sequence is that the third bracket contains both the j th occurrence of one number and the ( j + 2) th occurrence of another (numbers 5 and 3,respectively). Because of the ( j + 2) th occurrence of a number in the bracket, all groupshave to become switchable ( j + 2) times, and hence already be switched ( j + 1) times by thetime we reach this point. However, if nodes labeled with another number are only switchingat this point for the j th time, then one of the ( j + 1) switches of their control group hasnot been used. Generally, given groups X and Y , if there is a bracket in the sequence thatcontains the j th occurrence of the number corresponding to X and the ( j + 2) th occurrenceof the number corresponding to Y , then we say that X and Y are in contradiction with each other (in the given bracket). For ( p , q )=(3,5), C and E are in contradiction in the thirdbracket as discussed. For ( p , q )=(2,4), we can see that there is no contradiction between anytwo letters.Note that such contradictions are the only possible source of a problem; given a controlsequence with no contradiction, there always exists a valid switching sequence of the uppergroups. Since the control sequence itself guarantees that the occurrence numbers can neverdiffer by more than 2, the lack of contradictions ensures that the difference between occur-rences is at most 1 at any point. Hence whenever we require the ( j + 1) th switching of aspecific upper group, we can simply switch all upper groups that have not been switched forthe j th time yet; by this point, the lower neighbors of each such group have certainly beenswitched for the ( j − th time already, so we are indeed not wasting any switches. Thusour goal is to somehow avoid contradictions in the control sequence.Generally, devising a control gadget for any p and q is a challenging task. In the following,we present the technique of shifting , which allows us to considerably increase the number of( p , q ) pair for which we can devise a control gadget. We first illustrate the technique on theconcrete example of ( p , q )=(3,5). B.5 Subset shifting
In the above example of ( p , q )=(3,5), the only problem essentially was that the second in-stance of E always preceded the third A . However, the sequence ( ABCD.ABCDE.ABCDE.E )would, on the other hand, cause no problems at all.Therefore, the key idea is that we can simply skip the very first switching of the group E , and only switch the groups ABCD in this case. Then every further time when theupper groups become switchable, we do switch every group. Finally, when the upper groupsbecome switchable for the fourth time, we start by switching the group E . At this point,the sequence of switched blocks is exactly ( ABCD.ABCDE.ABCDE.E ), which will thenagain be followed by
ABCD when we also switch the other groups for the fourth time. Aconcatenation of such sequences yields a sequence where the group E is effectively in adifferent phase, delayed from the other groups by 1 round.Note that shifting E skips an opportunity to switch group E in the very first switchingof the upper groups, and also an opportunity to switch ABCD at the very last switchingof the upper groups. Hence, if the number of switches on a given level is s , then withthis technique, the number of switches on the next level will not be s · − ϕλ + ϕ , but only( s − · − ϕλ + ϕ = − ϕλ + ϕ · s − − ϕλ + ϕ . However, one can see that this only adds up to a loss of(an arbitrarily small) ǫ in the exponent of the number of switches: for any ǫ >
0, we canselect a constant s high enough such that − ϕλ + ϕ · s − − ϕλ + ϕ > s · ( − ϕλ + ϕ − ǫ ) (note that this isvery similar to the technique we used when relaxing the CPS definition; nodes that switchat most s times are essentially considered new base nodes). Then due to this inequality,the number of switches of each group on the lowermost level of our construction is stillΩ (cid:18) − ϕλ + ϕ − ǫ (cid:19) ( − ϕϕ ) · n ! = Ω n log ( − ϕλ + ϕ − ǫ ) log ( − ϕϕ ) · n = Ω (cid:16) n f ( λ ) − ǫ (cid:17) , for an arbitrarily small ǫ , as we are using ϕ = ϕ ∗ ( λ ), and f ( λ ) is continuous. Also, notethat since each such loss of ǫ in the exponent can be arbitrarily small, the different suchlosses in the exponent can be merged into one common ǫ in the final running time.Note that both in majority and minority, skipping the very first or very last switch ofa node does not create any problems colorwise. Skipping the last switching opportunity only results in ending up with the opposite color in the final state. For each node that issupposed to skip the first switching opportunity, we have to invert its original color, suchthat the nodes already start with the color they would acquire if group E was also switchedat the first opportunity. B.6 Shifting in general
Note, however, that this technique only allows us to shift a specific subset of the uppergroups by 1. A crucial property of shifting is that the subsets at the beginning and theend of our modified sequence (
ABCD and E , respectively) form a disjoint partitioning ofthe upper neighbor groups. If we were to use the sequence ( ABCD.ABCD.ABCDE.E.E ),then with the concatenation of such sequences, instead of skipping one switch altogether,the groups would skip a switch at every third opportunity. This would effectively reducethe number of switches on each next level to s · − ϕλ + ϕ · , which would have a major effect onstabilization time.This is also the reason why shifting does not provide a general solution for any ( p , q ) pair.Consider, for example, the control sequence for ( p , q )=(7,9), which looks as follows:(1234567)(2345678)(3456789)(4567891)(5678912)(6789123)(7891234)(8912345)(9123456)Here, the 3 rd bracket contains the 1 st occurrence of 9 and the 3 rd occurrence of 3, whilethe 6 th bracket contains the 4 th occurrence of 3 and the 6 th occurrence of 7. This impliesthat for a correct solution, the upper neighbors of 9 (i.e., group I ) should be shifted at least1 further than the upper neighbors of 3 (group C ), and the upper neighbors of 3 (group C )shifted at least one further than the upper neighbors of 7 (group G ). However, then group I is shifted at least 2 steps away from group G (i.e., must skip at least 2 initial rounds tobe sufficiently later than G ), which, as discussed above, is not viable.The main goal of shifting is to separate the groups that are in contradiction with eachother in a specific bracket. We say that a subset of letters (i.e., groups) is consistent ifthere is no two groups of the subset are in contradiction in any bracket. In general, shiftingprovides a solution for a ( p , q ) pair if the letters can be partitioned into two consistent subsets.We call these two subsets blocks , and we also refer to the partitioning as consistent if bothof its blocks are consistent. For ( p, q ) = (3 , A and E in different blocks.It depends on the concrete value of p and q whether a consistent partitioning (into twogroups) exists, i.e., whether the shifting technique provides a valid control gadget. In thefollowing section, we show that such a partitioning always exists if µ ≤ , that is, for λ lessthan approximately 0 . ◮ Lemma 16.
Under Rule II with λ < . , for any ǫ > , there exists a graph constructionand initial coloring where majority/minority processes stabilize in time Ω( n f ( λ ) − ǫ ) . While these µ ≤ values allow a relatively simple proof of consistency, these are notthe only µ values for which shifting provides a valid solution. For larger µ , however, theexistence of a consistent partitioning depends on multiple factors, including how large theintegers p and q are. For example, the case ( p, q ) = (5 ,
7) can also be partitioned consistently,and thus the shifting technique provides a valid construction for µ = . This correspondsto λ ≈ . . ◮ Lemma 17.
Under Rule II with λ ≈ . , for any ǫ > , there exists a graph constructionand initial coloring where majority/minority processes stabilize in time Ω( n f ( λ ) − ǫ ) . Thus in general, the concept of levels allows us to devise a construction idea to prove thelower bound for any λ value. However, to obtain an actual realization of such a constructionfor every λ ∈ (0 , λ values that are not covered by the shifting method. B.7 Consistent partitioning for µ ≤ We now discuss how to partition the upper groups into two consistent groups for any µ ≤ .Note that while our method shifts a block of groups on the upper level (e.g. groups A and B ), the consistency of this block depends on the groups’ lower neighbors (e.g., where nodeslabeled 1 and 2 appear in the control sequence below). Thus, for simplicity, we refer to eachgroup not by its letter, but by the number assigned to its neighbors on the level below, andour goal is to find a consistent partitioning of the numbers (1 , ..., q ) into 2 blocks.Recall that b = q − p , i.e. the number of different elements in two consecutive brackets ofthe control sequence. For now, let us first assume that p ≥ b .Furthermore, let us use B ℓ to denote a block formed from any ℓ consecutive numbers in(1 , ..., p ), i.e. containing (the letters for) the numbers i +1 , i +2 , ..., i + ℓ for some 0 ≤ i ≤ p − ℓ .Also, let B ′ b and B ′′ b denote the blocks formed from the numbers ( p + 1 , ..., p + b ) and( p + b + 1 , ..., q ), respectively; note that these both consist of b numbers indeed. ◮ Lemma 18.
Any block B b is consistent. Proof.
Note that a control sequence is developed as follows: there is a starting point h s andan endpoint h e , which are shifted in each step in a modular fashion (i.e., q is followed by 1again). Initially, h s and h e are at 1 and p + 1, respectively, so the first bracket of the controlsequence contains the numbers [ h s , h e ). In each step, both points are shifted further aheadby b (modulo q ). Since h e starts at p + 1, after two steps, h e will arrive at 1, and then followthe same pattern from here as h s from the beginning. Hence, the position of h e in the j th step is always the same as the position of h s in the ( j − th step.The initial bracket of the sequence contains all elements of B b . After some steps, wehave h s > i + 1 (for the first number i + 1 in the group); let h s denote the value of h s inthis step. This shows that in this step, only the numbers ( h s , ..., i + 2 b ) will be present inthe next bracket. Then in the following step, h s falls within the range of B b again, so onlythe numbers ( h s + b , ..., i + 2 b ) will be contained in the next bracket. The key observationis that in the step after this, h e will be equal to h s (it always takes the same position as h s did two rounds ago), hence the next bracket will contain the groups ( i + 1, ..., h s −
1) of B b , which is exactly the complement of groups two rounds ago. Similarly, the bracket ofthe next step contains ( i + 1, ..., h s + b − B will have occurred the same number of timesagain.Therefore, whenever we have brackets that only contain a subset of B b , they are alwaysorganized as follows. Before this point, each group in B b has the same occurrence number.Then the following two brackets contain some subsets S and S of B b , and after this, thenext two brackets contain exactly the complements of S and S . This pattern ensures thatregardless of the content of S and S , no bracket has a difference of 2 in occurrence numbers,and after the pattern, all groups have the same occurrence numbers again.It is worth pointing out that this heavily relies on the fact that the size of B b is atmost 2 b , and hence whenever h s or h e falls within the range of B b , it is guaranteed that italready surpasses the entire range of B b in the second step after this. For example, in case of ( p, q ) = (7 ,
9) shown above, the block (3 , , , ,
7) does not obey this property, since thestarting point falls into it in 4 consecutive rounds, and hence it is not consistent. ◭ Note that the same proof holds for any continuous block B within (1 , ..., p ) if it has sizeat most 2 b . Specifically, for the case of p < b , putting all of (1 , ..., p ) together still forms aconsistent block. ◮ Lemma 19.
Blocks B ′ b and B ′′ b are both consistent. Proof.
Blocks B ′ b and B ′′ b follow the same behavior as any block B b described in Lemma18, except for not being included in the first 1 and first 2 brackets, respectively. Hence, thesame reasoning shows that these blocks are also consistent. ◭ It remains to show that we can merge the blocks B ′ b and B ′′ b with the blocks in (1 , ..., p ) toobtain a consistent partitioning into two blocks for smaller µ values. For this, we introducesome new notation. Let us denote the block corresponding to numbers (1 , ..., b ) by B first b ,and the block corresponding to numbers ( p − b + 1 , ..., p ) by B last2 b . ◮ Lemma 20.
The block B last b ∪ B ′ b is consistent. Proof.
Our previous lemmas show that both B last2 b and B ′ b are consistent separately. To-gether, they form a block of 3 b consecutive numbers. Note that the only reason why theproof of Lemma 18 does not apply to blocks of length 3 b is that h s can fall within the rangeof the block on 3 consecutive occasions, and thus a bracket could simultaneously have the( j + 2) th occurrence of the last few numbers and the j th occurrence of the first few numbers.However, in our case, B ′ b is not contained in the first bracket ( h e = p + 1 initially), sothe occurrence number of all nodes in B ′ b is always smaller by 1 than the same occurrencenumbers in the B b case. Hence even if h s falls into the range of the block 3 consecutivetimes, the resulting bracket only contains the ( j + 1) th occurrence of the last nodes in B ′ b ,and the j th occurrence of the first nodes in B last2 b . ◭◮ Lemma 21.
The block B first b ∪ B ′′ b is consistent. Proof.
The first bracket of the control sequence contains all elements of B first b . The secondbracket contains none of the numbers in the merged block, while the third bracket onlycontains the elements of B ′′ b . Up to this point, all elements of the merged block appearexactly once. From here, the merged block simply behaves as any block B b in the proof ofLemma 18: it is a block of 2 b consecutive number, such that each have the same occurrencenumber in the beginning. ◭ Note that this already provides a construction proving Lemma 16. If µ ≤ , then p ≤ b ,so B first b and B last2 b together already cover all numbers in (1 , ..., p ). Thus the merged blocksin Lemmas 20 and 21 cover all upper groups, giving a consistent partitioning. Therefore,the shifting technique provides a valid control gadget if we shift all the upper groups in B first b ∪ B ′′ b by 1.On the other hand, for general ( p, q ) pairs with µ > , the groups corresponding to(1 , ..., p ) can not necessarily be partitioned into two consistent blocks, and thus we cannotobtain a valid control gadget with the shifting method, as in the example of ( p, q ) = (7 , µ values, when even p < b . However, for such small µ , the controlsequence is always guaranteed to be contradiction-free, so the shifting technique is not evenrequired to form a control gadget. Figure 5
Plot of b f ( λ ) and b ϕ ∗ ( λ ), besides f ( λ ) and ϕ ∗ ( λ ) B.8 An easier lower bound
We also briefly note that a simple technique allows us to show a slightly weaker lower boundin case of any λ , even without the shifting technique. Recall that the idea of upper groups(i.e., assigning a letter and a number to a node) allowed us to handle any case where theoccurrence numbers in any bracket of a control sequence differ by at most 1. Note that ina control sequence, the occurrence numbers in any bracket can differ by at most 2 in anycase, so increasing this limit by 1 more would already provide a control gadget for any λ .Consider the idea of placing a level of relay nodes between any two consecutive levelsof our construction, taking a mediator role between the two levels. While previously, thenodes labeled A in the upper level were connected to the nodes labeled 1 in the lower level,we now remove these edges, an instead connect all these nodes to a set of relay nodes R A/ inbetween. This extra level then allows us to temporarily store conflicts, and relay them tothe lower level in a timing of our choice, which is already enough to implement the controlsequence for any λ .The drawback of the technique, however, is that the relay nodes now also waste conflicts.While previously both the downdegree of the upper level and the updegree of the lower levelwas d , now in order to allow the relay nodes to be dominated by their upper neighbors, wenow must select the downdegree of the upper level and the updegree of R A/ to be d , andthen the downdegree of R A/ and the updegree of the lower level to be − λ λ · d . In practice,this means that every new level of the construction will imply an extra degree decrease factorof − λ λ .For every new level, the number of edges now decreases by ϕ − ϕ · − λ λ , so the optimalchoice of ϕ also changes. Hence this construction requires a new choice b ϕ ∗ of output rate,which will then, analogously to the original case, result in a stabilization time defined bythe function b f ( λ ) := max ϕ ∈ (0 , − λ ] log (cid:16) − ϕλ + ϕ (cid:17) log (cid:16) − ϕϕ · λ − λ (cid:17) . This alternative lower bound function is shown in Figure 5. While this lower bound doesleave some gap to the upper bound of O ( n f ( λ )+ ǫ ), it has the advantage of being easy toshow for any λ , without having to devise complicated control gadgets. ◮ Theorem 22.
Under Rule II with any λ ∈ (0 , , for any ǫ > , there exists a graph con-struction and initial coloring where majority/minority processes stabilize in time Ω( n b f ( λ ) − ǫ ) . B.9 Above the uppermost level
Furthermore, the uppermost level of the construction needs to be discussed separately, sincein order to make the construction behave as we described, we also have to ensure that thenodes of the uppermost level already execute the control sequence a constant s number oftimes.The reason why this is necessary is that on each level of the construction, we lose aconstant number of switches due to two different factors. On the one hand, recall that if weapply the subset shifting method, then this leaves exactly 1 switch of each node on each levelunused. On the other hand, if each node in the given level switches s times, the next levelcannot always switch s · − ϕλ + ϕ times if this expression is not an integer. In fact, if each nodeswitches t times in the control sequence of our control gadget (with t = O (1)), this allowsfor only (cid:4) st (cid:5) complete executions of the control sequence on the upper level, and hence only $ (cid:4) st (cid:5) · − ϕλ + ϕ t % complete executions of the control sequence on the lower level. Thus due to these two factors,the number of switches does not increase from s to s · − ϕλ + ϕ for each new level, but only to s · − ϕλ + ϕ − O (1) for some constant.As discussed already in Section B.5, we can overcome this by ensuring that the nodes ofeach level switch at least s times for a specific constant s , at the cost of losing a factor ǫ from the exponent of our lower bound. The smaller the ǫ loss we tolerate, the larger theminimal switches s we have to ensure for each (i.e., even the uppermost) level.There is a simple method to ensure that each node in the uppermost level of the con-struction switches s times, for any constant s . A similar technique was already used inthe weighted constructions of [28]. Since our control gadgets have constant size, there areat most constantly many different ‘type of’ nodes on the uppermost level. For all these sets V of uppermost level nodes (that have the same role in different control gadgets), we canconnect V to a group V ′ on an even higher pseudo-level, such that each edge between V and V ′ has a conflict initially. If nodes in V have a downdegree of d , then we connect eachnode in V to λ +1 λ − · d nodes in V ′ . This ensures that each node in V is switchable initially,while the extra nodes in V ′ and extra edges to V ′ still remain in the magnitude of | V | and | V | · d , respectively.We can then continue this in a similar fashion, and add another group V ′′ above V ′ ,connected with even more edges, in order to make V ′ initially switchable. After adding s such pseudo-levels above, and then unfolding them from bottom to top (i.e., first switching V , then V ′ and then V , then V ′′ and V ′ and then V , and so on), we obtain a way toswitch the nodes of V altogether s times, at a timing of our choice. Since s is a constant,executing this process for a specific V does not change the magnitude of nodes or edges inthe graph. As our control gadgets consist of constantly many nodes, adding distinct suchpseudo-levels for all the constantly many V sets still does not affect the magnitude of thenodes and edges. B.10 Divisibility challenges
Besides the difficulty of devising a control gadget for every λ , there is another problem toaddress in the construction.Assume that the input-output rate − ϕϕ can be expressed as (or, in the irrational case,approximated by) a rational number p ′ q ′ with p ′ , q ′ ∈ Z (note that this p ′ and q ′ has norelation to our choice of p and q , which are used to approximate µ ).This means that if a node in a specific level has downdegree d , then it has to haveupdegree p ′ q ′ · d for the optimal rate ϕ ∗ ( λ ). However, in our construction, that would implythat the level above has updegree (cid:16) p ′ q ′ (cid:17) · d , the following level (cid:16) p ′ q ′ (cid:17) · d , and so on. Inorder for all of these numbers to be integers, d would have to be divisible by q ′ many times(Θ(log n ) times). This is clearly not possible, especially for the lowermost levels, where d isa constant.We can overcome this problem by slightly modifying the number of nodes (i.e., thenumber of control gadgets) on each level. Let us select k ∈ Z such that p ′ q ′ ∈ [ k, k + 1) holds(note that ϕ ∗ ( λ ) < .
22 for any λ , and thus − ϕϕ > d . If the level above had the same numberof nodes, than that would imply a downdegree of d for each node above, and consequently,an updegree of p ′ q ′ · d . However, instead, we can increase the size of the level above by a factorof p ′ k · q ′ , resulting in a downdegree of only k · q ′ p ′ · d , and thus an updegree of k · q ′ p ′ · p ′ q ′ · d = k · d on the level above. Similarly, if we decrease the size of the next level by a factor of p ′ ( k +1) · q ′ ,then the next updegree ( k + 1) · d will similarly be an integer.The general idea is to follow this technique to ensure that the degree remains an integerafter each such level. Note, however, that in order not to change the construction signific-antly, we need to select a combination of k -s and ( k + 1)-s such that their product over all L levels is relatively close to (cid:16) p ′ q ′ (cid:17) L . In case of too many k -s, the uppermost level would besignificantly larger than the lowermost one, not giving us enough frequently-switching nodeson lower levels. In case of too many ( k + 1)-s, the degree of nodes would grow significantlyfaster than p ′ q ′ on a level, resulting in less than L levels altogether (since the degree on theuppermost level would have to be larger than Θ( n )). A possible solution is to select thelargest combination of k -s and ( k + 1) that is still below (cid:16) p ′ q ′ (cid:17) L , which is therefore at least kk +1 · (cid:16) p ′ q ′ (cid:17) L . This ensures that there is only at most a constant variance in level sizes, andthat the uppermost level has degree which is only a constant factor lower than it would bewith (cid:16) p ′ q ′ (cid:17) L .Note that our divisibility solution itself raises another minor divisibility problem: chan-ging the size of specific levels by a factor of p ′ k · q ′ or p ′ ( k +1) · q ′ might also mean that the followinglevel should have a non-integer number of control gadgets. However, we can easily overcomethis. For simplicity, let us analyze the process in the other direction, from uppermost tolowermost level. Whenever the level size change by the given factor would result in a non-integer number of control gadgets, we can simply round this number down, and connect thefew extra edges to a dummy gadget on the level below that we do not use. With possibly oneless actual control gadget, the number of nodes can only decrease by a constant on each newlevel, hence we only lose O (log( n )) nodes by the lowermost level. Since each level consistsof e Θ( n ) nodes, this does not affect the magnitude of nodes on any level. C Discussion of f ( λ ) We now discuss the functions f ( λ ) and ϕ ∗ ( λ ) in more detail. The diagram of both functionshave already been presented in the main part of the paper. This shows that both functionsare continuous and monotonously decreasing. The function f ( λ ) takes values in [0 , ϕ ∗ ( λ ) takes values between 0 and approximately 0.2178.Let us introduce the notation g ( λ, ϕ ) = log (cid:16) − ϕλ + ϕ (cid:17) log (cid:16) − ϕϕ (cid:17) . In order to find the optimal ϕ , one would have to differentiate g ( λ, ϕ ): g ′ ϕ ( λ, ϕ ) = ( λ + 1) · ϕ · log( − ϕϕ ) − ( λ + ϕ ) · log( − ϕλ + ϕ )( ϕ − · ϕ · ( λ + ϕ ) · log ( − ϕϕ ) . Thus at a local minimum, we have( λ + 1) · ϕ · log (cid:18) − ϕϕ (cid:19) = ( λ + ϕ ) · log (cid:18) − ϕλ + ϕ (cid:19) . In order to obtain ϕ ∗ ( λ ), we would have to solve this for ϕ , with λ as a parameter. To ourknowledge, there is no closed-form solution to this problem.Note that if we split the logarithms into subtractions, we also obtain an alternativeformulation of this equation.( λ + ϕ ) · log( λ + ϕ ) = ( λ + 1) · ϕ · log( ϕ ) + λ · (1 − ϕ ) · log(1 − ϕ ) . C.1 Lookup table of function values
Finally, we show the approximate values of f ( λ ) and ϕ ∗ ( λ ) for a wide range of λ valuesbetween 0 and 1. Besides, we also show the input switching rate µ = λ + ϕ ∗ ( λ )1 − ϕ ∗ ( λ ) for these λ values. The values are illustrated in Table 1. λ f ( λ ) ϕ ∗ ( λ ) µ ( λ )0.05 0.839 0.199 0.3110.10 0.709 0.181 0.3430.15 0.601 0.164 0.3760.20 0.512 0.149 0.4100.25 0.436 0.134 0.4430.30 0.371 0.120 0.4770.35 0.316 0.107 0.5120.40 0.268 0.095 0.5460.45 0.226 0.083 0.5810.50 0.189 0.072 0.6170.55 0.157 0.062 0.6530.60 0.129 0.053 0.6890.65 0.104 0.044 0.7260.70 0.082 0.036 0.7630.75 0.063 0.028 0.8000.80 0.046 0.021 0.8380.85 0.031 0.015 0.8770.90 0.018 0.009 0.9170.95 0.008 0.004 0.958 Table 1
Values of our functions for some specific λλ